From marcin.juszkiewicz at linaro.org Mon Jan 2 08:36:23 2023 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Mon, 2 Jan 2023 09:36:23 +0100 Subject: [kolla] Propose Bartosz Bezak for core reviewer In-Reply-To: References: Message-ID: <39eb8caf-9c2c-2bfe-ef2f-4e124e4fdd88@linaro.org> W dniu 29.12.2022 o 10:58, Micha? Nasiadka pisze: > Hello Koalas, > > I?d like to propose Bartosz Bezak as a core reviewer for Kolla, > Kolla-Ansible, Kayobe and ansible-collection-kolla. +2 from me From gmann at ghanshyammann.com Mon Jan 2 18:45:48 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 02 Jan 2023 10:45:48 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2023 Jan 4 at 1600 UTC Message-ID: <18573cdd98e.10450f502215756.4447083195608302191@ghanshyammann.com> Hello Everyone, The technical Committee's next weekly meeting is scheduled for 2023 Jan 4, at 1600 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Tuesday, Jan 3 at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From pierre at stackhpc.com Mon Jan 2 18:52:47 2023 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 2 Jan 2023 19:52:47 +0100 Subject: [kolla] Propose Bartosz Bezak for core reviewer In-Reply-To: References: Message-ID: +2 for Kayobe. On Thu, 29 Dec 2022 at 11:13, Micha? Nasiadka wrote: > Hello Koalas, > > I?d like to propose Bartosz Bezak as a core reviewer for Kolla, > Kolla-Ansible, Kayobe and ansible-collection-kolla. > > Bartosz has recently went through release preparations and release process > itself for all mentioned repositories, has been a great deal of help in > meeting the cycle trailing projects deadline. > In addition to that, he?s been the main author of Ubuntu Jammy and EL9 > (Rocky Linux 9 to be precise) support in Kayobe for Zed release, as well as > fixing various bugs amongst all four repositories. > > Bartosz also brings OVN knowledge, which will make the review process for > those patches better (and improve our overall review velocity, which hasn?t > been great recently). > > Kind regards, > Michal Nasiadka > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Tue Jan 3 01:31:48 2023 From: satish.txt at gmail.com (Satish Patel) Date: Mon, 2 Jan 2023 20:31:48 -0500 Subject: [cloudkitty] Instances billing based on tags Message-ID: Folks, We have an AWS project and in a single project we run multiple customers so for billing we use tags. In short every vm instance has a tag (customer name) and that way we can produce bills for each customer. Recently I am playing with openstack cloudkitty and it works with standard cases like project based billing. But does it support tags based billing just similar to what i have explained in the above aws example? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bence.romsics at gmail.com Tue Jan 3 09:11:04 2023 From: bence.romsics at gmail.com (Bence Romsics) Date: Tue, 3 Jan 2023 10:11:04 +0100 Subject: [neutron] bug deputy report for week of 2022-12-26 Message-ID: Hi, During the winter holidays we had hardly any new bugs reported: Medium: * https://bugs.launchpad.net/neutron/+bug/2000634 [OVN] Maintenance task for availability zones changes failing proposed fix: https://review.opendev.org/c/openstack/neutron/+/868746 Incomplete: * https://bugs.launchpad.net/neutron/+bug/2000495 See error "HashRingIsEmpty" when create instance waiting for more information Cheers, Bence -------------- next part -------------- An HTML attachment was scrubbed... URL: From akahat at redhat.com Tue Jan 3 12:16:44 2023 From: akahat at redhat.com (Amol Kahat) Date: Tue, 3 Jan 2023 17:46:44 +0530 Subject: [TripleO] Tripleo-ci-centos-9-content-provider jobs are failing in check line Message-ID: Hello All, We are investigating centos-9-content-provider check failure, related to build-test-packages. Jobs link[1] Details are in the related launchpad bug[2]. [1] https://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ci-centos-9-content-provider&skip=0 [2] https://bugs.launchpad.net/tripleo/+bug/2000897/ -- *Amol Kahat* Software Engineer *Red Hat India Pvt. Ltd. Pune, India.* akahat at redhat.com B764 E6F8 F4C1 A1AF 816C 6840 FDD3 BA6C 832D 7715 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Tue Jan 3 13:11:24 2023 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 3 Jan 2023 10:11:24 -0300 Subject: [cloudkitty][horizon] change service name list in GUI In-Reply-To: References: Message-ID: As far as I know, these metrics are loaded from the processed data types by CloudKitty. Therefore, it is not fixed in CloudKitty dashboard or CloudKitty itself. The "network.incoming.bytes.rate" is probably an altname used to represent the use of the "rate:xxx" aggregation method in the Gnocchi backend to calculate the rate of change of a metric. Therefore, it has nothing to do with the metric itself collected by Ceilometer. For instance, you can check the archive-policy used for the metric in question. On Sat, Dec 31, 2022 at 6:53 PM Satish Patel wrote: > Folks, > > I am trying to configure rating for network data bytes in/out but what I > have seen is that the ceilometer uses a different name than what cloudkitty > GUI drop down-menu has. > > Ceilometer using "network.outgoing.bytes" and "network.incoming.bytes" > > Cloudkitty GUI drop-down list has "network.outgoing.bytes.rate" & > "network.incoming.bytes.rate" > > I have tried to change it in cloudkitty metrics.yml file to match it with > the ceilometer but the rating is not calculating until I change that in > cloudkitty. > > Please advise how to change it in cloudkitty. > > > > -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Tue Jan 3 13:13:07 2023 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 3 Jan 2023 10:13:07 -0300 Subject: [cloudkitty] Instances billing based on tags In-Reply-To: References: Message-ID: You can do that. Basically, you can start collecting those attributes you want for billing (e.g. tags) via Ceilometer dynamic pollster (that is the easiest way to achieve this). 
Then, you need to configure the resource type in Gnocchi to store this extra attribute, and of course, configure CloudKitty to collect/use it. Both in the metrics.yml and then in the hashmap or Pyscript rules. On Mon, Jan 2, 2023 at 11:01 PM Satish Patel wrote: > Folks, > > We have an AWS project and in a single project we run multiple customers > so for billing we use tags. In short every vm instance has a tag (customer > name) and that way we can produce bills for each customer. > > Recently I am playing with openstack cloudkitty and it works with standard > cases like project based billing. But does it support tags based billing > just similar to what i have explained in the above aws example? > > > -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Tue Jan 3 13:57:10 2023 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Tue, 3 Jan 2023 14:57:10 +0100 Subject: Kolla-ansible Yoga Magnum csi-cinder-controllerplugin-0 CrashLoopBackOff Message-ID: <2e0ba691-53ef-3513-8424-763d2ba89a5b@me.com> Dear all, I have a strange issue with Magnum on Yoga deployed by kolla-ansible. I noticed this on a prod cluster and so I deployed a fresh cluster from scratch and here I'm facing the very same issue. When deploying a a new K8s cluster the csi-cinder-controllerplugin-0 pod is in CrashLoopBackOff state. kubectl get pods -A NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-56448757b9-62lxw 1/1 Running 0 11d kube-system coredns-56448757b9-6mvtr 1/1 Running 0 11d kube-system csi-cinder-controllerplugin-0 4/5 CrashLoopBackOff 3339 (92s ago) 11d kube-system csi-cinder-nodeplugin-88ttn 2/2 Running 0 11d kube-system csi-cinder-nodeplugin-tnxpt 2/2 Running 0 11d kube-system dashboard-metrics-scraper-67f57ff746-2vhgb 1/1 Running 0 11d kube-system k8s-keystone-auth-5djxh 1/1 Running 0 11d kube-system kube-dns-autoscaler-6d5b5dc777-mm7qs 1/1 Running 0 11d kube-system kube-flannel-ds-795hj 1/1 Running 0 11d kube-system kube-flannel-ds-p76rf 1/1 Running 0 11d kube-system kubernetes-dashboard-7b88d986b4-5bhnf 1/1 Running 0 11d kube-system magnum-metrics-server-6c4c77844b-jbpml 1/1 Running 0 11d kube-system npd-sqsbr 1/1 Running 0 11d kube-system openstack-cloud-controller-manager-c5prb 1/1 Running 0 11d I searched the mailing list and noticed that I reported the very same issue last year: https://lists.openstack.org/pipermail/openstack-discuss/2022-May/028517.html but then I was either able to fix it or not able to reproduce it. I searched the web and found other users having the same issue: https://storyboard.openstack.org/#!/story/2010023 https://github.com/kubernetes/cloud-provider-openstack/issues/1845 They suggest to change the csi_snapshotter_tag to v4.0.0. I haven't tried it yet, but I have the feeling that something in my deployment is not correct since it wants to use v1.2.2 which is a very old version. 
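In case it is useful to anyone else hitting this, the workaround from the reports linked above would be to pin a newer snapshotter image through the cluster template labels. Untested on my side, and the template/cluster names below are only placeholders:

```
# pin the csi-snapshotter sidecar via a Magnum label when creating a template
openstack coe cluster template create k8s-template-snapfix \
    --coe kubernetes \
    --image fedora-coreos-35 \
    --external-network public \
    --labels csi_snapshotter_tag=v4.0.0

# or override just this label on top of an existing template at cluster creation
openstack coe cluster create k8s-test \
    --cluster-template <existing-template> \
    --labels csi_snapshotter_tag=v4.0.0 \
    --merge-labels
```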
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulling 26m (x3382 over 12d) kubelet Pulling image "quay.io/k8scsi/csi-snapshotter:v1.2.2" Warning BackOff 81s (x79755 over 12d) kubelet Back-off restarting failed container kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME k8s-admin-test-old-ygahdfhciitx-master-0 Ready master 12d v1.23.3 10.0.0.132 172.28.4.124 Fedora CoreOS 35.20220410.3.1 5.16.18-200.fc35.x86_64 docker://20.10.12 k8s-admin-test-old-ygahdfhciitx-node-0 Ready 12d v1.23.3 10.0.0.160 172.28.4.128 Fedora CoreOS 35.20220410.3.1 5.16.18-200.fc35.x86_64 docker://20.10.12 I deployed on Rocky Linux 8.7 using latest Kolla-ansible for Yoga: [vagrant at seed ~]$ kolla-ansible --version 14.7.1 [vagrant at seed ~]$ cat /etc/os-release NAME="Rocky Linux" VERSION="8.7 (Green Obsidian)" ID="rocky" ID_LIKE="rhel centos fedora" VERSION_ID="8.7" PLATFORM_ID="platform:el8" PRETTY_NAME="Rocky Linux 8.7 (Green Obsidian)" ANSI_COLOR="0;32" LOGO="fedora-logo-icon" CPE_NAME="cpe:/o:rocky:rocky:8:GA" HOME_URL="https://rockylinux.org/" BUG_REPORT_URL="https://bugs.rockylinux.org/" ROCKY_SUPPORT_PRODUCT="Rocky-Linux-8" ROCKY_SUPPORT_PRODUCT_VERSION="8.7" REDHAT_SUPPORT_PRODUCT="Rocky Linux" REDHAT_SUPPORT_PRODUCT_VERSION="8.7" Can you please help or clarify? Best Regards, Oliver From stephenfin at redhat.com Tue Jan 3 14:54:17 2023 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 03 Jan 2023 14:54:17 +0000 Subject: [nova] Do openstack support USB passthrough In-Reply-To: References: Message-ID: <096211a0638c6e2fb6b488e20c87e132f81ee947.camel@redhat.com> On Mon, 2022-12-26 at 10:04 +0000, ??? wrote: > Hi, all > > I want to ask if openstack support USB passthrough now? > > Or if I want the instance to recognize the USB flash drive on the > host, do you have any suggestions? > > Thanks, > Han This isn't supported in nova and probably never will be. The closest you can get is to passthrough an entire USB controller as suggested by this blog [1], but that's really a hack and I 100% would not use it in production. Stephen [1] https://egallen.com/openstack-usb-passthrough/ From stephenfin at redhat.com Tue Jan 3 14:57:56 2023 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 03 Jan 2023 14:57:56 +0000 Subject: Nova libvirt/kvm sound device In-Reply-To: References: Message-ID: On Tue, 2022-12-27 at 17:40 +0200, Zakhar Kirpichenko wrote: > Hi!? > > I'd like to have the following configuration added to every guest on a > specific host managed by Nova and libvirt/kvm:? > > ? ? > ? ? ?
function='0x0'/> > ? ? > > When I add the device manually to instance?xml, it works as intended but the > instance configuration gets overwritten on instance stop/start or hard reboot > via Nova. Modifying libvirt's XML behind nova's back is a big no-no. You break the contract between the two. If you wanted audio support, you'd need to add this support to nova itself. This would require a spec, quite a bit of coding, and would not be backported. tbh, it's also hard to see this being prioritized since audio support for cloud-based VMs is a rather unusual request. If you wanted to persue this approach though, feel free to reach out on IRC (#openstack-nova on OFTC) and we can guide you. Stephen > > What is the currently supported / proper way to add a virtual sound device > without having to modify libvirt or Nova code? I would appreciate any advice.? > > Best regards,? > Zakhar From michal.arbet at ultimum.io Tue Jan 3 15:32:08 2023 From: michal.arbet at ultimum.io (Michal Arbet) Date: Tue, 3 Jan 2023 16:32:08 +0100 Subject: [kolla] Propose Bartosz Bezak for core reviewer In-Reply-To: References: Message-ID: +2 From me. Michal Arbet Openstack Engineer Ultimum Technologies a.s. Na Po???? 1047/26, 11000 Praha 1 Czech Republic +420 604 228 897 michal.arbet at ultimum.io *https://ultimum.io * LinkedIn | Twitter | Facebook po 2. 1. 2023 v 20:03 odes?latel Pierre Riteau napsal: > +2 for Kayobe. > > On Thu, 29 Dec 2022 at 11:13, Micha? Nasiadka wrote: > >> Hello Koalas, >> >> I?d like to propose Bartosz Bezak as a core reviewer for Kolla, >> Kolla-Ansible, Kayobe and ansible-collection-kolla. >> >> Bartosz has recently went through release preparations and release >> process itself for all mentioned repositories, has been a great deal of >> help in meeting the cycle trailing projects deadline. >> In addition to that, he?s been the main author of Ubuntu Jammy and EL9 >> (Rocky Linux 9 to be precise) support in Kayobe for Zed release, as well as >> fixing various bugs amongst all four repositories. >> >> Bartosz also brings OVN knowledge, which will make the review process for >> those patches better (and improve our overall review velocity, which hasn?t >> been great recently). >> >> Kind regards, >> Michal Nasiadka >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue Jan 3 16:14:27 2023 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 3 Jan 2023 17:14:27 +0100 Subject: [nova][placement] Specs freeze will be on Jan 12th Message-ID: Hey, As we didn't have a quorum for today's meeting, I prefer to move our deadline for freezing the Antelope accepted specs by one week, instead of this Thursday. This means we will continue to review open specs until January the 12th. That being said, as a reminder, our Feature Freeze is on February 16th [1] which means we will only have *five* weeks for reviewing all the implementations, so please start to work on your feature branch and don't wait for your spec to be accepted or we wouldn't have time for reviewing all the open features. Thanks, -Sylvain [1] https://releases.openstack.org/antelope/schedule.html#a-ff -------------- next part -------------- An HTML attachment was scrubbed... URL: From jahson.babel at cc.in2p3.fr Tue Jan 3 16:25:00 2023 From: jahson.babel at cc.in2p3.fr (Jahson Babel) Date: Tue, 3 Jan 2023 17:25:00 +0100 Subject: [ops] QOS on flavor breaking live migration from CentOS 7 to 8 Message-ID: Hello, I'm trying to live migrate some VMs from CentOS 7 to Rocky 8. 
Everything runs smoothly when there are no extra specs on the flavors, but
things get more complicated when those are set, especially when
using quota:vif_burst for QoS.
I know that we aren't supposed to use this for QoS now, but it's an old
cluster and it was done that way at the time. So the VMs kind of have all
those specs tied to them.

When live migrating a VM, this shows up in nova's logs:
driver.py _live_migration_operation nova.virt.libvirt.driver Live
Migration failure: internal error: Child process (tc class add dev
tapxxxxxxxx-xx parent 1: classid 1:1 htb rate 250000kbps ceil
2000000kbps burst 60000000kb quantum 21333) unexpected exit status 1:
Illegal "burst"
This bug covers the problem: https://bugs.launchpad.net/nova/+bug/1960840
So it seems to be expected behavior. I also forgot to mention that I'm
on the OpenStack Train version and the file mentioned in the launchpad bug is
not present in this version.
By using Rocky 8 I have to use an updated libvirt that won't accept the
burst parameter we used to set. All available versions of libvirt on
Rocky 8 have changed behavior concerning the burst parameter.

I've done some testing to make things work, including removing the
extra_specs on the flavors and in the DB, removing them through libvirt, and
trying to modify the tc rules used by a VM, but it didn't work.
I have not yet tried to patch Nova or libvirt, but I don't really know
where to look.
The only thing that did work was to resize the VM to an identical flavor
without the extra_specs. But that causes a complete reboot of the VM. I
would like, if possible, to be able to live migrate the VMs instead, which is
much easier.

Is it possible to remove the extra_specs on the VMs and then live
migrate? Or should I just plan to resize/reboot all VMs without those
extra_specs?
Any advice will be appreciated.

Thank you for any help,
Best regards.

Jahson
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 4270 bytes
Desc: S/MIME Cryptographic Signature
URL: 

From smooney at redhat.com  Tue Jan  3 19:03:23 2023
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 03 Jan 2023 19:03:23 +0000
Subject: [nova] Do openstack support USB passthrough
In-Reply-To: <096211a0638c6e2fb6b488e20c87e132f81ee947.camel@redhat.com>
References: <096211a0638c6e2fb6b488e20c87e132f81ee947.camel@redhat.com>
Message-ID: <765850d1c170a510d34dafb4253ab97528829351.camel@redhat.com>

On Tue, 2023-01-03 at 14:54 +0000, Stephen Finucane wrote:
> On Mon, 2022-12-26 at 10:04 +0000, ??? wrote:
> > Hi, all
> > 
> > I want to ask if openstack support USB passthrough now?
> > 
> > Or if I want the instance to recognize the USB flash drive on the
> > host, do you have any suggestions?
> > 
> > Thanks,
> > Han
> 
> This isn't supported in nova and probably never will be. The closest you can get
> is to passthrough an entire USB controller as suggested by this blog [1], but
> that's really a hack and I 100% would not use it in production.

Ya, so we have discussed USB passthrough support a few times and it is something nova could add,
but there has been neither enough demand nor enough desire in the core team to actually do it.

The shortest path to enabling USB passthrough would likely be to add support to Cyborg and then add support for that to nova.
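For anyone who does want to try the controller passthrough hack referenced above today, it is just the normal PCI passthrough flow. A rough sketch only; the PCI address, vendor/product IDs and flavor name below are made-up examples, check lspci on your compute node:

```
# nova.conf on the compute node that owns the USB controller
[pci]
passthrough_whitelist = { "address": "0000:00:14.0" }
alias = { "vendor_id": "8086", "product_id": "8c31", "device_type": "type-PCI", "name": "usb-controller" }

# then a flavor that requests the whole controller:
#   openstack flavor set m1.usb --property "pci_passthrough:alias"="usb-controller:1"
```

Remember the alias also has to be present in the nova.conf read by the API/scheduler hosts, and the whole controller goes to a single guest.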
I am perhaps the most open of the nova cores to supporting USB passthrough, since I have wanted to add it in the past, but
if we were to support it, it would have to be similar to how we support PCI passthrough: static provisioning, and likely only of
stateless devices, which rules out USB flash drives.

USB GPS receivers for precision time stamping were one of the use cases raised in the past which we were somewhat open to;
the other was USB programmers/debuggers for cases where VMs are used in industrial test and automation workflows.

As Stephen said, the only way to do it today is to piggyback on PCI passthrough and pass through a USB controller, not a single device.

If we were ever to support this in nova directly, I would probably extend the PCI tracker to support other buses like USB, or use the generic
resource table created for persistent memory to model the devices. In either case we would want this capability to be placement native from the
start, so it would be both more and less work than you might imagine to do this right:
less work if we maintain the requirement for stateless devices only (i.e. no USB flash drives), more if you also need to handle multi-tenancy and
move operations, including data copying, erasure and/or encryption.

I would not expect this to change in the next few releases unless multiple operators provide feedback that this is a broadly desired capability.
Without a top-level generic device API for multiple types of devices (vGPU, USB, PCI) that is decoupled from the flavor, or an abstraction like
the Cyborg device profile or PCI alias, it is hard to see a clean way to model this in our API. That is why enabling it in Cyborg and then extending
nova to support device profiles with a device type of USB is the simpler solution from a nova perspective, but that is non-trivial from an operational
perspective as you require Cyborg to utilise the feature. Doing it via a usb_alias in the flavor has all the drawbacks of the pci_alias: static
configuration that must match on all compute nodes and further proliferation of flavor explosion. This is one of the reasons we have not added this in
the past. The work to do it in the libvirt driver would not be hard, but the maintenance and operational overhead of using it for operators is
non-trivial.

> 
> Stephen
> 
> [1] https://egallen.com/openstack-usb-passthrough/
> 
> 

From smooney at redhat.com  Tue Jan  3 19:20:50 2023
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 03 Jan 2023 19:20:50 +0000
Subject: [ops] QOS on flavor breaking live migration from CentOS 7 to 8
In-Reply-To: 
References: 
Message-ID: 

Hi, yes this is a known issue.

So the simple answer is: resize all affected VMs instead of live migrating them.
The longer answer is that we have been discussing this internally at Red Hat on and off for
some time now.
https://bugs.launchpad.net/nova/+bug/1960840 is one example where this happens.

There is another case, for the CPU based quotas, that happens when going from rhel/centos 8->9:
basically in the 8->9 change the cgroups implementation changes from v1 to v2.
https://bugzilla.redhat.com/show_bug.cgi?id=2035518

When addressing that, we did not have a good universal solution, other than resize, for instances that hardcoded a value that
was incompatible with the cgroups_v2 API in the kernel.

In https://review.opendev.org/c/openstack/nova/+/824048/ we removed automatically adding the
cpu_shares cgroup option to enable booting VMs with more than 8 CPUs.

We did not come up with any option other than resize for the other quotas that were in a similar situation.
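To make the resize route concrete, a rough sketch; the flavor names, sizes and server ID below are only examples and need to match your real flavors minus the quota:vif_* extra specs:

```
# check which quota extra specs the current flavor carries
openstack flavor show m1.medium -f value -c properties

# create an equivalent flavor without the tc-based quota:vif_* extra specs
openstack flavor create m1.medium.noqos --vcpus 2 --ram 4096 --disk 40

# resize the affected instance onto it and confirm
openstack server resize --flavor m1.medium.noqos <server-id>
openstack server resize confirm <server-id>   # "--confirm" on older clients
```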
The one option that we considered possible was to extend nova-manage to allow the embedded flavor to be updated.
This would be similar to what we did to enable image properties to be modified for changing machine types:

https://docs.openstack.org/nova/latest/cli/nova-manage.html#image-property-commands

We discussed at the time that, while we did not want to allow flavor extra specs to be modified, we might reconsider that
if the quota issue forced our hand or we had a similar need due to forces beyond our control, i.e. we needed to provide a way beyond
resize, e.g. due to operating system changes. What makes image properties and flavor extra specs different is that image properties can
only be updated by rebuild, which is a destructive operation. Extra specs are updated by resize, which is not a destructive operation.
That is one of the reasons we gave special consideration to image properties and did not do the same for extra specs.

If we allowed the same for flavor extra specs you would still have to stop the instance, make the change and then migrate the instance;
resize automates that, so it is generally a better fit. We were also concerned that adding it to nova-manage would result in it being abused
to modify instances in ways that were either invalid for the host (changing the NUMA topology, adding traits/resource requests not tracked in placement)
or otherwise break the instances in weird ways. That could happen via image properties too, but it is less likely.



On Tue, 2023-01-03 at 17:25 +0100, Jahson Babel wrote:
> Hello,
> 
> I'm trying to live migrate some VMs from CentOS 7 to Rocky 8. 
> Everything run smoothly when there is no extra specs on flavors but 
> things getting more complicated when those are fixed. Especially when 
> using quota:vif_burst for QOS.
> I know that we aren't supposed to use this for QOS now but it's an old 
> cluster and it was done that way at the time. So VMs kinda have all 
> those specs tied to them.
> 
> When live migrate a VM this show up in the nova's logs :
> driver.py _live_migration_operation nova.virt.libvirt.driver? Live 
> Migration failure: internal error: Child process (tc class add dev 
> tapxxxxxxxx-xx parent 1: classid 1:1 htb rate 250000kbps ceil 
> 2000000kbps burst 60000000kb quantum 21333) unexpected exit status 1: 
> Illegal "burst"
> This bug cover the problem : https://bugs.launchpad.net/nova/+bug/1960840
> So it's seems to be a normal behavior. Plus I forgot to mention that I'm 
> on OpenStack Train version and the file mentioned in the launchpad is 
> not present for this version.
> By using Rocky 8 I have to use an updated libvirt that won't accept the 
> burst parameter we used to set. All available versions of libvirt on 
> Rocky 8 have changed behavior concerning the burst parameter.
> 
> I've done some testing to make things works including removing the 
> extra_specs on flavors and in the DB, removing it through libvirt and 
> trying to modify tc rules used by a VM but it didn't worked.
> I have not tried yet to patch Nova or Libvirt but I don't really know 
> where to look for.
> The only thing that did work was to resize the VM to an identical flavor 
> without the extra_specs. But this induce a complete reboot of the VM. I 
> would like, if possible, to be able to live migrate the VMs which is 
> quite easier.
> 
> Is it possible to remove the extra_specs on the VMs and then live 
> migrate ? Or should I just plan to resize/reboot all VMs without those 
> extra_specs ?
> Any advise will be appreciated.
> 
> Thank you for any help,
> Best regards.
> > Jahson From satish.txt at gmail.com Tue Jan 3 20:23:19 2023 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 3 Jan 2023 15:23:19 -0500 Subject: [cloudkitty] Instances billing based on tags In-Reply-To: References: Message-ID: Wow! very interesting, I will poke around and see how it's feasible. Very curious how it will represent that data in horizon UI. Currently I am seeing rates based on project_id so assuming it will show based on customer_id. correct? On Tue, Jan 3, 2023 at 8:13 AM Rafael Weing?rtner < rafaelweingartner at gmail.com> wrote: > You can do that. Basically, you can start collecting those attributes you > want for billing (e.g. tags) via Ceilometer dynamic pollster (that is the > easiest way to achieve this). Then, you need to configure the resource type > in Gnocchi to store this extra attribute, and of course, configure > CloudKitty to collect/use it. Both in the metrics.yml and then in the > hashmap or Pyscript rules. > > On Mon, Jan 2, 2023 at 11:01 PM Satish Patel wrote: > >> Folks, >> >> We have an AWS project and in a single project we run multiple customers >> so for billing we use tags. In short every vm instance has a tag (customer >> name) and that way we can produce bills for each customer. >> >> Recently I am playing with openstack cloudkitty and it works with >> standard cases like project based billing. But does it support tags based >> billing just similar to what i have explained in the above aws example? >> >> >> > > -- > Rafael Weing?rtner > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Tue Jan 3 20:27:03 2023 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 3 Jan 2023 15:27:03 -0500 Subject: [nova][keystone] workload identity? Message-ID: Hi folks: I?m wondering if there?s anyone who?s had some thought or perhaps some work/progress/thoughts on workload identity (example service accounts for VMs) ? is that something that?s really far away for us? Has someone thought about outlining what?s needed? Thanks Mohammed -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Tue Jan 3 20:39:18 2023 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 3 Jan 2023 17:39:18 -0300 Subject: [cloudkitty] Instances billing based on tags In-Reply-To: References: Message-ID: It is showing metrics based on the scope configured. I guess you have set the scopes to be mapped as project IDs. If you want other attribute to represent the scope, you need to change that in CloudKitty. On Tue, Jan 3, 2023 at 5:23 PM Satish Patel wrote: > Wow! very interesting, I will poke around and see how it's feasible. Very > curious how it will represent that data in horizon UI. Currently I am > seeing rates based on project_id so assuming it will show based on > customer_id. correct? > > > On Tue, Jan 3, 2023 at 8:13 AM Rafael Weing?rtner < > rafaelweingartner at gmail.com> wrote: > >> You can do that. Basically, you can start collecting those attributes you >> want for billing (e.g. tags) via Ceilometer dynamic pollster (that is the >> easiest way to achieve this). Then, you need to configure the resource type >> in Gnocchi to store this extra attribute, and of course, configure >> CloudKitty to collect/use it. Both in the metrics.yml and then in the >> hashmap or Pyscript rules. 
>> >> On Mon, Jan 2, 2023 at 11:01 PM Satish Patel >> wrote: >> >>> Folks, >>> >>> We have an AWS project and in a single project we run multiple customers >>> so for billing we use tags. In short every vm instance has a tag (customer >>> name) and that way we can produce bills for each customer. >>> >>> Recently I am playing with openstack cloudkitty and it works with >>> standard cases like project based billing. But does it support tags based >>> billing just similar to what i have explained in the above aws example? >>> >>> >>> >> >> -- >> Rafael Weing?rtner >> > -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Wed Jan 4 01:39:29 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Wed, 4 Jan 2023 08:39:29 +0700 Subject: [openstack][cinder] Assign each storage backend to each AZ Message-ID: Hello guys. I took time to search for this question but I can't find the answer. I have an Openstack private cloud and I use an AZ to a department. For example, AZ-IT for IT department AZ-Sale for Sale department... I will prepare 2 storage backends for each AZ. My goal is that when users launch an instance by choosing AZ then It will use only the backend for this AZ. Would Openstack support my goal? Thanks for reading my email. Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Jan 4 01:48:08 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 03 Jan 2023 17:48:08 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2023 Jan 4 at 1600 UTC In-Reply-To: <18573cdd98e.10450f502215756.4447083195608302191@ghanshyammann.com> References: <18573cdd98e.10450f502215756.4447083195608302191@ghanshyammann.com> Message-ID: <1857a76dd5d.d4c0fb53294894.2449631818714075413@ghanshyammann.com> Hello Everyone, Below is the agenda for the TC meeting scheduled on Jan 4 at 1600 UTC. Location: Zoom Video Meeting: https://us06web.zoom.us/j/87108541765?pwd=emlXVXg4QUxrUTlLNDZ2TTllWUM3Zz09 * Roll call * Follow up on past action items * Gate health check * 2023.1 TC tracker checks: ** https://etherpad.opendev.org/p/tc-2023.1-tracker * Cleanup of PyPI maintainer list for OpenStack Projects ** There are other maintainers present along with 'openstackci', A few examples: *** https://pypi.org/project/murano/ *** https://pypi.org/project/glance/ ** More new maintainers are being added without knowledge to OpenStack and by skipping our contribution process *** Example: https://github.com/openstack/xstatic-font-awesome/pull/2 * Mistral situation ** Release team proposing it to mark its release deprecated *** https://review.opendev.org/c/openstack/governance/+/866562 ** New volunteers from OVHCloud are added in Mistral core team. *** https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031421.html * Recurring tasks check ** Bare 'recheck' state *** https://etherpad.opendev.org/p/recheck-weekly-summary * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 02 Jan 2023 10:45:48 -0800 Ghanshyam Mann wrote --- > Hello Everyone, > > The technical Committee's next weekly meeting is scheduled for 2023 Jan 4, at 1600 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Tuesday, Jan 3 at 2100 UTC. 
> > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From kennelson11 at gmail.com Wed Jan 4 03:08:34 2023 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 3 Jan 2023 21:08:34 -0600 Subject: 2023 PTGs + Virtual Team Signup In-Reply-To: References: Message-ID: Hello Everyone! Wanted to resurface this email in your inboxes in case you were out on vacation when I sent the original! Email ptg at openinfra.dev if you have any questions! -Kendall Nelson (diablo_rojo) On Wed, Dec 14, 2022 at 8:22 AM Kendall Nelson wrote: > Hello Everyone! > > The next PTG on the calendar will be our usual virtual format and it will > take place March 27 -31 right after the 2023.1 release. The project signup > survey is already ready to go! If your team is interested in participating, > sign them up here[1]! Registration is also open now[2] > > We have also heard many requests from the contributor community to bring > back in person events and also heard that with global travel reduction, it > would be better travel wise for the community at large to combine it with > the OpenInfra Summit[3]. So, an in-person PTG will take place Wednesday and > Thursday! June 14 -15th. OpenInfra Summit registration will be required to > participate, and contributor pricing and travel support[4] will be > available. > > We will also be planning another virtual PTG in the latter half of the > year. The exact date is still being finalized. > > This will give you three opportunities to get your team together in 2023: > two virtual and one in-person. > > > - > > Virtual: March 27-31 > - > > In- Person: June 14-15 collocated with the OpenInfra Summit > - > > Virtual: TBD, but second half of the year > > > As any project in our community, you are free to take advantage of all or > some of the PTGs this coming year. I, for one, hope to see you and your > project at as many as make sense for your team! > > To make sure we are as inclusive as possible I will continue to encourage > team moderators to write summaries of their discussions to gather and > promote after the events as well. > > Stay tuned for more info as we get closer, but the team signup survey[1] > has opened for the first virtual PTG and is ready for you! > > -Kendall (diablo_rojo) > > [1] https://openinfrafoundation.formstack.com/forms/march2023_vptg_survey > > [2] https://openinfra-ptg.eventbrite.com > > [3]https://openinfra.dev/summit/vancouver-2023 > > [4] https://openinfrafoundation.formstack.com/forms/openinfra_tsp > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Wed Jan 4 06:53:37 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Wed, 4 Jan 2023 12:23:37 +0530 Subject: [openstack][cinder] Assign each storage backend to each AZ In-Reply-To: References: Message-ID: Hi, >From the description, I'm assuming the instances will be boot from volume. In that case, you will need to create a volume type for each backend and you can use 'extra_specs' properties in the volume type to assign a volume type to a particular AZ. In this case, if you're already creating one backend per AZ then a volume type linked to that backend should be good. Now you will need to create a bootable volume and launch an instance with it. Again, the instance should be launched in the AZ as used in the volume type to support your use case. 
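A rough sketch of that flow, assuming a backend that cinder-volume already exposes in an AZ-IT zone; the type, backend, image, flavor and network names below are only examples:

```
# volume type pinned to the backend serving that AZ
openstack volume type create az-it-type
openstack volume type set az-it-type \
    --property volume_backend_name=<backend-name-from-cinder.conf> \
    --property RESKEY:availability_zones=AZ-IT

# bootable volume in that AZ, then an instance in the same AZ
openstack volume create --type az-it-type --availability-zone AZ-IT \
    --image <image> --size 50 boot-vol-it
openstack server create --volume boot-vol-it --availability-zone AZ-IT \
    --flavor <flavor> --network <network> vm-it
```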
Also if you want to restrict volumes of a particular AZ to be attached to the instance of the same AZ, you can use the config option *cross_az_attach*[1] which will allow/disallow cross AZ attachments. Hope that helps. [1] https://docs.openstack.org/nova/latest/configuration/config.html#cinder.cross_az_attach Thanks Rajat Dhasmana On Wed, Jan 4, 2023 at 7:31 AM Nguy?n H?u Kh?i wrote: > Hello guys. > I took time to search for this question but I can't find the answer. > > I have an Openstack private cloud and I use an AZ to a department. > For example, > AZ-IT for IT department > AZ-Sale for Sale department... > > I will prepare 2 storage backends for each AZ. > > My goal is that when users launch an instance by choosing AZ then It will > use only the backend for this AZ. > > Would Openstack support my goal? > > Thanks for reading my email. > > Nguyen Huu Khoi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Wed Jan 4 07:31:37 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Wed, 4 Jan 2023 14:31:37 +0700 Subject: [openstack][cinder] Assign each storage backend to each AZ In-Reply-To: References: Message-ID: Thanks for the answer. But I cannot find the way to configure the storage backend per AZ, Would you give me some suggestions? Nguyen Huu Khoi On Wed, Jan 4, 2023 at 1:53 PM Rajat Dhasmana wrote: > Hi, > > From the description, I'm assuming the instances will be boot from volume. > In that case, you will need to create a volume type for each backend and > you can use 'extra_specs' properties in the volume type to assign a volume > type to a particular AZ. In this case, if you're already creating one > backend per AZ then a volume type linked to that backend should be good. > Now you will need to create a bootable volume and launch an instance with > it. Again, the instance should be launched in the AZ as used in the volume > type to support your use case. > Also if you want to restrict volumes of a particular AZ to be attached to > the instance of the same AZ, you can use the config option > *cross_az_attach*[1] which will allow/disallow cross AZ attachments. > Hope that helps. > > [1] > https://docs.openstack.org/nova/latest/configuration/config.html#cinder.cross_az_attach > > Thanks > Rajat Dhasmana > > On Wed, Jan 4, 2023 at 7:31 AM Nguy?n H?u Kh?i > wrote: > >> Hello guys. >> I took time to search for this question but I can't find the answer. >> >> I have an Openstack private cloud and I use an AZ to a department. >> For example, >> AZ-IT for IT department >> AZ-Sale for Sale department... >> >> I will prepare 2 storage backends for each AZ. >> >> My goal is that when users launch an instance by choosing AZ then It will >> use only the backend for this AZ. >> >> Would Openstack support my goal? >> >> Thanks for reading my email. >> >> Nguyen Huu Khoi >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wchy1001 at gmail.com Wed Jan 4 07:47:56 2023 From: wchy1001 at gmail.com (chunyang wu) Date: Wed, 4 Jan 2023 15:47:56 +0800 Subject: [kolla] Propose Bartosz Bezak for core reviewer In-Reply-To: References: Message-ID: +2 from me? Micha? Nasiadka ?2022?12?29??? 18:03??? > Hello Koalas, > > I?d like to propose Bartosz Bezak as a core reviewer for Kolla, > Kolla-Ansible, Kayobe and ansible-collection-kolla. 
> > Bartosz has recently went through release preparations and release process > itself for all mentioned repositories, has been a great deal of help in > meeting the cycle trailing projects deadline. > In addition to that, he?s been the main author of Ubuntu Jammy and EL9 > (Rocky Linux 9 to be precise) support in Kayobe for Zed release, as well as > fixing various bugs amongst all four repositories. > > Bartosz also brings OVN knowledge, which will make the review process for > those patches better (and improve our overall review velocity, which hasn?t > been great recently). > > Kind regards, > Michal Nasiadka > -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Jan 4 08:00:00 2023 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 4 Jan 2023 08:00:00 +0000 Subject: [cinder] Bug Report from 01-04-2023 Message-ID: This is a bug report from 12-21-2022 to 01-04-2023. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- High - https://bugs.launchpad.net/cinder/+bug/2000436 "250 unit test failures under Python 3.11." Unassigned. - https://bugs.launchpad.net/cinder/+bug/2000489 "Dell Powerstore won't delete volume when storage migrate." Unassigned. Medium - https://bugs.launchpad.net/cinder/+bug/2000724 "cinder should not send external events when extending online is called by glance." In Progress. Cheers, -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Wed Jan 4 08:03:30 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Wed, 4 Jan 2023 13:33:30 +0530 Subject: [openstack][cinder] Assign each storage backend to each AZ In-Reply-To: References: Message-ID: On Wed, Jan 4, 2023 at 1:01 PM Nguy?n H?u Kh?i wrote: > Thanks for the answer. > But I cannot find the way to configure the storage backend per AZ, Would > you give me some suggestions? > It totally depends on the deployment method you're using. It could be either tripleo, ansible etc and every deployment method should provide a way to set an availability zone for a volume backend. I'm not a deployment expert but a specific deployment team needs to be consulted for the same. > Nguyen Huu Khoi > > > On Wed, Jan 4, 2023 at 1:53 PM Rajat Dhasmana wrote: > >> Hi, >> >> From the description, I'm assuming the instances will be boot from >> volume. In that case, you will need to create a volume type for each >> backend and you can use 'extra_specs' properties in the volume type to >> assign a volume type to a particular AZ. In this case, if you're already >> creating one backend per AZ then a volume type linked to that backend >> should be good. >> Now you will need to create a bootable volume and launch an instance with >> it. Again, the instance should be launched in the AZ as used in the volume >> type to support your use case. >> Also if you want to restrict volumes of a particular AZ to be attached to >> the instance of the same AZ, you can use the config option >> *cross_az_attach*[1] which will allow/disallow cross AZ attachments. >> Hope that helps. >> >> [1] >> https://docs.openstack.org/nova/latest/configuration/config.html#cinder.cross_az_attach >> >> Thanks >> Rajat Dhasmana >> >> On Wed, Jan 4, 2023 at 7:31 AM Nguy?n H?u Kh?i >> wrote: >> >>> Hello guys. 
>>> I took time to search for this question but I can't find the answer. >>> >>> I have an Openstack private cloud and I use an AZ to a department. >>> For example, >>> AZ-IT for IT department >>> AZ-Sale for Sale department... >>> >>> I will prepare 2 storage backends for each AZ. >>> >>> My goal is that when users launch an instance by choosing AZ then It >>> will use only the backend for this AZ. >>> >>> Would Openstack support my goal? >>> >>> Thanks for reading my email. >>> >>> Nguyen Huu Khoi >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Wed Jan 4 08:04:54 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Wed, 4 Jan 2023 15:04:54 +0700 Subject: [openstack][cinder] Assign each storage backend to each AZ In-Reply-To: References: Message-ID: Ok, thanks for the clarification. :) Nguyen Huu Khoi On Wed, Jan 4, 2023 at 3:03 PM Rajat Dhasmana wrote: > > > On Wed, Jan 4, 2023 at 1:01 PM Nguy?n H?u Kh?i > wrote: > >> Thanks for the answer. >> But I cannot find the way to configure the storage backend per AZ, Would >> you give me some suggestions? >> > > It totally depends on the deployment method you're using. It could be > either tripleo, ansible etc and every deployment method should provide a > way to set an availability zone for a volume backend. I'm not a deployment > expert but a specific deployment team needs to be consulted for the same. > > >> Nguyen Huu Khoi >> >> >> On Wed, Jan 4, 2023 at 1:53 PM Rajat Dhasmana >> wrote: >> >>> Hi, >>> >>> From the description, I'm assuming the instances will be boot from >>> volume. In that case, you will need to create a volume type for each >>> backend and you can use 'extra_specs' properties in the volume type to >>> assign a volume type to a particular AZ. In this case, if you're already >>> creating one backend per AZ then a volume type linked to that backend >>> should be good. >>> Now you will need to create a bootable volume and launch an instance >>> with it. Again, the instance should be launched in the AZ as used in the >>> volume type to support your use case. >>> Also if you want to restrict volumes of a particular AZ to be attached >>> to the instance of the same AZ, you can use the config option >>> *cross_az_attach*[1] which will allow/disallow cross AZ attachments. >>> Hope that helps. >>> >>> [1] >>> https://docs.openstack.org/nova/latest/configuration/config.html#cinder.cross_az_attach >>> >>> Thanks >>> Rajat Dhasmana >>> >>> On Wed, Jan 4, 2023 at 7:31 AM Nguy?n H?u Kh?i < >>> nguyenhuukhoinw at gmail.com> wrote: >>> >>>> Hello guys. >>>> I took time to search for this question but I can't find the answer. >>>> >>>> I have an Openstack private cloud and I use an AZ to a department. >>>> For example, >>>> AZ-IT for IT department >>>> AZ-Sale for Sale department... >>>> >>>> I will prepare 2 storage backends for each AZ. >>>> >>>> My goal is that when users launch an instance by choosing AZ then It >>>> will use only the backend for this AZ. >>>> >>>> Would Openstack support my goal? >>>> >>>> Thanks for reading my email. >>>> >>>> Nguyen Huu Khoi >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From saphi070 at gmail.com Wed Jan 4 08:58:29 2023 From: saphi070 at gmail.com (Sa Pham) Date: Wed, 4 Jan 2023 15:58:29 +0700 Subject: [openstack][cinder] Assign each storage backend to each AZ In-Reply-To: References: Message-ID: You have to run cinder-volume service for each AZ. 
And in your configuration of cinder-volume you need to specify storage_availability_zone for that zone. With nova-compute, you have to create a host aggregate with an availability zone option for these compute nodes. On Wed, Jan 4, 2023 at 3:42 PM Nguy?n H?u Kh?i wrote: > Ok, thanks for the clarification. :) > Nguyen Huu Khoi > > > On Wed, Jan 4, 2023 at 3:03 PM Rajat Dhasmana wrote: > >> >> >> On Wed, Jan 4, 2023 at 1:01 PM Nguy?n H?u Kh?i >> wrote: >> >>> Thanks for the answer. >>> But I cannot find the way to configure the storage backend per AZ, Would >>> you give me some suggestions? >>> >> >> It totally depends on the deployment method you're using. It could be >> either tripleo, ansible etc and every deployment method should provide a >> way to set an availability zone for a volume backend. I'm not a deployment >> expert but a specific deployment team needs to be consulted for the same. >> >> >>> Nguyen Huu Khoi >>> >>> >>> On Wed, Jan 4, 2023 at 1:53 PM Rajat Dhasmana >>> wrote: >>> >>>> Hi, >>>> >>>> From the description, I'm assuming the instances will be boot from >>>> volume. In that case, you will need to create a volume type for each >>>> backend and you can use 'extra_specs' properties in the volume type to >>>> assign a volume type to a particular AZ. In this case, if you're already >>>> creating one backend per AZ then a volume type linked to that backend >>>> should be good. >>>> Now you will need to create a bootable volume and launch an instance >>>> with it. Again, the instance should be launched in the AZ as used in the >>>> volume type to support your use case. >>>> Also if you want to restrict volumes of a particular AZ to be attached >>>> to the instance of the same AZ, you can use the config option >>>> *cross_az_attach*[1] which will allow/disallow cross AZ attachments. >>>> Hope that helps. >>>> >>>> [1] >>>> https://docs.openstack.org/nova/latest/configuration/config.html#cinder.cross_az_attach >>>> >>>> Thanks >>>> Rajat Dhasmana >>>> >>>> On Wed, Jan 4, 2023 at 7:31 AM Nguy?n H?u Kh?i < >>>> nguyenhuukhoinw at gmail.com> wrote: >>>> >>>>> Hello guys. >>>>> I took time to search for this question but I can't find the answer. >>>>> >>>>> I have an Openstack private cloud and I use an AZ to a department. >>>>> For example, >>>>> AZ-IT for IT department >>>>> AZ-Sale for Sale department... >>>>> >>>>> I will prepare 2 storage backends for each AZ. >>>>> >>>>> My goal is that when users launch an instance by choosing AZ then It >>>>> will use only the backend for this AZ. >>>>> >>>>> Would Openstack support my goal? >>>>> >>>>> Thanks for reading my email. >>>>> >>>>> Nguyen Huu Khoi >>>>> >>>> -- Sa Pham Dang Skype: great_bn Phone/Telegram: 0986.849.582 -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Wed Jan 4 09:00:09 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Wed, 4 Jan 2023 16:00:09 +0700 Subject: [openstack][cinder] Assign each storage backend to each AZ In-Reply-To: References: Message-ID: OK, I got it. Many thanks to my countryman :) Nguyen Huu Khoi On Wed, Jan 4, 2023 at 3:58 PM Sa Pham wrote: > You have to run cinder-volume service for each AZ. And in your > configuration of cinder-volume you need to specify > storage_availability_zone for that zone. > > With nova-compute, you have to create a host aggregate with an > availability zone option for these compute nodes. 
> > > > On Wed, Jan 4, 2023 at 3:42 PM Nguy?n H?u Kh?i > wrote: > >> Ok, thanks for the clarification. :) >> Nguyen Huu Khoi >> >> >> On Wed, Jan 4, 2023 at 3:03 PM Rajat Dhasmana >> wrote: >> >>> >>> >>> On Wed, Jan 4, 2023 at 1:01 PM Nguy?n H?u Kh?i < >>> nguyenhuukhoinw at gmail.com> wrote: >>> >>>> Thanks for the answer. >>>> But I cannot find the way to configure the storage backend per AZ, >>>> Would you give me some suggestions? >>>> >>> >>> It totally depends on the deployment method you're using. It could be >>> either tripleo, ansible etc and every deployment method should provide a >>> way to set an availability zone for a volume backend. I'm not a deployment >>> expert but a specific deployment team needs to be consulted for the same. >>> >>> >>>> Nguyen Huu Khoi >>>> >>>> >>>> On Wed, Jan 4, 2023 at 1:53 PM Rajat Dhasmana >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> From the description, I'm assuming the instances will be boot from >>>>> volume. In that case, you will need to create a volume type for each >>>>> backend and you can use 'extra_specs' properties in the volume type to >>>>> assign a volume type to a particular AZ. In this case, if you're already >>>>> creating one backend per AZ then a volume type linked to that backend >>>>> should be good. >>>>> Now you will need to create a bootable volume and launch an instance >>>>> with it. Again, the instance should be launched in the AZ as used in the >>>>> volume type to support your use case. >>>>> Also if you want to restrict volumes of a particular AZ to be attached >>>>> to the instance of the same AZ, you can use the config option >>>>> *cross_az_attach*[1] which will allow/disallow cross AZ attachments. >>>>> Hope that helps. >>>>> >>>>> [1] >>>>> https://docs.openstack.org/nova/latest/configuration/config.html#cinder.cross_az_attach >>>>> >>>>> Thanks >>>>> Rajat Dhasmana >>>>> >>>>> On Wed, Jan 4, 2023 at 7:31 AM Nguy?n H?u Kh?i < >>>>> nguyenhuukhoinw at gmail.com> wrote: >>>>> >>>>>> Hello guys. >>>>>> I took time to search for this question but I can't find the answer. >>>>>> >>>>>> I have an Openstack private cloud and I use an AZ to a department. >>>>>> For example, >>>>>> AZ-IT for IT department >>>>>> AZ-Sale for Sale department... >>>>>> >>>>>> I will prepare 2 storage backends for each AZ. >>>>>> >>>>>> My goal is that when users launch an instance by choosing AZ then It >>>>>> will use only the backend for this AZ. >>>>>> >>>>>> Would Openstack support my goal? >>>>>> >>>>>> Thanks for reading my email. >>>>>> >>>>>> Nguyen Huu Khoi >>>>>> >>>>> > > -- > Sa Pham Dang > Skype: great_bn > Phone/Telegram: 0986.849.582 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jahson.babel at cc.in2p3.fr Wed Jan 4 09:17:06 2023 From: jahson.babel at cc.in2p3.fr (Jahson Babel) Date: Wed, 4 Jan 2023 10:17:06 +0100 Subject: [ops] QOS on flavor breaking live migration from CentOS 7 to 8 In-Reply-To: References: Message-ID: <4067ef7a-d3ce-b63f-21d5-a5334b077599@cc.in2p3.fr> Hello, Alright, thank you for all those pieces of information, past and futur with rhel 9. And the history behind this behavior. It's really interesting. I was hoping a tricky manipulation could have done the trick. But I'm a mere fool and at least I know what I have to do now. Thanks again for your detailed answer. Have a nice day. Jahson On 03/01/2023 20:20, Sean Mooney wrote: > hi yes this is a know issue. 
> > so the simple answer is resize all affected vms instead of live migrating them > the longer answer is we have been dissing this internally at redhat on and off for > some time now. > https://bugs.launchpad.net/nova/+bug/1960840 is one example where this happens. > > there is another case for the cpu based quotas that happens when going form rhel/centos 8->9 > basically in the 8->9 change the cgroups implemantion changes form v1 to v2 > https://bugzilla.redhat.com/show_bug.cgi?id=2035518 > > when adressing that we did not have a good universal solution for instnace that hardcoded a value that > was incompatible with the cgroups_v2 api in the kernel except resize. > > in https://review.opendev.org/c/openstack/nova/+/824048/ we removed automatically adding the > cpu_shares cgroup option to enable booting vms with more then 8 cpus > > we did not come up with any option other then resize for the other quotas that were in a similar situation. > the one option that we considerd possibel to do was extend nova-mange to allow the embeded flaour to be updated > this would be similar to what we did to enable the image property to be modifed for chaing machine types. > > https://docs.openstack.org/nova/latest/cli/nova-manage.html#image-property-commands > > we didcussed at the time that while we did not want to allow falvor extra specs to be modifed we might recondier that > if the quota issue forced our hand or we had a similar need due to foces beyond our contol. i.e. we needed to provide a way beyond > resize e.g. due ot operating system changes. what make image properties and flavor extra spec different is that image proerties can > only be updated by rebuild which is a destructive operation. extra specs are upsted by resize which is not a destructive operation. > that is one of the reasons we have special considertion to image properties and did not do the same for extra specs. > > if we allow the same for flavor extra specs you would still have to stop the instance, make the change and then migrate the instnace > resize automates that so it is generall a better fit. we were also conceren that adding it to nova manage would result in it being abused > to modify instnace in ways that were either invalid for the host(changing the numa toplogy, adding traits/resouce request not trackedcxd in placemnt) > or otherwise break the instnace in weird ways. that could happen via image properites too but its less likely. > > > > On Tue, 2023-01-03 at 17:25 +0100, Jahson Babel wrote: >> Hello, >> >> I'm trying to live migrate some VMs from CentOS 7 to Rocky 8. >> Everything run smoothly when there is no extra specs on flavors but >> things getting more complicated when those are fixed. Especially when >> using quota:vif_burst for QOS. >> I know that we aren't supposed to use this for QOS now but it's an old >> cluster and it was done that way at the time. So VMs kinda have all >> those specs tied to them. >> >> When live migrate a VM this show up in the nova's logs : >> driver.py _live_migration_operation nova.virt.libvirt.driver? Live >> Migration failure: internal error: Child process (tc class add dev >> tapxxxxxxxx-xx parent 1: classid 1:1 htb rate 250000kbps ceil >> 2000000kbps burst 60000000kb quantum 21333) unexpected exit status 1: >> Illegal "burst" >> This bug cover the problem : https://bugs.launchpad.net/nova/+bug/1960840 >> So it's seems to be a normal behavior. 
Plus I forgot to mention that I'm >> on OpenStack Train version and the file mentioned in the launchpad is >> not present for this version. >> By using Rocky 8 I have to use an updated libvirt that won't accept the >> burst parameter we used to set. All available versions of libvirt on >> Rocky 8 have changed behavior concerning the burst parameter. >> >> I've done some testing to make things works including removing the >> extra_specs on flavors and in the DB, removing it through libvirt and >> trying to modify tc rules used by a VM but it didn't worked. >> I have not tried yet to patch Nova or Libvirt but I don't really know >> where to look for. >> The only thing that did work was to resize the VM to an identical flavor >> without the extra_specs. But this induce a complete reboot of the VM. I >> would like, if possible, to be able to live migrate the VMs which is >> quite easier. >> >> Is it possible to remove the extra_specs on the VMs and then live >> migrate ? Or should I just plan to resize/reboot all VMs without those >> extra_specs ? >> Any advise will be appreciated. >> >> Thank you for any help, >> Best regards. >> >> Jahson -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4270 bytes Desc: S/MIME Cryptographic Signature URL: From senrique at redhat.com Wed Jan 4 09:26:10 2023 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 4 Jan 2023 09:26:10 +0000 Subject: [cinder] Unit test failures under Python 3.11 - mocks can no longer be provided as the specs for other Mocks Message-ID: Hi, Since python3.11 mocks can no longer be provided as the specs for other Mocks. As a result, an already-mocked object cannot be passed to mock.Mock(). This can uncover bugs in tests since these Mock-derived Mocks will always pass certain tests (e.g. isinstance) and built-in assert functions (e.g. assert_called_once_with) will unconditionally pass.[1] There's a bug report to track this issue in Cinder [2] but I think this may affect other projects too. I've reproduce the error and most drivers fail with: ``` unittest.mock.InvalidSpecError: Cannot spec a Mock object. [object=] ``` Cheers, Sofia [1] https://github.com/python/cpython/issues/87644 [2] https://bugs.launchpad.net/cinder/+bug/2000436 -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tim.Bell at cern.ch Wed Jan 4 09:36:53 2023 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 4 Jan 2023 09:36:53 +0000 Subject: [nova] Do openstack support USB passthrough In-Reply-To: <765850d1c170a510d34dafb4253ab97528829351.camel@redhat.com> References: <096211a0638c6e2fb6b488e20c87e132f81ee947.camel@redhat.com> <765850d1c170a510d34dafb4253ab97528829351.camel@redhat.com> Message-ID: <26E5F9C6-D083-44E4-B5EC-A5362A79072D@cern.ch> > On 3 Jan 2023, at 20:03, Sean Mooney wrote: > > On Tue, 2023-01-03 at 14:54 +0000, Stephen Finucane wrote: >> On Mon, 2022-12-26 at 10:04 +0000, ??? wrote: >>> Hi, all >>> >>> I want to ask if openstack support USB passthrough now? >>> >>> Or if I want the instance to recognize the USB flash drive on the >>> host, do you have any suggestions? >>> >>> Thanks, >>> Han >> >> This isn't supported in nova and probably never will be. The closest you can get >> is to passthrough an entire USB controller as suggested by this blog [1], but >> that's really a hack and I 100% would not use it in production. 
> > ya so we have discussed usb passthough supprot a few times and its somethign nova could add but there has > neither been the demand or desire to add it stongly enough in the core team to actully do it. > > the shorted path to enableing usb passthoug would likely be to add support to cyborg and then add support for that ot nova. > i am perhaps the most open of the nova cores to supporting usb passthough since i have wanted to add it in the past but > if we were to support it it would have to be similar to howe we support pci passhtough. static provisioning and likely only of > stateless devices which rules out usb falsh drives. > > usb gps recivers for precision time stamping was one of the usecause raised in the past which we were somewhat open too > the other was usb programmers/debuggers forh cases when vms where used in industral test and automation workflows. > > as stephen said the only way to do it today is to piggyback on pci passthough and passhtough a usb contoller not a single device. > > if we were to ever support this in nova directly i would proably extend the pci tracker or support other buses like usb or use the generic > resouce table created for persistent memory to model the devices. in eitehr case we would want this capablity to be placement native from the > start if we added this capablity so it would be more and less work then you might imagine to do this right. > less work if we maintain the requirement for statless devices only (ie no usb flash drives) more if you also need to handel multi tenancy and > move operation include data copying, erasure and or encypetion. > > i would not expect this to change in the next few release unless multiple operators provide feedback that this is a broadly deired capablity. > with out a top level generic device api for mutple type of devices (vgpu, usb, pci) that was decoupled form the flaovr or an abstraction like > the cyborg device-profile or pci alias it is hard to see a clean way to model this in our api. that is why enabling it in cyborg and then extneding > nova ot support device profiles with a device type of usb is the simplar solution form a nova perspecitve but that is non trivial from an operational > perspective as you requrie cyborg to utilise the feature. doing it via a usb_alias in the flavor has all the draw backs of the pci_alias, static > configuration that must match on all compute nodes and futher proliferation of flavor explostion. this is one of the reasons we have not added this in > the past. the work to do it in the libvirt driver would not be hard but the maintaince and operational overhead of using it for operators is non > trivial. > >> >> Stephen >> >> [1] https://egallen.com/openstack-usb-passthrough/ >> >> One intermedia option may be to use a usb over ip driver in the guest. I think we used this for one of our use cases (a license server with a dongle). Tim > > From jpodivin at redhat.com Wed Jan 4 10:36:33 2023 From: jpodivin at redhat.com (Jiri Podivin) Date: Wed, 4 Jan 2023 11:36:33 +0100 Subject: [cinder] Unit test failures under Python 3.11 - mocks can no longer be provided as the specs for other Mocks In-Reply-To: References: Message-ID: This is a good catch. We should get a hold of this before it creeps on us in CI. Maybe we should open it in shared backlog? On Wed, Jan 4, 2023 at 11:00 AM Sofia Enriquez wrote: > Hi, > > Since python3.11 mocks can no longer be provided as the specs for other > Mocks. As a result, an already-mocked object cannot be passed to > mock.Mock(). 
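(For illustration only, a minimal self-contained sketch of the Python 3.11 behaviour described above; the FakeClient class and names below are made up, not taken from the cinder tests:)

```python
# Requires Python >= 3.11 to reproduce the new behaviour.
from unittest import mock


class FakeClient:
    """Stand-in for a real driver client class."""

    def get_volume(self, name):
        return {"name": name}


already_mocked = mock.MagicMock(name="client.FakeClient")  # e.g. a patched module attribute

try:
    # Before 3.11 this silently built a Mock specced from another Mock;
    # since 3.11 it raises unittest.mock.InvalidSpecError.
    mock.Mock(spec=already_mocked)
except mock.InvalidSpecError as exc:
    print(f"rejected: {exc}")

# The usual fix is to spec from the real class (or patch with autospec=True),
# so isinstance() checks and assert_called_* assertions keep their meaning.
good = mock.Mock(spec=FakeClient)
print(isinstance(good, FakeClient))  # True
```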
This can uncover bugs in tests since these Mock-derived Mocks > will always pass certain tests (e.g. isinstance) and built-in assert > functions (e.g. assert_called_once_with) will unconditionally pass.[1] > > There's a bug report to track this issue in Cinder [2] but I think this > may affect other projects too. > > I've reproduce the error and most drivers fail with: > ``` > unittest.mock.InvalidSpecError: Cannot spec a Mock object. [object= name='mock.client.HPE3ParClient' id='139657768087760'>] > ``` > > Cheers, > Sofia > > [1] https://github.com/python/cpython/issues/87644 > [2] https://bugs.launchpad.net/cinder/+bug/2000436 > -- > > Sof?a Enriquez > > she/her > > Software Engineer > > Red Hat PnT > > IRC: @enriquetaso > @RedHat Red Hat > Red Hat > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Wed Jan 4 11:48:27 2023 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 04 Jan 2023 11:48:27 +0000 Subject: Don't set 'py_modules=[]' unless you know what you're doing Message-ID: <2d94b23b3afc53770dfa6e66cc356da84dd70b50.camel@redhat.com> Bit of PSA. I've a seen a number of tox4 fixup-related patches where people are adding the following to the 'setup.py' file: py_modules=[] If you're thinking about doing this, please stop. If you recently did this, consider proposing a patch to it again [*]. It's not doing what you think it is. This value should only be set for projects that do not actually distribute any Python code as part of their sdist (for example: puppet modules). Assuming you're not in this category, what you're actually seeing is a bug [1] which requires a fix in pbr [2]. This is a fix for pbr that means the pbr machinery will correctly function under tox4. This should be released very soon. Stephen [*] This obviously can't merge until the pbr fix is released. [1] https://github.com/tox-dev/tox/issues/2712 [2] https://review.opendev.org/c/openstack/pbr/+/869082 From akahat at redhat.com Wed Jan 4 13:07:57 2023 From: akahat at redhat.com (Amol Kahat) Date: Wed, 4 Jan 2023 18:37:57 +0530 Subject: [TripleO] Tripleo-ci-centos-9-content-provider jobs are failing in check line Message-ID: Hello All, Centos-9 content provider jobs are failing in the check line. We are investigating failures related to build-containers[1]. Please hold your recheck until the issue[2] is fixed. [1] https://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ci-centos-9-content-provider&skip=0 [2] https://bugs.launchpad.net/tripleo/+bug/2001626 Thanks -- *Amol Kahat* Software Engineer *Red Hat India Pvt. Ltd. Pune, India.* akahat at redhat.com B764 E6F8 F4C1 A1AF 816C 6840 FDD3 BA6C 832D 7715 -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Wed Jan 4 13:51:43 2023 From: abishop at redhat.com (Alan Bishop) Date: Wed, 4 Jan 2023 05:51:43 -0800 Subject: [openstack][cinder] Assign each storage backend to each AZ In-Reply-To: References: Message-ID: On Wed, Jan 4, 2023 at 1:09 AM Sa Pham wrote: > You have to run cinder-volume service for each AZ. And in your > configuration of cinder-volume you need to specify > storage_availability_zone for that zone. > Alternatively, you can run a single cinder-volume service with multiple backends, and use the backend_availability_zone option [1] to specify each backend's AZ. The backend_availability_zone overrides the storage_availability_zone for that backend. 
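(As a rough illustration of the backend_availability_zone layout discussed in this thread; the section names and the LVM driver below are placeholders, not a recommendation for any particular backend:)

```
[DEFAULT]
enabled_backends = backend-az-it, backend-az-sale
# used when a backend does not set backend_availability_zone
storage_availability_zone = nova

[backend-az-it]
volume_backend_name = backend-az-it
# example driver only; use your real driver here
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
backend_availability_zone = AZ-IT

[backend-az-sale]
volume_backend_name = backend-az-sale
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
backend_availability_zone = AZ-Sale
```

Each backend can then be exposed through its own volume type, e.g. "openstack volume type create az-it --property volume_backend_name=backend-az-it", matching the volume-type-per-backend approach described earlier in the thread.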
[1] https://github.com/openstack/cinder/blob/d55a004e524f752c228a4a7bda5d24d4223325de/cinder/volume/driver.py#L239 Alan > With nova-compute, you have to create a host aggregate with an > availability zone option for these compute nodes. > > > > On Wed, Jan 4, 2023 at 3:42 PM Nguy?n H?u Kh?i > wrote: > >> Ok, thanks for the clarification. :) >> Nguyen Huu Khoi >> >> >> On Wed, Jan 4, 2023 at 3:03 PM Rajat Dhasmana >> wrote: >> >>> >>> >>> On Wed, Jan 4, 2023 at 1:01 PM Nguy?n H?u Kh?i < >>> nguyenhuukhoinw at gmail.com> wrote: >>> >>>> Thanks for the answer. >>>> But I cannot find the way to configure the storage backend per AZ, >>>> Would you give me some suggestions? >>>> >>> >>> It totally depends on the deployment method you're using. It could be >>> either tripleo, ansible etc and every deployment method should provide a >>> way to set an availability zone for a volume backend. I'm not a deployment >>> expert but a specific deployment team needs to be consulted for the same. >>> >>> >>>> Nguyen Huu Khoi >>>> >>>> >>>> On Wed, Jan 4, 2023 at 1:53 PM Rajat Dhasmana >>>> wrote: >>>> >>>>> Hi, >>>>> >>>>> From the description, I'm assuming the instances will be boot from >>>>> volume. In that case, you will need to create a volume type for each >>>>> backend and you can use 'extra_specs' properties in the volume type to >>>>> assign a volume type to a particular AZ. In this case, if you're already >>>>> creating one backend per AZ then a volume type linked to that backend >>>>> should be good. >>>>> Now you will need to create a bootable volume and launch an instance >>>>> with it. Again, the instance should be launched in the AZ as used in the >>>>> volume type to support your use case. >>>>> Also if you want to restrict volumes of a particular AZ to be attached >>>>> to the instance of the same AZ, you can use the config option >>>>> *cross_az_attach*[1] which will allow/disallow cross AZ attachments. >>>>> Hope that helps. >>>>> >>>>> [1] >>>>> https://docs.openstack.org/nova/latest/configuration/config.html#cinder.cross_az_attach >>>>> >>>>> Thanks >>>>> Rajat Dhasmana >>>>> >>>>> On Wed, Jan 4, 2023 at 7:31 AM Nguy?n H?u Kh?i < >>>>> nguyenhuukhoinw at gmail.com> wrote: >>>>> >>>>>> Hello guys. >>>>>> I took time to search for this question but I can't find the answer. >>>>>> >>>>>> I have an Openstack private cloud and I use an AZ to a department. >>>>>> For example, >>>>>> AZ-IT for IT department >>>>>> AZ-Sale for Sale department... >>>>>> >>>>>> I will prepare 2 storage backends for each AZ. >>>>>> >>>>>> My goal is that when users launch an instance by choosing AZ then It >>>>>> will use only the backend for this AZ. >>>>>> >>>>>> Would Openstack support my goal? >>>>>> >>>>>> Thanks for reading my email. >>>>>> >>>>>> Nguyen Huu Khoi >>>>>> >>>>> > > -- > Sa Pham Dang > Skype: great_bn > Phone/Telegram: 0986.849.582 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From celiker.kerem at icloud.com Wed Jan 4 08:48:00 2023 From: celiker.kerem at icloud.com (=?utf-8?Q?Kerem_=C3=87eliker?=) Date: Wed, 4 Jan 2023 11:48:00 +0300 Subject: openstack-discuss Digest, Vol 51, Issue 6 In-Reply-To: References: Message-ID: <197174BB-0E05-4F84-AA1E-3579E336573B@icloud.com> Hello Stephen, Yes, OpenStack does support USB passthrough. You can pass through USB devices to instances running in OpenStack by using the nova.virt.libvirt.vif.LibvirtGenericVIFDriver driver and the qemu:commandline option in the nova.conf file. 
Best, Kerem ?eliker keremceliker.medium.com Sent from my iPhone > On 4 Jan 2023, at 02:25, openstack-discuss-request at lists.openstack.org wrote: > > ?Send openstack-discuss mailing list submissions to > openstack-discuss at lists.openstack.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss > > or, via email, send a message with subject or body 'help' to > openstack-discuss-request at lists.openstack.org > > You can reach the person managing the list at > openstack-discuss-owner at lists.openstack.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of openstack-discuss digest..." > > > Today's Topics: > > 1. Re: [nova] Do openstack support USB passthrough (Sean Mooney) > 2. Re: [ops] QOS on flavor breaking live migration from CentOS 7 > to 8 (Sean Mooney) > 3. Re: [cloudkitty] Instances billing based on tags (Satish Patel) > 4. [nova][keystone] workload identity? (Mohammed Naser) > 5. Re: [cloudkitty] Instances billing based on tags > (Rafael Weing?rtner) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 03 Jan 2023 19:03:23 +0000 > From: Sean Mooney > To: Stephen Finucane , ??? > , openstack-discuss > > Subject: Re: [nova] Do openstack support USB passthrough > Message-ID: > <765850d1c170a510d34dafb4253ab97528829351.camel at redhat.com> > Content-Type: text/plain; charset="UTF-8" > >> On Tue, 2023-01-03 at 14:54 +0000, Stephen Finucane wrote: >>> On Mon, 2022-12-26 at 10:04 +0000, ??? wrote: >>> Hi, all >>> >>> I want to ask if openstack support USB passthrough now? >>> >>> Or if I want the instance to recognize the USB flash drive on the >>> host, do you have any suggestions? >>> >>> Thanks, >>> Han >> >> This isn't supported in nova and probably never will be. The closest you can get >> is to passthrough an entire USB controller as suggested by this blog [1], but >> that's really a hack and I 100% would not use it in production. > > ya so we have discussed usb passthough supprot a few times and its somethign nova could add but there has > neither been the demand or desire to add it stongly enough in the core team to actully do it. > > the shorted path to enableing usb passthoug would likely be to add support to cyborg and then add support for that ot nova. > i am perhaps the most open of the nova cores to supporting usb passthough since i have wanted to add it in the past but > if we were to support it it would have to be similar to howe we support pci passhtough. static provisioning and likely only of > stateless devices which rules out usb falsh drives. > > usb gps recivers for precision time stamping was one of the usecause raised in the past which we were somewhat open too > the other was usb programmers/debuggers forh cases when vms where used in industral test and automation workflows. > > as stephen said the only way to do it today is to piggyback on pci passthough and passhtough a usb contoller not a single device. > > if we were to ever support this in nova directly i would proably extend the pci tracker or support other buses like usb or use the generic > resouce table created for persistent memory to model the devices. in eitehr case we would want this capablity to be placement native from the > start if we added this capablity so it would be more and less work then you might imagine to do this right. 
> less work if we maintain the requirement for statless devices only (ie no usb flash drives) more if you also need to handel multi tenancy and > move operation include data copying, erasure and or encypetion. > > i would not expect this to change in the next few release unless multiple operators provide feedback that this is a broadly deired capablity. > with out a top level generic device api for mutple type of devices (vgpu, usb, pci) that was decoupled form the flaovr or an abstraction like > the cyborg device-profile or pci alias it is hard to see a clean way to model this in our api. that is why enabling it in cyborg and then extneding > nova ot support device profiles with a device type of usb is the simplar solution form a nova perspecitve but that is non trivial from an operational > perspective as you requrie cyborg to utilise the feature. doing it via a usb_alias in the flavor has all the draw backs of the pci_alias, static > configuration that must match on all compute nodes and futher proliferation of flavor explostion. this is one of the reasons we have not added this in > the past. the work to do it in the libvirt driver would not be hard but the maintaince and operational overhead of using it for operators is non > trivial. > >> >> Stephen >> >> [1] https://egallen.com/openstack-usb-passthrough/ >> >> > > > > > ------------------------------ > > Message: 2 > Date: Tue, 03 Jan 2023 19:20:50 +0000 > From: Sean Mooney > To: Jahson Babel , > openstack-discuss at lists.openstack.org > Subject: Re: [ops] QOS on flavor breaking live migration from CentOS 7 > to 8 > Message-ID: > > Content-Type: text/plain; charset="UTF-8" > > hi yes this is a know issue. > > so the simple answer is resize all affected vms instead of live migrating them > the longer answer is we have been dissing this internally at redhat on and off for > some time now. > https://bugs.launchpad.net/nova/+bug/1960840 is one example where this happens. > > there is another case for the cpu based quotas that happens when going form rhel/centos 8->9 > basically in the 8->9 change the cgroups implemantion changes form v1 to v2 > https://bugzilla.redhat.com/show_bug.cgi?id=2035518 > > when adressing that we did not have a good universal solution for instnace that hardcoded a value that > was incompatible with the cgroups_v2 api in the kernel except resize. > > in https://review.opendev.org/c/openstack/nova/+/824048/ we removed automatically adding the > cpu_shares cgroup option to enable booting vms with more then 8 cpus > > we did not come up with any option other then resize for the other quotas that were in a similar situation. > the one option that we considerd possibel to do was extend nova-mange to allow the embeded flaour to be updated > this would be similar to what we did to enable the image property to be modifed for chaing machine types. > > https://docs.openstack.org/nova/latest/cli/nova-manage.html#image-property-commands > > we didcussed at the time that while we did not want to allow falvor extra specs to be modifed we might recondier that > if the quota issue forced our hand or we had a similar need due to foces beyond our contol. i.e. we needed to provide a way beyond > resize e.g. due ot operating system changes. what make image properties and flavor extra spec different is that image proerties can > only be updated by rebuild which is a destructive operation. extra specs are upsted by resize which is not a destructive operation. 
> that is one of the reasons we have special considertion to image properties and did not do the same for extra specs. > > if we allow the same for flavor extra specs you would still have to stop the instance, make the change and then migrate the instnace > resize automates that so it is generall a better fit. we were also conceren that adding it to nova manage would result in it being abused > to modify instnace in ways that were either invalid for the host(changing the numa toplogy, adding traits/resouce request not trackedcxd in placemnt) > or otherwise break the instnace in weird ways. that could happen via image properites too but its less likely. > > > >> On Tue, 2023-01-03 at 17:25 +0100, Jahson Babel wrote: >> Hello, >> >> I'm trying to live migrate some VMs from CentOS 7 to Rocky 8. >> Everything run smoothly when there is no extra specs on flavors but >> things getting more complicated when those are fixed. Especially when >> using quota:vif_burst for QOS. >> I know that we aren't supposed to use this for QOS now but it's an old >> cluster and it was done that way at the time. So VMs kinda have all >> those specs tied to them. >> >> When live migrate a VM this show up in the nova's logs : >> driver.py _live_migration_operation nova.virt.libvirt.driver? Live >> Migration failure: internal error: Child process (tc class add dev >> tapxxxxxxxx-xx parent 1: classid 1:1 htb rate 250000kbps ceil >> 2000000kbps burst 60000000kb quantum 21333) unexpected exit status 1: >> Illegal "burst" >> This bug cover the problem : https://bugs.launchpad.net/nova/+bug/1960840 >> So it's seems to be a normal behavior. Plus I forgot to mention that I'm >> on OpenStack Train version and the file mentioned in the launchpad is >> not present for this version. >> By using Rocky 8 I have to use an updated libvirt that won't accept the >> burst parameter we used to set. All available versions of libvirt on >> Rocky 8 have changed behavior concerning the burst parameter. >> >> I've done some testing to make things works including removing the >> extra_specs on flavors and in the DB, removing it through libvirt and >> trying to modify tc rules used by a VM but it didn't worked. >> I have not tried yet to patch Nova or Libvirt but I don't really know >> where to look for. >> The only thing that did work was to resize the VM to an identical flavor >> without the extra_specs. But this induce a complete reboot of the VM. I >> would like, if possible, to be able to live migrate the VMs which is >> quite easier. >> >> Is it possible to remove the extra_specs on the VMs and then live >> migrate ? Or should I just plan to resize/reboot all VMs without those >> extra_specs ? >> Any advise will be appreciated. >> >> Thank you for any help, >> Best regards. >> >> Jahson > > > > > ------------------------------ > > Message: 3 > Date: Tue, 3 Jan 2023 15:23:19 -0500 > From: Satish Patel > To: Rafael Weing?rtner > Cc: OpenStack Discuss > Subject: Re: [cloudkitty] Instances billing based on tags > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Wow! very interesting, I will poke around and see how it's feasible. Very > curious how it will represent that data in horizon UI. Currently I am > seeing rates based on project_id so assuming it will show based on > customer_id. correct? > > > On Tue, Jan 3, 2023 at 8:13 AM Rafael Weing?rtner < > rafaelweingartner at gmail.com> wrote: > >> You can do that. Basically, you can start collecting those attributes you >> want for billing (e.g. 
tags) via Ceilometer dynamic pollster (that is the >> easiest way to achieve this). Then, you need to configure the resource type >> in Gnocchi to store this extra attribute, and of course, configure >> CloudKitty to collect/use it. Both in the metrics.yml and then in the >> hashmap or Pyscript rules. >> >>> On Mon, Jan 2, 2023 at 11:01 PM Satish Patel wrote: >>> >>> Folks, >>> >>> We have an AWS project and in a single project we run multiple customers >>> so for billing we use tags. In short every vm instance has a tag (customer >>> name) and that way we can produce bills for each customer. >>> >>> Recently I am playing with openstack cloudkitty and it works with >>> standard cases like project based billing. But does it support tags based >>> billing just similar to what i have explained in the above aws example? >>> >>> >>> >> >> -- >> Rafael Weing?rtner >> > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 4 > Date: Tue, 3 Jan 2023 15:27:03 -0500 > From: Mohammed Naser > To: OpenStack Discuss > Subject: [nova][keystone] workload identity? > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Hi folks: > > I?m wondering if there?s anyone who?s had some thought or perhaps some > work/progress/thoughts on workload identity (example service accounts for > VMs) ? is that something that?s really far away for us? Has someone > thought about outlining what?s needed? > > Thanks > Mohammed > -- > Mohammed Naser > VEXXHOST, Inc. > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Message: 5 > Date: Tue, 3 Jan 2023 17:39:18 -0300 > From: Rafael Weing?rtner > To: Satish Patel > Cc: OpenStack Discuss > Subject: Re: [cloudkitty] Instances billing based on tags > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > It is showing metrics based on the scope configured. I guess you have set > the scopes to be mapped as project IDs. If you want other attribute to > represent the scope, you need to change that in CloudKitty. > >> On Tue, Jan 3, 2023 at 5:23 PM Satish Patel wrote: >> >> Wow! very interesting, I will poke around and see how it's feasible. Very >> curious how it will represent that data in horizon UI. Currently I am >> seeing rates based on project_id so assuming it will show based on >> customer_id. correct? >> >> >> On Tue, Jan 3, 2023 at 8:13 AM Rafael Weing?rtner < >> rafaelweingartner at gmail.com> wrote: >> >>> You can do that. Basically, you can start collecting those attributes you >>> want for billing (e.g. tags) via Ceilometer dynamic pollster (that is the >>> easiest way to achieve this). Then, you need to configure the resource type >>> in Gnocchi to store this extra attribute, and of course, configure >>> CloudKitty to collect/use it. Both in the metrics.yml and then in the >>> hashmap or Pyscript rules. >>> >>> On Mon, Jan 2, 2023 at 11:01 PM Satish Patel >>> wrote: >>> >>>> Folks, >>>> >>>> We have an AWS project and in a single project we run multiple customers >>>> so for billing we use tags. In short every vm instance has a tag (customer >>>> name) and that way we can produce bills for each customer. >>>> >>>> Recently I am playing with openstack cloudkitty and it works with >>>> standard cases like project based billing. But does it support tags based >>>> billing just similar to what i have explained in the above aws example? 
>>>> >>>> >>>> >>> >>> -- >>> Rafael Weing?rtner >>> >> > > -- > Rafael Weing?rtner > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > openstack-discuss mailing list > openstack-discuss at lists.openstack.org > > > ------------------------------ > > End of openstack-discuss Digest, Vol 51, Issue 6 > ************************************************ -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Wed Jan 4 14:30:37 2023 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 4 Jan 2023 06:30:37 -0800 Subject: [nova] Do openstack support USB passthrough In-Reply-To: <26E5F9C6-D083-44E4-B5EC-A5362A79072D@cern.ch> References: <096211a0638c6e2fb6b488e20c87e132f81ee947.camel@redhat.com> <765850d1c170a510d34dafb4253ab97528829351.camel@redhat.com> <26E5F9C6-D083-44E4-B5EC-A5362A79072D@cern.ch> Message-ID: On Wed, Jan 4, 2023 at 2:05 AM Tim Bell wrote: > > > > On 3 Jan 2023, at 20:03, Sean Mooney wrote: > > > > On Tue, 2023-01-03 at 14:54 +0000, Stephen Finucane wrote: > >> On Mon, 2022-12-26 at 10:04 +0000, ??? wrote: > >>> Hi, all > >>> > >>> I want to ask if openstack support USB passthrough now? > >>> > >>> Or if I want the instance to recognize the USB flash drive on the > >>> host, do you have any suggestions? > >>> > >>> Thanks, > >>> Han > >> > >> This isn't supported in nova and probably never will be. The closest > you can get > >> is to passthrough an entire USB controller as suggested by this blog > [1], but > >> that's really a hack and I 100% would not use it in production. > > > > ya so we have discussed usb passthough supprot a few times and its > somethign nova could add but there has > > neither been the demand or desire to add it stongly enough in the core > team to actully do it. > > > > the shorted path to enableing usb passthoug would likely be to add > support to cyborg and then add support for that ot nova. > > i am perhaps the most open of the nova cores to supporting usb > passthough since i have wanted to add it in the past but > > if we were to support it it would have to be similar to howe we support > pci passhtough. static provisioning and likely only of > > stateless devices which rules out usb falsh drives. > > > > usb gps recivers for precision time stamping was one of the usecause > raised in the past which we were somewhat open too > > the other was usb programmers/debuggers forh cases when vms where used > in industral test and automation workflows. > > > > as stephen said the only way to do it today is to piggyback on pci > passthough and passhtough a usb contoller not a single device. > > > > if we were to ever support this in nova directly i would proably extend > the pci tracker or support other buses like usb or use the generic > > resouce table created for persistent memory to model the devices. in > eitehr case we would want this capablity to be placement native from the > > start if we added this capablity so it would be more and less work then > you might imagine to do this right. > > less work if we maintain the requirement for statless devices only (ie > no usb flash drives) more if you also need to handel multi tenancy and > > move operation include data copying, erasure and or encypetion. 
> > > > i would not expect this to change in the next few release unless > multiple operators provide feedback that this is a broadly deired capablity. > > with out a top level generic device api for mutple type of devices > (vgpu, usb, pci) that was decoupled form the flaovr or an abstraction like > > the cyborg device-profile or pci alias it is hard to see a clean way to > model this in our api. that is why enabling it in cyborg and then extneding > > nova ot support device profiles with a device type of usb is the simplar > solution form a nova perspecitve but that is non trivial from an > operational > > perspective as you requrie cyborg to utilise the feature. doing it via a > usb_alias in the flavor has all the draw backs of the pci_alias, static > > configuration that must match on all compute nodes and futher > proliferation of flavor explostion. this is one of the reasons we have not > added this in > > the past. the work to do it in the libvirt driver would not be hard but > the maintaince and operational overhead of using it for operators is non > > trivial. > > > >> > >> Stephen > >> > >> [1] https://egallen.com/openstack-usb-passthrough/ > >> > >> > > One intermedia option may be to use a usb over ip driver in the guest. I > think we used this for one of our use cases (a license server with a > dongle). > > Tim > > > > > > > I didn't know such a thing existed! Are there any links or blog posts out there which are recommended? Every time I've ever had to pass USB through to a VM, it was purely because of a hardware key for licensing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arne.Wiebalck at cern.ch Wed Jan 4 15:05:26 2023 From: Arne.Wiebalck at cern.ch (Arne Wiebalck) Date: Wed, 4 Jan 2023 15:05:26 +0000 Subject: [nova] Do openstack support USB passthrough In-Reply-To: References: <096211a0638c6e2fb6b488e20c87e132f81ee947.camel@redhat.com> <765850d1c170a510d34dafb4253ab97528829351.camel@redhat.com> <26E5F9C6-D083-44E4-B5EC-A5362A79072D@cern.ch> Message-ID: Our docs for USB over IP are based on https://developer.ridgerun.com/wiki/index.php?title=How_to_setup_and_use_USB/IP HTH, Arne ________________________________ Von: Julia Kreger Gesendet: Mittwoch, 4. Januar 2023, 15:50 An: Tim Bell Cc: Sean Mooney ; openstack-discuss ; Stephen Finucane ; ??? Betreff: Re: [nova] Do openstack support USB passthrough On Wed, Jan 4, 2023 at 2:05 AM Tim Bell > wrote: > On 3 Jan 2023, at 20:03, Sean Mooney > wrote: > > On Tue, 2023-01-03 at 14:54 +0000, Stephen Finucane wrote: >> On Mon, 2022-12-26 at 10:04 +0000, ??? wrote: >>> Hi, all >>> >>> I want to ask if openstack support USB passthrough now? >>> >>> Or if I want the instance to recognize the USB flash drive on the >>> host, do you have any suggestions? >>> >>> Thanks, >>> Han >> >> This isn't supported in nova and probably never will be. The closest you can get >> is to passthrough an entire USB controller as suggested by this blog [1], but >> that's really a hack and I 100% would not use it in production. > > ya so we have discussed usb passthough supprot a few times and its somethign nova could add but there has > neither been the demand or desire to add it stongly enough in the core team to actully do it. > > the shorted path to enableing usb passthoug would likely be to add support to cyborg and then add support for that ot nova. 
> i am perhaps the most open of the nova cores to supporting usb passthough since i have wanted to add it in the past but > if we were to support it it would have to be similar to howe we support pci passhtough. static provisioning and likely only of > stateless devices which rules out usb falsh drives. > > usb gps recivers for precision time stamping was one of the usecause raised in the past which we were somewhat open too > the other was usb programmers/debuggers forh cases when vms where used in industral test and automation workflows. > > as stephen said the only way to do it today is to piggyback on pci passthough and passhtough a usb contoller not a single device. > > if we were to ever support this in nova directly i would proably extend the pci tracker or support other buses like usb or use the generic > resouce table created for persistent memory to model the devices. in eitehr case we would want this capablity to be placement native from the > start if we added this capablity so it would be more and less work then you might imagine to do this right. > less work if we maintain the requirement for statless devices only (ie no usb flash drives) more if you also need to handel multi tenancy and > move operation include data copying, erasure and or encypetion. > > i would not expect this to change in the next few release unless multiple operators provide feedback that this is a broadly deired capablity. > with out a top level generic device api for mutple type of devices (vgpu, usb, pci) that was decoupled form the flaovr or an abstraction like > the cyborg device-profile or pci alias it is hard to see a clean way to model this in our api. that is why enabling it in cyborg and then extneding > nova ot support device profiles with a device type of usb is the simplar solution form a nova perspecitve but that is non trivial from an operational > perspective as you requrie cyborg to utilise the feature. doing it via a usb_alias in the flavor has all the draw backs of the pci_alias, static > configuration that must match on all compute nodes and futher proliferation of flavor explostion. this is one of the reasons we have not added this in > the past. the work to do it in the libvirt driver would not be hard but the maintaince and operational overhead of using it for operators is non > trivial. > >> >> Stephen >> >> [1] https://egallen.com/openstack-usb-passthrough/ >> >> One intermedia option may be to use a usb over ip driver in the guest. I think we used this for one of our use cases (a license server with a dongle). Tim > > I didn't know such a thing existed! Are there any links or blog posts out there which are recommended? Every time I've ever had to pass USB through to a VM, it was purely because of a hardware key for licensing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Wed Jan 4 15:30:29 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Wed, 4 Jan 2023 21:00:29 +0530 Subject: [cinder][drivers] third-party CI for os-brick changes In-Reply-To: References: Message-ID: Reminder for driver vendors! 
We've 2 new drivers proposed for Antelope, 1) HPE XP: https://review.opendev.org/c/openstack/cinder/+/815582 2) Fungible NVMe TCP: https://review.opendev.org/c/openstack/cinder/+/849143 Note for new and old driver vendors: Along with running the third party CI on cinder gate, we also mandate the CI to be run on os-brick gate since that acts as a surety that your driver works with the existing connector (and might not break later after os-brick release). Thanks Rajat Dhasmana On Thu, Aug 18, 2022 at 2:18 AM Brian Rosmaita wrote: > To all third-party CI maintainers, > > As you are aware, cinder third-party CI systems are required to run on > all cinder changes. However, the os-brick library used in cinder CI > testing is the latest appropriate *released* version of os-brick. > > Thus, it is possible for changes to be happening in os-brick development > that might impact the functionality of your driver. If you aren't > testing os-brick changes, you won't find out about these until *after* > the next os-brick release, which is bad news all around. > > Therefore, at last week's cinder midcycle [0], the cinder project team > agreed to require that cinder third-party CI systems run on all os-brick > changes in addition to all cinder changes. This is a nice-to-have for > the current (Zed) development cycle, but will be required in order for a > driver to be considered 'supported' in the 2023.1 (Antelope) release [1]. > > If you have comments or concerns about this policy, please reply on the > list to this email or put an item on the agenda [2] for the cinder > weekly meeting. > > > [0] https://etherpad.opendev.org/p/cinder-zed-midcycles > [1] > > https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#What_changes_should_I_test_on.3F > [2] https://etherpad.opendev.org/p/cinder-zed-meetings > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Wed Jan 4 16:32:53 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Wed, 4 Jan 2023 23:32:53 +0700 Subject: [openstack][cinder] Assign each storage backend to each AZ In-Reply-To: References: Message-ID: Thanks, I'll check them out. On Wed, Jan 4, 2023, 8:51 PM Alan Bishop wrote: > > > On Wed, Jan 4, 2023 at 1:09 AM Sa Pham wrote: > >> You have to run cinder-volume service for each AZ. And in your >> configuration of cinder-volume you need to specify >> storage_availability_zone for that zone. >> > > Alternatively, you can run a single cinder-volume service with multiple > backends, and use the backend_availability_zone option [1] to specify each > backend's AZ. The backend_availability_zone overrides the > storage_availability_zone for that backend. > > [1] > https://github.com/openstack/cinder/blob/d55a004e524f752c228a4a7bda5d24d4223325de/cinder/volume/driver.py#L239 > > Alan > > >> With nova-compute, you have to create a host aggregate with an >> availability zone option for these compute nodes. >> >> >> >> On Wed, Jan 4, 2023 at 3:42 PM Nguy?n H?u Kh?i >> wrote: >> >>> Ok, thanks for the clarification. :) >>> Nguyen Huu Khoi >>> >>> >>> On Wed, Jan 4, 2023 at 3:03 PM Rajat Dhasmana >>> wrote: >>> >>>> >>>> >>>> On Wed, Jan 4, 2023 at 1:01 PM Nguy?n H?u Kh?i < >>>> nguyenhuukhoinw at gmail.com> wrote: >>>> >>>>> Thanks for the answer. >>>>> But I cannot find the way to configure the storage backend per AZ, >>>>> Would you give me some suggestions? >>>>> >>>> >>>> It totally depends on the deployment method you're using. 
It could be >>>> either tripleo, ansible etc and every deployment method should provide a >>>> way to set an availability zone for a volume backend. I'm not a deployment >>>> expert but a specific deployment team needs to be consulted for the same. >>>> >>>> >>>>> Nguyen Huu Khoi >>>>> >>>>> >>>>> On Wed, Jan 4, 2023 at 1:53 PM Rajat Dhasmana >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> From the description, I'm assuming the instances will be boot from >>>>>> volume. In that case, you will need to create a volume type for each >>>>>> backend and you can use 'extra_specs' properties in the volume type to >>>>>> assign a volume type to a particular AZ. In this case, if you're already >>>>>> creating one backend per AZ then a volume type linked to that backend >>>>>> should be good. >>>>>> Now you will need to create a bootable volume and launch an instance >>>>>> with it. Again, the instance should be launched in the AZ as used in the >>>>>> volume type to support your use case. >>>>>> Also if you want to restrict volumes of a particular AZ to be >>>>>> attached to the instance of the same AZ, you can use the config option >>>>>> *cross_az_attach*[1] which will allow/disallow cross AZ attachments. >>>>>> Hope that helps. >>>>>> >>>>>> [1] >>>>>> https://docs.openstack.org/nova/latest/configuration/config.html#cinder.cross_az_attach >>>>>> >>>>>> Thanks >>>>>> Rajat Dhasmana >>>>>> >>>>>> On Wed, Jan 4, 2023 at 7:31 AM Nguy?n H?u Kh?i < >>>>>> nguyenhuukhoinw at gmail.com> wrote: >>>>>> >>>>>>> Hello guys. >>>>>>> I took time to search for this question but I can't find the answer. >>>>>>> >>>>>>> I have an Openstack private cloud and I use an AZ to a department. >>>>>>> For example, >>>>>>> AZ-IT for IT department >>>>>>> AZ-Sale for Sale department... >>>>>>> >>>>>>> I will prepare 2 storage backends for each AZ. >>>>>>> >>>>>>> My goal is that when users launch an instance by choosing AZ then It >>>>>>> will use only the backend for this AZ. >>>>>>> >>>>>>> Would Openstack support my goal? >>>>>>> >>>>>>> Thanks for reading my email. >>>>>>> >>>>>>> Nguyen Huu Khoi >>>>>>> >>>>>> >> >> -- >> Sa Pham Dang >> Skype: great_bn >> Phone/Telegram: 0986.849.582 >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From erin at openstack.org Wed Jan 4 18:46:46 2023 From: erin at openstack.org (Erin Disney) Date: Wed, 4 Jan 2023 12:46:46 -0600 Subject: The Next OpenStack Release Name Is... Message-ID: <594FE115-7B29-4566-8145-004DB21E8187@openstack.org> Hey everyone- Voting for the upcoming OpenStack B release name has ended and we have a winner... BOBCAT! Thanks to everyone who voted and helped us pick the next name. Thanks, Erin Erin Disney Event Marketing OpenInfra Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From haiwu.us at gmail.com Thu Jan 5 02:54:25 2023 From: haiwu.us at gmail.com (hai wu) Date: Wed, 4 Jan 2023 20:54:25 -0600 Subject: [nova][neutron][openstacksdk] Any API way to tell the difference for openstack ports created manually or automatically? Message-ID: I could not find any API way to tell the difference between openstack ports created automatically when a new instance got created, and openstack ports manually created for port reservation purposes. There's 'preserve_on_delete' attribute in the openstack database for each vm instance, and if it is set to 'true', then the port was manually created, if not, then it was automatically created. But there's no API to retrieve this. Am I missing something? 
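(To make the gap concrete, a short openstacksdk sketch; the cloud and server names are placeholders. Nothing in the Port objects returned by the API says whether nova or a user created the port:)

```python
import openstack

conn = openstack.connect(cloud="mycloud")          # placeholder cloud name
server = conn.compute.find_server("my-instance")   # placeholder server name

for port in conn.network.ports(device_id=server.id):
    # device_owner is "compute:<az>" for every attached port and created_at only
    # says when it was made; neither reveals *who* created the port.
    print(port.id, port.device_owner, port.created_at)
```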
Hai From gmann at ghanshyammann.com Thu Jan 5 04:08:21 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 04 Jan 2023 20:08:21 -0800 Subject: [all][gate][stable] Pinning tox<4 in stable branch testing Message-ID: <185801d98b6.110ec217e27667.6370059540731052772@ghanshyammann.com> Hello Everyone, As you might know, tox4 broke almost all the projects tox.ini and so do master as well as stable branches gate. On the master, we need to fix it as we should use tox4 in testing. To fix the stable branches gate, either we need to backport all the fixes which include some more issues fixes[1] or we can pin the tox<4. We discussed it in today's TC meeting and it is better to keep testing the stable branch with the tox version that they were released with and not with the tox4. Even in future, there might be the cases where latest tox might introduce more incompatible changes. By considering all these factors, it is better to pin tox<4 for stable branches (<=stable/zed) testing. I have prepared the patch to pin it in the common job/template, feel free to comment with your feedback/opinion: - https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/867849/3 also, tested with nova/cinder stable branches: - https://review.opendev.org/q/I0ca55abf9975c5a3f9713ac5dd5be39083e04554 - https://review.opendev.org/q/I300e7804a27d08ecd239d1a7faaf2aaf3e07b9ee You can also test it in your project stable branches in case any different syntax in tox.ini causing the tox upgrade to the latest. [1] https://github.com/tox-dev/tox/issues/2712 -gmann From skaplons at redhat.com Thu Jan 5 09:08:17 2023 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 05 Jan 2023 10:08:17 +0100 Subject: [nova][neutron][openstacksdk] Any API way to tell the difference for openstack ports created manually or automatically? In-Reply-To: References: Message-ID: <3936341.UyvNKhiCoM@p1> Hi, Dnia czwartek, 5 stycznia 2023 03:54:25 CET hai wu pisze: > I could not find any API way to tell the difference between openstack > ports created automatically when a new instance got created, and > openstack ports manually created for port reservation purposes. > > There's 'preserve_on_delete' attribute in the openstack database for > each vm instance, and if it is set to 'true', then the port was > manually created, if not, then it was automatically created. But > there's no API to retrieve this. > > Am I missing something? > > Hai > > You are no missing anything. There is no way to check that using API (both Neutron nor Nova). -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From smooney at redhat.com Thu Jan 5 12:32:01 2023 From: smooney at redhat.com (Sean Mooney) Date: Thu, 05 Jan 2023 12:32:01 +0000 Subject: [nova][neutron][openstacksdk] Any API way to tell the difference for openstack ports created manually or automatically? In-Reply-To: <3936341.UyvNKhiCoM@p1> References: <3936341.UyvNKhiCoM@p1> Message-ID: On Thu, 2023-01-05 at 10:08 +0100, Slawek Kaplonski wrote: > Hi, > > Dnia czwartek, 5 stycznia 2023 03:54:25 CET hai wu pisze: > > I could not find any API way to tell the difference between openstack > > ports created automatically when a new instance got created, and > > openstack ports manually created for port reservation purposes. 
> > > > There's 'preserve_on_delete' attribute in the openstack database for > > each vm instance, and if it is set to 'true', then the port was > > manually created, if not, then it was automatically created. But > > there's no API to retrieve this. > > > > Am I missing something? > > > > Hai > > > > > > You are no missing anything. There is no way to check that using API (both Neutron nor Nova). ya i think you can tell from the nova db if it was created by nova by corralating the virutal interface table and request spec for the instance but its not somethign you can tell form the api or form neutron's perspective. there are some cases where preserve_on_delete can be lost too. namely if neutron breaks and the network info cache is currpted and teh ports are removed form the cache when the cache is forcefully rebuilt the value of preserve_on_delete may not be preserved. https://bugs.launchpad.net/nova/+bug/1834463 we also have https://bugs.launchpad.net/nova/+bug/1976545 one of the possible fixes for the later https://review.opendev.org/c/openstack/nova/+/844326 would ideally involve a new neutron api extention to model delete_on_detach im not sure if that is related to why you asked this question? can you expalin why you are trying to determin if it was manually created vs created by nova? > From haiwu.us at gmail.com Thu Jan 5 14:36:13 2023 From: haiwu.us at gmail.com (hai wu) Date: Thu, 5 Jan 2023 08:36:13 -0600 Subject: [nova][neutron][openstacksdk] Any API way to tell the difference for openstack ports created manually or automatically? In-Reply-To: References: <3936341.UyvNKhiCoM@p1> Message-ID: Thanks for confirming. The reason why I asked this question is due to VM deletion. By default, deleting a VM instance means its single openstack port associated with the VM (created automatically by nova), would be auto deleted, there's no extra work needed for that; But if the associated openstack port was manually created for this VM instance, then it seems it would NOT remove this openstack port automatically. I would like to ensure the cleanup of such ports upon VM deletion. There's connection.network.delete_port() function from openstacksdk, and I could retrieve all associated ports for a particular vm instance, so it is possible to just delete all associated ports for the instance upon VM deletion call, but there might be some conflicts, meaning that for ports auto created by nova, those will be purged by openstack automatically. How to ensure there would be no conflicts upon vm deletion call, and ensure all associated ports would be purged without error? On Thu, Jan 5, 2023 at 6:32 AM Sean Mooney wrote: > > On Thu, 2023-01-05 at 10:08 +0100, Slawek Kaplonski wrote: > > Hi, > > > > Dnia czwartek, 5 stycznia 2023 03:54:25 CET hai wu pisze: > > > I could not find any API way to tell the difference between openstack > > > ports created automatically when a new instance got created, and > > > openstack ports manually created for port reservation purposes. > > > > > > There's 'preserve_on_delete' attribute in the openstack database for > > > each vm instance, and if it is set to 'true', then the port was > > > manually created, if not, then it was automatically created. But > > > there's no API to retrieve this. > > > > > > Am I missing something? > > > > > > Hai > > > > > > > > > > You are no missing anything. There is no way to check that using API (both Neutron nor Nova). 
> ya i think you can tell from the nova db if it was created by nova by corralating the virutal interface table and request spec for the instance > but its not somethign you can tell form the api or form neutron's perspective. > > there are some cases where preserve_on_delete can be lost too. namely if neutron breaks and the network info cache is currpted and teh ports are > removed form the cache when the cache is forcefully rebuilt the value of preserve_on_delete may not be preserved. > https://bugs.launchpad.net/nova/+bug/1834463 we also have https://bugs.launchpad.net/nova/+bug/1976545 > one of the possible fixes for the later https://review.opendev.org/c/openstack/nova/+/844326 would ideally involve a new neutron api extention to > model delete_on_detach > > im not sure if that is related to why you asked this question? > can you expalin why you are trying to determin if it was manually created vs created by nova? > > > > From smooney at redhat.com Thu Jan 5 15:02:34 2023 From: smooney at redhat.com (Sean Mooney) Date: Thu, 05 Jan 2023 15:02:34 +0000 Subject: [nova][neutron][openstacksdk] Any API way to tell the difference for openstack ports created manually or automatically? In-Reply-To: References: <3936341.UyvNKhiCoM@p1> Message-ID: <966783a3a07e7b04d97850ea2e95b091e9e1f870.camel@redhat.com> On Thu, 2023-01-05 at 08:36 -0600, hai wu wrote: > Thanks for confirming. The reason why I asked this question is due to > VM deletion. By default, deleting a VM instance means its single > openstack port associated with the VM (created automatically by nova), > would be auto deleted, there's no extra work needed for that; But if > the associated openstack port was manually created for this VM > instance, then it seems it would NOT remove this openstack port > automatically. > correct for what its worth in general we have been pushing peopel to a workflow where they create teh port first and avoid the automatic creation fo a port via nova. passing a network or subnet is simpler but all of the non trivial cases such as seting a vnic type or properly supproting QOS need the other workflow. if it was not for legacy users depending on the automatic cleanup i woudl presonally prefer to make nova created port behave like manulaly created ports. we could do that in an api microverion but it never felt imporant enought to change. > I would like to ensure the cleanup of such ports upon > VM deletion. There's connection.network.delete_port() function from > openstacksdk, and I could retrieve all associated ports for a > particular vm instance, so it is possible to just delete all > associated ports for the instance upon VM deletion call, but there > might be some conflicts, meaning that for ports auto created by nova, > those will be purged by openstack automatically. you should not get a conflict if you delete the port while the vm deletion is happening either you or nova will get a 404 neutron will also send a network_vif_deleted event to nova to notify it and nova would try an detach the port technially this could race since they do not share the same lock https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L8166-L8168 vs https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L3187 but it should not prevent the delete of the vm or ports form completeing. > How to ensure there > would be no conflicts upon vm deletion call, and ensure all associated > ports would be purged without error? the best way to day is to get the list of ports assocated with the vm. 
delete the vm and then after the delete is complete delete any remaining ports by looping over the list and handelign the 404 for ports already deleted by nova. if we were to extend neutron with a new delete_on_detach extension we could automate this more cleanly but since that does not exist this is left to the user. > > On Thu, Jan 5, 2023 at 6:32 AM Sean Mooney wrote: > > > > On Thu, 2023-01-05 at 10:08 +0100, Slawek Kaplonski wrote: > > > Hi, > > > > > > Dnia czwartek, 5 stycznia 2023 03:54:25 CET hai wu pisze: > > > > I could not find any API way to tell the difference between openstack > > > > ports created automatically when a new instance got created, and > > > > openstack ports manually created for port reservation purposes. > > > > > > > > There's 'preserve_on_delete' attribute in the openstack database for > > > > each vm instance, and if it is set to 'true', then the port was > > > > manually created, if not, then it was automatically created. But > > > > there's no API to retrieve this. > > > > > > > > Am I missing something? > > > > > > > > Hai > > > > > > > > > > > > > > You are no missing anything. There is no way to check that using API (both Neutron nor Nova). > > ya i think you can tell from the nova db if it was created by nova by corralating the virutal interface table and request spec for the instance > > but its not somethign you can tell form the api or form neutron's perspective. > > > > there are some cases where preserve_on_delete can be lost too. namely if neutron breaks and the network info cache is currpted and teh ports are > > removed form the cache when the cache is forcefully rebuilt the value of preserve_on_delete may not be preserved. > > https://bugs.launchpad.net/nova/+bug/1834463 we also have https://bugs.launchpad.net/nova/+bug/1976545 > > one of the possible fixes for the later https://review.opendev.org/c/openstack/nova/+/844326 would ideally involve a new neutron api extention to > > model delete_on_detach > > > > im not sure if that is related to why you asked this question? > > can you expalin why you are trying to determin if it was manually created vs created by nova? > > > > > > > > From johnsomor at gmail.com Thu Jan 5 23:51:47 2023 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 5 Jan 2023 15:51:47 -0800 Subject: [designate] New project: designate-tlds In-Reply-To: <03875da6-626c-cdd2-bca3-bda2a24fde1e@debian.org> References: <03875da6-626c-cdd2-bca3-bda2a24fde1e@debian.org> Message-ID: Hi Thomas, I see how this could be useful for other deployments that are allowing any TLD in Designate. I think this is something we could add to a "designate/contrib" directory and install with the package, using the "extras" capability to only install the script and unique requirements if this feature is needed. The crontab can be bundled using data_files. I do have a few comments about the code that I would make on the patch for designate: 1. We are moving away from using the legacy Designate client for python bindings, instead preferring to standardize on using the OpenStack SDK. I would prefer the code to use OpenStack SDK. 2. We would want some basic test coverage so we can maintain it. 3. I would like to see a slightly expanded README file that talks a bit more about the configuration file expectations and use case. 4. nit: We can probably condense the HTTP proxy setting down into one configuration setting, if defined it's used, if not don't use a proxy. 
Michael On Thu, Dec 15, 2022 at 2:29 AM Thomas Goirand wrote: > > Hi, > > We wrote this: > https://salsa.debian.org/openstack-team/services/designate-tlds > > The interesting code bits are in: > https://salsa.debian.org/openstack-team/services/designate-tlds/-/blob/debian/zed/designate_tlds/tlds.py > > What it does is download the TLD list from > https://publicsuffix.org/list/public_suffix_list.dat using requests > (with an optional proxy), compare it to the list of TLDs in Designate, > and fix the difference. > > It's by default setup in a cron every week. Basically, it's just apt-get > install designate-tlds, configure keystone_authtoken in > /etc/designate-tlds/designate-tlds.conf and set dry_run=false, and > you're done! Note I also wrote a patch for puppet-designate [1] to > support it. > > Moving forward I see 2 solutions: > 1- we continue to maintain this separately from Designate > 2- our code gets integrated into Designate itself. > > Designate team: are you interested for option 2? > > Cheers, > > Thomas Goirand (zigo) > > [1] > https://salsa.debian.org/openstack-team/puppet/puppet-module-designate/-/blob/debian/zed/debian/patches/add_designate_tlds_config.patch > From johnsomor at gmail.com Fri Jan 6 00:11:15 2023 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 5 Jan 2023 16:11:15 -0800 Subject: [designate] Implemeting PTR record restrictions In-Reply-To: References: Message-ID: Hi Thomas, Currently the only way to block this is to exclude the reverse pointer zones that may be assigned in the cloud from the TLD list, or pre-creating the in-addr.arpa. zone(s) under the service account that will be used by the neutron extension to create the PTR records. This of course has the downside of not allowing the projects to see their PTR records in designate (they are owned by the service account). We are currently working on "shared zones" [1] which may allow (in the future) a neutron extension to "share" a classless PTR zone with the project. I have started a write up of this use case [2] and I am planning to propose a summit talk covering "shared zones" and classless PTR delegation. There has been interest from multiple clouds for this feature. Michael [1] https://review.opendev.org/c/openstack/designate/+/726334 [2] https://review.opendev.org/c/openstack/designate/+/856866 On Thu, Dec 15, 2022 at 2:23 AM Thomas Goirand wrote: > > Hi, > > We implemented this scenario for our public cloud: > https://docs.openstack.org/neutron/latest/admin/config-dns-int-ext-serv.html#use-case-3b-the-dns-domain-ports-extension > > This is currently in production in beta-mode at Infomaniak's public cloud. > > We did that, because we want our customers to be able to set any domain > name or PTR for the IPs they own. > > However, we discovered that there's no restriction on what zone > customers can set. For example, if customer A owns the IP 203.0.113.9, > customer B can do "openstack zone create 9.113.0.203.in-addr.arpa.", > preventing customer A to set their PTR record. > > Is there currently a way to fix this? Or maybe a spec to implement the > correct restrictions? What is the way to fix this problem in a public > cloud env? > > Cheers, > > Thomas Goirand (zigo) > From zakhar at gmail.com Fri Jan 6 08:52:36 2023 From: zakhar at gmail.com (Zakhar Kirpichenko) Date: Fri, 6 Jan 2023 10:52:36 +0200 Subject: Nova libvirt/kvm sound device In-Reply-To: References: Message-ID: Hi Stephen, Many thanks for your response! I figured that it wasn't a simple task to add sound support to Nova. 
The reason for having sound support is quite simple though: we'd like to provide sound to VDI instances, which is trivial without Nova. /Z On Tue, 3 Jan 2023 at 16:58, Stephen Finucane wrote: > On Tue, 2022-12-27 at 17:40 +0200, Zakhar Kirpichenko wrote: > > Hi! > > > > I'd like to have the following configuration added to every guest on a > > specific host managed by Nova and libvirt/kvm: > > > > > >
> function='0x0'/> > > > > > > When I add the device manually to instance xml, it works as intended but > the > > instance configuration gets overwritten on instance stop/start or hard > reboot > > via Nova. > > Modifying libvirt's XML behind nova's back is a big no-no. You break the > contract between the two. If you wanted audio support, you'd need to add > this > support to nova itself. This would require a spec, quite a bit of coding, > and > would not be backported. tbh, it's also hard to see this being prioritized > since > audio support for cloud-based VMs is a rather unusual request. If you > wanted to > persue this approach though, feel free to reach out on IRC > (#openstack-nova on > OFTC) and we can guide you. > > Stephen > > > > > What is the currently supported / proper way to add a virtual sound > device > > without having to modify libvirt or Nova code? I would appreciate any > advice. > > > > Best regards, > > Zakhar > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at stackhpc.com Fri Jan 6 10:08:16 2023 From: doug at stackhpc.com (Doug Szumski) Date: Fri, 6 Jan 2023 10:08:16 +0000 Subject: [kolla] Propose Bartosz Bezak for core reviewer In-Reply-To: References: Message-ID: <53ffa6c0-affe-7d56-a4c1-beff09e5b63b@stackhpc.com> On 29/12/2022 09:58, Micha? Nasiadka wrote: > Hello Koalas, > > I?d like to propose Bartosz Bezak as a core reviewer for Kolla, Kolla-Ansible, Kayobe and ansible-collection-kolla. A firm +2 from me > > Bartosz has recently went through release preparations and release process itself for all mentioned repositories, has been a great deal of help in meeting the cycle trailing projects deadline. > In addition to that, he?s been the main author of Ubuntu Jammy and EL9 (Rocky Linux 9 to be precise) support in Kayobe for Zed release, as well as fixing various bugs amongst all four repositories. > > Bartosz also brings OVN knowledge, which will make the review process for those patches better (and improve our overall review velocity, which hasn?t been great recently). > > Kind regards, > Michal Nasiadka From mnasiadka at gmail.com Fri Jan 6 12:00:33 2023 From: mnasiadka at gmail.com (=?UTF-8?Q?Micha=C5=82_Nasiadka?=) Date: Fri, 6 Jan 2023 13:00:33 +0100 Subject: [kolla] Propose Bartosz Bezak for core reviewer In-Reply-To: References: Message-ID: I?ve only seen positive reviews, so I went ahead and added Bartosz to proper Gerrit groups. Thanks! Michal W dniu czw., 29.12.2022 o 10:59 Micha? Nasiadka napisa?(a): > Hello Koalas, > > I?d like to propose Bartosz Bezak as a core reviewer for Kolla, > Kolla-Ansible, Kayobe and ansible-collection-kolla. > > Bartosz has recently went through release preparations and release process > itself for all mentioned repositories, has been a great deal of help in > meeting the cycle trailing projects deadline. > In addition to that, he?s been the main author of Ubuntu Jammy and EL9 > (Rocky Linux 9 to be precise) support in Kayobe for Zed release, as well as > fixing various bugs amongst all four repositories. > > Bartosz also brings OVN knowledge, which will make the review process for > those patches better (and improve our overall review velocity, which hasn?t > been great recently). > > Kind regards, > Michal Nasiadka -- Micha? Nasiadka mnasiadka at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosmaita.fossdev at gmail.com Fri Jan 6 13:42:45 2023 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 6 Jan 2023 08:42:45 -0500 Subject: [all][gate][stable] Pinning tox<4 in stable branch testing In-Reply-To: <185801d98b6.110ec217e27667.6370059540731052772@ghanshyammann.com> References: <185801d98b6.110ec217e27667.6370059540731052772@ghanshyammann.com> Message-ID: <6786e99e-a0b9-5ddc-8ffa-e1ceedd787ff@gmail.com> Apologies for top posting, but in addition the change gmann has proposed, I believe that you'll need to change your tox.ini file to pin tox <4. I ran into this working on [0] yesterday, where cinderclient functional tests are devstack-based, and at some point during devstack install someone [1] pip-installs tox unconstrained. The zuul ensure_tox role only ensures that tox is present. The ensure_tox_version var has a slightly misleading name in that it is only used when the role decides it needs to install tox, and then it uses the value of that var; it doesn't ensure that the available tox is that version. I've verified that the 'requires = tox<4' trick in [0] works when the tox being called is >=4 [2]; tox creates a virtualenv in .tox/.tox and installs tox<4 in there, and then runs your testenvs using the tox you required in your tox.ini. cheers, brian [0] https://review.opendev.org/c/openstack/python-cinderclient/+/869263 [1] not naming any names here; also this same situation will happen if the test node image already contains tox4 [2] it works in tox 3 too, sometime after tox 3.18.0. Hopefully it will continue to work in tox 4, though they way they're introducing bad regressions (e.g., [3]), I guess I should say that i've verified that it works through 4.2.2 (4.2.4 was released yesterday). [3] https://github.com/tox-dev/tox/issues/2811 On 1/4/23 11:08 PM, Ghanshyam Mann wrote: > Hello Everyone, > > As you might know, tox4 broke almost all the projects tox.ini and so do master as well > as stable branches gate. On the master, we need to fix it as we should use tox4 in testing. > To fix the stable branches gate, either we need to backport all the fixes which include some > more issues fixes[1] or we can pin the tox<4. > > We discussed it in today's TC meeting and it is better to keep testing the stable branch > with the tox version that they were released with and not with the tox4. Even in future, > there might be the cases where latest tox might introduce more incompatible changes. > By considering all these factors, it is better to pin tox<4 for stable branches (<=stable/zed) > testing. > > I have prepared the patch to pin it in the common job/template, feel free to comment > with your feedback/opinion: > - https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/867849/3 > > also, tested with nova/cinder stable branches: > - https://review.opendev.org/q/I0ca55abf9975c5a3f9713ac5dd5be39083e04554 > - https://review.opendev.org/q/I300e7804a27d08ecd239d1a7faaf2aaf3e07b9ee > > You can also test it in your project stable branches in case any different syntax in tox.ini > causing the tox upgrade to the latest. 
> > [1] https://github.com/tox-dev/tox/issues/2712 > > -gmann > > From thierry at openstack.org Fri Jan 6 14:50:32 2023 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 6 Jan 2023 15:50:32 +0100 Subject: [release] Release countdown for week R-10, Jan 9 - 13 Message-ID: <7f959b75-6284-8e1d-b41d-2cbd33557bda@openstack.org> Development Focus ----------------- We are now past the antelope-2 milestone, and entering the last development phase of the cycle. Teams should be focused on implementing planned work for the cycle. Now is a good time to review those plans and reprioritize anything if needed based on the what progress has been made and what looks realistic to complete in the next few weeks. General Information ------------------- Looking ahead to the end of the release cycle, please be aware of the feature freeze dates. Those vary depending on deliverable type: * General libraries (except client libraries) need to have their last feature release before Non-client library freeze (February 9). Their stable branches are cut early. * Client libraries (think python-*client libraries) need to have their last feature release before Client library freeze (February 16) * Deliverables following a cycle-with-rc model (that would be most services) observe a Feature freeze on that same date, February 16. Any feature addition beyond that date should be discussed on the mailing-list and get PTL approval. After feature freeze, cycle-with-rc deliverables need to produce a first release candidate (and a stable branch) before RC1 deadline (March 2) * Deliverables following cycle-with-intermediary model can release as necessary, but in all cases before Final RC deadline (March 16) Upcoming Deadlines & Dates -------------------------- Non-client library freeze: February 9 (R-6 week) Client library freeze: February 16 (R-5 week) antelope-3 milestone: February 16 (R-5 week) Final 2023.1 Antelope release: March 22nd, 2023 -- Thierry Carrez (ttx) From fungi at yuggoth.org Fri Jan 6 14:49:43 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 6 Jan 2023 14:49:43 +0000 Subject: [all][gate][stable] Pinning tox<4 in stable branch testing In-Reply-To: <6786e99e-a0b9-5ddc-8ffa-e1ceedd787ff@gmail.com> References: <185801d98b6.110ec217e27667.6370059540731052772@ghanshyammann.com> <6786e99e-a0b9-5ddc-8ffa-e1ceedd787ff@gmail.com> Message-ID: <20230106144942.vxlj37az6ibdqi6d@yuggoth.org> On 2023-01-06 08:42:45 -0500 (-0500), Brian Rosmaita wrote: [...] > The zuul ensure_tox role only ensures that tox is present. The > ensure_tox_version var has a slightly misleading name in that it is only > used when the role decides it needs to install tox, and then it uses the > value of that var; it doesn't ensure that the available tox is that version. [...] And even if it did, it would only be able to do so at the point at which it was invoked, so if something reinstalled/upgraded tox after that point it would still be different than whatever you set in ensure_tox_version. Still, it might be possible to make ensure_tox_version enforce a specific version if set which would catch circumstances like the one you observed where something running in the job before the ensure-tox role is invoked installed a different version globally. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From bgibizer at redhat.com Fri Jan 6 14:53:51 2023 From: bgibizer at redhat.com (Balazs Gibizer) Date: Fri, 06 Jan 2023 15:53:51 +0100 Subject: [cinder] Unit test failures under Python 3.11 - mocks can no longer be provided as the specs for other Mocks In-Reply-To: References: Message-ID: On Wed, Jan 4 2023 at 09:26:10 AM +00:00:00, Sofia Enriquez wrote: > Hi, > > Since python3.11 mocks can no longer be provided as the specs for > other Mocks. As a result, an already-mocked object cannot be passed > to mock.Mock(). This can uncover bugs in tests since these > Mock-derived Mocks will always pass certain tests (e.g. isinstance) > and built-in assert functions (e.g. assert_called_once_with) will > unconditionally pass.[1] > > There's a bug report to track this issue in Cinder [2] but I think > this may affect other projects too. > > I've reproduce the error and most drivers fail with: > ``` > unittest.mock.InvalidSpecError: Cannot spec a Mock object. > [object=] > ``` > Nova went through this and removed double mocking during Zed. Here are the patches: https://review.opendev.org/q/topic:unittest.mock+%2522double+mocking%2522 Maybe you can use it for ideas how to deal with different double mocking scenarios: Cheers, gibi > Cheers, > Sofia > > [1] > [2] > -- > Sof?a Enriquez > > she/her > > Software Engineer > > Red Hat PnT > > IRC: @enriquetaso > > @RedHat Red Hat > Red Hat > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bgibizer at redhat.com Fri Jan 6 15:03:52 2023 From: bgibizer at redhat.com (Balazs Gibizer) Date: Fri, 06 Jan 2023 16:03:52 +0100 Subject: [ci][all]tox.tox_env.python.api.NoInterpreter - gate is blocked Message-ID: Hi, It seems that all the tox based jobs are blocked right now on master with the following error: tox.tox_env.python.api.NoInterpreter: could not find python interpreter matching any of the specs It is due to a recent bugfix in tox that uncovered another bug[1]. There is a fix proposed in tox [2]. [1] [2] -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Jan 6 16:35:56 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 6 Jan 2023 16:35:56 +0000 Subject: [ci][all]tox.tox_env.python.api.NoInterpreter - gate is blocked In-Reply-To: References: Message-ID: <20230106163555.j56qltntu32vterw@yuggoth.org> On 2023-01-06 16:03:52 +0100 (+0100), Balazs Gibizer wrote: [...] > There is a fix proposed in tox [...] Which is now merged, so should appear in the next release (barring any reverts). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Fri Jan 6 16:36:36 2023 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 06 Jan 2023 08:36:36 -0800 Subject: [all][gate][stable] Pinning tox<4 in stable branch testing In-Reply-To: <20230106144942.vxlj37az6ibdqi6d@yuggoth.org> References: <185801d98b6.110ec217e27667.6370059540731052772@ghanshyammann.com> <6786e99e-a0b9-5ddc-8ffa-e1ceedd787ff@gmail.com> <20230106144942.vxlj37az6ibdqi6d@yuggoth.org> Message-ID: <6c3b57f3-2225-43e9-8023-a2e557dddf0f@app.fastmail.com> On Fri, Jan 6, 2023, at 6:49 AM, Jeremy Stanley wrote: > On 2023-01-06 08:42:45 -0500 (-0500), Brian Rosmaita wrote: > [...] >> The zuul ensure_tox role only ensures that tox is present. 
The >> ensure_tox_version var has a slightly misleading name in that it is only >> used when the role decides it needs to install tox, and then it uses the >> value of that var; it doesn't ensure that the available tox is that version. > [...] > > And even if it did, it would only be able to do so at the point at > which it was invoked, so if something reinstalled/upgraded tox after > that point it would still be different than whatever you set in > ensure_tox_version. Still, it might be possible to make > ensure_tox_version enforce a specific version if set which would > catch circumstances like the one you observed where something > running in the job before the ensure-tox role is invoked installed a > different version globally. Doing so would just mask that you've improperly installed tox elsewhere (a bug itself). I think the current behavior is correct because it isn't letting us get away with that. Elsewhere we have tools (devstack) that install tempest at least 3 redundant times for every tempest job. Its a waste of effort and we shouldn't make it easier for ourselves to fall into those traps. > -- > Jeremy Stanley > > Attachments: > * signature.asc From gmann at ghanshyammann.com Fri Jan 6 18:41:43 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 06 Jan 2023 10:41:43 -0800 Subject: [all][gate][stable] Pinning tox<4 in stable branch testing In-Reply-To: <6786e99e-a0b9-5ddc-8ffa-e1ceedd787ff@gmail.com> References: <185801d98b6.110ec217e27667.6370059540731052772@ghanshyammann.com> <6786e99e-a0b9-5ddc-8ffa-e1ceedd787ff@gmail.com> Message-ID: <18588638aa6.be66d732145969.2670297230032600681@ghanshyammann.com> ---- On Fri, 06 Jan 2023 05:42:45 -0800 Brian Rosmaita wrote --- > Apologies for top posting, but in addition the change gmann has > proposed, I believe that you'll need to change your tox.ini file to pin > tox <4. I ran into this working on [0] yesterday, where cinderclient > functional tests are devstack-based, and at some point during devstack > install someone [1] pip-installs tox unconstrained. > > The zuul ensure_tox role only ensures that tox is present. The > ensure_tox_version var has a slightly misleading name in that it is only > used when the role decides it needs to install tox, and then it uses the > value of that var; it doesn't ensure that the available tox is that version. > > I've verified that the 'requires = tox<4' trick in [0] works when the > tox being called is >=4 [2]; tox creates a virtualenv in .tox/.tox and > installs tox<4 in there, and then runs your testenvs using the tox you > required in your tox.ini. I saw in the log that it is using the ensure-tox role from devstack/playbooks/tox/run-both.yaml - https://zuul.opendev.org/t/openstack/build/c957db6323dc4b42bee07f6b709fb3ad/log/job-output.txt#1182 Which is run after pre-yaml where we pinned tox<4 via ensure_tox_version but missed doing it in run-both.yaml. Testing it by pinning in run-both.yaml also. run-both - https://review.opendev.org/c/openstack/python-cinderclient/+/869494 -gmann > > cheers, > brian > > [0] https://review.opendev.org/c/openstack/python-cinderclient/+/869263 > [1] not naming any names here; also this same situation will happen if > the test node image already contains tox4 > [2] it works in tox 3 too, sometime after tox 3.18.0. Hopefully it will > continue to work in tox 4, though they way they're introducing bad > regressions (e.g., [3]), I guess I should say that i've verified that it > works through 4.2.2 (4.2.4 was released yesterday). 
> [3] https://github.com/tox-dev/tox/issues/2811 > > On 1/4/23 11:08 PM, Ghanshyam Mann wrote: > > Hello Everyone, > > > > As you might know, tox4 broke almost all the projects tox.ini and so do master as well > > as stable branches gate. On the master, we need to fix it as we should use tox4 in testing. > > To fix the stable branches gate, either we need to backport all the fixes which include some > > more issues fixes[1] or we can pin the tox<4. > > > > We discussed it in today's TC meeting and it is better to keep testing the stable branch > > with the tox version that they were released with and not with the tox4. Even in future, > > there might be the cases where latest tox might introduce more incompatible changes. > > By considering all these factors, it is better to pin tox<4 for stable branches (<=stable/zed) > > testing. > > > > I have prepared the patch to pin it in the common job/template, feel free to comment > > with your feedback/opinion: > > - https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/867849/3 > > > > also, tested with nova/cinder stable branches: > > - https://review.opendev.org/q/I0ca55abf9975c5a3f9713ac5dd5be39083e04554 > > - https://review.opendev.org/q/I300e7804a27d08ecd239d1a7faaf2aaf3e07b9ee > > > > You can also test it in your project stable branches in case any different syntax in tox.ini > > causing the tox upgrade to the latest. > > > > [1] https://github.com/tox-dev/tox/issues/2712 > > > > -gmann > > > > > > > From helena at openstack.org Fri Jan 6 19:07:54 2023 From: helena at openstack.org (Helena Spease) Date: Fri, 6 Jan 2023 13:07:54 -0600 Subject: The OpenInfra Summit CFP is closing soon! Message-ID: <17669EA6-4CD3-4B3E-A980-B6572EF25A59@openstack.org> Hi Everyone! The CFP for the 2023 OpenInfra Summit (June 13-15, 2023) is closing in just a few days[1]! Check out the full list of tracks and submit a talk on your topic of expertise [2]. The CFP closes January 10, 2023, at 11:59 p.m. PT. See what that is in your timezone [3] We are also now accepting submissions for Forum sessions [4]! Looking for other resources? Find information on registration, sponsorships, travel support and visa requests at https://openinfra.dev/summit/ If you have any questions feel free to reach out :) Cheers, Helena [1] https://cfp.openinfra.dev/app/vancouver-2023/19/presentations [2] https://openinfra.dev/summit/vancouver-2023/summit-tracks/ [3] https://www.timeanddate.com/worldclock/fixedtime.html?msg=2023+OpenInfra+Summit+CFP+Closes&iso=20230110T2359&p1=137 [4] https://cfp.openinfra.dev/app/vancouver-2023/20/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Fri Jan 6 21:57:21 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 06 Jan 2023 13:57:21 -0800 Subject: [tc][mistral][release] Propose to deprecate Mistral In-Reply-To: <184f3100360.c0a0b1db544780.2572463728293836752@ghanshyammann.com> References: <184f3100360.c0a0b1db544780.2572463728293836752@ghanshyammann.com> Message-ID: <1858916a8c6.faed9be4149277.6084514264883190725@ghanshyammann.com> ---- On Thu, 08 Dec 2022 10:47:03 -0800 Ghanshyam Mann wrote --- > ---- On Thu, 08 Dec 2022 01:22:43 -0800 Oleg Ovcharuk wrote --- > > Hi everyone, I'm one of mistral core team members.As was mentioned,?mistral is?also used by other companies (Netcracker).?As the community was kinda quiet, we were focused on our personal mistral fork, also we had not enough human resources to keep it up with upstream.But now things have changed - as you may notice by gerrit, we started to push our bugfixes/improvements to upstream and we have a huge list to complete. > > It's an amazing coincidence that the topic about deprecating mistral was started *after* two companies that are interested in mistral returned for community work. > > I'm also happy that someone is ready to take responsibility to perform PTL stuff and we will help them as much as we can.Let me know if you have any questions. > > Thanks, Oleg for your response. It is good to see more than one companies are interested to maintain the Mistral. > As the next step, let's do this: > > - Do the required work to release it. You can talk to the release team about pending things. Accordingly, we can decide on this patch https://review.opendev.org/c/openstack/governance/+/866562 > - Oleg or any other existing core member can onboard Axel and Arnaud in the maintainer list (what all resp you would like to give it up to the existing core members of Mistral). > - As Mistral is in the DPL model now, if you guys wanted to be moved to the PTL model, you can do it any time. These are the requirement[1] and example[2] > > Ping us in #openstack-tc IRC channel for any query/help you need. Hi Mistral team, As the Mistral gate is fixed now and things are better. Is there any plan to release it for m-2. Accordingly, we can decide on the - https://review.opendev.org/c/openstack/governance/+/866562 -gmann > > [1] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html#process-for-opting-in-to-distributed-leadership > [2] https://review.opendev.org/c/openstack/governance/+/829037 > > -gmann > > > Best regards,Oleg Ovcharuk > > ??, 8 ???. 2022 ?. ? 11:46, Axel Vanzaghi axel.vanzaghi at ovhcloud.com>: > > Hello, > > > > > > We (me and my employer) are really committing to do this work, it has been discussed and we agreed on this internally. > > > > > > Thing is, this project is currently vital for us so we will maintain it, either we do it with the community or not. We know it's a serious amount of work, more than if we keep it for us, but we think it would be better for everyone if we give back to the community. > > > > > > We also know we are not the only ones using it, someone has seen the discussion about its deprecation and us proposing to maintain it [1], and sent us a mail to tell us he has use cases, and even features and improvements. I'll ask him to stand up in this thread. 
> > > > > > Regards, > > Axel > > > > > > [1] https://review.opendev.org/c/openstack/governance/+/866562 > > > > > > From: Jay Faulkner jay at gr-oss.io> > > Sent: Wednesday, December 7, 2022 5:06:14 PM > > To: Arnaud Morin > > Cc: El?d Ill?s; openstack-discuss at lists.openstack.org > > Subject: Re: [tc][mistral][release] Propose to deprecate Mistral?Maintaining an entire project, especially catching it up after it's been neglected, is a serious amount of work. Are you (and your employer) committing to do this work? Are there any other interested parties that could keep Mistral maintained if you were to move on? Just wanting to ensure we're going to have the project setup for long-term support, given we promise each release will be supported for years to come. > > > > Thanks,Jay > > > > On Wed, Dec 7, 2022 at 5:34 AM Arnaud Morin arnaud.morin at gmail.com> wrote: > > Hey all, > > > > With Axel [1], we propose to maintain the Mistral development. > > > > This is new to us, so we will need help from the community but we really > > want to be fully commited so mistral will continue beeing maintained > > under openinfra. > > > > If you think we are too late for antelope, this can maybe happen for the > > next release? > > > > Cheers, > > > > [1] https://lists.openstack.org/pipermail/openstack-discuss/2022-December/031417.html > > > > On 05.12.22 - 10:52, El?d Ill?s wrote: > > > Hi, > > > > > > Mistral projects are unfortunately not actively maintained and caused hard times in latest official series releases for release management team. Thus we discussed this and decided to propose to deprecate Mistral [1] to avoid broken releases and last minute debugging of gate issues (which usually fall to release management team), and remove mistral projects from 2023.1 Antelope release [2]. > > > > > > We would like to ask @TC to evaluate the situation and review our patches (deprecation [1] + removal from the official release [2]). > > > > > > Thanks in advance, > > > > > > El?d Ill?s > > > irc: elodilles @ #openstack-release > > > > > > [1] https://review.opendev.org/c/openstack/governance/+/866562 > > > [2] https://review.opendev.org/c/openstack/releases/+/865577 > > > > > > From gmann at ghanshyammann.com Fri Jan 6 23:44:57 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 06 Jan 2023 15:44:57 -0800 Subject: [all][tc] What's happening in Technical Committee: summary 2023 Jan 06: Reading: 5 min Message-ID: <185897929d6.e8fc0861150666.1365770702271302295@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on Jan 04. Most of the meeting discussions are summarized in this email. Meeting recordings are available @ https://www.youtube.com/watch?v=4vR7iStJZe0 and summary logs are available @ https://meetings.opendev.org/meetings/tc/2023/tc.2023-01-04-16.00.log.html * The next TC weekly meeting will be on Jan 11 Wed at 16:00 UTC, Feel free to add the topic to the agenda[1] by Jan 10. 2. What we completed this week: ========================= * Nothing specific for this week. 3. Activities In progress: ================== TC Tracker for the 2023.1 cycle ------------------------------------- * Current cycle working items and their progress are present in the 2023.1 tracker etherpad[2]. Open Reviews ----------------- * Two open reviews for ongoing activities[3]. 
Cleanup of PyPI maintainer list for OpenStack Projects ---------------------------------------------------------------- Clarkb reported this last week and we had an initial discussion in the TC meeting. xstatic-font-awesome repo which is under the Horizon project has non-OpenStack core as maintainers in PyPI and recently a new maintainer is added[4] without going through or knowing by the OpenStack Horizon PTL. While checking other deliverables, I found there are other maintainers present in many of the deliverables. A few examples are https://pypi.org/project/murano/ https://pypi.org/project/glance/ To avoid two sets of maintainers for OpenStack deliverables (one OpenStack and one external), we should clean this up. 'openstackci' is maintainers on the PyPI side which can be kept for all the OpenStack deliverables. If any external maintainers want to maintain it external to OpenStack and OpenStack project is ok for that then we can discuss that option also. For example, xstatic-* repo can be a good example to handover to the external maintainer if the Horizon team agrees. We will discuss it in the next TC meeting also. Meanwhile, I will reach out to the Horizon team and find more about the xstatic-* repos. IMPORTANT: Tox 4 failure ------------------------------- As you might have seen the gate failure due to tox4 (even the latest release too), we discussed in TC meeting[5] and after testing[6] we pinned tox<4 for the stable branch. Pinning is done in the common job in openstack-zuul-jobs repo [7]. But there are some cases where this global pinning is not enough and the latest tox is installed by pip. For example, Brian reported one case[8]. In that case, you can explicitly pin tox in the tox.ini file otherwise global pinning should work fine. For the master, we need to fix the failure. Hoping we do not get more new failures on every new tox release (for example, the missing interpreter issue which is fixed by stephenfin today). Mistral release and more maintainers ------------------------------------------- Mistral gate is green and things are merging there[9] so I feel it is ready for release. But I have sent an email reply on ML[10] to know more about it from the Mistral team. Adjutant release and more maintainers ---------------------------------------------- Adjutant is in the Inactive project list. But I have seen Dale (PTL) has fixed the gate and merged the things[11]. I will ping Dale about the release status and accordingly we can decide on its status change if needed. Project updates ------------------- * Add Cinder Huawei charm[12] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[13]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15:00 UTC [14] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. 
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions [2] https://etherpad.opendev.org/p/tc-2023.1-tracker [3] https://review.opendev.org/q/projects:openstack/governance+status:open [4] https://github.com/openstack/xstatic-font-awesome/pull/2 [5] https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031668.html [6] https://review.opendev.org/q/topic:tox4-pin-testing [7] https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/867849/4 [8] https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031678.html [9] https://review.opendev.org/q/project:openstack/mistral [10] https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031687.html [11] https://review.opendev.org/q/project:openstack/adjutant [12] https://review.opendev.org/c/openstack/governance/+/867588 [13] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [14] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From gmann at ghanshyammann.com Fri Jan 6 23:56:03 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 06 Jan 2023 15:56:03 -0800 Subject: [all][gate][stable] Pinning tox<4 in stable branch testing In-Reply-To: <18588638aa6.be66d732145969.2670297230032600681@ghanshyammann.com> References: <185801d98b6.110ec217e27667.6370059540731052772@ghanshyammann.com> <6786e99e-a0b9-5ddc-8ffa-e1ceedd787ff@gmail.com> <18588638aa6.be66d732145969.2670297230032600681@ghanshyammann.com> Message-ID: <1858983546e.12a5124a3150802.4554188631529639561@ghanshyammann.com> ---- On Fri, 06 Jan 2023 10:41:43 -0800 Ghanshyam Mann wrote --- > ---- On Fri, 06 Jan 2023 05:42:45 -0800 Brian Rosmaita wrote --- > > Apologies for top posting, but in addition the change gmann has > > proposed, I believe that you'll need to change your tox.ini file to pin > > tox <4. I ran into this working on [0] yesterday, where cinderclient > > functional tests are devstack-based, and at some point during devstack > > install someone [1] pip-installs tox unconstrained. > > > > The zuul ensure_tox role only ensures that tox is present. The > > ensure_tox_version var has a slightly misleading name in that it is only > > used when the role decides it needs to install tox, and then it uses the > > value of that var; it doesn't ensure that the available tox is that version. > > > > I've verified that the 'requires = tox<4' trick in [0] works when the > > tox being called is >=4 [2]; tox creates a virtualenv in .tox/.tox and > > installs tox<4 in there, and then runs your testenvs using the tox you > > required in your tox.ini. > > I saw in the log that it is using the ensure-tox role from devstack/playbooks/tox/run-both.yaml > - https://zuul.opendev.org/t/openstack/build/c957db6323dc4b42bee07f6b709fb3ad/log/job-output.txt#1182 > > Which is run after pre-yaml where we pinned tox<4 via ensure_tox_version but missed > doing it in run-both.yaml. Testing it by pinning in run-both.yaml also. Pinning in run-both.yaml playbook did not fix the python-cinderclient issue and pinning tox<4 in tox.ini is the way forward for this case. -gmann > > - https://review.opendev.org/c/openstack/python-cinderclient/+/869494 > > -gmann > > > > > cheers, > > brian > > > > [0] https://review.opendev.org/c/openstack/python-cinderclient/+/869263 > > [1] not naming any names here; also this same situation will happen if > > the test node image already contains tox4 > > [2] it works in tox 3 too, sometime after tox 3.18.0. 
Hopefully it will > > continue to work in tox 4, though they way they're introducing bad > > regressions (e.g., [3]), I guess I should say that i've verified that it > > works through 4.2.2 (4.2.4 was released yesterday). > > [3] https://github.com/tox-dev/tox/issues/2811 > > > > On 1/4/23 11:08 PM, Ghanshyam Mann wrote: > > > Hello Everyone, > > > > > > As you might know, tox4 broke almost all the projects tox.ini and so do master as well > > > as stable branches gate. On the master, we need to fix it as we should use tox4 in testing. > > > To fix the stable branches gate, either we need to backport all the fixes which include some > > > more issues fixes[1] or we can pin the tox<4. > > > > > > We discussed it in today's TC meeting and it is better to keep testing the stable branch > > > with the tox version that they were released with and not with the tox4. Even in future, > > > there might be the cases where latest tox might introduce more incompatible changes. > > > By considering all these factors, it is better to pin tox<4 for stable branches (<=stable/zed) > > > testing. > > > > > > I have prepared the patch to pin it in the common job/template, feel free to comment > > > with your feedback/opinion: > > > - https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/867849/3 > > > > > > also, tested with nova/cinder stable branches: > > > - https://review.opendev.org/q/I0ca55abf9975c5a3f9713ac5dd5be39083e04554 > > > - https://review.opendev.org/q/I300e7804a27d08ecd239d1a7faaf2aaf3e07b9ee > > > > > > You can also test it in your project stable branches in case any different syntax in tox.ini > > > causing the tox upgrade to the latest. > > > > > > [1] https://github.com/tox-dev/tox/issues/2712 > > > > > > -gmann > > > > > > > > > > > > > > From cboylan at sapwetik.org Sat Jan 7 02:12:15 2023 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 06 Jan 2023 18:12:15 -0800 Subject: [all][gate][stable] Pinning tox<4 in stable branch testing In-Reply-To: <1858983546e.12a5124a3150802.4554188631529639561@ghanshyammann.com> References: <185801d98b6.110ec217e27667.6370059540731052772@ghanshyammann.com> <6786e99e-a0b9-5ddc-8ffa-e1ceedd787ff@gmail.com> <18588638aa6.be66d732145969.2670297230032600681@ghanshyammann.com> <1858983546e.12a5124a3150802.4554188631529639561@ghanshyammann.com> Message-ID: On Fri, Jan 6, 2023, at 3:56 PM, Ghanshyam Mann wrote: > ---- On Fri, 06 Jan 2023 10:41:43 -0800 Ghanshyam Mann wrote --- > > ---- On Fri, 06 Jan 2023 05:42:45 -0800 Brian Rosmaita wrote --- > > > Apologies for top posting, but in addition the change gmann has > > > proposed, I believe that you'll need to change your tox.ini file > to pin > > > tox <4. I ran into this working on [0] yesterday, where > cinderclient > > > functional tests are devstack-based, and at some point during > devstack > > > install someone [1] pip-installs tox unconstrained. > > > > > > The zuul ensure_tox role only ensures that tox is present. The > > > ensure_tox_version var has a slightly misleading name in that it > is only > > > used when the role decides it needs to install tox, and then it > uses the > > > value of that var; it doesn't ensure that the available tox is > that version. > > > > > > I've verified that the 'requires = tox<4' trick in [0] works when > the > > > tox being called is >=4 [2]; tox creates a virtualenv in > .tox/.tox and > > > installs tox<4 in there, and then runs your testenvs using the > tox you > > > required in your tox.ini. 
> > > > I saw in the log that it is using the ensure-tox role from > devstack/playbooks/tox/run-both.yaml > > - > https://zuul.opendev.org/t/openstack/build/c957db6323dc4b42bee07f6b709fb3ad/log/job-output.txt#1182 > > > > Which is run after pre-yaml where we pinned tox<4 via > ensure_tox_version but missed > > doing it in run-both.yaml. Testing it by pinning in run-both.yaml > also. > > Pinning in run-both.yaml playbook did not fix the python-cinderclient > issue and pinning tox<4 in > tox.ini is the way forward for this case. I don't think this is a proper fix. This goes back to the concern I already mentioned on this thread. The correct way to fix this is to ensure we aren't installing tox multiple times with the final install being the version we want. We should ensure we install it once with the correct version. The reason the python-cinderclient change failed is that devstack is blindly installing tox here: https://opendev.org/openstack/devstack/src/branch/master/lib/neutron_plugins/ovn_agent#L369-L370 which is installing latest tox per this log: https://zuul.opendev.org/t/openstack/build/961c429cd9fc4d649e8714aba67f052d/log/job-output.txt#9211-9279. The problem with adding requires = tox<4 in tox.ini is that this will cause tox to install a new tox in a new venv unnecessarily simply to run the target under an older tox. If we fix devstack instead then we can install tox once and everything should work. > > -gmann From nguyenhuukhoinw at gmail.com Sun Jan 8 04:12:32 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Sun, 8 Jan 2023 11:12:32 +0700 Subject: [openstack][cinder] Assign each storage backend to each AZ In-Reply-To: References: Message-ID: Hello guys, I can control my logic that will separating AZ to Cinder for each Department by using @Alan Bishop method, but we need create a new volume and adding metadata as key="RESKEY:availability_zones" and value="az_name"; and key="volume_backend_name" and value="your backend in AZ" Thanks All for helping. Nguyen Huu Khoi On Wed, Jan 4, 2023 at 11:32 PM Nguy?n H?u Kh?i wrote: > Thanks, I'll check them out. > > On Wed, Jan 4, 2023, 8:51 PM Alan Bishop wrote: > >> >> >> On Wed, Jan 4, 2023 at 1:09 AM Sa Pham wrote: >> >>> You have to run cinder-volume service for each AZ. And in your >>> configuration of cinder-volume you need to specify >>> storage_availability_zone for that zone. >>> >> >> Alternatively, you can run a single cinder-volume service with multiple >> backends, and use the backend_availability_zone option [1] to specify each >> backend's AZ. The backend_availability_zone overrides the >> storage_availability_zone for that backend. >> >> [1] >> https://github.com/openstack/cinder/blob/d55a004e524f752c228a4a7bda5d24d4223325de/cinder/volume/driver.py#L239 >> >> Alan >> >> >>> With nova-compute, you have to create a host aggregate with an >>> availability zone option for these compute nodes. >>> >>> >>> >>> On Wed, Jan 4, 2023 at 3:42 PM Nguy?n H?u Kh?i < >>> nguyenhuukhoinw at gmail.com> wrote: >>> >>>> Ok, thanks for the clarification. :) >>>> Nguyen Huu Khoi >>>> >>>> >>>> On Wed, Jan 4, 2023 at 3:03 PM Rajat Dhasmana >>>> wrote: >>>> >>>>> >>>>> >>>>> On Wed, Jan 4, 2023 at 1:01 PM Nguy?n H?u Kh?i < >>>>> nguyenhuukhoinw at gmail.com> wrote: >>>>> >>>>>> Thanks for the answer. >>>>>> But I cannot find the way to configure the storage backend per AZ, >>>>>> Would you give me some suggestions? >>>>>> >>>>> >>>>> It totally depends on the deployment method you're using. 
It could be >>>>> either tripleo, ansible etc and every deployment method should provide a >>>>> way to set an availability zone for a volume backend. I'm not a deployment >>>>> expert but a specific deployment team needs to be consulted for the same. >>>>> >>>>> >>>>>> Nguyen Huu Khoi >>>>>> >>>>>> >>>>>> On Wed, Jan 4, 2023 at 1:53 PM Rajat Dhasmana >>>>>> wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> From the description, I'm assuming the instances will be boot from >>>>>>> volume. In that case, you will need to create a volume type for each >>>>>>> backend and you can use 'extra_specs' properties in the volume type to >>>>>>> assign a volume type to a particular AZ. In this case, if you're already >>>>>>> creating one backend per AZ then a volume type linked to that backend >>>>>>> should be good. >>>>>>> Now you will need to create a bootable volume and launch an instance >>>>>>> with it. Again, the instance should be launched in the AZ as used in the >>>>>>> volume type to support your use case. >>>>>>> Also if you want to restrict volumes of a particular AZ to be >>>>>>> attached to the instance of the same AZ, you can use the config option >>>>>>> *cross_az_attach*[1] which will allow/disallow cross AZ attachments. >>>>>>> Hope that helps. >>>>>>> >>>>>>> [1] >>>>>>> https://docs.openstack.org/nova/latest/configuration/config.html#cinder.cross_az_attach >>>>>>> >>>>>>> Thanks >>>>>>> Rajat Dhasmana >>>>>>> >>>>>>> On Wed, Jan 4, 2023 at 7:31 AM Nguy?n H?u Kh?i < >>>>>>> nguyenhuukhoinw at gmail.com> wrote: >>>>>>> >>>>>>>> Hello guys. >>>>>>>> I took time to search for this question but I can't find the answer. >>>>>>>> >>>>>>>> I have an Openstack private cloud and I use an AZ to a department. >>>>>>>> For example, >>>>>>>> AZ-IT for IT department >>>>>>>> AZ-Sale for Sale department... >>>>>>>> >>>>>>>> I will prepare 2 storage backends for each AZ. >>>>>>>> >>>>>>>> My goal is that when users launch an instance by choosing AZ then >>>>>>>> It will use only the backend for this AZ. >>>>>>>> >>>>>>>> Would Openstack support my goal? >>>>>>>> >>>>>>>> Thanks for reading my email. >>>>>>>> >>>>>>>> Nguyen Huu Khoi >>>>>>>> >>>>>>> >>> >>> -- >>> Sa Pham Dang >>> Skype: great_bn >>> Phone/Telegram: 0986.849.582 >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincentlee676 at gmail.com Sun Jan 8 05:30:37 2023 From: vincentlee676 at gmail.com (vincent lee) Date: Sat, 7 Jan 2023 23:30:37 -0600 Subject: Accessing databases of one OpenStack component from another Message-ID: Hi all, I have a working OpenStack in the yoga version and am trying to customize the zun component. In terms of deployment, I am using Kolla-ansible to deploy the OpenStack. I am trying to access blazar databases from the zun container. However, I have no clue of how the actual flow goes. I hope to receive some directions or suggestions to get me started. Best, Vincent -------------- next part -------------- An HTML attachment was scrubbed... URL: From amonster369 at gmail.com Sun Jan 8 07:48:27 2023 From: amonster369 at gmail.com (A Monster) Date: Sun, 8 Jan 2023 08:48:27 +0100 Subject: add a new service to a pre-deployed openstack private cloud Message-ID: I've done an openstack deployment using kolla ansible, and after deploying I want to add new services but without having to redo the deployment from scratch in order to keep both the configuration and the data already in use, is this doable ? as I couldn't find anything in the kolla-ansible docs. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Sun Jan 8 08:15:12 2023 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Sun, 8 Jan 2023 09:15:12 +0100 Subject: add a new service to a pre-deployed openstack private cloud In-Reply-To: References: Message-ID: Hi, Sure. It is doable. Just make your changes to globals.yml, add the required config files and run e.g. for MAGNUM: kolla-ansible -i deploy --tags common,horizon,magnum https://docs.openstack.org/kolla-ansible/latest/reference/containers/magnum-guide.html? In that case deploy will not re-deploy your cluster, but pull in the needed containers for magnum and configure it. > On 8. Jan 2023, at 08:48, A Monster wrote: > > I've done an openstack deployment using kolla ansible, and after deploying I want to add new services but without having to redo the deployment from scratch in order to keep both the configuration and the data already in use, > is this doable ? as I couldn't find anything in the kolla-ansible docs. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: favicon.ico Type: image/vnd.microsoft.icon Size: 338 bytes Desc: not available URL: From oliver.weinmann at me.com Sun Jan 8 08:21:52 2023 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Sun, 8 Jan 2023 09:21:52 +0100 Subject: add a new service to a pre-deployed openstack private cloud In-Reply-To: References: Message-ID: You don?t need to make any changes to your inventory files. Just keep using your initial inventory and you will be fine. > On 8. Jan 2023, at 09:19, A Monster wrote: > > Thank you for your quick response, > do I need to change the hosts in my inventory files and keep only the nodes on which these new services will be deployed on, or is it done automatically through the roles defined ? > > On Sun, 8 Jan 2023 at 09:15, Oliver Weinmann > wrote: >> Hi, >> >> Sure. It is doable. Just make your changes to globals.yml, add the required config files and run e.g. for MAGNUM: >> >> kolla-ansible -i deploy --tags common,horizon,magnum >> Magnum - Container cluster service ? kolla-ansible 15.1.0.dev19 documentation >> docs.openstack.org >> >> Magnum - Container cluster service ? kolla-ansible 15.1.0.dev19 documentation >> docs.openstack.org ? >> >> In that case deploy will not re-deploy your cluster, but pull in the needed containers for magnum and configure it. >> >>> On 8. Jan 2023, at 08:48, A Monster > wrote: >>> >>> I've done an openstack deployment using kolla ansible, and after deploying I want to add new services but without having to redo the deployment from scratch in order to keep both the configuration and the data already in use, >>> is this doable ? as I couldn't find anything in the kolla-ansible docs. >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: favicon.ico Type: image/vnd.microsoft.icon Size: 338 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wodel.youchi at gmail.com Sun Jan 8 09:06:40 2023 From: wodel.youchi at gmail.com (wodel youchi) Date: Sun, 8 Jan 2023 10:06:40 +0100 Subject: [Kolla-ansible] upgrade from yoga to zed on Rocky Linux Message-ID: Hi, Reading the kolla documentation, I saw that Yoga is supported on Rocky 8 only and Zed is supported on Rokcy 9 only, how to do the upgrade from Yoga to Zed since we have to do OS upgrade also??? Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnasiadka at gmail.com Sun Jan 8 09:37:49 2023 From: mnasiadka at gmail.com (=?UTF-8?Q?Micha=C5=82_Nasiadka?=) Date: Sun, 8 Jan 2023 10:37:49 +0100 Subject: [Kolla-ansible] upgrade from yoga to zed on Rocky Linux In-Reply-To: References: Message-ID: Hello, We?re working on backporting RL9 support to Yoga, it should show up in coming weeks. Best regards, Michal W dniu niedz., 8.01.2023 o 10:27 wodel youchi napisa?(a): > Hi, > > Reading the kolla documentation, I saw that Yoga is supported on Rocky 8 > only and Zed is supported on Rokcy 9 only, how to do the upgrade from Yoga > to Zed since we have to do OS upgrade also??? > > Regards. > -- Micha? Nasiadka mnasiadka at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Sun Jan 8 10:02:53 2023 From: wodel.youchi at gmail.com (wodel youchi) Date: Sun, 8 Jan 2023 11:02:53 +0100 Subject: [Kolla-ansible] upgrade from yoga to zed on Rocky Linux In-Reply-To: References: Message-ID: Great, thanks! Le dim. 8 janv. 2023 ? 10:38, Micha? Nasiadka a ?crit : > Hello, > > We?re working on backporting RL9 support to Yoga, it should show up in > coming weeks. > > Best regards, > Michal > > W dniu niedz., 8.01.2023 o 10:27 wodel youchi > napisa?(a): > >> Hi, >> >> Reading the kolla documentation, I saw that Yoga is supported on Rocky 8 >> only and Zed is supported on Rokcy 9 only, how to do the upgrade from Yoga >> to Zed since we have to do OS upgrade also??? >> >> Regards. >> > -- > Micha? Nasiadka > mnasiadka at gmail.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sun Jan 8 19:30:52 2023 From: satish.txt at gmail.com (Satish Patel) Date: Sun, 8 Jan 2023 14:30:52 -0500 Subject: [openstack-ansible] git clone fatal: the remote end hung up unexpectedly Message-ID: Folks, Trying to deploy an openstack-ansible Zed release but noticed strange behavior with git clone. failed: [localhost] (item={'name': 'galera_server', 'scm': 'git', 'src': 'https://opendev.org/openstack/openstack-ansible-galera_server', 'version': 'master', 'trackbranch': 'master', 'shallow_since': '2022-12-12'}) => {"ansible_loop_var": "item", "attempts": 2, "changed": false, "cmd": ["/usr/bin/git", "fetch", "--depth", "20", "--force", "origin", "+refs/heads/master:refs/remotes/origin/master"], "item": {"name": "galera_server", "scm": "git", "shallow_since": "2022-12-12", "src": "https://opendev.org/openstack/openstack-ansible-galera_server", "trackbranch": "master", "version": "master"}, "msg": "Failed to download remote objects and refs: fatal: error in object: unshallow e04aeacc58c196c4fb3d49116cb93fa74f7fba31\nfatal: the remote end hung up unexpectedly\n"} Full logs: https://paste.opendev.org/show/bUjuekKPzknE5IC8zbPa/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From noonedeadpunk at gmail.com Sun Jan 8 19:52:03 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sun, 8 Jan 2023 20:52:03 +0100 Subject: [openstack-ansible] git clone fatal: the remote end hung up unexpectedly In-Reply-To: References: Message-ID: Hey, Can you also attach the output of the previous task as well? The error you pasted highly likely related to the depth that is used during clone. Default depth of repos that are cloned is 20. So in case more then 20 commits were made to the repo since the release you are using, git will fall to unhallow. However, parallel git clone should not be affected by that, but I assume it still fails for some reason. But it's hard to say why without output of previous task as well ??, 8 ???. 2023 ?., 20:46 Satish Patel : > Folks, > > Trying to deploy an openstack-ansible Zed release but noticed strange > behavior with git clone. > > failed: [localhost] (item={'name': 'galera_server', 'scm': 'git', 'src': 'https://opendev.org/openstack/openstack-ansible-galera_server', 'version': 'master', 'trackbranch': 'master', 'shallow_since': '2022-12-12'}) => {"ansible_loop_var": "item", "attempts": 2, "changed": false, "cmd": ["/usr/bin/git", "fetch", "--depth", "20", "--force", "origin", "+refs/heads/master:refs/remotes/origin/master"], "item": {"name": "galera_server", "scm": "git", "shallow_since": "2022-12-12", "src": "https://opendev.org/openstack/openstack-ansible-galera_server", "trackbranch": "master", "version": "master"}, "msg": "Failed to download remote objects and refs: fatal: error in object: unshallow e04aeacc58c196c4fb3d49116cb93fa74f7fba31\nfatal: the remote end hung up unexpectedly\n"} > > > Full logs: https://paste.opendev.org/show/bUjuekKPzknE5IC8zbPa/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sun Jan 8 20:09:53 2023 From: satish.txt at gmail.com (Satish Patel) Date: Sun, 8 Jan 2023 15:09:53 -0500 Subject: [openstack-ansible] git clone fatal: the remote end hung up unexpectedly In-Reply-To: References: Message-ID: Hi, Here is the full output of previous task : https://paste.opendev.org/show/bXer4dyXV5911o8aWBrU/ Only Zed has issues but if I switch to yoga/xena all working great! Very curious how zed CI jobs are passing? On Sun, Jan 8, 2023 at 3:04 PM Dmitriy Rabotyagov wrote: > Hey, > > Can you also attach the output of the previous task as well? > > The error you pasted highly likely related to the depth that is used > during clone. Default depth of repos that are cloned is 20. So in case more > then 20 commits were made to the repo since the release you are using, git > will fall to unhallow. > > However, parallel git clone should not be affected by that, but I assume > it still fails for some reason. But it's hard to say why without output of > previous task as well > > ??, 8 ???. 2023 ?., 20:46 Satish Patel : > >> Folks, >> >> Trying to deploy an openstack-ansible Zed release but noticed strange >> behavior with git clone. 
>> >> failed: [localhost] (item={'name': 'galera_server', 'scm': 'git', 'src': 'https://opendev.org/openstack/openstack-ansible-galera_server', 'version': 'master', 'trackbranch': 'master', 'shallow_since': '2022-12-12'}) => {"ansible_loop_var": "item", "attempts": 2, "changed": false, "cmd": ["/usr/bin/git", "fetch", "--depth", "20", "--force", "origin", "+refs/heads/master:refs/remotes/origin/master"], "item": {"name": "galera_server", "scm": "git", "shallow_since": "2022-12-12", "src": "https://opendev.org/openstack/openstack-ansible-galera_server", "trackbranch": "master", "version": "master"}, "msg": "Failed to download remote objects and refs: fatal: error in object: unshallow e04aeacc58c196c4fb3d49116cb93fa74f7fba31\nfatal: the remote end hung up unexpectedly\n"} >> >> >> Full logs: https://paste.opendev.org/show/bUjuekKPzknE5IC8zbPa/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Sun Jan 8 20:42:07 2023 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Sun, 8 Jan 2023 21:42:07 +0100 Subject: [Kolla-ansible] upgrade from yoga to zed on Rocky Linux In-Reply-To: References: Message-ID: Hi, That is a good question. I?m also running yoga on rocky 8 and due to some problems with yoga I would like to upgrade to zed too soon. I have created a very simple staging deployment on a single ESXi host with 3 controllers and 2 compute nodes with the same config that I use in the production cluster. This lets me try the upgrade path. I assume while there is the possibility to upgrade from rocky 8 to 9, I wouldn?t do that. Instead I would do a fresh install of rocky9. I can only think of the docs not being 100% accurate and you can run yoga on rocky9 too. I will give it a try. Cheers, Oliver Von meinem iPhone gesendet > Am 08.01.2023 um 10:25 schrieb wodel youchi : > > ? > Hi, > > Reading the kolla documentation, I saw that Yoga is supported on Rocky 8 only and Zed is supported on Rokcy 9 only, how to do the upgrade from Yoga to Zed since we have to do OS upgrade also??? > > Regards. From nguyenhuukhoinw at gmail.com Mon Jan 9 00:12:34 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Mon, 9 Jan 2023 07:12:34 +0700 Subject: [Nova][Horizon] Message-ID: Hello guys. Is there any way to assign AZ to a specified project? After searching, I cannot find any answer. Example. Sale project will only see Sale AZ to select. Tech project will only see Tech AZ to select Thank you. Regards Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Jan 9 09:21:40 2023 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 9 Jan 2023 10:21:40 +0100 Subject: [largescale-sig] Next meeting: Jan 11, 15utc Message-ID: Hi everyone, The Large Scale SIG will be meeting this Wednesday in #openstack-operators on OFTC IRC, at 15UTC. You can doublecheck how that UTC time translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20230111T15 Feel free to add topics to the agenda: https://etherpad.opendev.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From arnaud.morin at gmail.com Mon Jan 9 09:38:19 2023 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Mon, 9 Jan 2023 09:38:19 +0000 Subject: The OpenInfra Summit CFP is closing soon! 
In-Reply-To: <17669EA6-4CD3-4B3E-A980-B6572EF25A59@openstack.org> References: <17669EA6-4CD3-4B3E-A980-B6572EF25A59@openstack.org> Message-ID: Hey Helena, We are preparing some talk submissions, but it seems the "Social Summary (280 chars)" is not accepting 280 chars, but only 100. Is it normal behavior? Cheers, Arnaud. On 06.01.23 - 13:07, Helena Spease wrote: > Hi Everyone! > > The CFP for the 2023 OpenInfra Summit (June 13-15, 2023) is closing in just a few days[1]! Check out the full list of tracks and submit a talk on your topic of expertise [2]. > > The CFP closes January 10, 2023, at 11:59 p.m. PT. See what that is in your timezone [3] > > We are also now accepting submissions for Forum sessions [4]! Looking for other resources? Find information on registration, sponsorships, travel support and visa requests at https://openinfra.dev/summit/ > > If you have any questions feel free to reach out :) > > Cheers, > Helena > > [1] https://cfp.openinfra.dev/app/vancouver-2023/19/presentations > [2] https://openinfra.dev/summit/vancouver-2023/summit-tracks/ > [3] https://www.timeanddate.com/worldclock/fixedtime.html?msg=2023+OpenInfra+Summit+CFP+Closes&iso=20230110T2359&p1=137 > [4] https://cfp.openinfra.dev/app/vancouver-2023/20/ > > From Danny.Webb at thehutgroup.com Mon Jan 9 09:50:02 2023 From: Danny.Webb at thehutgroup.com (Danny Webb) Date: Mon, 9 Jan 2023 09:50:02 +0000 Subject: [Nova][Horizon] In-Reply-To: References: Message-ID: If you want to do this you'd have to use host aggregates rather than AZs I think. Setup a host aggregate that is then mapped to specific flavors which are RBAC'd to specific projects. ________________________________ From: Nguy?n H?u Kh?i Sent: 09 January 2023 00:12 To: OpenStack Discuss Subject: [Nova][Horizon] CAUTION: This email originates from outside THG ________________________________ Hello guys. Is there any way to assign AZ to a specified project? After searching, I cannot find any answer. Example. Sale project will only see Sale AZ to select. Tech project will only see Tech AZ to select Thank you. Regards Nguyen Huu Khoi Danny Webb Principal OpenStack Engineer Danny.Webb at thehutgroup.com [THG Ingenuity Logo] www.thg.com [https://i.imgur.com/wbpVRW6.png] [https://i.imgur.com/c3040tr.png] -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Mon Jan 9 09:50:27 2023 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Mon, 9 Jan 2023 09:50:27 +0000 Subject: The OpenInfra Summit CFP is closing soon! In-Reply-To: References: <17669EA6-4CD3-4B3E-A980-B6572EF25A59@openstack.org> Message-ID: I found this: https://github.com/OpenStackweb/summit-api/blob/main/tests/schema.sql#L9991 Not sure this is the root cause, but maybe the field is wrong in the DB? We also have this: https://github.com/OpenStackweb/summit-api/blob/main/app/Http/Controllers/Apis/Protected/Summit/Factories/SummitEventValidationRulesFactory.php#L43 So it should be good on API side On 09.01.23 - 09:38, Arnaud Morin wrote: > Hey Helena, > > We are preparing some talk submissions, but it seems the > "Social Summary (280 chars)" is not accepting 280 chars, but only 100. > Is it normal behavior? > > Cheers, > Arnaud. > > > On 06.01.23 - 13:07, Helena Spease wrote: > > Hi Everyone! > > > > The CFP for the 2023 OpenInfra Summit (June 13-15, 2023) is closing in just a few days[1]! Check out the full list of tracks and submit a talk on your topic of expertise [2]. > > > > The CFP closes January 10, 2023, at 11:59 p.m. PT. 
See what that is in your timezone [3] > > > > We are also now accepting submissions for Forum sessions [4]! Looking for other resources? Find information on registration, sponsorships, travel support and visa requests at https://openinfra.dev/summit/ > > > > If you have any questions feel free to reach out :) > > > > Cheers, > > Helena > > > > [1] https://cfp.openinfra.dev/app/vancouver-2023/19/presentations > > [2] https://openinfra.dev/summit/vancouver-2023/summit-tracks/ > > [3] https://www.timeanddate.com/worldclock/fixedtime.html?msg=2023+OpenInfra+Summit+CFP+Closes&iso=20230110T2359&p1=137 > > [4] https://cfp.openinfra.dev/app/vancouver-2023/20/ > > > > From nguyenhuukhoinw at gmail.com Mon Jan 9 10:00:41 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Mon, 9 Jan 2023 17:00:41 +0700 Subject: [Nova][Horizon] In-Reply-To: References: Message-ID: Thank you very much for the information. On Mon, Jan 9, 2023, 4:50 PM Danny Webb wrote: > If you want to do this you'd have to use host aggregates rather than AZs I > think. Setup a host aggregate that is then mapped to specific flavors > which are RBAC'd to specific projects. > ------------------------------ > *From:* Nguy?n H?u Kh?i > *Sent:* 09 January 2023 00:12 > *To:* OpenStack Discuss > *Subject:* [Nova][Horizon] > > > * CAUTION: This email originates from outside THG * > ------------------------------ > Hello guys. > Is there any way to assign AZ to a specified project? After searching, I > cannot find any answer. > > Example. > > Sale project will only see Sale AZ to select. > Tech project will only see Tech AZ to select > > Thank you. Regards > Nguyen Huu Khoi > > *Danny Webb* > Principal OpenStack Engineer > Danny.Webb at thehutgroup.com > [image: THG Ingenuity Logo] > www.thg.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gibi at redhat.com Mon Jan 9 10:34:35 2023 From: gibi at redhat.com (Balazs Gibizer) Date: Mon, 09 Jan 2023 11:34:35 +0100 Subject: [ci][all]tox.tox_env.python.api.NoInterpreter - gate is blocked In-Reply-To: <20230106163555.j56qltntu32vterw@yuggoth.org> References: <20230106163555.j56qltntu32vterw@yuggoth.org> Message-ID: On Fri, Jan 6 2023 at 04:35:56 PM +00:00:00, Jeremy Stanley wrote: > On 2023-01-06 16:03:52 +0100 (+0100), Balazs Gibizer wrote: > [...] >> There is a fix proposed in tox > [...] > > Which is now merged, so should appear in the next release (barring > any reverts). The tox 4.6.2. contains that fix and it seems it fixed the issue in case of our unit test targets (e.g. 'py310') but I still see missing interpreter errors in nova's functional targets (e.g. 'functional-py310'). I tracked this down to a conflict the the basepython = python3 settings nova has and the generative env def. It seems this combination worked in tox 3 but not in tox 4 so I opened a tox issues[1]. I have a temporary fix proposed in nova[2] [1] https://github.com/tox-dev/tox/issues/2838 [2] https://review.opendev.org/c/openstack/nova/+/869545 > -- > Jeremy Stanley -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Jan 9 11:11:52 2023 From: smooney at redhat.com (Sean Mooney) Date: Mon, 09 Jan 2023 11:11:52 +0000 Subject: [Nova][Horizon] In-Reply-To: References: Message-ID: <03fcfda4162448c65aede7ca00a7ee7c97c9ec9f.camel@redhat.com> On Mon, 2023-01-09 at 09:50 +0000, Danny Webb wrote: > If you want to do this you'd have to use host aggregates rather than AZs I think. 
Setup a host aggregate that is then mapped to specific flavors which are RBAC'd to specific projects. AZs are just host aggregates with AZ metadata added. To do tenant affinity at the scheduler level on older clouds you can use the AggregateMultiTenancyIsolation filter to map tenants to host aggregates. From Rocky on, the preferred approach is to use tenant isolation via placement aggregates: https://docs.openstack.org/nova/latest/admin/aggregates.html#tenant-isolation-with-placement You do not need to modify flavors for that use case. Host aggregates are not visible to end users at the API, so you cannot adjust policy to limit them to specific tenants. If you really want to support this in Horizon you would have to apply ```openstack aggregate set --property filter_tenant_id=9691591f913949818a514f95286a6b90 myagg``` to the aggregate that has the AZ definition and modify Horizon to check if the tenant id in the aggregate matches the tenant that is logged in. Basically Horizon would have to implement the filtering of AZs in its UI. Nova does not provide that because we do not require the ```Tenant Isolation with Placement``` feature to be configured on the host aggregate that defines the AZ. Normally it is not done that way; you will have a separate host aggregate for a given tenant, overlapping with multiple AZs, that defines which hosts they can run on. In any case, the answer is that you need to tag the AZ with some metadata to track the tenant info (or reuse the field we support for scheduling) and modify Horizon to filter by it. The alternative approach is to propose a new feature to nova to allow it to filter in some way, but I am not sure what that would look like, and it would not be backportable as it would be an API change, so it would be a change in the B/2023.2 release at the earliest. > ________________________________ > From: Nguyễn Hữu Khôi > Sent: 09 January 2023 00:12 > To: OpenStack Discuss > Subject: [Nova][Horizon] > > > CAUTION: This email originates from outside THG > > ________________________________ > Hello guys. > Is there any way to assign AZ to a specified project? After searching, I cannot find any answer. > > Example. > > Sale project will only see Sale AZ to select. > Tech project will only see Tech AZ to select > > Thank you. Regards > Nguyen Huu Khoi > > Danny Webb > Principal OpenStack Engineer > Danny.Webb at thehutgroup.com > [THG Ingenuity Logo] > www.thg.com > [https://i.imgur.com/wbpVRW6.png] [https://i.imgur.com/c3040tr.png] From nguyenhuukhoinw at gmail.com Mon Jan 9 11:15:06 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Mon, 9 Jan 2023 18:15:06 +0700 Subject: [Nova][Horizon] In-Reply-To: References: Message-ID: I will test and let you know. Thank you so much! On Mon, Jan 9, 2023, 6:12 PM Sean Mooney wrote: > On Mon, 2023-01-09 at 09:50 +0000, Danny Webb wrote: > > If you want to do this you'd have to use host aggregates rather than AZs > I think. Setup a host aggregate that is then mapped to specific flavors > which are RBAC'd to specific projects. > AZ are just host aggreates with AZ metadata added > To do tenant affintiy at the schduler level on older clouds you can use > the AggregateMultiTenancyIsolation filter > to map tenant to hostaggreates.
from rocky on the perfer approch is to use > teant isolation via placement aggreates > > https://docs.openstack.org/nova/latest/admin/aggregates.html#tenant-isolation-with-placement > > you do not need to modify falvors for that use case. > > host aggreates are not viabel to endusers at the api so you cannot adjust > policy to limit them to specific tenants. > > if you really want to support this in horizon you would haveto apply the > > ```Openstack aggregate set --property > filter_tenant_id=9691591f913949818a514f95286a6b90 myagg``` > > to the aggreate that has the AZ defintion and modify horizon to check if > the tenant id in the aggreate matched > the tenant that is logged in. basically horizon would have to implement > the filtering of AZs in its ui. nova does not > provide that because we do not require the ```Tenant Isolation with > Placement``` feature to be configured on the > host aggreate that defines the AZ. normally it is not done that way and > you will have a seperate host aggreate that overlaps with multile > for a given tenant that defiens which hosts they can run on. > > anyway case the answer is that you need to tag the AZ with some metadata > to track the tenant info (or reuse the filed we support for schduling) and > modify horizion to filter by it. the alternitive approch is to propsoe a > new feature to nova to allow it to to fileter in some whay but i am > not sure what that would look like and it woudl not be backporatbale as it > would be an api change so it would be a change in the B/2023.2 release at > the earlest. > > ________________________________ > > From: Nguy?n H?u Kh?i > > Sent: 09 January 2023 00:12 > > To: OpenStack Discuss > > Subject: [Nova][Horizon] > > > > > > CAUTION: This email originates from outside THG > > > > ________________________________ > > Hello guys. > > Is there any way to assign AZ to a specified project? After searching, I > cannot find any answer. > > > > Example. > > > > Sale project will only see Sale AZ to select. > > Tech project will only see Tech AZ to select > > > > Thank you. Regards > > Nguyen Huu Khoi > > > > Danny Webb > > Principal OpenStack Engineer > > Danny.Webb at thehutgroup.com > > [THG Ingenuity Logo] > > www.thg.com > > [https://i.imgur.com/wbpVRW6.png]< > https://www.linkedin.com/company/thg-ingenuity/?originalSubdomain=uk> [ > https://i.imgur.com/c3040tr.png] > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Danny.Webb at thehutgroup.com Mon Jan 9 12:01:50 2023 From: Danny.Webb at thehutgroup.com (Danny Webb) Date: Mon, 9 Jan 2023 12:01:50 +0000 Subject: [Nova][Horizon] In-Reply-To: <03fcfda4162448c65aede7ca00a7ee7c97c9ec9f.camel@redhat.com> References: <03fcfda4162448c65aede7ca00a7ee7c97c9ec9f.camel@redhat.com> Message-ID: Yeah, the part I wasn't sure about was visibility at the horizon / API level. Since host aggregates are largely invisible from the enduser it seemed to me to provide better UX to simply use aggregates without AZ affiliation. I guess the other question is if you are using volume types to route to different storage backends, can you set a default volume type for each tenant? I know you can set one globally in the cinder.conf but that wouldn't work if you wanted to different tenants to be isolated on their own storage appliances. 
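To make the setup discussed in this thread concrete, here is a minimal sketch of the placement-based tenant isolation referenced above. The aggregate, zone, host and project names are made-up examples, and the scheduler options assume the Rocky-or-later behaviour described in the linked nova documentation:

```
# Create an aggregate that also defines the "sale" availability zone
openstack aggregate create --zone sale-az sale-agg
openstack aggregate add host sale-agg compute01

# Restrict scheduling on that aggregate to the Sale project (placeholder ID)
openstack aggregate set --property filter_tenant_id=<SALE_PROJECT_ID> sale-agg
```

and on the controller side, in nova.conf:

```
[scheduler]
# only place a tenant onto hosts in an aggregate whose filter_tenant_id matches
limit_tenants_to_placement_aggregate = True
# leave False unless every tenant is mapped to at least one aggregate
placement_aggregate_required_for_tenants = False
```

Note that none of this changes what the availability zone list API returns to a tenant, so hiding AZs in the dashboard would still need the Horizon-side filtering discussed above.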
________________________________ From: Sean Mooney Sent: 09 January 2023 11:11 To: Danny Webb ; Nguy?n H?u Kh?i ; OpenStack Discuss Subject: Re: [Nova][Horizon] CAUTION: This email originates from outside THG On Mon, 2023-01-09 at 09:50 +0000, Danny Webb wrote: > If you want to do this you'd have to use host aggregates rather than AZs I think. Setup a host aggregate that is then mapped to specific flavors which are RBAC'd to specific projects. AZ are just host aggreates with AZ metadata added To do tenant affintiy at the schduler level on older clouds you can use the AggregateMultiTenancyIsolation filter to map tenant to hostaggreates. from rocky on the perfer approch is to use teant isolation via placement aggreates https://docs.openstack.org/nova/latest/admin/aggregates.html#tenant-isolation-with-placement you do not need to modify falvors for that use case. host aggreates are not viabel to endusers at the api so you cannot adjust policy to limit them to specific tenants. if you really want to support this in horizon you would haveto apply the ```Openstack aggregate set --property filter_tenant_id=9691591f913949818a514f95286a6b90 myagg``` to the aggreate that has the AZ defintion and modify horizon to check if the tenant id in the aggreate matched the tenant that is logged in. basically horizon would have to implement the filtering of AZs in its ui. nova does not provide that because we do not require the ```Tenant Isolation with Placement``` feature to be configured on the host aggreate that defines the AZ. normally it is not done that way and you will have a seperate host aggreate that overlaps with multile for a given tenant that defiens which hosts they can run on. anyway case the answer is that you need to tag the AZ with some metadata to track the tenant info (or reuse the filed we support for schduling) and modify horizion to filter by it. the alternitive approch is to propsoe a new feature to nova to allow it to to fileter in some whay but i am not sure what that would look like and it woudl not be backporatbale as it would be an api change so it would be a change in the B/2023.2 release at the earlest. > ________________________________ > From: Nguy?n H?u Kh?i > Sent: 09 January 2023 00:12 > To: OpenStack Discuss > Subject: [Nova][Horizon] > > > CAUTION: This email originates from outside THG > > ________________________________ > Hello guys. > Is there any way to assign AZ to a specified project? After searching, I cannot find any answer. > > Example. > > Sale project will only see Sale AZ to select. > Tech project will only see Tech AZ to select > > Thank you. Regards > Nguyen Huu Khoi > > Danny Webb > Principal OpenStack Engineer > Danny.Webb at thehutgroup.com > [THG Ingenuity Logo] > www.thg.com> > [https://i.imgur.com/wbpVRW6.png]> [https://i.imgur.com/c3040tr.png] > Danny Webb Principal OpenStack Engineer Danny.Webb at thehutgroup.com [THG Ingenuity Logo] www.thg.com [https://i.imgur.com/wbpVRW6.png] [https://i.imgur.com/c3040tr.png] -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Jan 9 12:38:44 2023 From: smooney at redhat.com (Sean Mooney) Date: Mon, 09 Jan 2023 12:38:44 +0000 Subject: [Nova][Horizon] In-Reply-To: References: <03fcfda4162448c65aede7ca00a7ee7c97c9ec9f.camel@redhat.com> Message-ID: On Mon, 2023-01-09 at 12:01 +0000, Danny Webb wrote: > Yeah, the part I wasn't sure about was visibility at the horizon / API level. 
Since host aggregates are largely invisible from the enduser it seemed to me to provide better UX to simply use aggregates without AZ affiliation. > > I guess the other question is if you are using volume types to route to different storage backends, can you set a default volume type for each tenant? I know you can set one globally in the cinder.conf but that wouldn't work if you wanted to different tenants to be isolated on their own storage appliances. I think in general most services don't have the concept of per-tenant defaults, so nova does not have a concept of filtering AZs by tenant in the AZ list API, and I'm not sure that cinder has the concept for volume types. Nova does have the idea of private flavors that can be limited to a project, but when you do a flavor list you will still see the public flavors too. In general, if we wanted to support this cleanly we would need to modify multiple projects so they have the same behavior: for nova that would mean adding a way to associate AZs and tenants; for cinder that could be volume types. In general OpenStack considers AZs, flavors, volume types and QoS policies to be "system scoped", i.e. resources that are not associated with tenants; that is why they are not filtered by tenant at the API level, since from a data model point of view that is not a use case that is supported. > ________________________________ > From: Sean Mooney > Sent: 09 January 2023 11:11 > To: Danny Webb ; Nguyễn Hữu Khôi ; OpenStack Discuss > Subject: Re: [Nova][Horizon] > > CAUTION: This email originates from outside THG > > On Mon, 2023-01-09 at 09:50 +0000, Danny Webb wrote: > > If you want to do this you'd have to use host aggregates rather than AZs I think. Setup a host aggregate that is then mapped to specific flavors which are RBAC'd to specific projects. > AZ are just host aggreates with AZ metadata added > To do tenant affintiy at the schduler level on older clouds you can use > the AggregateMultiTenancyIsolation filter > to map tenant to hostaggreates. from rocky on the perfer approch is to use > teant isolation via placement aggreates > > https://docs.openstack.org/nova/latest/admin/aggregates.html#tenant-isolation-with-placement > > you do not need to modify falvors for that use case. > > host aggreates are not viabel to endusers at the api so you cannot adjust > policy to limit them to specific tenants. > > if you really want to support this in horizon you would haveto apply the > > ```Openstack aggregate set --property > filter_tenant_id=9691591f913949818a514f95286a6b90 myagg``` > > to the aggreate that has the AZ defintion and modify horizon to check if > the tenant id in the aggreate matched > the tenant that is logged in. basically horizon would have to implement > the filtering of AZs in its ui. nova does not > provide that because we do not require the ```Tenant Isolation with > Placement``` feature to be configured on the > host aggreate that defines the AZ. normally it is not done that way and > you will have a seperate host aggreate that overlaps with multile > for a given tenant that defiens which hosts they can run on. > > anyway case the answer is that you need to tag the AZ with some metadata > to track the tenant info (or reuse the filed we support for schduling) and > modify horizion to filter by it.
the alternitive approch is to propsoe a new feature to nova to allow it to to fileter in some whay but i am > not sure what that would look like and it woudl not be backporatbale as it would be an api change so it would be a change in the B/2023.2 release at > the earlest. > > ________________________________ > > From: Nguy?n H?u Kh?i > > Sent: 09 January 2023 00:12 > > To: OpenStack Discuss > > Subject: [Nova][Horizon] > > > > > > CAUTION: This email originates from outside THG > > > > ________________________________ > > Hello guys. > > Is there any way to assign AZ to a specified project? After searching, I cannot find any answer. > > > > Example. > > > > Sale project will only see Sale AZ to select. > > Tech project will only see Tech AZ to select > > > > Thank you. Regards > > Nguyen Huu Khoi > > > > Danny Webb > > Principal OpenStack Engineer > > Danny.Webb at thehutgroup.com > > [THG Ingenuity Logo] > > www.thg.com> > > [https://i.imgur.com/wbpVRW6.png]> [https://i.imgur.com/c3040tr.png] > > > Danny Webb > Principal OpenStack Engineer > Danny.Webb at thehutgroup.com > [THG Ingenuity Logo] > www.thg.com > [https://i.imgur.com/wbpVRW6.png] [https://i.imgur.com/c3040tr.png] From pierre at stackhpc.com Mon Jan 9 12:44:09 2023 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 9 Jan 2023 13:44:09 +0100 Subject: [blazar] Next IRC meeting cancelled Message-ID: Hello, I am unable to run the IRC meeting on Thursday January 12. I propose to cancel it. Best wishes, Pierre Riteau (priteau) -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Mon Jan 9 13:48:04 2023 From: amy at demarco.com (Amy Marrich) Date: Mon, 9 Jan 2023 07:48:04 -0600 Subject: [Diversity] Diversity and Inclusion WG Meeting reminder Message-ID: This is a reminder that the Diversity and Inclusion WG will be meeting tomorrow at 14:00 UTC in the #openinfra-diversity channel on OFTC. We hope members of all OpenInfra projects join us as we continue working on planning for the OpenInfra Summit as well as Foundation-wide surveys. Thanks, Amy (spotz) 0 - https://etherpad.opendev.org/p/diversity-wg-agenda -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Mon Jan 9 13:50:04 2023 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 9 Jan 2023 08:50:04 -0500 Subject: [all][gate][stable] Pinning tox<4 in stable branch testing In-Reply-To: References: <185801d98b6.110ec217e27667.6370059540731052772@ghanshyammann.com> <6786e99e-a0b9-5ddc-8ffa-e1ceedd787ff@gmail.com> <18588638aa6.be66d732145969.2670297230032600681@ghanshyammann.com> <1858983546e.12a5124a3150802.4554188631529639561@ghanshyammann.com> Message-ID: On 1/6/23 9:12 PM, Clark Boylan wrote: > On Fri, Jan 6, 2023, at 3:56 PM, Ghanshyam Mann wrote: >> ---- On Fri, 06 Jan 2023 10:41:43 -0800 Ghanshyam Mann wrote --- >> > ---- On Fri, 06 Jan 2023 05:42:45 -0800 Brian Rosmaita wrote --- [snip] >> Pinning in run-both.yaml playbook did not fix the python-cinderclient >> issue and pinning tox<4 in >> tox.ini is the way forward for this case. > > I don't think this is a proper fix. This goes back to the concern I already mentioned on this thread. The correct way to fix this is to ensure we aren't installing tox multiple times with the final install being the version we want. We should ensure we install it once with the correct version. 
> > The reason the python-cinderclient change failed is that devstack is blindly installing tox here: https://opendev.org/openstack/devstack/src/branch/master/lib/neutron_plugins/ovn_agent#L369-L370 which is installing latest tox per this log: https://zuul.opendev.org/t/openstack/build/961c429cd9fc4d649e8714aba67f052d/log/job-output.txt#9211-9279. > > The problem with adding requires = tox<4 in tox.ini is that this will cause tox to install a new tox in a new venv unnecessarily simply to run the target under an older tox. If we fix devstack instead then we can install tox once and everything should work. I think we have two separate issues here. The cinderclient functional test job just wants devstack to be up and running so that tox-based cinderclient tests can be run against devstack. I don't see that it's necessary that cinderclient have to use the same tox version to conduct its tests that devstack has installed for whatever reason devstack is installing tox. There may be good reasons for using different versions. In other words, it's not obvious to me that making devstack istelf tox-consistent implies that other projects running tox-based jobs against devstack have to use that same tox version. > >> >> -gmann > From nguyenhuukhoinw at gmail.com Mon Jan 9 14:31:46 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Mon, 9 Jan 2023 21:31:46 +0700 Subject: [Nova][Horizon] In-Reply-To: References: <03fcfda4162448c65aede7ca00a7ee7c97c9ec9f.camel@redhat.com> Message-ID: Hello. Thanks for your reply, I can set default volume type for tenants by using cli and it works fine with horizon. I will test with your guys suggests and let you know. On Mon, Jan 9, 2023, 7:01 PM Danny Webb wrote: > Yeah, the part I wasn't sure about was visibility at the horizon / API > level. Since host aggregates are largely invisible from the enduser it > seemed to me to provide better UX to simply use aggregates without AZ > affiliation. > > I guess the other question is if you are using volume types to route to > different storage backends, can you set a default volume type for each > tenant? I know you can set one globally in the cinder.conf but that > wouldn't work if you wanted to different tenants to be isolated on their > own storage appliances. > ------------------------------ > *From:* Sean Mooney > *Sent:* 09 January 2023 11:11 > *To:* Danny Webb ; Nguy?n H?u Kh?i < > nguyenhuukhoinw at gmail.com>; OpenStack Discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [Nova][Horizon] > > CAUTION: This email originates from outside THG > > On Mon, 2023-01-09 at 09:50 +0000, Danny Webb wrote: > > If you want to do this you'd have to use host aggregates rather than AZs > I think. Setup a host aggregate that is then mapped to specific flavors > which are RBAC'd to specific projects. > AZ are just host aggreates with AZ metadata added > To do tenant affintiy at the schduler level on older clouds you can use > the AggregateMultiTenancyIsolation filter > to map tenant to hostaggreates. from rocky on the perfer approch is to use > teant isolation via placement aggreates > > https://docs.openstack.org/nova/latest/admin/aggregates.html#tenant-isolation-with-placement > > you do not need to modify falvors for that use case. > > host aggreates are not viabel to endusers at the api so you cannot adjust > policy to limit them to specific tenants. 
> > if you really want to support this in horizon you would haveto apply the > > ```Openstack aggregate set --property > filter_tenant_id=9691591f913949818a514f95286a6b90 myagg``` > > to the aggreate that has the AZ defintion and modify horizon to check if > the tenant id in the aggreate matched > the tenant that is logged in. basically horizon would have to implement > the filtering of AZs in its ui. nova does not > provide that because we do not require the ```Tenant Isolation with > Placement``` feature to be configured on the > host aggreate that defines the AZ. normally it is not done that way and > you will have a seperate host aggreate that overlaps with multile > for a given tenant that defiens which hosts they can run on. > > anyway case the answer is that you need to tag the AZ with some metadata > to track the tenant info (or reuse the filed we support for schduling) and > modify horizion to filter by it. the alternitive approch is to propsoe a > new feature to nova to allow it to to fileter in some whay but i am > not sure what that would look like and it woudl not be backporatbale as it > would be an api change so it would be a change in the B/2023.2 release at > the earlest. > > ________________________________ > > From: Nguy?n H?u Kh?i > > Sent: 09 January 2023 00:12 > > To: OpenStack Discuss > > Subject: [Nova][Horizon] > > > > > > CAUTION: This email originates from outside THG > > > > ________________________________ > > Hello guys. > > Is there any way to assign AZ to a specified project? After searching, I > cannot find any answer. > > > > Example. > > > > Sale project will only see Sale AZ to select. > > Tech project will only see Tech AZ to select > > > > Thank you. Regards > > Nguyen Huu Khoi > > > > Danny Webb > > Principal OpenStack Engineer > > Danny.Webb at thehutgroup.com > > [THG Ingenuity Logo] > > www.thg.com > > [https://i.imgur.com/wbpVRW6.png]< > https://www.linkedin.com/company/thg-ingenuity/?originalSubdomain=uk> [ > https://i.imgur.com/c3040tr.png] > > > *Danny Webb* > Principal OpenStack Engineer > Danny.Webb at thehutgroup.com > [image: THG Ingenuity Logo] > www.thg.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bartosz at stackhpc.com Mon Jan 9 14:45:10 2023 From: bartosz at stackhpc.com (Bartosz Bezak) Date: Mon, 9 Jan 2023 15:45:10 +0100 Subject: [Kolla] Transition Ussuri/Victoria to EOL Message-ID: <7AFDF150-F92B-4DAC-8072-FD46E8007F0E@stackhpc.com> Hello, As aggreed on IRC meeting, Kolla Ussuri and Victoria deliverables (Kolla, Kolla-Ansible, Kayobe) are going EOL. [1] Wallaby remains in extended maintenance. Xena, Yoga and Zed are our current stable releases. Ussuri and Victoria branches were in Extended Maintenance for a some time, Kolla community does not have resources to support those branches actively. Moreover Ussuri and Victoria were the last releases that support CentOS 8 which is also EOL. All changes to those branches have been abandoned. [1] https://review.opendev.org/c/openstack/releases/+/869569 Best regards, Bartosz Bezak -------------- next part -------------- An HTML attachment was scrubbed... URL: From josh at openinfra.dev Mon Jan 9 14:54:32 2023 From: josh at openinfra.dev (josh at openinfra.dev) Date: Mon, 9 Jan 2023 09:54:32 -0500 (EST) Subject: The OpenInfra Summit CFP is closing soon! In-Reply-To: References: <17669EA6-4CD3-4B3E-A980-B6572EF25A59@openstack.org> Message-ID: <1673276072.34424050@apps.rackspace.com> Hi Arnaud, Thank you very much for letting us know about this! 
You are absolutely correct, there was an issue with the DB for the Social Summary field. This has now been adjusted and you should be able to submit the full 280 characters for the Social Summary. Again, thank you for researching the issue and for raising a flag on it - I really appreciate it! Cheers, Josh -----Original Message----- From: "Arnaud Morin" Sent: Monday, January 9, 2023 4:50am To: "Helena Spease" Cc: openstack-discuss at lists.openstack.org Subject: Re: The OpenInfra Summit CFP is closing soon! I found this: https://github.com/OpenStackweb/summit-api/blob/main/tests/schema.sql#L9991 Not sure this is the root cause, but maybe the field is wrong in the DB? We also have this: https://github.com/OpenStackweb/summit-api/blob/main/app/Http/Controllers/Apis/Protected/Summit/Factories/SummitEventValidationRulesFactory.php#L43 So it should be good on API side On 09.01.23 - 09:38, Arnaud Morin wrote: > Hey Helena, > > We are preparing some talk submissions, but it seems the > "Social Summary (280 chars)" is not accepting 280 chars, but only 100. > Is it normal behavior? > > Cheers, > Arnaud. > > > On 06.01.23 - 13:07, Helena Spease wrote: > > Hi Everyone! > > > > The CFP for the 2023 OpenInfra Summit (June 13-15, 2023) is closing in just a few days[1]! Check out the full list of tracks and submit a talk on your topic of expertise [2]. > > > > The CFP closes January 10, 2023, at 11:59 p.m. PT. See what that is in your timezone [3] > > > > We are also now accepting submissions for Forum sessions [4]! Looking for other resources? Find information on registration, sponsorships, travel support and visa requests at https://openinfra.dev/summit/ > > > > If you have any questions feel free to reach out :) > > > > Cheers, > > Helena > > > > [1] https://cfp.openinfra.dev/app/vancouver-2023/19/presentations > > [2] https://openinfra.dev/summit/vancouver-2023/summit-tracks/ > > [3] https://www.timeanddate.com/worldclock/fixedtime.html?msg=2023+OpenInfra+Summit+CFP+Closes&iso=20230110T2359&p1=137 > > [4] https://cfp.openinfra.dev/app/vancouver-2023/20/ > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon Jan 9 16:33:06 2023 From: zigo at debian.org (Thomas Goirand) Date: Mon, 9 Jan 2023 17:33:06 +0100 Subject: [cinder] Unit test failures under Python 3.11 - mocks can no longer be provided as the specs for other Mocks In-Reply-To: References: Message-ID: On 1/4/23 11:36, Jiri Podivin wrote: > This is a good catch. We should get a hold of this before it creeps on > us in CI. > Maybe we should open it in shared backlog? As much as I know, only Cinder has this bug (I know because all of Debian OpenStack package are running unit tests at build time, and all of them were rebuilt with Python 3.11 recently). Please help finish this: https://review.opendev.org/c/openstack/cinder/+/869396 Also, I need help fixing this one: https://bugs.debian.org/1026524 which I didn't forward upstream yet, but someone on IRC worked on it (can't remember who...). Cheers, Thomas Goirand (zigo) From volehuy1998 at gmail.com Mon Jan 9 09:49:05 2023 From: volehuy1998 at gmail.com (=?UTF-8?B?SHV5IFbDtSBMw6o=?=) Date: Mon, 9 Jan 2023 16:49:05 +0700 Subject: Different between of the role of Nova-Conductor and Nova-Compute Message-ID: Hi, I have a question about Nova role - Assert: I don't understand about OPS/Redhat explaination. Because all of information said nova-conductor will help database-accessing for nova-compute, every chart/image showed, too. 
- Reallity: When I download OPS Nova source code looked into nova/compute/manager.py, I saw the many many block code that queries the data from the database (instance list) and updates the data (instance.save). So the assertion of the Openstack and Redhat development community is wrong or true???? Link 1: https://docs.openstack.org/nova/pike/install/get-started-compute.html Link 2: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/8/html/configuration_reference_guide/section_conductor Link n: .... -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Jan 9 17:02:20 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 9 Jan 2023 17:02:20 +0000 Subject: [tc] Fwd: Invitation: OpenInfra Board sync with OpenStack TC @ Wed Feb 8, 2023 2pm - 3pm (CST) Message-ID: <20230109170219.2rywjxic6pk65oyt@yuggoth.org> Just a reminder that a couple of months ago we arranged a February meeting time between the OpenStack community and the OpenInfra Board of Directors, Wednesday 2023-02-08 at 20:00 UTC. Julia has graciously supplied a conference call connection for the hour (calendar invite attached). I've also added the connection info to the pad we've been using for planning this call: https://etherpad.opendev.org/p/2023-02-board-openstack-sync -- Jeremy Stanley ----- Forwarded message from Julia Kreger ----- Date: Wed, 16 Nov 2022 21:31:42 +0000 Subject: Invitation: OpenInfra Board sync with OpenStack TC @ Wed Feb 8, 2023 2pm - 3pm (CST) (jeremy at openinfra.dev) OpenInfra Board sync with OpenStack TC Wednesday Feb 8, 2023 ? 2pm ? 3pm Central Time - Chicago Location https://us02web.zoom.us/j/87498907965?pwd=T0h3dDAwVHk1dTBqcVpxa202YmRPUT09 Greetings Directors & All, Meeting invite per informal discussion call on November 16th, for the next informal discussion. Note: This is being sent in advance of the new board being seated. We'll obviously need to update this invite once elections have completed. -Julia ?????????? Julia Kreger is inviting you to a scheduled Zoom meeting. Join Zoom Meeting: https://us02web.zoom.us/j/87498907965?pwd=T0h3dDAwVHk1dTBqcVpxa202YmRPUT09 Meeting ID: 874 9890 7965 Passcode: 035164 ----- End forwarded message ----- -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: text/calendar Size: 2325 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From noonedeadpunk at gmail.com Mon Jan 9 18:23:30 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Mon, 9 Jan 2023 19:23:30 +0100 Subject: Different between of the role of Nova-Conductor and Nova-Compute In-Reply-To: References: Message-ID: As of today, nova-compute can access data only through conductor. You should not have database connection details in nova.conf for nova-compute - if you do, this should raise an exception. However you're referring the Pike release and back then having db details in config wasn't considered as error at least. But I still doubt that nova-compute was accessing DB directly even then. ??, 9 ???. 2023 ?., 18:04 Huy V? L? : > Hi, I have a question about Nova role > > > - Assert: I don't understand about OPS/Redhat explaination. Because all of > information said nova-conductor will help database-accessing for > nova-compute, every chart/image showed, too. 
> > - Reallity: When I download OPS Nova source code looked into > > nova/compute/manager.py, I saw the many many block code that queries the > > data from the database (instance list) and updates the data (instance.save). > > > > So the assertion of the Openstack and Redhat development community is > > wrong or true???? > > > > Link 1: > > https://docs.openstack.org/nova/pike/install/get-started-compute.html > > Link 2: > > https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/8/html/configuration_reference_guide/section_conductor > > Link n: .... > > From smooney at redhat.com Mon Jan 9 18:54:42 2023 From: smooney at redhat.com (Sean Mooney) Date: Mon, 09 Jan 2023 18:54:42 +0000 Subject: Different between of the role of Nova-Conductor and Nova-Compute In-Reply-To: References: Message-ID: On Mon, 2023-01-09 at 16:49 +0700, Huy Võ Lê wrote: > Hi, I have a question about Nova role > > > - Assert: I don't understand about OPS/Redhat explaination. Because all of > information said nova-conductor will help database-accessing for > nova-compute, every chart/image showed, too. > > - Reallity: When I download OPS Nova source code looked into > nova/compute/manager.py, I saw the many many block code that queries the > data from the database (instance list) and updates the data (instance.save). > > So the assertion of the Openstack and Redhat development community is wrong > or true???? The compute agent does not have the ability to connect to the db directly, even if you give it the db user name and password. Before the conductor was introduced the compute agents directly connected to the db to make changes. This was seen as a possible security issue if the compute host was compromised, so nova was rearchitected about 8 years ago. As part of that rearchitecture the conductor was introduced and all db operations done by the compute were delegated to the conductor via an rpc call. So if the compute does instance.save(), that save operation on the instance object does an rpc call to the conductor to save the instance. The local_conductor mode that allowed the conductor code to be executed in the nova-compute process was removed in ocata https://github.com/openstack/nova/commit/c36dbe1f721ea6ca6b083932c8f27022a03ddf53 after being deprecated in mitaka https://github.com/openstack/nova/commit/0da0971cc44b93110032a3b382614f3f84297951 The role of the conductor is to orchestrate all db interaction for compute services in a cell and to orchestrate long running operations like live migrate or server create. For example, when booting a vm, which can take some time to complete, we cannot execute that logic in the api; similarly, since we have not selected a host yet, we cannot delegate the server creation to a compute agent. For such long running operations the conductor is used to execute the task asynchronously. Architecturally the conductor also plays other roles in a multi cell deployment. For example compute agents are not meant to know what cell they are a member of. When doing a cross cell migrate the super conductor is responsible for managing the inter cell db operations. > > Link 1: > https://docs.openstack.org/nova/pike/install/get-started-compute.html > Link 2: > https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/8/html/configuration_reference_guide/section_conductor > Link n: ....
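To illustrate the split described above, here is a minimal sketch of how the configuration typically ends up, assuming a two-file layout; host names and credentials are placeholders:

```
# nova.conf on a controller (api, scheduler, conductor) -- placeholder values
[DEFAULT]
transport_url = rabbit://nova:secret@rabbitmq.example.org:5672/

[database]
connection = mysql+pymysql://nova:secret@db.example.org/nova

[api_database]
connection = mysql+pymysql://nova:secret@db.example.org/nova_api
```

```
# nova.conf on a compute node -- note there is no [database] section at all;
# calls like instance.save() in nova/compute/manager.py go over RPC to
# nova-conductor, which performs the actual database access
[DEFAULT]
transport_url = rabbit://nova:secret@rabbitmq.example.org:5672/
```

Even if database credentials do end up in a compute node's config, a modern nova-compute will not use them, as noted in the follow-up below.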
From smooney at redhat.com Mon Jan 9 20:25:00 2023 From: smooney at redhat.com (Sean Mooney) Date: Mon, 09 Jan 2023 20:25:00 +0000 Subject: Different between of the role of Nova-Conductor and Nova-Compute In-Reply-To: References: Message-ID: On Mon, 2023-01-09 at 19:23 +0100, Dmitriy Rabotyagov wrote: > As of today, nova-compute can access data only through conductor. You > should not have database connection details in nova.conf for nova-compute - > if you do, this should raise an exception. having the credentials wont raise an execption but they wont be used. tripleo used to put them in the nova-comptue nova.conf. i dont know if that has been fixed on master but we have trieed to get them removed several times but other installer have also done this incorrectly in the past. > However you're referring the Pike release and back then having db details > in config wasn't considered as error at least. But I still doubt that > nova-compute was accessing DB directly even then. as noted in my other reply local condutor mode was remvoed in ocata and deprecated in mitaka so in pike there is defintly no direct db access form the compute agent. > > > ??, 9 ???. 2023 ?., 18:04 Huy V? L? : > > > Hi, I have a question about Nova role > > > > > > - Assert: I don't understand about OPS/Redhat explaination. Because all of > > information said nova-conductor will help database-accessing for > > nova-compute, every chart/image showed, too. > > > > - Reallity: When I download OPS Nova source code looked into > > nova/compute/manager.py, I saw the many many block code that queries the > > data from the database (instance list) and updates the data (instance.save). > > > > So the assertion of the Openstack and Redhat development community is > > wrong or true???? > > > > Link 1: > > https://docs.openstack.org/nova/pike/install/get-started-compute.html > > Link 2: > > https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/8/html/configuration_reference_guide/section_conductor > > Link n: .... > > From gmann at ghanshyammann.com Mon Jan 9 20:30:22 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 09 Jan 2023 12:30:22 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2023 Jan 11 at 1600 UTC Message-ID: <185983a192b.bde086db288775.973615283502833653@ghanshyammann.com> Hello Everyone, The technical Committee's next weekly meeting is scheduled for 2023 Jan 11, at 1600 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Tuesday, Jan 10 at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From vincentlee676 at gmail.com Tue Jan 10 03:48:55 2023 From: vincentlee676 at gmail.com (vincent lee) Date: Mon, 9 Jan 2023 21:48:55 -0600 Subject: Accessing databases of one openstack component from another Message-ID: Hi all, I have a working OpenStack in the yoga version and am trying to customize the zun component. In terms of deployment, I am using Kolla-ansible to deploy the OpenStack. I am trying to access blazar databases from the zun container. However, I have no clue of how the actual flow goes. I hope to receive some directions or suggestions to get me started. Best, Vincent -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Tue Jan 10 11:54:53 2023 From: smooney at redhat.com (Sean Mooney) Date: Tue, 10 Jan 2023 11:54:53 +0000 Subject: Accessing databases of one openstack component from another In-Reply-To: References: Message-ID: On Mon, 2023-01-09 at 21:48 -0600, vincent lee wrote: > Hi all, I have a working OpenStack in the yoga version and am trying to > customize the zun component. In terms of deployment, I am using > Kolla-ansible to deploy the OpenStack. I am trying to access blazar > databases from the zun container. > we do not allow db to db interaction between services upstream. the db scheme of each service is considered private. the public interface to most services is teh rest api. if zun need to interact with blazar it should do so via blazars rest api. the RPC apis are also considerd internal apis that are not intended to be shareed between services the only exception to this is the notifcations bus wich is a readonly stream of events for the consuming service and write only for the publishing service. > However, I have no clue of how the actual > flow goes. I hope to receive some directions or suggestions to get me > started. > > Best, > Vincent From zigo at debian.org Tue Jan 10 13:13:18 2023 From: zigo at debian.org (Thomas Goirand) Date: Tue, 10 Jan 2023 14:13:18 +0100 Subject: Different between of the role of Nova-Conductor and Nova-Compute In-Reply-To: References: Message-ID: <68107735-eaf2-936b-a186-24c3e7402fee@debian.org> On 1/9/23 21:25, Sean Mooney wrote: > On Mon, 2023-01-09 at 19:23 +0100, Dmitriy Rabotyagov wrote: >> As of today, nova-compute can access data only through conductor. You >> should not have database connection details in nova.conf for nova-compute - >> if you do, this should raise an exception. > having the credentials wont raise an execption but they wont be used. Which is what should be done to allow an all-in-one-server setup. Cheers, Thomas Goirand (zigo) From ralonsoh at redhat.com Tue Jan 10 14:00:13 2023 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 10 Jan 2023 15:00:13 +0100 Subject: [neutron] Bug deputy 2-8 January Message-ID: Hello Neutrinos: This is the (short) bug list of the last week: * https://bugs.launchpad.net/neutron/+bug/2002316: "ha router - vxlan - incorrect ovs flow ?" Unassigned. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Tue Jan 10 14:06:45 2023 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 10 Jan 2023 14:06:45 +0000 Subject: Overriding tox's install_command won't work for non-service projects Message-ID: Another tox 4 PSA. It turns out the tox 3 was not using the command in '[tox] install_command' when installing the package under test. tox 4 does, which means overriding '[tox] install_command' to include a constraints file (-c) will prevent you installing any requirement that is listed in the upper-constraints file, even if said requirement is the thing you're currently working on. This applies to all libraries (e.g. oslo.db, python-cinderclient) but not the services (cinder, nova) since those aren't included in upper-constraints. The "correct" way to respect upper-constraints is to provide them in 'deps' alongside the requirements file(s), e.g. 
[testenv]
deps =
    -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt

This will cause tox to install all of the package's dependencies first *with* constraints before installing the package itself *without constraints*. There is a bug report open against pip to change this behaviour [1], but it has been sitting there for over two years with no activity so I wouldn't rely on this. Stephen [1] https://github.com/pypa/pip/issues/7839 From smooney at redhat.com Tue Jan 10 14:43:17 2023 From: smooney at redhat.com (Sean Mooney) Date: Tue, 10 Jan 2023 14:43:17 +0000 Subject: Different between of the role of Nova-Conductor and Nova-Compute In-Reply-To: <68107735-eaf2-936b-a186-24c3e7402fee at debian.org> References: <68107735-eaf2-936b-a186-24c3e7402fee at debian.org> Message-ID: On Tue, 2023-01-10 at 14:13 +0100, Thomas Goirand wrote: > On 1/9/23 21:25, Sean Mooney wrote: > > On Mon, 2023-01-09 at 19:23 +0100, Dmitriy Rabotyagov wrote: > > > As of today, nova-compute can access data only through conductor. You > > > should not have database connection details in nova.conf for nova-compute - > > > if you do, this should raise an exception. > > having the credentials wont raise an execption but they wont be used. > > Which is what should be done to allow an all-in-one-server setup. No, in an all-in-one setup you should use different config files for nova-compute and the rest of the nova services. You can use a combined one, but that is not considered to be following best practices, as you are providing more access than required to the compute agent. > > Cheers, > > Thomas Goirand (zigo) > > From jay at gr-oss.io Tue Jan 10 15:33:38 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Tue, 10 Jan 2023 07:33:38 -0800 Subject: [ironic] Moving Aija to core-emeritus Message-ID: Hi all, As many of you may already know, Aija is no longer actively working on OpenStack. She informed many Ironic cores over email of this last week. I asked her if she expected to contribute on her own time and she said no. In order to ensure we keep core reviewer lists up to date, I suggest we move Aija to core-emeritus status, removing core permissions from gerrit. If she decides to begin working on OpenStack again in the future, we'd obviously quickly return core permissions. What do you think? -- Jay Faulkner Ironic PTL TC Member P.S. A huge thanks to Aija for her hard work on the sushy project and the Dell driver in Ironic. -------------- next part -------------- An HTML attachment was scrubbed... URL: From akanevsk at redhat.com Tue Jan 10 15:45:45 2023 From: akanevsk at redhat.com (Arkady Kanevsky) Date: Tue, 10 Jan 2023 09:45:45 -0600 Subject: [ironic] Moving Aija to core-emeritus In-Reply-To: References: Message-ID: +1 On Tue, Jan 10, 2023 at 9:44 AM Jay Faulkner wrote: > Hi all, > > As many of you may already know, Aija is no longer actively working on > OpenStack. She informed many Ironic cores over email of this last week. I > asked her if she expected to contribute on her own time and she said no. > > In order to ensure we keep core reviewer lists up to date, I suggest we > move Aija to core-emeritus status, removing core permissions from gerrit. > If she decides to begin working on OpenStack again in the future, we'd > obviously quickly return core permissions. > > What do you think? > > -- > Jay Faulkner > Ironic PTL > TC Member > > P.S. A huge thanks to Aija for her hard work on the sushy project and the > Dell driver in Ironic.
> > > -- Arkady Kanevsky, Ph.D. Phone: 972 707-6456 Corporate Phone: 919 729-5744 ext. 8176456 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Tue Jan 10 16:22:36 2023 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 10 Jan 2023 17:22:36 +0100 Subject: [ironic] Moving Aija to core-emeritus In-Reply-To: References: Message-ID: On Tue, Jan 10, 2023 at 4:58 PM Jay Faulkner wrote: > Hi all, > > As many of you may already know, Aija is no longer actively working on > OpenStack. She informed many Ironic cores over email of this last week. I > asked her if she expected to contribute on her own time and she said no. > > In order to ensure we keep core reviewer lists up to date, I suggest we > move Aija to core-emeritus status, removing core permissions from gerrit. > If she decides to begin working on OpenStack again in the future, we'd > obviously quickly return core permissions. > > What do you think? > Sad +2 from me. Thank you Aija, your work and expertise have been really appreciated! > > -- > Jay Faulkner > Ironic PTL > TC Member > > P.S. A huge thanks to Aija for her hard work on the sushy project and the > Dell driver in Ironic. > > > -- Red Hat GmbH , Registered seat: Werner von Siemens Ring 12, D-85630 Grasbrunn, Germany Commercial register: Amtsgericht Muenchen/Munich, HRB 153243,Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Jan 10 16:39:10 2023 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 10 Jan 2023 08:39:10 -0800 Subject: [ironic] Moving Aija to core-emeritus In-Reply-To: References: Message-ID: Sounds good to me! -Julia On Tue, Jan 10, 2023 at 7:51 AM Jay Faulkner wrote: > Hi all, > > As many of you may already know, Aija is no longer actively working on > OpenStack. She informed many Ironic cores over email of this last week. I > asked her if she expected to contribute on her own time and she said no. > > In order to ensure we keep core reviewer lists up to date, I suggest we > move Aija to core-emeritus status, removing core permissions from gerrit. > If she decides to begin working on OpenStack again in the future, we'd > obviously quickly return core permissions. > > What do you think? > > -- > Jay Faulkner > Ironic PTL > TC Member > > P.S. A huge thanks to Aija for her hard work on the sushy project and the > Dell driver in Ironic. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Tue Jan 10 16:58:09 2023 From: helena at openstack.org (Helena Spease) Date: Tue, 10 Jan 2023 10:58:09 -0600 Subject: The OpenInfra Summit CFP is closing soon! In-Reply-To: <1673276072.34424050@apps.rackspace.com> References: <17669EA6-4CD3-4B3E-A980-B6572EF25A59@openstack.org> <1673276072.34424050@apps.rackspace.com> Message-ID: <22352AE1-CC94-4C6C-A93F-157A70DE5B54@openstack.org> Thanks for fixing the error, Josh! I also want to throw out a reminder that today is the last day to submit your talks to the CFP for the 2023 OpenInfra Summit (June 13-15, 2023)[1]! The CFP closes January 10, 2023, at 11:59 p.m. PT. Happy CFP-ing, everyone! Cheers, Helena [1] https://openinfra.dev/summit/vancouver-2023/summit-tracks/ > On Jan 9, 2023, at 8:54 AM, josh at openinfra.dev wrote: > > Hi Arnaud, > > Thank you very much for letting us know about this! 
You are absolutely correct, there was an issue with the DB for the Social Summary field. > > This has now been adjusted and you should be able to submit the full 280 characters for the Social Summary. > > Again, thank you for researching the issue and for raising a flag on it - I really appreciate it! > > Cheers, > Josh > -----Original Message----- > From: "Arnaud Morin" > Sent: Monday, January 9, 2023 4:50am > To: "Helena Spease" > Cc: openstack-discuss at lists.openstack.org > Subject: Re: The OpenInfra Summit CFP is closing soon! > > I found this: > https://github.com/OpenStackweb/summit-api/blob/main/tests/schema.sql#L9991 > > Not sure this is the root cause, but maybe the field is wrong in the DB? > > We also have this: > https://github.com/OpenStackweb/summit-api/blob/main/app/Http/Controllers/Apis/Protected/Summit/Factories/SummitEventValidationRulesFactory.php#L43 > > So it should be good on API side > > On 09.01.23 - 09:38, Arnaud Morin wrote: > > Hey Helena, > > > > We are preparing some talk submissions, but it seems the > > "Social Summary (280 chars)" is not accepting 280 chars, but only 100. > > Is it normal behavior? > > > > Cheers, > > Arnaud. > > > > > > On 06.01.23 - 13:07, Helena Spease wrote: > > > Hi Everyone! > > > > > > The CFP for the 2023 OpenInfra Summit (June 13-15, 2023) is closing in just a few days[1]! Check out the full list of tracks and submit a talk on your topic of expertise [2]. > > > > > > The CFP closes January 10, 2023, at 11:59 p.m. PT. See what that is in your timezone [3] > > > > > > We are also now accepting submissions for Forum sessions [4]! Looking for other resources? Find information on registration, sponsorships, travel support and visa requests at https://openinfra.dev/summit/ > > > > > > If you have any questions feel free to reach out :) > > > > > > Cheers, > > > Helena > > > > > > [1] https://cfp.openinfra.dev/app/vancouver-2023/19/presentations > > > [2] https://openinfra.dev/summit/vancouver-2023/summit-tracks/ > > > [3] https://www.timeanddate.com/worldclock/fixedtime.html?msg=2023+OpenInfra+Summit+CFP+Closes&iso=20230110T2359&p1=137 > > > [4] https://cfp.openinfra.dev/app/vancouver-2023/20/ > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.aminian.server at gmail.com Tue Jan 10 17:54:36 2023 From: p.aminian.server at gmail.com (Parsa Aminian) Date: Tue, 10 Jan 2023 21:24:36 +0330 Subject: nova api error Message-ID: hello On nova api I have this error : OSError: Apache/mod_wsgi request data read error: Partial results are valid but processing is incomplete and I have Unknown Error (HTTP 504) error on every nova request . -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arne.Wiebalck at cern.ch Tue Jan 10 18:36:06 2023 From: Arne.Wiebalck at cern.ch (Arne Wiebalck) Date: Tue, 10 Jan 2023 18:36:06 +0000 Subject: [ironic] Moving Aija to core-emeritus In-Reply-To: References: Message-ID: Sounds good to me. Thanks for all your work (so far :-), Aija! ________________________________ Von: Jay Faulkner Gesendet: Dienstag, 10. Januar 2023, 16:47 An: OpenStack Discuss Betreff: [ironic] Moving Aija to core-emeritus Hi all, As many of you may already know, Aija is no longer actively working on OpenStack. She informed many Ironic cores over email of this last week. I asked her if she expected to contribute on her own time and she said no. 
In order to ensure we keep core reviewer lists up to date, I suggest we move Aija to core-emeritus status, removing core permissions from gerrit. If she decides to begin working on OpenStack again in the future, we'd obviously quickly return core permissions. What do you think? -- Jay Faulkner Ironic PTL TC Member P.S. A huge thanks to Aija for her hard work on the sushy project and the Dell driver in Ironic. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fsbiz at yahoo.com Tue Jan 10 21:08:40 2023 From: fsbiz at yahoo.com (Farhad Sunavala) Date: Tue, 10 Jan 2023 21:08:40 +0000 (UTC) Subject: [openstack-helm] Stability of Openstack Helm for large Ironic (baremetal) clouds with Openstack Yoga References: <1348272324.5039594.1673384920270.ref@mail.yahoo.com> Message-ID: <1348272324.5039594.1673384920270@mail.yahoo.com> Hi, We're currently running Openstack services on baremetal.Time to upgrade has come and it is a nightmare.We are evaluating containerizing the services with Openstack Helm. Our cloud:We are essentially a large Openstack Ironic (Baremetal) cloud with 5000-8000 baremetal nodes in different DCs.We are thinking of upgrading from Queens to Yoga. Any feedback on stability of Openstack Helm for such installations? Any pointers on what to expect, gotchas, etc.? Thanks,Fred.. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Tue Jan 10 22:56:08 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Tue, 10 Jan 2023 14:56:08 -0800 Subject: [ironic] Bugfix branches being EOL'd first week of Jan, 2023 In-Reply-To: References: Message-ID: It's my intention to execute these changes tomorrow, Jan 11th. Please take notice. Thanks, Jay Faulkner On Tue, Dec 13, 2022 at 9:04 AM Jay Faulkner wrote: > OpenStack Community and Operators, > > As documented in > https://specs.openstack.org/openstack/ironic-specs/specs/approved/new-release-model.html, > Ironic performs bugfix releases in the middle of a cycle to permit > downstream packagers to more rapidly deliver features to standalone Ironic > users. > > However, we've neglected as a project to cleanup or EOL any of these > branches -- until now. Please take notice that during the first week in > January, we will be EOL-ing all old, unsupported Ironic bugfix branches. > This will be handled similarly to an EOL of a stable branch; we will create > a tag -- e.g. for bugfix/x.y branch, we would tag bugfix-x.y-eol -- then > remove the branch. > > These branches have been out of support for months and should not be in > use in your Ironic clusters. If you are using any branches slated for > retirement, please immediately upgrade to a supported Ironic version. > > A full listing of projects and branches impacted: > > ironic branches being retired > bugfix/15.1 > bugfix/15.2 > bugfix/16.1 > bugfix/16.2 > bugfix/18.0 > bugfix/20.0 > > ironic-python-agent branches being retired > bugfix/6.2 > bugfix/6.3 > bugfix/6.5 > bugfix/6.6 > bugfix/8.0 > bugfix/8.4 > > ironic-inspector branches being retired > bugfix/10.2 > bugfix/10.3 > bugfix/10.5 > bugfix/10.10 > > Thank you, > Jay Faulkner > Ironic PTL > -------------- next part -------------- An HTML attachment was scrubbed... 
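For anyone mirroring these repos locally, the git-level effect of the EOL process described above is roughly the following (illustrative commands only -- the actual retirement is done through the project's release tooling and Gerrit permissions, not by hand):

    # tag the tip of a bugfix branch, then drop the branch
    git fetch origin
    git tag bugfix-15.1-eol origin/bugfix/15.1
    git push origin bugfix-15.1-eol
    git push origin --delete bugfix/15.1

Consumers pinned to a removed branch can keep building from the corresponding -eol tag, but should move to a supported release as requested above.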
URL: From mnaser at vexxhost.com Wed Jan 11 00:45:16 2023 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 10 Jan 2023 19:45:16 -0500 Subject: [openstack-helm] Stability of Openstack Helm for large Ironic (baremetal) clouds with Openstack Yoga In-Reply-To: <1348272324.5039594.1673384920270@mail.yahoo.com> References: <1348272324.5039594.1673384920270.ref@mail.yahoo.com> <1348272324.5039594.1673384920270@mail.yahoo.com> Message-ID: Hi there, We run Helm successfully here and we have it working for a quite a few different clouds with the OSH charts. (And some of our open source tooling to glue it all) Thanks Mohammed On Tue, Jan 10, 2023 at 7:42 PM Farhad Sunavala wrote: > Hi, > > We're currently running Openstack services on baremetal. > Time to upgrade has come and it is a nightmare. > We are evaluating containerizing the services with Openstack Helm. > > Our cloud: > We are essentially a large Openstack Ironic (Baremetal) cloud with > 5000-8000 baremetal nodes in different DCs. > We are thinking of upgrading from Queens to Yoga. > > Any feedback on stability of Openstack Helm for such installations? Any > pointers on what to expect, gotchas, etc. > > Thanks, > Fred.. > -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Jan 11 06:49:55 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 10 Jan 2023 22:49:55 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2023 Jan 11 at 1600 UTC In-Reply-To: <185983a192b.bde086db288775.973615283502833653@ghanshyammann.com> References: <185983a192b.bde086db288775.973615283502833653@ghanshyammann.com> Message-ID: <1859f97abf7.f2974b3b410865.4091847308677558562@ghanshyammann.com> Hello Everyone, Below is the agenda for the TC meeting scheduled on Jan 11 at 1600 UTC. 
Location: IRC OFTC network in the #openstack-tc channel * Roll call * Follow up on past action items * Gate health check * Cleanup of PyPI maintainer list for OpenStack Projects ** There are other maintainers present along with 'openstackci', A few examples: *** https://pypi.org/project/murano/ *** https://pypi.org/project/glance/ ** More new maintainers are being added without knowledge to OpenStack and by skipping our contribution process *** Example: https://github.com/openstack/xstatic-font-awesome/pull/2 * Mistral situation ** Release team proposing it to mark its release deprecated *** https://review.opendev.org/c/openstack/governance/+/866562 ** Gate is fixed (except python-mistralclient) *** https://review.opendev.org/q/topic:gate-fix-mistral-repo *** Core members are actively fixing/merging the changes now ** Beta release patches *** https://review.opendev.org/c/openstack/releases/+/869470 *** https://review.opendev.org/c/openstack/releases/+/869448 * Adjutant situation ** Gate is fixed by PTL ** Beta release is happening (patch not merged yet but has one +2 from release team) *** https://review.opendev.org/c/openstack/releases/+/869449 *** https://review.opendev.org/c/openstack/releases/+/869471 ** Proposal is to remove it from Inactive projects list *** https://review.opendev.org/c/openstack/governance/+/869665 * Recurring tasks check ** Bare 'recheck' state *** https://etherpad.opendev.org/p/recheck-weekly-summary * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 09 Jan 2023 12:30:22 -0800 Ghanshyam Mann wrote --- > Hello Everyone, > > The technical Committee's next weekly meeting is scheduled for 2023 Jan 11, at 1600 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Tuesday, Jan 10 at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From dtantsur at redhat.com Wed Jan 11 10:00:25 2023 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 11 Jan 2023 11:00:25 +0100 Subject: [ironic] [baremetal] [ops] Mark the date: new bare metal SIG meetup on Feb 8th Message-ID: Hi stackers! The bare metal SIG will start with a new format of meetings this year: quarterly and with more content. Please read the announcement: https://ironicbaremetal.org/blog/baremetal-sig-2023q1/ Let me know if you have any questions. Otherwise, we'll be looking forward to talking to you. Zoom link will be posted later. Dmitry P.S. Please help to spread the word! -- Red Hat GmbH , Registered seat: Werner von Siemens Ring 12, D-85630 Grasbrunn, Germany Commercial register: Amtsgericht Muenchen/Munich, HRB 153243,Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Wed Jan 11 10:51:47 2023 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Wed, 11 Jan 2023 11:51:47 +0100 Subject: [neutron] PTG flavour: virtual or in person Message-ID: Hello Neutrinos: I'm reaching you today to ask you about the PTG. Please first read [1] to have a bit of context. Due to the requests from the contributors to bring back the in person events, there will be a PTG along with the OpenInfra Summit in Vancouver. The virtual PTGs will remain and the first one will be in March. My main questions here are: 1) What do you prefer: virtual or in-person? 2) For this year, will you attend the virtual PTG only? 
Or will you attend the Vancouver one too? In the second question I'm implying that the virtual ones are "mandatory". At least this year those events will be needed to define the roadmap of the next releases, same as the previous virtual PTGs we had (please don't forget to order your ticket [2]). Regards. [1] https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031645.html [2]https://openinfra-ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Jan 11 11:01:57 2023 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 11 Jan 2023 12:01:57 +0100 Subject: [neutron] PTG flavour: virtual or in person In-Reply-To: References: Message-ID: <4477461.nMhJUGdZTT@p1> Hi, Dnia ?roda, 11 stycznia 2023 11:51:47 CET Rodolfo Alonso Hernandez pisze: > Hello Neutrinos: > > I'm reaching you today to ask you about the PTG. Please first read [1] to > have a bit of context. Due to the requests from the contributors to bring > back the in person events, there will be a PTG along with the OpenInfra > Summit in Vancouver. The virtual PTGs will remain and the first one will be > in March. > > My main questions here are: > 1) What do you prefer: virtual or in-person? > 2) For this year, will you attend the virtual PTG only? Or will you attend > the Vancouver one too? > > In the second question I'm implying that the virtual ones are "mandatory". > At least this year those events will be needed to define the roadmap of the > next releases, same as the previous virtual PTGs we had (please don't > forget to order your ticket [2]). > > Regards. > > [1] > https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031645.html > [2]https://openinfra-ptg.eventbrite.com > Personally I would like to have in-person even and meet again many of Neutron people there. But I know that not all people will want/can travel to Vancouver. So maybe we can have virtual events as the "main" ones and use in-person even in Vancouver for something like "hackfest", "office hours" or something similar. Generally something with less strict agenda and meetings. But that's just my 2 cents :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From senrique at redhat.com Wed Jan 11 11:35:57 2023 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 11 Jan 2023 11:35:57 +0000 Subject: [cinder] Bug Report from 01-11-2023 Message-ID: This is a bug report from 01-04-2022 to 01-11-2023. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Medium - https://bugs.launchpad.net/cinder/+bug/2001619 "svf : if pool attribute is specified in volume type during retype along with --migration-policy defaults to cinder generic migration." Assigned to Sathyanarayana R. Cheers, -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Wed Jan 11 12:17:00 2023 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 11 Jan 2023 13:17:00 +0100 Subject: [neutron] PTG flavour: virtual or in person In-Reply-To: <4477461.nMhJUGdZTT@p1> References: <4477461.nMhJUGdZTT@p1> Message-ID: Hi, I Agree with Slawek. 
The virtual events are perfect to discuss things, but without the salt of personal discussion during the meetings or around the coffee machines :-) So it would be good to have an in?person event, but with a light agenda. Lajos Slawek Kaplonski ezt ?rta (id?pont: 2023. jan. 11., Sze, 12:09): > Hi, > > Dnia ?roda, 11 stycznia 2023 11:51:47 CET Rodolfo Alonso Hernandez pisze: > > Hello Neutrinos: > > > > I'm reaching you today to ask you about the PTG. Please first read [1] to > > have a bit of context. Due to the requests from the contributors to bring > > back the in person events, there will be a PTG along with the OpenInfra > > Summit in Vancouver. The virtual PTGs will remain and the first one will > be > > in March. > > > > My main questions here are: > > 1) What do you prefer: virtual or in-person? > > 2) For this year, will you attend the virtual PTG only? Or will you > attend > > the Vancouver one too? > > > > In the second question I'm implying that the virtual ones are > "mandatory". > > At least this year those events will be needed to define the roadmap of > the > > next releases, same as the previous virtual PTGs we had (please don't > > forget to order your ticket [2]). > > > > Regards. > > > > [1] > > > https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031645.html > > [2]https://openinfra-ptg.eventbrite.com > > > > Personally I would like to have in-person even and meet again many of > Neutron people there. But I know that not all people will want/can travel > to Vancouver. So maybe we can have virtual events as the "main" ones and > use in-person even in Vancouver for something like "hackfest", "office > hours" or something similar. Generally something with less strict agenda > and meetings. But that's just my 2 cents :) > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Wed Jan 11 15:51:02 2023 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 11 Jan 2023 09:51:02 -0600 Subject: [neutron] PTG flavour: virtual or in person In-Reply-To: References: <4477461.nMhJUGdZTT@p1> Message-ID: Hi, I think in-person meetings are very important. I understand, though, the challenge of inclusiveness, so let's try to make as many accommodations as possible. But definitely, in my opinion, we should meet in person. Cheerios On Wed, Jan 11, 2023 at 6:18 AM Lajos Katona wrote: > Hi, > I Agree with Slawek. > The virtual events are perfect to discuss things, but without the salt of > personal discussion during the meetings or around the coffee machines :-) > So it would be good to have an in?person event, but with a light agenda. > > Lajos > > Slawek Kaplonski ezt ?rta (id?pont: 2023. jan. 11., > Sze, 12:09): > >> Hi, >> >> Dnia ?roda, 11 stycznia 2023 11:51:47 CET Rodolfo Alonso Hernandez pisze: >> > Hello Neutrinos: >> > >> > I'm reaching you today to ask you about the PTG. Please first read [1] >> to >> > have a bit of context. Due to the requests from the contributors to >> bring >> > back the in person events, there will be a PTG along with the OpenInfra >> > Summit in Vancouver. The virtual PTGs will remain and the first one >> will be >> > in March. >> > >> > My main questions here are: >> > 1) What do you prefer: virtual or in-person? >> > 2) For this year, will you attend the virtual PTG only? Or will you >> attend >> > the Vancouver one too? >> > >> > In the second question I'm implying that the virtual ones are >> "mandatory". 
>> > At least this year those events will be needed to define the roadmap of >> the >> > next releases, same as the previous virtual PTGs we had (please don't >> > forget to order your ticket [2]). >> > >> > Regards. >> > >> > [1] >> > >> https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031645.html >> > [2]https://openinfra-ptg.eventbrite.com >> > >> >> Personally I would like to have in-person even and meet again many of >> Neutron people there. But I know that not all people will want/can travel >> to Vancouver. So maybe we can have virtual events as the "main" ones and >> use in-person even in Vancouver for something like "hackfest", "office >> hours" or something similar. Generally something with less strict agenda >> and meetings. But that's just my 2 cents :) >> >> -- >> Slawek Kaplonski >> Principal Software Engineer >> Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Jan 11 15:56:17 2023 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 11 Jan 2023 16:56:17 +0100 Subject: [largescale-sig] Next meeting: Jan 11, 15utc In-Reply-To: References: Message-ID: Here is the summary of our SIG meeting today. We discussed our next OpenInfra Live episode on January 26, featuring Ubisoft. We also decided to alternate IRC meeting times with a APAC+EU-friendly time in an effort to include new participants. You can read the detailed meeting logs at: https://meetings.opendev.org/meetings/large_scale_sig/2023/large_scale_sig.2023-01-11-15.01.html Our next IRC meeting will be February 8, at 0900utc on #openstack-operators on OFTC. Regards, -- Thierry Carrez (ttx) From ignaziocassano at gmail.com Wed Jan 11 22:10:36 2023 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 11 Jan 2023 23:10:36 +0100 Subject: [openstack][nova][kvm] numa Message-ID: Hello All, I would like to know if with numa in kvm hypervisor I must disable merge across nodes ? If yes, is there any metadata to do it ? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Wed Jan 11 23:14:01 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Wed, 11 Jan 2023 15:14:01 -0800 Subject: [ironic] Bugfix branches being EOL'd first week of Jan, 2023 In-Reply-To: References: Message-ID: This change has been executed for Ironic and Ironic-Python-Agent. However, due to an issue applying the ACLs for ironic-inspector. The root cause of that ACL problem has been found and is being resolved. However, to ensure I'll be around after the change applies in case of any issues; I'm going to postpone applying the changes for ironic-inspector until tomorrow. Thanks, Jay Faulkner On Tue, Jan 10, 2023 at 2:56 PM Jay Faulkner wrote: > It's my intention to execute these changes tomorrow, Jan 11th. Please take > notice. > > Thanks, > Jay Faulkner > > On Tue, Dec 13, 2022 at 9:04 AM Jay Faulkner wrote: > >> OpenStack Community and Operators, >> >> As documented in >> https://specs.openstack.org/openstack/ironic-specs/specs/approved/new-release-model.html, >> Ironic performs bugfix releases in the middle of a cycle to permit >> downstream packagers to more rapidly deliver features to standalone Ironic >> users. >> >> However, we've neglected as a project to cleanup or EOL any of these >> branches -- until now. Please take notice that during the first week in >> January, we will be EOL-ing all old, unsupported Ironic bugfix branches. 
>> This will be handled similarly to an EOL of a stable branch; we will create >> a tag -- e.g. for bugfix/x.y branch, we would tag bugfix-x.y-eol -- then >> remove the branch. >> >> These branches have been out of support for months and should not be in >> use in your Ironic clusters. If you are using any branches slated for >> retirement, please immediately upgrade to a supported Ironic version. >> >> A full listing of projects and branches impacted: >> >> ironic branches being retired >> bugfix/15.1 >> bugfix/15.2 >> bugfix/16.1 >> bugfix/16.2 >> bugfix/18.0 >> bugfix/20.0 >> >> ironic-python-agent branches being retired >> bugfix/6.2 >> bugfix/6.3 >> bugfix/6.5 >> bugfix/6.6 >> bugfix/8.0 >> bugfix/8.4 >> >> ironic-inspector branches being retired >> bugfix/10.2 >> bugfix/10.3 >> bugfix/10.5 >> bugfix/10.10 >> >> Thank you, >> Jay Faulkner >> Ironic PTL >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Wed Jan 11 23:14:50 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Wed, 11 Jan 2023 15:14:50 -0800 Subject: [ironic] Bugfix branches being EOL'd first week of Jan, 2023 In-Reply-To: References: Message-ID: I truncated a sentence in the previous post. > This change has been executed for Ironic and Ironic-Python-Agent. However, due to an issue applying the ACLs for ironic-inspector, I was unable to retire those bugfix branches for ironic-inspector today.. The root cause of that ACL problem has been found and is being resolved. On Wed, Jan 11, 2023 at 3:14 PM Jay Faulkner wrote: > This change has been executed for Ironic and Ironic-Python-Agent. However, > due to an issue applying the ACLs for ironic-inspector. The root cause of > that ACL problem has been found and is being resolved. > > However, to ensure I'll be around after the change applies in case of any > issues; I'm going to postpone applying the changes for ironic-inspector > until tomorrow. > > Thanks, > Jay Faulkner > > On Tue, Jan 10, 2023 at 2:56 PM Jay Faulkner wrote: > >> It's my intention to execute these changes tomorrow, Jan 11th. Please >> take notice. >> >> Thanks, >> Jay Faulkner >> >> On Tue, Dec 13, 2022 at 9:04 AM Jay Faulkner wrote: >> >>> OpenStack Community and Operators, >>> >>> As documented in >>> https://specs.openstack.org/openstack/ironic-specs/specs/approved/new-release-model.html, >>> Ironic performs bugfix releases in the middle of a cycle to permit >>> downstream packagers to more rapidly deliver features to standalone Ironic >>> users. >>> >>> However, we've neglected as a project to cleanup or EOL any of these >>> branches -- until now. Please take notice that during the first week in >>> January, we will be EOL-ing all old, unsupported Ironic bugfix branches. >>> This will be handled similarly to an EOL of a stable branch; we will create >>> a tag -- e.g. for bugfix/x.y branch, we would tag bugfix-x.y-eol -- then >>> remove the branch. >>> >>> These branches have been out of support for months and should not be in >>> use in your Ironic clusters. If you are using any branches slated for >>> retirement, please immediately upgrade to a supported Ironic version. 
>>> >>> A full listing of projects and branches impacted: >>> >>> ironic branches being retired >>> bugfix/15.1 >>> bugfix/15.2 >>> bugfix/16.1 >>> bugfix/16.2 >>> bugfix/18.0 >>> bugfix/20.0 >>> >>> ironic-python-agent branches being retired >>> bugfix/6.2 >>> bugfix/6.3 >>> bugfix/6.5 >>> bugfix/6.6 >>> bugfix/8.0 >>> bugfix/8.4 >>> >>> ironic-inspector branches being retired >>> bugfix/10.2 >>> bugfix/10.3 >>> bugfix/10.5 >>> bugfix/10.10 >>> >>> Thank you, >>> Jay Faulkner >>> Ironic PTL >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Jan 12 02:09:06 2023 From: smooney at redhat.com (Sean Mooney) Date: Thu, 12 Jan 2023 02:09:06 +0000 Subject: [openstack][nova][kvm] numa In-Reply-To: References: Message-ID: <58e99fabb31205425e21afc7b3e61cd905f35578.camel@redhat.com> On Wed, 2023-01-11 at 23:10 +0100, Ignazio Cassano wrote: > Hello All, > I would like to know if with numa in kvm hypervisor I must disable merge > across nodes ? If yes, is there any metadata to do it ? no nova has numa toplogy awareness for vms with a virutal numa toplogy i.e. if you want vms ot have numa affinity you can request a numa toplogy explcitly by using hw:numa_nodes= it can also be done via the image with hw_numa_nodes. nova will also generate a request for a singel numa node if you use specific feature like cpu pinnign or hugepages to name two cases. i belive pmem also does it so that is not ment to be an exaustive list. all vms that have an implciet or explict numa toplogy should have hw:mem_page_size or hw_mem_page_size set to an allowed value (small,large,any,) for example hw:mem_page_size=small will create a singel numa node guest using implcit numa request as a result of specifying a page size and small will ensure that hugepages are not used for the vm. on must cpu archtecutres small means the default 4k pages are uesed. such a vm will be pinned to a spcific host numa ndoe but will float over cpu on that numa node sicne cpu pinnign is not enabeld. if the vm does not request a numa topology we will not do any numa aware placment or affinity enforcement. nova does not support mixing numa and non numa vms on the same host and its left to the operator to ensure that useing any of the means supproted by placement or schduler filters. > Ignazio From ignaziocassano at gmail.com Thu Jan 12 05:17:08 2023 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 12 Jan 2023 06:17:08 +0100 Subject: [openstack][nova][kvm] numa In-Reply-To: <58e99fabb31205425e21afc7b3e61cd905f35578.camel@redhat.com> References: <58e99fabb31205425e21afc7b3e61cd905f35578.camel@redhat.com> Message-ID: Thank you for the clarification. Il Gio 12 Gen 2023, 03:09 Sean Mooney ha scritto: > On Wed, 2023-01-11 at 23:10 +0100, Ignazio Cassano wrote: > > Hello All, > > I would like to know if with numa in kvm hypervisor I must disable merge > > across nodes ? If yes, is there any metadata to do it ? > no nova has numa toplogy awareness for vms with a virutal numa toplogy > > i.e. if you want vms ot have numa affinity you can request a numa toplogy > explcitly by using hw:numa_nodes= > it can also be done via the image with hw_numa_nodes. > > nova will also generate a request for a singel numa node if you use > specific feature like cpu pinnign or hugepages to name two cases. i belive > pmem > also does it so that is not ment to be an exaustive list. 
> > all vms that have an implciet or explict numa toplogy should have > hw:mem_page_size or hw_mem_page_size set to an allowed value > (small,large,any,) for example > hw:mem_page_size=small will create a singel numa node guest using implcit > numa request as a > result of specifying > a page size and small will ensure that hugepages are not used for the vm. > on must cpu archtecutres small means the default 4k pages are uesed. > such a vm will be pinned to a spcific host numa ndoe but will float over > cpu on that numa node sicne cpu pinnign is not enabeld. > > if the vm does not request a numa topology we will not do any numa aware > placment or affinity enforcement. > > nova does not support mixing numa and non numa vms on the same host and > its left to the operator to ensure that useing any of the means supproted > by placement or schduler filters. > > > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Thu Jan 12 06:56:04 2023 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Thu, 12 Jan 2023 07:56:04 +0100 Subject: Experience with VGPUs Message-ID: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> Dear All, we are planning to have a POC on VGPUs in our Openstack cluster. Therefore I have a few questions and generally wanted to ask how well VGPUs are supported in Openstack. The docs, in particular: https://docs.openstack.org/nova/zed/admin/virtual-gpu.html explain quite well the general implementation. But I am more interested in general experience with using VGPUs in Openstack. We currently have a small YOGA cluster, planning to upgrade to Zed soon, with a couple of compute nodes. Currently our users use consumer cards like RTX 3050/3060 on their laptops and the idea would be to provide VGPUs to these users. For this I would like to make a very small POC where we first equip one compute node with an Nvidia GPU. Gladly also a few tips on which card would be a good starting point are highly appreciated. I know this heavily depends on the server hardware but this is something I can figure out later. Also do we need additional software licenses to run this? I saw this very nice presentation from CERN on VGPUs: https://indico.cern.ch/event/776411/contributions/3345183/attachments/1851624/3039917/02_-_vGPUs_with_OpenStack_-_Accelerating_Science.pdf In the table they are listing Quadro vDWS licenses. I assume we need these in order to use the cards? Also do we need something like Cyborg for this or is VGPU fully implemented in Nova? Best Regards, Oliver -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Thu Jan 12 07:25:48 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Thu, 12 Jan 2023 12:55:48 +0530 Subject: [cinder] 2023.1 R-9 virtual mid cycle on 18th January, 2023 Message-ID: Hello Argonauts, As discussed in yesterday's cinder upstream meeting[1], we will be conducting our second mid cycle on 18th January, 2023 (R-9 week). Following are the details: Date: 18th January 2023 Time: 1400-1600 UTC Meeting link: https://bluejeans.com/556681290 Etherpad: https://etherpad.opendev.org/p/cinder-antelope-midcycles Don't forget to add topics and see you there! [1] https://meetings.opendev.org/meetings/cinder/2023/cinder.2023-01-11-14.00.log.html#l-47 Thanks Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... 
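Coming back to the NUMA thread above, the knobs Sean mentions are ordinary flavor extra specs (or image properties); a minimal illustration with made-up flavor and image names:

    # flavor-side: one virtual NUMA node, small (4k) pages, no hugepages
    openstack flavor set --property hw:numa_nodes=1 --property hw:mem_page_size=small my-numa-flavor

    # image-side equivalent
    openstack image set --property hw_numa_nodes=1 my-image

As noted above, any guest that ends up with an implicit or explicit NUMA topology should also carry an hw:mem_page_size (or hw_mem_page_size) value so that memory is accounted for per host NUMA node.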
URL: From Arne.Wiebalck at cern.ch Thu Jan 12 07:42:59 2023 From: Arne.Wiebalck at cern.ch (Arne Wiebalck) Date: Thu, 12 Jan 2023 07:42:59 +0000 Subject: Experience with VGPUs In-Reply-To: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> Message-ID: Hi Oliver, The presentation you linked was only *at* CERN, not *from* CERN (it was during an OpenStack Day we organised here). Sylvain and/or Mohammed may be available to answer the questions you have related to that deck, or also in general for the integration of GPUs. Now, *at* CERN we also have hypervisors with different GPUs in our fleet, and are also looking into various options how to efficiently provision them: as bare metal, as vGPUs, using MIG support, ... and we have submitted a presentation proposal for the upcoming summit to share our experiences. If you have very specific questions, we can try to answer them here, but maybe there is interest and it would be more efficient to organize a session/call (e.g. as part of the Openstack Operators activities or the Scientific SIG?) to exchange experiences on GPU integration and answer questions there? What do you and others think? Cheers, Arne ________________________________________ From: Oliver Weinmann Sent: Thursday, 12 January 2023 07:56 To: openstack-discuss Subject: Experience with VGPUs Dear All, we are planning to have a POC on VGPUs in our Openstack cluster. Therefore I have a few questions and generally wanted to ask how well VGPUs are supported in Openstack. The docs, in particular: https://docs.openstack.org/nova/zed/admin/virtual-gpu.html explain quite well the general implementation. But I am more interested in general experience with using VGPUs in Openstack. We currently have a small YOGA cluster, planning to upgrade to Zed soon, with a couple of compute nodes. Currently our users use consumer cards like RTX 3050/3060 on their laptops and the idea would be to provide VGPUs to these users. For this I would like to make a very small POC where we first equip one compute node with an Nvidia GPU. Gladly also a few tips on which card would be a good starting point are highly appreciated. I know this heavily depends on the server hardware but this is something I can figure out later. Also do we need additional software licenses to run this? I saw this very nice presentation from CERN on VGPUs: https://indico.cern.ch/event/776411/contributions/3345183/attachments/1851624/3039917/02_-_vGPUs_with_OpenStack_-_Accelerating_Science.pdf In the table they are listing Quadro vDWS licenses. I assume we need these in order to use the cards? Also do we need something like Cyborg for this or is VGPU fully implemented in Nova? Best Regards, Oliver From noonedeadpunk at gmail.com Thu Jan 12 07:55:33 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Thu, 12 Jan 2023 08:55:33 +0100 Subject: Experience with VGPUs In-Reply-To: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> Message-ID: Hi Oliver, Nvidia's vGPU/MIG are quite popular options and usage of them don't really require cyborg - they can be utilized solely with Nova/Placement. However, there are plenty of nuances, as implementation of vGPUs also depends on the GPU architecture - Tesla's are quite different from Amperes in how they got created driver-side and got represented among placement resources. Also I'm not sure that desktop cards, like RTX 3050, does support vGPUs at all. 
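As a concrete reference point for the vGPU thread: on a card and driver combination that does expose mediated devices, the Nova-only setup boils down to advertising the mdev type on the compute node and requesting it in a flavor. A hedged sketch (the type name nvidia-233 and the flavor name are placeholders -- the real value comes from the driver, and on older releases the option was still called enabled_vgpu_types):

    # nova.conf on the compute node (recent releases)
    [devices]
    enabled_mdev_types = nvidia-233

    # flavor that requests one virtual GPU from placement
    openstack flavor set --property resources:VGPU=1 vgpu-flavor

Consumer GeForce/RTX cards generally do not expose mdev types, which is why the thread steers towards datacenter cards or plain PCI passthrough for them.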
Highly likely, that the only option for this type of cards will be PCI-passthrough, which is supported quite well and super easy to implement, as doesn't require any extra drivers. But if you want to leverage vGPUs/MIG, you will likely need cards like A10 (which doesn't have MIG support) or A30. Most of supported models along with possible slices are mentioned here: https://docs.nvidia.com/grid/15.0/grid-vgpu-user-guide/index.html#supported-gpus-grid-vgpu Regarding licensing - with vGPU approach you always license clients, not hypervisors. So you don't need any license to create VMs with vGPUs, just hypervisor driver that can be downloaded from Nvidia enterprise portal. And you will be able to test out if vGPU works inside VM, as absent license will apply limitations only after some time. And license type also depends on the workloads you want to run. So in case of AI training workloads you most likely need vCS license, but then vGPUs can be used only as computational ones, but not for virtual desktops. You can read more about licenses and their types here: https://docs.nvidia.com/grid/15.0/grid-licensing-user-guide/index.html To be completely frank, if our workloads won't require CUDA support, I would look closely on AMD GPUs, since there is no mess with licensing and their implementation of SR-IOV is way more starightforward and clear, at least for me. So if you're looking for GPUs for virtual desktops, that might be a good option for you. However, Nvidia is way more widespread in openstack workloads, so it's more likely to get some help/gotchas regarding Nvidia rather then any other GPU. ??, 12 ???. 2023 ?., 07:58 Oliver Weinmann : > Dear All, > > we are planning to have a POC on VGPUs in our Openstack cluster. Therefore > I have a few questions and generally wanted to ask how well VGPUs are > supported in Openstack. The docs, in particular: > > https://docs.openstack.org/nova/zed/admin/virtual-gpu.html > > explain quite well the general implementation. > > > But I am more interested in general experience with using VGPUs in > Openstack. We currently have a small YOGA cluster, planning to upgrade to > Zed soon, with a couple of compute nodes. Currently our users use consumer > cards like RTX 3050/3060 on their laptops and the idea would be to provide > VGPUs to these users. For this I would like to make a very small POC where > we first equip one compute node with an Nvidia GPU. Gladly also a few tips > on which card would be a good starting point are highly appreciated. I know > this heavily depends on the server hardware but this is something I can > figure out later. Also do we need additional software licenses to run this? > I saw this very nice presentation from CERN on VGPUs: > > > https://indico.cern.ch/event/776411/contributions/3345183/attachments/1851624/3039917/02_-_vGPUs_with_OpenStack_-_Accelerating_Science.pdf > > In the table they are listing Quadro vDWS licenses. I assume we need > these in order to use the cards? Also do we need something like Cyborg for > this or is VGPU fully implemented in Nova? > > Best Regards, > > Oliver > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elfosardo at gmail.com Thu Jan 12 09:15:40 2023 From: elfosardo at gmail.com (Riccardo Pittau) Date: Thu, 12 Jan 2023 10:15:40 +0100 Subject: [ironic] Moving Aija to core-emeritus In-Reply-To: References: Message-ID: +2 her work has been invaluable Riccardo On Wed, Jan 11, 2023 at 1:41 AM Arne Wiebalck wrote: > Sounds good to me. 
> > Thanks for all your work (so far :-), Aija! > > > ------------------------------ > *Von:* Jay Faulkner > *Gesendet:* Dienstag, 10. Januar 2023, 16:47 > *An:* OpenStack Discuss > *Betreff:* [ironic] Moving Aija to core-emeritus > > Hi all, > > As many of you may already know, Aija is no longer actively working on > OpenStack. She informed many Ironic cores over email of this last week. I > asked her if she expected to contribute on her own time and she said no. > > In order to ensure we keep core reviewer lists up to date, I suggest we > move Aija to core-emeritus status, removing core permissions from gerrit. > If she decides to begin working on OpenStack again in the future, we'd > obviously quickly return core permissions. > > What do you think? > > -- > Jay Faulkner > Ironic PTL > TC Member > > P.S. A huge thanks to Aija for her hard work on the sushy project and the > Dell driver in Ironic. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lpiwowar at redhat.com Thu Jan 12 09:15:54 2023 From: lpiwowar at redhat.com (Lukas Piwowarski) Date: Thu, 12 Jan 2023 10:15:54 +0100 Subject: [CI][Tempest] Broken tempest-full-multinode-py3 job Message-ID: Hi, I've got stuck trying to figure out why one of our tempest jobs is failing. I will be grateful for any kind of input regarding potential solutions. I did a little investigation of the failing job (tempest-multinode-full-py3 [1][2]) and this is what I found: - The failure is mostly caused by scenario tests. There was one occurrence of failure outside of the scenario tests ( test_create_servers_on_different_hosts_with_list_of_servers) - The failure is always connected to the creation of a VM. The reasons for the failure differ but most of the time it looks like the VM gets created but tempest is not able to access it ("Authentication (publickey) failed.") or the created instance can not be found ("Instance f4e582fe-52e6-40e3-b366-2b11ed589089 could not be found.") - The failure is not connected to one specific test. I am wondering why only the multinode job is failing as in other jobs the scenario tests are passing fine. Thanks for any kind of help Luk?? Piwowarsk [1] https://ca077a96786c794b51b6-8aab0ff599dd9fcfda722d0dee6871dd.ssl.cf5.rackcdn.com/866692/12/check/tempest-multinode-full-py3/2684eac/job-output.txt [2] https://review.opendev.org/c/openstack/tempest/+/866692 -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Thu Jan 12 09:26:01 2023 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 12 Jan 2023 10:26:01 +0100 Subject: Overriding tox's install_command won't work for non-service projects In-Reply-To: References: Message-ID: <20230112092601.xmflu4boe4za6jdj@localhost> On 10/01, Stephen Finucane wrote: > Another tox 4 PSA. It turns out the tox 3 was not using the command in '[tox] > install_command' when installing the package under test. tox 4 does, which means > overriding '[tox] install_command' to include a constraints file (-c) will > prevent you installing any requirement that is listed in the upper-constraints > file, even if said requirement is the thing you're currently working on. This > applies to all libraries (e.g. oslo.db, python-cinderclient) but not the > services (cinder, nova) since those aren't included in upper-constraints. > > The "correct" way to respect upper-constraints is to provide them in 'deps' > alongside the requirements file(s), e.g. 
> > [testenv] > deps = > -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} > -r{toxinidir}/requirements.txt > -r{toxinidir}/test-requirements.txt > > This will cause tox to install all of the package's dependencies first *with* > constraints before installing the package itself *without constraints*. There is > a bug report open against pip to change this behaviour [1], but it's been sat > there for over two years with no activity so I wouldn't rely on this. > > Stephen > > [1] https://github.com/pypa/pip/issues/7839 > > Hi Stephen, In my past experience with Cinder tox honored the "install_command" just fine, the problem was actually when using "usedevelop = True" and setting the constraints in deps. We ended up adding a comment in our tox to prevent people from trying to move the constraints to "deps", as that created problems. This is the comment present in Cinder's tox.ini: # NOTE: Do not move the constraints from the install_command into deps, as that # may result in tox using unconstrained/untested dependencies. # We use "usedevelop = True" for tox jobs (except bindep), so tox does 2 # install calls, one for the deps and another for the cinder source code # as editable (pip -e). # Without the constraints in the install_command only the first # installation will honor the upper constraints, and the second install # for cinder itself will not know about the constraints which can result # in installing versions we don't want. # With constraints in the install_command tox will always honor our # constraints. Has this double requirement installation changed? Will it work fine now in the scenario described in our note? Cheers, Gorka. From sbauza at redhat.com Thu Jan 12 09:58:52 2023 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 12 Jan 2023 10:58:52 +0100 Subject: Experience with VGPUs In-Reply-To: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> Message-ID: Le jeu. 12 janv. 2023 ? 08:02, Oliver Weinmann a ?crit : > Dear All, > > we are planning to have a POC on VGPUs in our Openstack cluster. Therefore > I have a few questions and generally wanted to ask how well VGPUs are > supported in Openstack. The docs, in particular: > > https://docs.openstack.org/nova/zed/admin/virtual-gpu.html > > explain quite well the general implementation. > > > Indeed, and that's why you can't find nvidia-specific documentation in there. Upstream documentation in general shouldn't be telling about specific hardware but rather the general implementation. > But I am more interested in general experience with using VGPUs in > Openstack. We currently have a small YOGA cluster, planning to upgrade to > Zed soon, with a couple of compute nodes. Currently our users use consumer > cards like RTX 3050/3060 on their laptops and the idea would be to provide > VGPUs to these users. For this I would like to make a very small POC where > we first equip one compute node with an Nvidia GPU. Gladly also a few tips > on which card would be a good starting point are highly appreciated. I know > this heavily depends on the server hardware but this is something I can > figure out later. Also do we need additional software licenses to run this? > I saw this very nice presentation from CERN on VGPUs: > > > https://indico.cern.ch/event/776411/contributions/3345183/attachments/1851624/3039917/02_-_vGPUs_with_OpenStack_-_Accelerating_Science.pdf > > In the table they are listing Quadro vDWS licenses. 
I assume we need > these in order to use the cards? > Disclaimer : I'm not a Nvidia developer and I just enable their drivers so maybe I could provide wrong answers but lemme try. First, consumer cards like RTX3xxx GPUs don't support virtual GPUs because they don't have a specific nvidia license for them. For being able to create virtual GPUs, you need to rather have professional nvidia cards like Tesla or Ampere. See this documentation, it will explain both the supported hardware and the licenses you need to use (in case you want to run it from a RHEL compute) : https://docs.nvidia.com/grid/13.0/grid-vgpu-release-notes-red-hat-el-kvm/index.html#validated-platforms That being said, you'll quickly discover those GPUs can be expensive, so maybe it would good for you to know that nvidia T4 GPUs work correctly for what you want to test. > Also do we need something like Cyborg for this or is VGPU fully > implemented in Nova? > You can do either, but yeah Virtual GPUs are fully supported within Nova as of now. HTH, -Sylvain > Best Regards, > > Oliver > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Thu Jan 12 09:59:10 2023 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 12 Jan 2023 10:59:10 +0100 Subject: Issue resizing volumes attached to running volume backed instances In-Reply-To: References: Message-ID: <20230112095910.twlsro4oimajqpob@localhost> On 08/12, J?r?me BECOT wrote: > Hello Openstack, > > We have Ussuri deployed on a few clouds, and they're all plugged to > PureStorage Arrays. We allow users to only use volumes for their servers. It > means that each server disk is a LUN attached over ISCSI (with multipath) on > the compute node hosting the server. Everything works quite fine, but we > have a weird issue when extending volumes attached to running instances. The > guests notice the new disk size .. of the last extent. > > Say I have a server with a 10gb disk. I add 5gb. On the guest, still 10gb. I > add another 5gb, and on the guest I get 15, and so on. I've turned the debug > mode on and I could see no error in the log. Looking closer at the log I > could catch the culprit: > > 2022-12-08 17:35:13.998 46195 DEBUG os_brick.initiator.linuxscsi [] Starting > size: *76235669504* > 2022-12-08 17:35:14.028 46195 DEBUG os_brick.initiator.linuxscsi [] volume > size after scsi device rescan *80530636800* extend_volume > 2022-12-08 17:35:14.035 46195 DEBUG os_brick.initiator.linuxscsi [] Volume > device info = {'device': '/dev/disk/by-path/ip-1...1:3260-iscsi-iqn.2010-06.com.purestorage:flasharray.x-lun-10', > 'host': '5', 'channel': '0', 'id': '0', 'lun': '10'} extend_volume > 2022-12-08 17:35:14.348 46195 INFO os_brick.initiator.linuxscsi [] Find > Multipath device file for volume WWN 3624... > 2022-12-08 17:35:14.349 46195 DEBUG os_brick.initiator.linuxscsi [] Checking > to see if /dev/disk/by-id/dm-uuid-mpath-3624.. exists yet. wait_for_path > 2022-12-08 17:35:14.349 46195 DEBUG os_brick.initiator.linuxscsi [] > /dev/disk/by-id/dm-uuid-mpath-3624... has shown up. 
wait_for_path > 2022-12-08 17:35:14.382 46195 INFO os_brick.initiator.linuxscsi [] > mpath(/dev/disk/by-id/dm-uuid-mpath-3624) *current size 76235669504* > 2022-12-08 17:35:14.412 46195 INFO os_brick.initiator.linuxscsi [] > mpath(/dev/disk/by-id/dm-uuid-mpath-3624) *new size 76235669504* > 2022-12-08 17:35:14.413 46195 DEBUG oslo_concurrency.lockutils [] Lock > "extend_volume" released by > "os_brick.initiator.connectors.iscsi.ISCSIConnector.extend_volume" :: held > 2.062s inner 2022-12-08 17:35:14.459 46195 DEBUG > os_brick.initiator.connectors.iscsi [] <== extend_volume: return (2217ms) > *76235669504* trace_logging_wrapper > 2022-12-08 17:35:14.461 46195 DEBUG nova.virt.libvirt.volume.iscsi [] Extend > iSCSI Volume /dev/dm-28; new_size=*76235669504* extend_volume > 2022-12-08 17:35:14.462 46195 DEBUG nova.virt.libvirt.driver [] Resizing > target device /dev/dm-28 to *76235669504* _resize_attached_volume > > The logs clearly shows that the rescan confirm the new size but when > interrogating multipath, it does not. But requesting multipath few seconds > after on the command line shows the new size as well. It explains the > behaviour. > > I'm running Ubuntu 18.04 with multipath 0.7.4-2ubuntu3.2. The os-brick code > for multipath is far more basic than the one in master branch. Maybe the > multipath version installed is too recent for os-brick. > > Thanks for the help > > Jerome > Hi J?r?me, As far as I can see this is a problem with the speed in which things run. The speed at which the extend happens in the backend and is visible in the compute node is slower than the speed at which Nova asks os-brick to check the new size. There are multiple reasons why this could be happening: - Pure extend is not synchronous: So Cinder tells Nova that it has extended the volume before it has actually happened in the backend. I doubt that is the case. - The iSCSI notification of the new size to the compute node is slow. - The Nova execution is too fast for the compute node to notice the change in the volume's size. This is similar to the previous one, but in this case it's not that iSCSI is slow and it's a problem with the Network or the Storage Array, but it's that the compute node is too fast. Regardless, in my opinion there is a Cinder-Nova-OS-brick change that could be implemented to improve these situations. The extend_volume method in os-brick could receive the expected new size, that way it can actually wait a bit for the system to reflect it if it notices that it hasn't increased yet. Cheers, Gorka. From stephenfin at redhat.com Thu Jan 12 13:15:50 2023 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 12 Jan 2023 13:15:50 +0000 Subject: Overriding tox's install_command won't work for non-service projects In-Reply-To: <20230112092601.xmflu4boe4za6jdj@localhost> References: <20230112092601.xmflu4boe4za6jdj@localhost> Message-ID: On Thu, 2023-01-12 at 10:26 +0100, Gorka Eguileor wrote: > On 10/01, Stephen Finucane wrote: > > Another tox 4 PSA. It turns out the tox 3 was not using the command in '[tox] > > install_command' when installing the package under test. tox 4 does, which means > > overriding '[tox] install_command' to include a constraints file (-c) will > > prevent you installing any requirement that is listed in the upper-constraints > > file, even if said requirement is the thing you're currently working on. This > > applies to all libraries (e.g. oslo.db, python-cinderclient) but not the > > services (cinder, nova) since those aren't included in upper-constraints. 
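A quick way to see the race Gorka describes from the compute node itself is to compare the sizes the kernel and multipath report right after an extend (device names below are examples only):

    # force a rescan of the iSCSI sessions, then compare sizes
    iscsiadm -m session --rescan
    blockdev --getsize64 /dev/sdX          # per-path size, usually updates first
    blockdev --getsize64 /dev/dm-28        # multipath map, may still show the old size
    multipathd -k"resize map 3624..."      # ask multipathd to pick up the new size

If the dm device only catches up a few seconds later, that matches the behaviour in the log excerpt above, and an os-brick that knew the expected new size could simply wait and retry until the map reflects it.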
> > > > The "correct" way to respect upper-constraints is to provide them in 'deps' > > alongside the requirements file(s), e.g. > > > > [testenv] > > deps = > > -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} > > -r{toxinidir}/requirements.txt > > -r{toxinidir}/test-requirements.txt > > > > This will cause tox to install all of the package's dependencies first *with* > > constraints before installing the package itself *without constraints*. There is > > a bug report open against pip to change this behaviour [1], but it's been sat > > there for over two years with no activity so I wouldn't rely on this. > > > > Stephen > > > > [1] https://github.com/pypa/pip/issues/7839 > > > > > > Hi Stephen, > > In my past experience with Cinder tox honored the "install_command" just > fine, the problem was actually when using "usedevelop = True" and > setting the constraints in deps. > > We ended up adding a comment in our tox to prevent people from trying to > move the constraints to "deps", as that created problems. This is the > comment present in Cinder's tox.ini: > > # NOTE: Do not move the constraints from the install_command into deps, as that > # may result in tox using unconstrained/untested dependencies. > # We use "usedevelop = True" for tox jobs (except bindep), so tox does 2 > # install calls, one for the deps and another for the cinder source code > # as editable (pip -e). > # Without the constraints in the install_command only the first > # installation will honor the upper constraints, and the second install > # for cinder itself will not know about the constraints which can result > # in installing versions we don't want. > # With constraints in the install_command tox will always honor our > # constraints. Hmm, I tried to reproduce this this morning and tox 3 is using 'install_command' as expected when installing the package under test (assuming 'skipsdist' isn't set). I think I screwed up here ? > Has this double requirement installation changed? Will it work fine now > in the scenario described in our note? No, it sounds like what you've done is correct. Note that it'll only work for services though (i.e. things that aren't listed in upper-constraints). You can't do this for e.g. python-cinderclient. Nothing has changed there though. Stephen > Cheers, > Gorka. From nguyenhuukhoinw at gmail.com Thu Jan 12 01:54:48 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Thu, 12 Jan 2023 08:54:48 +0700 Subject: [horizon] Message-ID: Hello guys. I use Openstack Xena which deployed by Kolla Ansible.My problem is when I launch instance from Horizon, Port selection won't show as below pictures: [image: image.png] [image: image.png] I try to change browser but this problem still persists Any suggestions for me? Thank you. Regards. Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 69343 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 84669 bytes Desc: not available URL: From geguileo at redhat.com Thu Jan 12 14:30:12 2023 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 12 Jan 2023 15:30:12 +0100 Subject: Overriding tox's install_command won't work for non-service projects In-Reply-To: References: <20230112092601.xmflu4boe4za6jdj@localhost> Message-ID: <20230112143012.gfslm52zxwvgsvkp@localhost> On 12/01, Stephen Finucane wrote: > On Thu, 2023-01-12 at 10:26 +0100, Gorka Eguileor wrote: > > On 10/01, Stephen Finucane wrote: > > > Another tox 4 PSA. It turns out the tox 3 was not using the command in '[tox] > > > install_command' when installing the package under test. tox 4 does, which means > > > overriding '[tox] install_command' to include a constraints file (-c) will > > > prevent you installing any requirement that is listed in the upper-constraints > > > file, even if said requirement is the thing you're currently working on. This > > > applies to all libraries (e.g. oslo.db, python-cinderclient) but not the > > > services (cinder, nova) since those aren't included in upper-constraints. > > > > > > The "correct" way to respect upper-constraints is to provide them in 'deps' > > > alongside the requirements file(s), e.g. > > > > > > [testenv] > > > deps = > > > -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} > > > -r{toxinidir}/requirements.txt > > > -r{toxinidir}/test-requirements.txt > > > > > > This will cause tox to install all of the package's dependencies first *with* > > > constraints before installing the package itself *without constraints*. There is > > > a bug report open against pip to change this behaviour [1], but it's been sat > > > there for over two years with no activity so I wouldn't rely on this. > > > > > > Stephen > > > > > > [1] https://github.com/pypa/pip/issues/7839 > > > > > > > > > > Hi Stephen, > > > > In my past experience with Cinder tox honored the "install_command" just > > fine, the problem was actually when using "usedevelop = True" and > > setting the constraints in deps. > > > > We ended up adding a comment in our tox to prevent people from trying to > > move the constraints to "deps", as that created problems. This is the > > comment present in Cinder's tox.ini: > > > > # NOTE: Do not move the constraints from the install_command into deps, as that > > # may result in tox using unconstrained/untested dependencies. > > # We use "usedevelop = True" for tox jobs (except bindep), so tox does 2 > > # install calls, one for the deps and another for the cinder source code > > # as editable (pip -e). > > # Without the constraints in the install_command only the first > > # installation will honor the upper constraints, and the second install > > # for cinder itself will not know about the constraints which can result > > # in installing versions we don't want. > > # With constraints in the install_command tox will always honor our > > # constraints. > > Hmm, I tried to reproduce this this morning and tox 3 is using 'install_command' > as expected when installing the package under test (assuming 'skipsdist' isn't > set). I think I screwed up here ? > > > Has this double requirement installation changed? Will it work fine now > > in the scenario described in our note? > > No, it sounds like what you've done is correct. Note that it'll only work for > services though (i.e. things that aren't listed in upper-constraints). You can't > do this for e.g. python-cinderclient. Nothing has changed there though. 
> > Stephen OK, thanks for taking the time to double check and confirm. :-) From jay at gr-oss.io Thu Jan 12 16:44:15 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Thu, 12 Jan 2023 08:44:15 -0800 Subject: [ironic] Moving Aija to core-emeritus In-Reply-To: References: Message-ID: Thank you again Aija for your work; your code will be running on OpenStack clusters around the world for many years to come. I've removed Aija's core permissions from sushy; if she begins work on OpenStack again at a later date they'll be quickly restored. Thanks, Jay Faulkner On Thu, Jan 12, 2023 at 1:15 AM Riccardo Pittau wrote: > +2 > her work has been invaluable > > Riccardo > > On Wed, Jan 11, 2023 at 1:41 AM Arne Wiebalck > wrote: > >> Sounds good to me. >> >> Thanks for all your work (so far :-), Aija! >> >> >> ------------------------------ >> *Von:* Jay Faulkner >> *Gesendet:* Dienstag, 10. Januar 2023, 16:47 >> *An:* OpenStack Discuss >> *Betreff:* [ironic] Moving Aija to core-emeritus >> >> Hi all, >> >> As many of you may already know, Aija is no longer actively working on >> OpenStack. She informed many Ironic cores over email of this last week. I >> asked her if she expected to contribute on her own time and she said no. >> >> In order to ensure we keep core reviewer lists up to date, I suggest we >> move Aija to core-emeritus status, removing core permissions from gerrit. >> If she decides to begin working on OpenStack again in the future, we'd >> obviously quickly return core permissions. >> >> What do you think? >> >> -- >> Jay Faulkner >> Ironic PTL >> TC Member >> >> P.S. A huge thanks to Aija for her hard work on the sushy project and the >> Dell driver in Ironic. >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Thu Jan 12 17:39:16 2023 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Thu, 12 Jan 2023 23:09:16 +0530 Subject: RuntimeError: The expected HostnameMap and RoleHostnameFormat are not defined in data file: /home/stack/dcn01/dcn01_overcloud-baremetal-deployed.yaml | Openstack wallaby | tripleo | centos 8 stream Message-ID: Hi, INFO: Openstack wallaby on centos 8 stream I am trying to deploy a DCN and during the ceph deploy command i am getting the following error: (undercloud) [stack at hkg2director dcn01]$ openstack overcloud ceph deploy dcn01_overcloud-baremetal-deployed.yaml --stack dcn01 --config initial-ceph.conf --output deployed_ceph.yaml --container-image-prepare containers-prepare-parameter.yaml --network-data custom_network_data.yaml --cluster dcn01 --roles-data dcn01_roles.yaml -vvv The full traceback is: Traceback (most recent call last): File "/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py", line 241, in get_deployed_roles_to_hosts UnboundLocalError: local variable 'matching_hosts' referenced before assignment During handling of the above exception, another exception occurred: Traceback (most recent call last): File "", line 102, in File "", line 94, in _ansiballz_main File "", line 40, in invoke_module File "/usr/lib64/python3.6/runpy.py", line 205, in run_module return _run_module_code(code, init_globals, run_name, mod_spec) File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File 
"/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py", line 500, in File "/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py", line 471, in main File "/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py", line 246, in get_deployed_roles_to_hosts RuntimeError: The expected HostnameMap and RoleHostnameFormat are not defined in data file: /home/stack/dcn01/dcn01_overcloud-baremetal-deployed.yaml 2023-01-13 01:27:15.811878 | 48d539a1-1679-ea57-d54d-000000000013 | FATAL | Create Ceph spec based on baremetal_deployed_path and tripleo_roles | undercloud | error={ "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py\", line 241, in get_deployed_roles_to_hosts\nUnboundLocalError: local variable 'matching_hosts' referenced before assignment\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"\", line 102, in \n File \"\", line 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py\", line 500, in \n File \"/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py\", line 471, in main\n File \"/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py\", line 246, in get_deployed_roles_to_hosts\nRuntimeError: The expected HostnameMap and RoleHostnameFormat are not defined in data file: /home/stack/dcn01/dcn01_overcloud-baremetal-deployed.yaml\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1 } (undercloud) [stack at hkg2director dcn01]$ cat /home/stack/dcn01/dcn01_overcloud-baremetal-deployed.yaml parameter_defaults: DeployedServerPortMap: dcn01-hci-0-ctlplane: fixed_ips: - ip_address: 172.25.221.96 dcn01-hci-1-ctlplane: fixed_ips: - ip_address: 172.25.221.90 dcn01-hci-2-ctlplane: fixed_ips: - ip_address: 172.25.221.106 HCICount: 3 HCIHostnameFormat: '%stackname%-hci-%index%' HostnameMap: dcn01-hci-0: dcn01-hci-0 dcn01-hci-1: dcn01-hci-1 dcn01-hci-2: dcn01-hci-2 NodePortMap: dcn01-hci-0: ctlplane: ip_address: 172.25.221.96 ...> Can someone please tell me what could be the issue here?? Am I missing something? With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Thu Jan 12 17:56:00 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Thu, 12 Jan 2023 09:56:00 -0800 Subject: [ironic] Bugfix branches being EOL'd first week of Jan, 2023 In-Reply-To: References: Message-ID: Change is complete. All branches listed on the etherpad have been removed, and replaced with an equivalent EOL tag. 
- Jay Faulkner On Wed, Jan 11, 2023 at 3:14 PM Jay Faulkner wrote: > I truncated a sentence in the previous post. > > > This change has been executed for Ironic and Ironic-Python-Agent. > However, due to an issue applying the ACLs for ironic-inspector, I was > unable to retire those bugfix branches for ironic-inspector today.. The > root cause of that ACL problem has been found and is being resolved. > > On Wed, Jan 11, 2023 at 3:14 PM Jay Faulkner wrote: > >> This change has been executed for Ironic and Ironic-Python-Agent. >> However, due to an issue applying the ACLs for ironic-inspector. The root >> cause of that ACL problem has been found and is being resolved. >> >> However, to ensure I'll be around after the change applies in case of any >> issues; I'm going to postpone applying the changes for ironic-inspector >> until tomorrow. >> >> Thanks, >> Jay Faulkner >> >> On Tue, Jan 10, 2023 at 2:56 PM Jay Faulkner wrote: >> >>> It's my intention to execute these changes tomorrow, Jan 11th. Please >>> take notice. >>> >>> Thanks, >>> Jay Faulkner >>> >>> On Tue, Dec 13, 2022 at 9:04 AM Jay Faulkner wrote: >>> >>>> OpenStack Community and Operators, >>>> >>>> As documented in >>>> https://specs.openstack.org/openstack/ironic-specs/specs/approved/new-release-model.html, >>>> Ironic performs bugfix releases in the middle of a cycle to permit >>>> downstream packagers to more rapidly deliver features to standalone Ironic >>>> users. >>>> >>>> However, we've neglected as a project to cleanup or EOL any of these >>>> branches -- until now. Please take notice that during the first week in >>>> January, we will be EOL-ing all old, unsupported Ironic bugfix branches. >>>> This will be handled similarly to an EOL of a stable branch; we will create >>>> a tag -- e.g. for bugfix/x.y branch, we would tag bugfix-x.y-eol -- then >>>> remove the branch. >>>> >>>> These branches have been out of support for months and should not be in >>>> use in your Ironic clusters. If you are using any branches slated for >>>> retirement, please immediately upgrade to a supported Ironic version. >>>> >>>> A full listing of projects and branches impacted: >>>> >>>> ironic branches being retired >>>> bugfix/15.1 >>>> bugfix/15.2 >>>> bugfix/16.1 >>>> bugfix/16.2 >>>> bugfix/18.0 >>>> bugfix/20.0 >>>> >>>> ironic-python-agent branches being retired >>>> bugfix/6.2 >>>> bugfix/6.3 >>>> bugfix/6.5 >>>> bugfix/6.6 >>>> bugfix/8.0 >>>> bugfix/8.4 >>>> >>>> ironic-inspector branches being retired >>>> bugfix/10.2 >>>> bugfix/10.3 >>>> bugfix/10.5 >>>> bugfix/10.10 >>>> >>>> Thank you, >>>> Jay Faulkner >>>> Ironic PTL >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Fri Jan 13 03:03:33 2023 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Fri, 13 Jan 2023 03:03:33 +0000 Subject: =?gb2312?B?tPC4tDogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> Message-ID: <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> -> ----????----- >???: Arne Wiebalck [mailto:Arne.Wiebalck at cern.ch] >????: 2023?1?12? 15:43 >???: Oliver Weinmann ; openstack-discuss >??: Re: Experience with VGPUs > > Hi Oliver, > > The presentation you linked was only *at* CERN, not *from* CERN (it was during an OpenStack Day we organised here). Sylvain and/or Mohammed may be available to answer the questions you have related to that deck, or also in general for the integration of GPUs. 
> Now, *at* CERN we also have hypervisors with different GPUs in our fleet, and are also looking into various options how to efficiently provision them: > as bare metal, as vGPUs, using MIG support, ... and we have submitted a presentation proposal for the upcoming summit to share our experiences. > If you have very specific questions, we can try to answer them here, but maybe there is interest and it would be more efficient to organize a session/call (e.g. as part of the Openstack Operators activities or the Scientific SIG?) to exchange experiences on GPU integration and answer questions there? > What do you and others think? > Cheers, > Arne > > ________________________________________ > From: Oliver Weinmann > Sent: Thursday, 12 January 2023 07:56 > To: openstack-discuss > Subject: Experience with VGPUs > > Dear All, > > we are planning to have a POC on VGPUs in our Openstack cluster. Therefore I have a few questions and generally wanted to ask how well VGPUs are supported in Openstack. The docs, in particular: > > https://docs.openstack.org/nova/zed/admin/virtual-gpu.html > > explain quite well the general implementation. > > But I am more interested in general experience with using VGPUs in Openstack. We currently have a small YOGA cluster, planning to upgrade to Zed soon, with a couple of compute nodes. Currently our users use consumer cards like RTX 3050/3060 on their laptops and the idea would be to provide VGPUs to these users. For this I > would like to make a very small POC where we first equip one compute node with an Nvidia GPU. Gladly also a few tips on which card would be a good starting point are highly appreciated. I know this heavily depends on the server hardware but this is something I can figure out later. Also do we need additional software > licenses > to run this? I saw this very nice presentation from CERN on VGPUs: > https://indico.cern.ch/event/776411/contributions/3345183/attachments/1851624/3039917/02_-_vGPUs_with_OpenStack_-_Accelerating_Science.pdf > In the table they are listing Quadro vDWS licenses. I assume we need these in order to use the cards? Also do we need something like Cyborg for this or is VGPU fully implemented in Nova? You can try using Cyborg to manage your GPU devices; it can also list and attach a vGPU for an instance. If you want to attach or detach a device to/from an instance you have to change its flavor, because the vGPU/GPU information currently has to be carried in the flavor (separating such GPU metadata from the flavor is something we have discussed in the nova team before). I work at Inspur; in our InCloud OS product we use Cyborg to manage GPU/vGPU, FPGA, QAT and other devices, and we have adapted GPU T4/T100 (vGPU support) and A100 (MIG support). I think Cyborg is a good fit for managing local GPU devices; please refer to the Cyborg API docs: https://docs.openstack.org/api-ref/accelerator/ > Best Regards, > Oliver From swogatpradhan22 at gmail.com Fri Jan 13 06:31:48 2023 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Fri, 13 Jan 2023 12:01:48 +0530 Subject: RuntimeError: The expected HostnameMap and RoleHostnameFormat are not defined in data file: /home/stack/dcn01/dcn01_overcloud-baremetal-deployed.yaml | Openstack wallaby | tripleo | centos 8 stream In-Reply-To: References: Message-ID: Hi, I got past the above issue.
Now i am facing another issue ( TypeError: __init__() got an unexpected keyword argument 'location' ) $ openstack overcloud ceph deploy dcn01_overcloud-baremetal-deployed.yaml --stack dcn01 --config initial-ceph.conf --output deployed_ceph.yaml --container-image-prepare containers-prepare-parameter.yaml --network-data custom_network_data.yaml --cluster dcn01 --roles-data dcn01_roles.yaml -vvv 2023-01-13 13:57:41.039472 | 48d539a1-1679-6d35-f36d-000000000013 | TASK | Create Ceph spec based on baremetal_deployed_path and tripleo_roles Using module file /usr/share/ansible/plugins/modules/ceph_spec_bootstrap.py Pipelining is enabled. ESTABLISH LOCAL CONNECTION FOR USER: stack EXEC /bin/sh -c '/usr/bin/python3 && sleep 0' The full traceback is: Traceback (most recent call last): File "", line 102, in File "", line 94, in _ansiballz_main File "", line 40, in invoke_module File "/usr/lib64/python3.6/runpy.py", line 205, in run_module return _run_module_code(code, init_globals, run_name, mod_spec) File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmp/ansible_ceph_spec_bootstrap_payload_67bkkver/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py", line 500, in File "/tmp/ansible_ceph_spec_bootstrap_payload_67bkkver/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py", line 487, in main File "/tmp/ansible_ceph_spec_bootstrap_payload_67bkkver/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py", line 333, in get_specs TypeError: __init__() got an unexpected keyword argument 'location' 2023-01-13 13:57:42.204777 | 48d539a1-1679-6d35-f36d-000000000013 | FATAL | Create Ceph spec based on baremetal_deployed_path and tripleo_roles | undercloud | error={ "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"\", line 102, in \n File \"\", line 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_ceph_spec_bootstrap_payload_67bkkver/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py\", line 500, in \n File \"/tmp/ansible_ceph_spec_bootstrap_payload_67bkkver/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py\", line 487, in main\n File \"/tmp/ansible_ceph_spec_bootstrap_payload_67bkkver/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py\", line 333, in get_specs\nTypeError: __init__() got an unexpected keyword argument 'location'\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1 } 2023-01-13 13:57:42.206279 | 48d539a1-1679-6d35-f36d-000000000013 | TIMING | Create Ceph spec based on baremetal_deployed_path and tripleo_roles | undercloud | 0:00:02.169503 | 1.16s (undercloud) [stack at hkg2director dcn01]$ cat dcn01_overcloud-baremetal-deployed.yaml parameter_defaults: DeployedServerPortMap: dcn01-hci-0-ctlplane: fixed_ips: - ip_address: 172.25.221.108 dcn01-hci-1-ctlplane: fixed_ips: - ip_address: 172.25.221.96 dcn01-hci-2-ctlplane: fixed_ips: - ip_address: 
172.25.221.105 DistributedComputeHCICount: 3 DistributedComputeHCIHostnameFormat: '%stackname%-distributedcomputehci-%index%' HostnameMap: dcn01-distributedcomputehci-0: dcn01-hci-0 dcn01-distributedcomputehci-1: dcn01-hci-1 dcn01-distributedcomputehci-2: dcn01-hci-2 NodePortMap: dcn01-hci-0: ctlplane: ip_address: 172.25.221.108 ip_address_uri: 172.25.221.108 <...> (undercloud) [stack at hkg2director dcn01]$ cat dcn01_roles.yaml ############################################################################### # File generated by TripleO ############################################################################### ############################################################################### # Role: DistributedComputeHCI # ############################################################################### - name: DistributedComputeHCI description: | Distributed Compute Node role with Ceph, Cinder volume, and Glance. tags: - compute networks: InternalApi: subnet: internal_apis2_subnet Tenant: subnet: tenants2_subnet Storage: subnet: storages2_subnet StorageMgmt: subnet: storage_mgmts2_subnet HostnameFormatDefault: '%stackname%-hci-%index%' RoleParametersDefault: FsAioMaxNumber: 1048576 TunedProfileName: "throughput-performance" # CephOSD present so serial has to be 1 update_serial: 1 ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AuditD - OS::TripleO::Services::BarbicanClient <...> On Thu, Jan 12, 2023 at 11:09 PM Swogat Pradhan wrote: > Hi, > INFO: Openstack wallaby on centos 8 stream > > I am trying to deploy a DCN and during the ceph deploy command i am > getting the following error: > > (undercloud) [stack at hkg2director dcn01]$ openstack overcloud ceph deploy > dcn01_overcloud-baremetal-deployed.yaml --stack dcn01 --config > initial-ceph.conf --output deployed_ceph.yaml --container-image-prepare > containers-prepare-parameter.yaml --network-data custom_network_data.yaml > --cluster dcn01 --roles-data dcn01_roles.yaml -vvv > > The full traceback is: > Traceback (most recent call last): > File > "/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py", > line 241, in get_deployed_roles_to_hosts > UnboundLocalError: local variable 'matching_hosts' referenced before > assignment > > During handling of the above exception, another exception occurred: > > Traceback (most recent call last): > File "", line 102, in > File "", line 94, in _ansiballz_main > File "", line 40, in invoke_module > File "/usr/lib64/python3.6/runpy.py", line 205, in run_module > return _run_module_code(code, init_globals, run_name, mod_spec) > File "/usr/lib64/python3.6/runpy.py", line 96, in _run_module_code > mod_name, mod_spec, pkg_name, script_name) > File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code > exec(code, run_globals) > File > "/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py", > line 500, in > File > "/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py", > line 471, in main > File > "/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py", > line 246, in get_deployed_roles_to_hosts > RuntimeError: The expected HostnameMap and RoleHostnameFormat are not > defined in data file: > /home/stack/dcn01/dcn01_overcloud-baremetal-deployed.yaml > 2023-01-13 01:27:15.811878 | 
48d539a1-1679-ea57-d54d-000000000013 | FATAL > | Create Ceph spec based on baremetal_deployed_path and tripleo_roles | > undercloud | error={ > "changed": false, > "module_stderr": "Traceback (most recent call last):\n File > \"/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py\", > line 241, in get_deployed_roles_to_hosts\nUnboundLocalError: local variable > 'matching_hosts' referenced before assignment\n\nDuring handling of the > above exception, another exception occurred:\n\nTraceback (most recent call > last):\n File \"\", line 102, in \n File \"\", line > 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n File > \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return > _run_module_code(code, init_globals, run_name, mod_spec)\n File > \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, > mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", > line 85, in _run_code\n exec(code, run_globals)\n File > \"/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py\", > line 500, in \n File > \"/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py\", > line 471, in main\n File > \"/tmp/ansible_ceph_spec_bootstrap_payload_c4o_1q1y/ansible_ceph_spec_bootstrap_payload.zip/ansible/modules/ceph_spec_bootstrap.py\", > line 246, in get_deployed_roles_to_hosts\nRuntimeError: The expected > HostnameMap and RoleHostnameFormat are not defined in data file: > /home/stack/dcn01/dcn01_overcloud-baremetal-deployed.yaml\n", > "module_stdout": "", > "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", > "rc": 1 > } > > (undercloud) [stack at hkg2director dcn01]$ cat > /home/stack/dcn01/dcn01_overcloud-baremetal-deployed.yaml > parameter_defaults: > DeployedServerPortMap: > dcn01-hci-0-ctlplane: > fixed_ips: > - ip_address: 172.25.221.96 > dcn01-hci-1-ctlplane: > fixed_ips: > - ip_address: 172.25.221.90 > dcn01-hci-2-ctlplane: > fixed_ips: > - ip_address: 172.25.221.106 > HCICount: 3 > HCIHostnameFormat: '%stackname%-hci-%index%' > HostnameMap: > dcn01-hci-0: dcn01-hci-0 > dcn01-hci-1: dcn01-hci-1 > dcn01-hci-2: dcn01-hci-2 > NodePortMap: > dcn01-hci-0: > ctlplane: > ip_address: 172.25.221.96 > ...> > > Can someone please tell me what could be the issue here?? > Am I missing something? > > > With regards, > > Swogat Pradhan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Jan 13 10:38:47 2023 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 13 Jan 2023 11:38:47 +0100 Subject: [neutron] Drivers meeting cancelled today Message-ID: Hello Neutrinos: Due to the lack of agenda [1], today's meeting is cancelled. Have a nice weekend! [1]https://wiki.openstack.org/wiki/Meetings/NeutronDrivers -------------- next part -------------- An HTML attachment was scrubbed... URL: From igene at igene.tw Fri Jan 13 13:07:35 2023 From: igene at igene.tw (Gene Kuo) Date: Fri, 13 Jan 2023 13:07:35 +0000 Subject: =?utf-8?Q?Re:_=E7=AD=94=E5=A4=8D:_Experience_with_VGPUs?= In-Reply-To: <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> Message-ID: Hi Oliver, I had some experience on using Nvidia vGPUs (Tesla P4) in my own OpenStack cluster. 
The setup is pretty simple, follow the guides from Nvidia to install Linux KVM drivers[1] and OpenStack document[2] for attaching vGPU mdevs to your instances. Licensing is at the client (VM) side and not the server (hypervisor) side. The cards that you mentioned you are using (RTX3050/3060) doesn't support vGPU, there is a list of supported cards listed by Nvidia[3]. For newer cards using MIGs I have no experience but I would expect the overall procedure to be similar. As for AMD cards, AMD stated that some of their MI series card supports SR-IOV for vGPUs. However, those drivers are never open source or provided closed source to public, only large cloud providers are able to get them. So I don't really recommend getting AMD cards for vGPU unless you are able to get support from them. Regards, Gene Kuo [1] https://docs.nvidia.com/grid/13.0/grid-vgpu-user-guide/index.html#red-hat-el-kvm-install-configure-vgpu [2] https://docs.openstack.org/nova/latest/admin/virtual-gpu.html [3] https://docs.nvidia.com/grid/gpus-supported-by-vgpu.html ------- Original Message ------- On Friday, January 13th, 2023 at 12:03 PM, Brin Zhang wrote: > -> ----????----- > > > ???: Arne Wiebalck [mailto:Arne.Wiebalck at cern.ch] > > ????: 2023?1?12? 15:43 > > ???: Oliver Weinmann oliver.weinmann at me.com; openstack-discuss openstack-discuss at lists.openstack.org > > ??: Re: Experience with VGPUs > > > > Hi Oliver, > > > > The presentation you linked was only at CERN, not from CERN (it was during an OpenStack Day we organised here). Sylvain and/or Mohammed may be available to answer the questions you have related to that deck, or also in general for the integration of GPUs. > > > Now, at CERN we also have hypervisors with different GPUs in our fleet, and are also looking into various options how to efficiently provision them: > > as bare metal, as vGPUs, using MIG support, ... and we have submitted a presentation proposal for the upcoming summit to share our experiences. > > > If you have very specific questions, we can try to answer them here, but maybe there is interest and it would be more efficient to organize a session/call (e.g. as part of the Openstack Operators activities or the Scientific SIG?) to exchange experiences on GPU integration and answer questions there? > > > What do you and others think? > > > Cheers, > > Arne > > > > ________________________________________ > > From: Oliver Weinmann oliver.weinmann at me.com > > Sent: Thursday, 12 January 2023 07:56 > > To: openstack-discuss > > Subject: Experience with VGPUs > > > > Dear All, > > > > we are planning to have a POC on VGPUs in our Openstack cluster. Therefore I have a few questions and generally wanted to ask how well VGPUs are supported in Openstack. The docs, in particular: > > > > https://docs.openstack.org/nova/zed/admin/virtual-gpu.html > > > > explain quite well the general implementation. > > > > But I am more interested in general experience with using VGPUs in Openstack. We currently have a small YOGA cluster, planning to upgrade to Zed soon, with a couple of compute nodes. Currently our users use consumer cards like RTX 3050/3060 on their laptops and the idea would be to provide VGPUs to these users. For this I > > would like to make a very small POC where we first equip one compute node with an Nvidia GPU. Gladly also a few tips on which card would be a good starting point are highly appreciated. I know this heavily depends on the server hardware but this is something I can figure out later. 
Also do we need additional software > > licenses > to run this? I saw this very nice presentation from CERN on VGPUs: > > > > https://indico.cern.ch/event/776411/contributions/3345183/attachments/1851624/3039917/02_-_vGPUs_with_OpenStack_-_Accelerating_Science.pdf > > > In the table they are listing Quadro vDWS licenses. I assume we need these in order to use the cards? Also do we need something like Cyborg for this or is VGPU fully implemented in Nova? > > > You can try to use Cyborg manage your GPU devices, it also can support list/attach vGPU for an instance, if you want to attach/detach an device from an instance that you should transform your flavor, because the vGPU/GPU info need to be added in flavor now(If you want to use this feature may be need to separate such GPU metadata from flavor, we have discussed in nova team before). > I am working in Inspur, in our InCloud OS conduct, we are using Cyborg manage GPU/vGPU, FPGA, QAT etc. devices. And adapted GPU T4/T100 (support vGPU), A100(support mig), I think use Cyborg to better manage local GPU devices, please refer api docs of Cyborg https://docs.openstack.org/api-ref/accelerator/ > > > Best Regards, > > > Oliver From smooney at redhat.com Fri Jan 13 14:16:15 2023 From: smooney at redhat.com (Sean Mooney) Date: Fri, 13 Jan 2023 14:16:15 +0000 Subject: =?UTF-8?Q?=E7=AD=94=E5=A4=8D=3A?= Experience with VGPUs In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> Message-ID: <653400ad161da231b448d8862585a83473bbcc62.camel@redhat.com> On Fri, 2023-01-13 at 13:07 +0000, Gene Kuo wrote: > Hi Oliver, > > I had some experience on using Nvidia vGPUs (Tesla P4) in my own OpenStack cluster. The setup is pretty simple, follow the guides from Nvidia to install Linux KVM drivers[1] and OpenStack document[2] for attaching vGPU mdevs to your instances. Licensing is at the client (VM) side and not the server (hypervisor) side. The cards that you mentioned you are using (RTX3050/3060) doesn't support vGPU, there is a list of supported cards listed by Nvidia[3]. > > For newer cards using MIGs I have no experience but I would expect the overall procedure to be similar. the main differnce for mig mode is that the mdevs are created ontop of sriov VFs so from a nova prespective instead of listing the adress of the PF you need to enable the VFs instead in the config. its more or less the same other then that on the nova side. obvioulsy there is alittle more work to cofnigr the VFs ectra on the host for mig mode but its mostly transparent to nova all that changes is which pci device (the PF or VF) provides the inventories of mdevs which nova will attach to the vm. in the MIG case each vf expose at most 1 mdev instance of a specific type with out mig the pf expose multiple instance of a singel mdev type. > > As for AMD cards, AMD stated that some of their MI series card supports SR-IOV for vGPUs. However, those drivers are never open source or provided closed source to public, only large cloud providers are able to get them. So I don't really recommend getting AMD cards for vGPU unless you are able to get support from them. ya so on the amd side if you happen ot have those drivers then instead of using nova vGPU feature you jsut use normal pci passhtough pci passthough in nova contery to what some assume was not orgianly added for sriov networking. it was added for intel QAT device and supprots PFs, VFs and non sriov capable pcie devices. 
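To make that concrete, here is a rough sketch of what such a setup could look like on a Yoga/Zed-era deployment. The vendor/product IDs, alias name and flavor name are placeholders rather than values for any particular card, and newer releases rename passthrough_whitelist to device_spec:

    # nova.conf on the compute node: expose the GPU VFs to nova's PCI tracker
    [pci]
    passthrough_whitelist = { "vendor_id": "1002", "product_id": "xxxx" }
    alias = { "vendor_id": "1002", "product_id": "xxxx", "device_type": "type-VF", "name": "gpu-vf" }

    # the same [pci]alias entry is also needed where nova-api runs; a flavor
    # then requests one VF per instance:
    openstack flavor set gpu.small --property "pci_passthrough:alias"="gpu-vf:1"

An instance booted with that flavor gets a whole VF passed through, with no mdev/vGPU handling on the hypervisor side.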
As long as the device is stateless you can use it with the generic PCI passthrough support via the PCI alias in the instance flavor. So if you have the driver you just need to create a PCI alias for the AMD GPU VFs (along the lines of the sketch above) and use them like any other accelerator that supports SR-IOV. > > Regards, > Gene Kuo > > [1] https://docs.nvidia.com/grid/13.0/grid-vgpu-user-guide/index.html#red-hat-el-kvm-install-configure-vgpu > [2] https://docs.openstack.org/nova/latest/admin/virtual-gpu.html > [3] https://docs.nvidia.com/grid/gpus-supported-by-vgpu.html > > ------- Original Message ------- > On Friday, January 13th, 2023 at 12:03 PM, Brin Zhang wrote: > > > > -> -----Original Message----- > > > > > From: Arne Wiebalck [mailto:Arne.Wiebalck at cern.ch] > > > Sent: 12 January 2023 15:43 > > > To: Oliver Weinmann oliver.weinmann at me.com; openstack-discuss openstack-discuss at lists.openstack.org > > > Subject: Re: Experience with VGPUs > > > > > > Hi Oliver, > > > > > > The presentation you linked was only at CERN, not from CERN (it was during an OpenStack Day we organised here). Sylvain and/or Mohammed may be available to answer the questions you have related to that deck, or also in general for the integration of GPUs. > > > > > Now, at CERN we also have hypervisors with different GPUs in our fleet, and are also looking into various options how to efficiently provision them: > > > as bare metal, as vGPUs, using MIG support, ... and we have submitted a presentation proposal for the upcoming summit to share our experiences. > > > > > If you have very specific questions, we can try to answer them here, but maybe there is interest and it would be more efficient to organize a session/call (e.g. as part of the Openstack Operators activities or the Scientific SIG?) to exchange experiences on GPU integration and answer questions there? > > > > > What do you and others think? > > > > > Cheers, > > > Arne > > > > > > ________________________________________ > > > From: Oliver Weinmann oliver.weinmann at me.com > > > Sent: Thursday, 12 January 2023 07:56 > > > To: openstack-discuss > > > Subject: Experience with VGPUs > > > > > > Dear All, > > > > > > we are planning to have a POC on VGPUs in our Openstack cluster. Therefore I have a few questions and generally wanted to ask how well VGPUs are supported in Openstack. The docs, in particular: > > > > > > https://docs.openstack.org/nova/zed/admin/virtual-gpu.html > > > > > > explain quite well the general implementation. > > > > > > But I am more interested in general experience with using VGPUs in Openstack. We currently have a small YOGA cluster, planning to upgrade to Zed soon, with a couple of compute nodes. Currently our users use consumer cards like RTX 3050/3060 on their laptops and the idea would be to provide VGPUs to these users. For this I > > > would like to make a very small POC where we first equip one compute node with an Nvidia GPU. Gladly also a few tips on which card would be a good starting point are highly appreciated. I know this heavily depends on the server hardware but this is something I can figure out later. Also do we need additional software > > > licenses > to run this? I saw this very nice presentation from CERN on VGPUs: > > > > > > https://indico.cern.ch/event/776411/contributions/3345183/attachments/1851624/3039917/02_-_vGPUs_with_OpenStack_-_Accelerating_Science.pdf > > > > > In the table they are listing Quadro vDWS licenses. I assume we need these in order to use the cards? Also do we need something like Cyborg for this or is VGPU fully implemented in Nova?
> > > > > > You can try to use Cyborg manage your GPU devices, it also can support list/attach vGPU for an instance, if you want to attach/detach an device from an instance that you should transform your flavor, because the vGPU/GPU info need to be added in flavor now(If you want to use this feature may be need to separate such GPU metadata from flavor, we have discussed in nova team before). > > I am working in Inspur, in our InCloud OS conduct, we are using Cyborg manage GPU/vGPU, FPGA, QAT etc. devices. And adapted GPU T4/T100 (support vGPU), A100(support mig), I think use Cyborg to better manage local GPU devices, please refer api docs of Cyborg https://docs.openstack.org/api-ref/accelerator/ > > > > > Best Regards, > > > > > Oliver > From noonedeadpunk at gmail.com Fri Jan 13 15:18:20 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 13 Jan 2023 16:18:20 +0100 Subject: =?UTF-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> Message-ID: You are saying that, like Nvidia GRID drivers are open-sourced while in fact they're super far from being that. In order to download drivers not only for hypervisors, but also for guest VMs you need to have an account in their Enterprise Portal. It took me roughly 6 weeks of discussions with hardware vendors and Nvidia support to get a proper account there. And that happened only after applying for their Partner Network (NPN). That still doesn't solve the issue of how to provide drivers to guests, except pre-build a series of images with these drivers pre-installed (we ended up with making a DIB element for that [1]). Not saying about the need to distribute license tokens for guests and the whole mess with compatibility between hypervisor and guest drivers (as guest driver can't be newer then host one, and HVs can't be too new either). It's not that I'm protecting AMD, but just saying that Nvidia is not that straightforward either, and at least on paper AMD vGPUs look easier both for operators and end-users. [1] https://github.com/citynetwork/dib-elements/tree/main/nvgrid > > As for AMD cards, AMD stated that some of their MI series card supports SR-IOV for vGPUs. However, those drivers are never open source or provided closed source to public, only large cloud providers are able to get them. So I don't really recommend getting AMD cards for vGPU unless you are able to get support from them. > From oliver.weinmann at me.com Fri Jan 13 15:32:10 2023 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Fri, 13 Jan 2023 15:32:10 -0000 Subject: [Kolla-ansible] upgrade from yoga to zed on Rocky Linux In-Reply-To: References: Message-ID: <06d571fe-d816-4461-9d3a-477a474f1251@me.com> Hi,just? a quick update. I deployed a rocky 9 vm in my openstack kolla-ansible test environment, ran bootstrap and deployed it as a compute node but there is one container that is listed as unhealthy:(yoga) [vagrant at seed ~]$ ssh compute05 -l vagrant "sudo docker ps -a"vagrant at compute05's password:CONTAINER ID?? IMAGE????????????????????????????????????????????????????????????????????????????? COMMAND????????????????? CREATED??????? STATUS??????????????????? PORTS???? NAMES3701de0c77e7?? 172.28.7.140:4000/openstack.kolla/centos-source-openvswitch-vswitchd:yoga????????? "dumb-init --single-?"?? 46 hours ago?? Up 46 hours (healthy)?????????????? openvswitch_vswitchd083e04ae0fcb?? 
172.28.7.140:4000/openstack.kolla/centos-source-openvswitch-db-server:yoga???????? "dumb-init --single-?"?? 46 hours ago?? Up 46 hours (unhealthy)???????????? openvswitch_db17a156ff5352?? 172.28.7.140:4000/openstack.kolla/centos-source-nova-compute:yoga????????????????? "dumb-init --single-?"?? 46 hours ago?? Up 46 hours (healthy)?????????????? nova_compute12d308adf9ce?? 172.28.7.140:4000/openstack.kolla/centos-source-nova-libvirt:yoga????????????????? "dumb-init --single-?"?? 46 hours ago?? Up 46 hours (healthy)?????????????? nova_libvirtcc35631ec420?? 172.28.7.140:4000/openstack.kolla/centos-source-nova-ssh:yoga????????????????????? "dumb-init --single-?"?? 46 hours ago?? Up 46 hours (healthy)?????????????? nova_sshad688dec24c8?? 172.28.7.140:4000/openstack.kolla/centos-source-prometheus-libvirt-exporter:yoga?? "dumb-init --single-?"?? 46 hours ago?? Up 46 hours???????????????????????? prometheus_libvirt_exporter02e52983458c?? 172.28.7.140:4000/openstack.kolla/centos-source-prometheus-cadvisor:yoga?????????? "dumb-init --single-?"?? 46 hours ago?? Up 46 hours???????????????????????? prometheus_cadvisor878a6eb1bb42?? 172.28.7.140:4000/openstack.kolla/centos-source-prometheus-node-exporter:yoga????? "dumb-init --single-?"?? 46 hours ago?? Up 46 hours???????????????????????? prometheus_node_exporter25faaf319f8a?? 172.28.7.140:4000/openstack.kolla/centos-source-cron:yoga????????????????????????? "dumb-init --single-?"?? 46 hours ago?? Up 46 hours???????????????????????? cronefd1cc64967c?? 172.28.7.140:4000/openstack.kolla/centos-source-kolla-toolbox:yoga???????????????? "dumb-init --single-?"?? 46 hours ago?? Up 46 hours???????????????????????? kolla_toolbox822876acf7c2?? 172.28.7.140:4000/openstack.kolla/centos-source-fluentd:yoga?????????????????????? "dumb-init --single-?"?? 46 hours ago?? Up 46 hours???????????????????????? fluentdTo rule out that there is a general issue I deployed another rocky 8 and this worked just fine.Also the compute services look just fine:(yoga) [vagrant at seed ~]$ openstack compute service list+--------------------------------------+----------------+-----------+----------+---------+-------+----------------------------+| ID?????????????????????????????????? | Binary???????? | Host????? | Zone???? | Status? | State | Updated At???????????????? |+--------------------------------------+----------------+-----------+----------+---------+-------+----------------------------+| 3acd5340-4c79-4844-a321-ef3ac73602c5 | nova-scheduler | control01 | internal | enabled | up??? | 2023-01-13T14:46:10.000000 || e289aef6-a3fa-402f-80e3-6bbd7807ae43 | nova-scheduler | control03 | internal | enabled | up??? | 2023-01-13T14:46:12.000000 || 06d6ce7f-907d-4fe1-92bc-611511d7ce01 | nova-scheduler | control02 | internal | enabled | up??? | 2023-01-13T14:46:08.000000 || b5051a46-ca04-4dbb-9067-75091fe26cfe | nova-conductor | control01 | internal | enabled | up??? | 2023-01-13T14:46:08.000000 || e3e0a17f-b445-44b7-8355-972754ab397f | nova-conductor | control02 | internal | enabled | up??? | 2023-01-13T14:46:09.000000 || 293b8e26-48de-4656-97a6-2c7020d19e44 | nova-conductor | control03 | internal | enabled | up??? | 2023-01-13T14:46:04.000000 || c007e4a5-bee9-4428-8494-5f2cdddaec91 | nova-compute?? | compute01 | nova???? | enabled | up??? | 2023-01-13T14:46:08.000000 || 62615300-dbee-4337-bc81-c956377abee9 | nova-compute?? | compute02 | nova???? | enabled | up??? | 2023-01-13T14:46:07.000000 || a497d03a-6de8-418f-94a2-8b610babf48b | nova-compute?? | compute03 | nova???? | enabled | up??? 
| 2023-01-13T14:46:12.000000 || cbdbd319-77e2-4d98-aeee-f3e65d7d8d00 | nova-compute?? | compute04 | nova???? | enabled | up??? | 2023-01-13T14:46:10.000000 || 56e126e5-ddd5-4b4d-94d6-dff2cf850ddb | nova-compute?? | compute05 | nova???? | enabled | up??? | 2023-01-13T14:46:06.000000 |+--------------------------------------+----------------+-----------+----------+---------+-------+----------------------------+I manually restarted the unhealthy container but it stays unhealthy. So I just spun up a couple cirros instances. Then I associated a FIP to the one that was scheduled on the rocky 9 compute node and tried to ping it and it doesn't work.Would be good to get some info on how to perform the upgrade.Cheers,OliverOn Jan 8, 2023, at 9:56 PM, Oliver Weinmann wrote:Hi,That is a good question. I?m also running yoga on rocky 8 and due to some problems with yoga I would like to upgrade to zed too soon. I have created a very simple staging deployment on a single ESXi host with 3 controllers and 2 compute nodes with the same config that I use in the production cluster. This lets me try the upgrade path. I assume while there is the possibility to upgrade from rocky 8 to 9, I wouldn?t do that. Instead I would do a fresh install of rocky9. I can only think of the docs not being 100% accurate and you can run yoga on rocky9 too. I will give it a try.Cheers,OliverVon meinem iPhone gesendetAm 08.01.2023 um 10:25 schrieb wodel youchi :?Hi,Reading the kolla documentation, I saw that Yoga is supported on Rocky 8 only and Zed is supported on Rokcy 9 only, how to do the upgrade from Yoga to Zed since we have to do OS upgrade also???Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From oliver.weinmann at me.com Fri Jan 13 15:41:05 2023 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Fri, 13 Jan 2023 15:41:05 -0000 Subject: =?utf-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> Message-ID: Hi Everyone,thanks for the many replies and hints. I think I will go for an NVIDIA T4 for now and try to get it working in our OpenStack cluster by following your guidelines @Gene. I will report back on the progress.Cheers,OliverOn Jan 13, 2023, at 4:20 PM, Dmitriy Rabotyagov wrote:You are saying that, like Nvidia GRID drivers are open-sourced whilein fact they're super far from being that. In order to downloaddrivers not only for hypervisors, but also for guest VMs you need tohave an account in their Enterprise Portal. It took me roughly 6 weeksof discussions with hardware vendors and Nvidia support to get aproper account there. And that happened only after applying for theirPartner Network (NPN).That still doesn't solve the issue of how to provide drivers toguests, except pre-build a series of images with these driverspre-installed (we ended up with making a DIB element for that [1]).Not saying about the need to distribute license tokens for guests andthe whole mess with compatibility between hypervisor and guest drivers(as guest driver can't be newer then host one, and HVs can't be toonew either).It's not that I'm protecting AMD, but just saying that Nvidia is notthat straightforward either, and at least on paper AMD vGPUs lookeasier both for operators and end-users.[1] https://github.com/citynetwork/dib-elements/tree/main/nvgridAs for AMD cards, AMD stated that some of their MI series card supports SR-IOV for vGPUs. 
However, those drivers are never open source or provided closed source to public, only large cloud providers are able to get them. So I don't really recommend getting AMD cards for vGPU unless you are able to get support from them. -------------- next part -------------- An HTML attachment was scrubbed... URL: From manchandavishal143 at gmail.com Fri Jan 13 04:42:52 2023 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Fri, 13 Jan 2023 10:12:52 +0530 Subject: [horizon] In-Reply-To: References: Message-ID: Hi, Looking at the error message, it is a browser issue because I have also just checked on my horizon env. for the master branch. It is working fine. Clear your cache and cookies for your browser or perform a hard refresh for your browser and then try. Thanks & regards, Vishal Manchanda On Thu, Jan 12, 2023 at 7:56 PM Nguy?n H?u Kh?i wrote: > Hello guys. > I use Openstack Xena which deployed by Kolla Ansible.My problem is when I > launch instance from Horizon, Port selection won't show as below pictures: > [image: image.png] > > [image: image.png] > I try to change browser but this problem still persists > > Any suggestions for me? > > Thank you. Regards. > > Nguyen Huu Khoi > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 69343 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 84669 bytes Desc: not available URL: From nguyenhuukhoinw at gmail.com Fri Jan 13 07:36:56 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Fri, 13 Jan 2023 14:36:56 +0700 Subject: [horizon] In-Reply-To: References: Message-ID: Hello. I used some pc and browser but it still happens. I use horizon on Xena release. On Fri, Jan 13, 2023, 11:43 AM vishal manchanda < manchandavishal143 at gmail.com> wrote: > Hi, > > Looking at the error message, it is a browser issue because I have also > just checked on my horizon env. for the master branch. > It is working fine. Clear your cache and cookies for your browser or > perform a hard refresh for your browser and then try. > > Thanks & regards, > Vishal Manchanda > > On Thu, Jan 12, 2023 at 7:56 PM Nguy?n H?u Kh?i > wrote: > >> Hello guys. >> I use Openstack Xena which deployed by Kolla Ansible.My problem is when I >> launch instance from Horizon, Port selection won't show as below pictures: >> [image: image.png] >> >> [image: image.png] >> I try to change browser but this problem still persists >> >> Any suggestions for me? >> >> Thank you. Regards. >> >> Nguyen Huu Khoi >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 69343 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 84669 bytes Desc: not available URL: From yipikai7 at gmail.com Fri Jan 13 19:56:45 2023 From: yipikai7 at gmail.com (Cedric) Date: Fri, 13 Jan 2023 20:56:45 +0100 Subject: =?UTF-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> Message-ID: Ended up with the very same conclusions than Dimitry regarding the use of Nvidia Vgrid for the VGPU use case with Nova, it works pretty well but: - respecting the licensing model as operationnal constraints, note that guests need to reach a license server in order to get a token (could be via the Nvidia SaaS service or on-prem) - drivers for both guest and hypervisor are not easy to implement and maintain on large scale. A year ago, hypervisors drivers were not packaged to Debian/Ubuntu, but builded though a bash script, thus requiering additional automatisation work and careful attention regarding kernel update/reboot of Nova hypervisors. Cheers On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov wrote: > > You are saying that, like Nvidia GRID drivers are open-sourced while > in fact they're super far from being that. In order to download > drivers not only for hypervisors, but also for guest VMs you need to > have an account in their Enterprise Portal. It took me roughly 6 weeks > of discussions with hardware vendors and Nvidia support to get a > proper account there. And that happened only after applying for their > Partner Network (NPN). > That still doesn't solve the issue of how to provide drivers to > guests, except pre-build a series of images with these drivers > pre-installed (we ended up with making a DIB element for that [1]). > Not saying about the need to distribute license tokens for guests and > the whole mess with compatibility between hypervisor and guest drivers > (as guest driver can't be newer then host one, and HVs can't be too > new either). > > It's not that I'm protecting AMD, but just saying that Nvidia is not > that straightforward either, and at least on paper AMD vGPUs look > easier both for operators and end-users. > > [1] https://github.com/citynetwork/dib-elements/tree/main/nvgrid > > > > > As for AMD cards, AMD stated that some of their MI series card supports SR-IOV for vGPUs. However, those drivers are never open source or provided closed source to public, only large cloud providers are able to get them. So I don't really recommend getting AMD cards for vGPU unless you are able to get support from them. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Fri Jan 13 20:06:21 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 13 Jan 2023 21:06:21 +0100 Subject: =?UTF-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> Message-ID: To have that said, deb/rpm packages they are providing doesn't help much, as: * There is no repo for them, so you need to download them manually from enterprise portal * They can't be upgraded anyway, as driver version is part of the package name. And each package conflicts with any another one. So you need to explicitly remove old package and only then install new one. And yes, you must stop all VMs before upgrading driver and no, you can't live migrate GPU mdev devices due to that now being implemented in qemu. 
So deb/rpm/generic driver doesn't matter at the end tbh. ??, 13 ???. 2023 ?., 20:56 Cedric : > > Ended up with the very same conclusions than Dimitry regarding the use of > Nvidia Vgrid for the VGPU use case with Nova, it works pretty well but: > > - respecting the licensing model as operationnal constraints, note that > guests need to reach a license server in order to get a token (could be via > the Nvidia SaaS service or on-prem) > - drivers for both guest and hypervisor are not easy to implement and > maintain on large scale. A year ago, hypervisors drivers were not packaged > to Debian/Ubuntu, but builded though a bash script, thus requiering > additional automatisation work and careful attention regarding kernel > update/reboot of Nova hypervisors. > > Cheers > > > On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov < > noonedeadpunk at gmail.com> wrote: > > > > You are saying that, like Nvidia GRID drivers are open-sourced while > > in fact they're super far from being that. In order to download > > drivers not only for hypervisors, but also for guest VMs you need to > > have an account in their Enterprise Portal. It took me roughly 6 weeks > > of discussions with hardware vendors and Nvidia support to get a > > proper account there. And that happened only after applying for their > > Partner Network (NPN). > > That still doesn't solve the issue of how to provide drivers to > > guests, except pre-build a series of images with these drivers > > pre-installed (we ended up with making a DIB element for that [1]). > > Not saying about the need to distribute license tokens for guests and > > the whole mess with compatibility between hypervisor and guest drivers > > (as guest driver can't be newer then host one, and HVs can't be too > > new either). > > > > It's not that I'm protecting AMD, but just saying that Nvidia is not > > that straightforward either, and at least on paper AMD vGPUs look > > easier both for operators and end-users. > > > > [1] https://github.com/citynetwork/dib-elements/tree/main/nvgrid > > > > > > > > As for AMD cards, AMD stated that some of their MI series card > supports SR-IOV for vGPUs. However, those drivers are never open source or > provided closed source to public, only large cloud providers are able to > get them. So I don't really recommend getting AMD cards for vGPU unless you > are able to get support from them. > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Fri Jan 13 20:09:35 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 13 Jan 2023 21:09:35 +0100 Subject: =?UTF-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> Message-ID: But despite all my rant - it's all related to the Nvidia part of things, not openstack. Support of GPUs and vGPUs is fair enough and nova folks do their best to support that hardware. ??, 13 ???. 2023 ?., 21:06 Dmitriy Rabotyagov : > To have that said, deb/rpm packages they are providing doesn't help much, > as: > * There is no repo for them, so you need to download them manually from > enterprise portal > * They can't be upgraded anyway, as driver version is part of the package > name. And each package conflicts with any another one. So you need to > explicitly remove old package and only then install new one. 
And yes, you > must stop all VMs before upgrading driver and no, you can't live migrate > GPU mdev devices due to that now being implemented in qemu. So > deb/rpm/generic driver doesn't matter at the end tbh. > > > ??, 13 ???. 2023 ?., 20:56 Cedric : > >> >> Ended up with the very same conclusions than Dimitry regarding the use of >> Nvidia Vgrid for the VGPU use case with Nova, it works pretty well but: >> >> - respecting the licensing model as operationnal constraints, note that >> guests need to reach a license server in order to get a token (could be via >> the Nvidia SaaS service or on-prem) >> - drivers for both guest and hypervisor are not easy to implement and >> maintain on large scale. A year ago, hypervisors drivers were not packaged >> to Debian/Ubuntu, but builded though a bash script, thus requiering >> additional automatisation work and careful attention regarding kernel >> update/reboot of Nova hypervisors. >> >> Cheers >> >> >> On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov < >> noonedeadpunk at gmail.com> wrote: >> > >> > You are saying that, like Nvidia GRID drivers are open-sourced while >> > in fact they're super far from being that. In order to download >> > drivers not only for hypervisors, but also for guest VMs you need to >> > have an account in their Enterprise Portal. It took me roughly 6 weeks >> > of discussions with hardware vendors and Nvidia support to get a >> > proper account there. And that happened only after applying for their >> > Partner Network (NPN). >> > That still doesn't solve the issue of how to provide drivers to >> > guests, except pre-build a series of images with these drivers >> > pre-installed (we ended up with making a DIB element for that [1]). >> > Not saying about the need to distribute license tokens for guests and >> > the whole mess with compatibility between hypervisor and guest drivers >> > (as guest driver can't be newer then host one, and HVs can't be too >> > new either). >> > >> > It's not that I'm protecting AMD, but just saying that Nvidia is not >> > that straightforward either, and at least on paper AMD vGPUs look >> > easier both for operators and end-users. >> > >> > [1] https://github.com/citynetwork/dib-elements/tree/main/nvgrid >> > >> > > >> > > As for AMD cards, AMD stated that some of their MI series card >> supports SR-IOV for vGPUs. However, those drivers are never open source or >> provided closed source to public, only large cloud providers are able to >> get them. So I don't really recommend getting AMD cards for vGPU unless you >> are able to get support from them. >> > > >> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Jan 13 23:02:48 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 13 Jan 2023 15:02:48 -0800 Subject: [all][tc] What's happening in Technical Committee: summary 2023 Jan 13: Reading: 5 min Message-ID: <185ad5f18d8.114ad61b0682544.7703391497056244031@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on Jan 11. Most of the meeting discussions are summarized in this email. Meeting logs are available @ https://meetings.opendev.org/meetings/tc/2023/tc.2023-01-11-16.00.log.html * The next TC weekly meeting will be on Jan 18 Wed at 16:00 UTC, Feel free to add the topic to the agenda[1] by Jan 17. 2. What we completed this week: ========================= * Nothing specific for this week. 3. 
Activities In progress: ================== TC Tracker for the 2023.1 cycle ------------------------------------- * Current cycle working items and their progress are present in the 2023.1 tracker etherpad[2]. Open Reviews ----------------- * Five open reviews for ongoing activities[3]. Cleanup of PyPI maintainer list for OpenStack Projects ---------------------------------------------------------------- The horizon team discussed the 'xstatic-font-awesome' repo PyPi maintainer topic[4] in the weekly meetings[5], As the next step, they will be discussing it with PyPi non-OpenStack maintainers for the possible option to do the maintenance in a single place (either in OpenStack or outside of OpenStack). TC continued the discussion for all other repo/developers' PyPi maintainer cleanup. knikolla will be automating to get the list of deliverables that need PyPi maintainer's cleanup and after that TC will reach out to the respective project PTL to do the audit if we can cleanup those directly or need some more discussion with external maintainers (like 'xstatic-font-awesome' case) if there is any. We will continue this topic discussion in TC next weekly meeting also. Project updates ------------------- * Add Cinder Huawei charm[6] * Add the woodpecker charm to Openstack charms[7] Less Active/Inactive projects: ~~~~~~~~~~~~~~~~~~~~~~ * Zaqar status Zaqar gate is broken due to the MongoDB package not being present in Ubuntu 22.04 which is used for testing in 2023.1 cycle. Zaqar's team knows about the issue and discusses the same[8]. The TC is discussing and reaching out to the PTL about it. The release team is waiting to get a clear situation on the activeness/gate fix of this project so that they can decide on its release for 2023.1 cycle. It is also proposed to be marked as Inactive[9]. TC will continue the discussion with the Zaqar team and take the next action soon. * Mistral status: The Mistral team is actively merging the gate fixes[10], python-mistralclient gate is facing more issue which is being looked at by the Mistral team. The release team is trying the beta release patches for mistral deliverables[11][12] * Adjutant Status The Adjutant gate is green and its beta release is done[13][14]. With that, Dale proposed to remove this project from the Inactive project list[15] which has a positive response from the TC review. Thanks, Dale to make this project active again. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[16]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15:00 UTC [17] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. 
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions [2] https://etherpad.opendev.org/p/tc-2023.1-tracker [3] https://review.opendev.org/q/projects:openstack/governance+status:open [4] https://github.com/openstack/xstatic-font-awesome/pull/2 [5] https://meetings.opendev.org/meetings/horizon/2023/horizon.2023-01-11-15.00.log.html#l-43 [6] https://review.opendev.org/c/openstack/governance/+/867588 [7] https://review.opendev.org/c/openstack/governance/+/869752 [8] https://review.opendev.org/c/openstack/zaqar/+/857924/comments/a0d5d45e_3008683c [9] https://review.opendev.org/c/openstack/governance/+/870098 [10] https://review.opendev.org/q/project:openstack/mistral [11] https://review.opendev.org/c/openstack/releases/+/869470 [12] https://review.opendev.org/c/openstack/releases/+/869448 [13] https://review.opendev.org/c/openstack/releases/+/869449 [14] https://review.opendev.org/c/openstack/releases/+/869471 [15] https://review.opendev.org/c/openstack/governance/+/869665 [16] hhttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [17] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From ianyrchoi at gmail.com Sat Jan 14 10:59:22 2023 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Sat, 14 Jan 2023 19:59:22 +0900 Subject: [skyline][i18n] Questions on more language translation support Message-ID: Hi Skyline team and I18n contributors, First, Skyline team, thank you for considering skyline with i18n! As Seongsoo started to talk via skyline IRC channel [1], I see that there are some needs for more language support for the Skyline project. Let me elaborate through the following questions as current I18n SIG lead: 1. Skyline team: While I believe you are welcome to contribute to other languages by adding per-language json files like [2], is it okay to contribute Korean language through Korea User Group members? Me and/or Seongsoo would like to happily step up as reviewers to guarantee sufficient language quality. 2. Any translation interests from other I18n language teams? Feel free to tell me or reply to this thread. 3. While number 1 would be a short-term approach to add Korean language support, to support more international languages, I want to suggest integrating with react-intl if skyline is using react (from my brief investigation, it is yes through [3]), since react-intl support to convert json <-> pot which functionality was previously used by tripleo-ui for i18n support [4]. Although I18n SIG is trying to migrate from Zanata to Weblate, a new translation platform [5], pot support would be essential for standardizing i18n and the translation process. With many thanks, /Ian [1] https://meetings.opendev.org/irclogs/%23openstack-skyline/%23openstack-skyline.2023-01-13.log.html [2] https://github.com/openstack/skyline-console/tree/master/src/locales [3] https://opendev.org/openstack/skyline-console/src/branch/master/package.json#L78 [4] https://opendev.org/openstack/tripleo-ui/src/commit/d1baef537f0746efceacfddd2fc671e3efa478d0/docs/translation.rst [5] https://lists.openstack.org/pipermail/openstack-i18n/2022-October/003559.html From ianyrchoi at gmail.com Sat Jan 14 11:02:01 2023 From: ianyrchoi at gmail.com (Ian Y. 
Choi) Date: Sat, 14 Jan 2023 20:02:01 +0900 Subject: [Heat][OpenStack-I18n] Need help in making sense of a message in Heat In-Reply-To: References: Message-ID: (Adding openstack-discuss mailing list and adding "[Heat]" on the subject) Hi, It would be so great if OpenStack Heat users or developers can help Juliette's question on i18n thread! Thank you all, /Ian On Fri, Jan 13, 2023 at 11:39 PM Juliette Tux wrote: > > Hello, > Could anybody kindly elaborate on the meaning of a condition in a message: > > "The flag which represents condition of reserved resources of the > lease. If it is true, the amount of reserved resources is less than > the request or reserved resources were changed." > > The obscure part for me is "less than the request or reserved > resources were changed." > > Context: heat/engine/resources/openstack/blazar/lease.py:233 > > TY! > > -- > ? ?????????, ??????? ???? > > _______________________________________________ > OpenStack-I18n mailing list > OpenStack-I18n at lists.openstack.org From amonster369 at gmail.com Sun Jan 15 15:06:41 2023 From: amonster369 at gmail.com (A Monster) Date: Sun, 15 Jan 2023 16:06:41 +0100 Subject: Destroy a specific service from an openstack kolla deployment Message-ID: Is it possible to remove a specific service from openstack, using > > kolla-ansible destroy -i multinode --tag service destroys all service and doesn't take into consideration the tag parameter. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nazmul.sil at squaregroup.com Sun Jan 15 03:55:14 2023 From: nazmul.sil at squaregroup.com (Nazmul Haque (N&C, SIL)) Date: Sun, 15 Jan 2023 03:55:14 +0000 Subject: Minimum Number Of Nodes for Deployment Message-ID: Hi, We are exploring various open source cloud platform. We are currently running Nutanix for our private cloud infrastructure. We would like to explore Openstack. Currently we have two clusters , one with 5 nodes and another with 3 nodes. In order to deploy two Openstack clusters what is the minimum number of nodes required for a private cloud infrastructure ensuring high availability and data integrity in mind. I will pursue the COA course however I want to have some background from industry experts since I am under the impression that Openstack deployment would require a lot of nodes for initial deployment which would be very expensive. Regards, Md Nazmul Haque Disclaimer: This e-mail, including any attachment with it may contain privileged, proprietary & confidential information and is intended solely for the addressee. If you are not the intended recipient of this e-mail, please notify us immediately and destroy this email without taking any copies or showing it to anyone. Unauthorized use of this e-mail is prohibited. SQUARE will not take any responsibility for misdirection, corruption or unauthorized use of e-mail communications, or for any damage that may be caused as a result of transmitting or receiving an e-mail communication. Caution: Do not click links or open attachments unless you recognize the sender and know the content is safe. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amonster369 at gmail.com Sun Jan 15 22:31:02 2023 From: amonster369 at gmail.com (A Monster) Date: Sun, 15 Jan 2023 23:31:02 +0100 Subject: [kolla] [cinder] cinder.exception.NoValidBackend Message-ID: I deployed openstack yoga using kolla ansible, and used LVM as a backend for cinder storage service, after the deployment everything was working fine, but after a month, I could no longer create a volume as I get an error message > schedule allocate volume:Could not find any available weighted backend. > and the cinder_scheduler log shows the following error message > ERROR cinder.scheduler.flows.create_volume > [req-2290223d-8f96-48d7-8680-959c5932d5be 3b4bd66ca3b04ab587a9a64a9e6966bc > ec29bdcd73064dfb8186f413537327eb - - -] Failed to run task > cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: > No valid backend was found. No weighed backends available: > cinder.exception.NoValidBackend: No valid backend was found. No weighed > backends available > the storage server doesn't show any apparent errors, any ideas of how can I fix this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasufum.o at gmail.com Mon Jan 16 02:29:20 2023 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Mon, 16 Jan 2023 11:29:20 +0900 Subject: [Openstack Tacker] - Issue while creating NS In-Reply-To: References: Message-ID: <678eead6-bbff-43a0-1ee7-60d06f95bb96@gmail.com> Hi, Sorry for the slow reply. I've asked if someone will fix, but no one can do that because most of us were joined in tacker team and not so understand well for the features. In addition, we haven't focused on such a legacy features including NS or mistral support anymore, but ETSI-NFV support recently. Could you consider to use later version of tacker instead? Thanks, Yasufumi On 2022/12/18 21:19, Lokendra Rathour wrote: > Hi Team, > Was trying to create NS using the document : > > https://docs.openstack.org/tacker/yoga/user/nsd_usage_guide.html > > But getting an error, the bug has been opened, requesting your kind > assistance. > https://bugs.launchpad.net/tacker/+bug/1999502 > > thanks once again. > From tkajinam at redhat.com Mon Jan 16 03:23:37 2023 From: tkajinam at redhat.com (Takashi Kajinami) Date: Mon, 16 Jan 2023 12:23:37 +0900 Subject: [Blazar][Heat][OpenStack-I18n] Need help in making sense of a message in Heat In-Reply-To: References: Message-ID: Let me add the Blazar tag. That text comes from the explanation of the "degrade" property of the OS::Blazar::Lease resource type. My basic understanding about Blazar is that it allows users to "reserve" some resources in advance. So "the amount of reserved resources is less than the request" would indicate that Blazar could not reserve the amount of resources users requested (for example a user tried to reserve 10 instances but Blazar could reserve only 8). On the other hand, "reserved resources were changed." would indicate the situation where the user updated the reservation request and Blazar is still processing the update. I'd appreciate any double-check from the Blazar team because I'm not really familiar with Blazar and the team would have clear understanding about the feature (and possibly the better explanation) On Sat, Jan 14, 2023 at 8:07 PM Ian Y. Choi wrote: > (Adding openstack-discuss mailing list and adding "[Heat]" on the subject) > > Hi, > > It would be so great if OpenStack Heat users or developers can help > Juliette's question on i18n thread! 
> > > Thank you all, > > /Ian > > On Fri, Jan 13, 2023 at 11:39 PM Juliette Tux > wrote: > > > > Hello, > > Could anybody kindly elaborate on the meaning of a condition in a > message: > > > > "The flag which represents condition of reserved resources of the > > lease. If it is true, the amount of reserved resources is less than > > the request or reserved resources were changed." > > > > The obscure part for me is "less than the request or reserved > > resources were changed." > > > > Context: heat/engine/resources/openstack/blazar/lease.py:233 > > > > TY! > > > > -- > > ? ?????????, ??????? ???? > > > > _______________________________________________ > > OpenStack-I18n mailing list > > OpenStack-I18n at lists.openstack.org > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Mon Jan 16 05:06:42 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Mon, 16 Jan 2023 10:36:42 +0530 Subject: [kolla] [cinder] cinder.exception.NoValidBackend In-Reply-To: References: Message-ID: Hi, The error you're seeing could be due to a number of reasons. I will mention some places where you can check: * check if the c-volume process is active and running: There could be some ERROR in the c-vol service logs due to LVM related issues to PV, VG etc * given the scheduler logs, it looks like the backend couldn't pass filters and weighers, Check if there is available space in backend. If you're using a different volume type then does the backend pass the availability zone and capabilities filter of the volume type. * Also I would recommend going through the scheduler logs (before the one you pasted) to see which filters your backend passes and which it doesn't to get an idea where it filters out your desired backend. Hope that helps. Thanks Rajat Dhasmana On Mon, Jan 16, 2023 at 4:06 AM A Monster wrote: > I deployed openstack yoga using kolla ansible, and used LVM as a backend > for cinder storage service, after the deployment everything was working > fine, but after a month, I could no longer create a volume as I get an > error message > >> schedule allocate volume:Could not find any available weighted backend. >> > and the cinder_scheduler log shows the following error message > >> ERROR cinder.scheduler.flows.create_volume >> [req-2290223d-8f96-48d7-8680-959c5932d5be 3b4bd66ca3b04ab587a9a64a9e6966bc >> ec29bdcd73064dfb8186f413537327eb - - -] Failed to run task >> cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: >> No valid backend was found. No weighed backends available: >> cinder.exception.NoValidBackend: No valid backend was found. No weighed >> backends available >> > > the storage server doesn't show any apparent errors, > any ideas of how can I fix this? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wu.wenxiang at 99cloud.net Mon Jan 16 06:04:47 2023 From: wu.wenxiang at 99cloud.net (=?UTF-8?B?5ZC05paH55u4?=) Date: Mon, 16 Jan 2023 14:04:47 +0800 (GMT+08:00) Subject: =?UTF-8?B?UmU6W3NreWxpbmVdW2kxOG5dIFF1ZXN0aW9ucyBvbiBtb3JlIGxhbmd1YWdlIHRyYW5zbGF0aW9uIHN1cHBvcnQ=?= In-Reply-To: References: Message-ID: Hello, Ian Skyline only support i18n in Chinese and English now, of course it will be upgraded later. Based on your feedback, we will give priority to complement the language needed by developers. However, there're no native Korean speakers in skyline dev team, so we only provide basic translation based on translation tools. 
Welcome the corresponding contributors to supplement more beautiful sentences. The support of our code for multi-language is not complicated, and the time spent is mainly on translation. Could you raise a ticket on https://launchpad.net/skyline-apiserver ? Skyline dev team handled requests & bug reports from ticket system. Thanks Best Regrads Wu Wenxiang Original: From?Ian Y. ChoiDate?2023-01-14 18:59:22To?openstack-i18n , OpenStack Discuss Cc?Subject?[skyline][i18n] Questions on more language translation supportHi Skyline team and I18n contributors, First, Skyline team, thank you for considering skyline with i18n! As Seongsoo started to talk via skyline IRC channel [1], I see that there are some needs for more language support for the Skyline project. Let me elaborate through the following questions as current I18n SIG lead: 1. Skyline team: While I believe you are welcome to contribute to other languages by adding per-language json files like [2], is it okay to contribute Korean language through Korea User Group members? Me and/or Seongsoo would like to happily step up as reviewers to guarantee sufficient language quality. 2. Any translation interests from other I18n language teams? Feel free to tell me or reply to this thread. 3. While number 1 would be a short-term approach to add Korean language support, to support more international languages, I want to suggest integrating with react-intl if skyline is using react (from my brief investigation, it is yes through [3]), since react-intl support to convert json <-> pot which functionality was previously used by tripleo-ui for i18n support [4]. Although I18n SIG is trying to migrate from Zanata to Weblate, a new translation platform [5], pot support would be essential for standardizing i18n and the translation process. With many thanks, /Ian [1] https://meetings.opendev.org/irclogs/%23openstack-skyline/%23openstack-skyline.2023-01-13.log.html [2] https://github.com/openstack/skyline-console/tree/master/src/locales [3] https://opendev.org/openstack/skyline-console/src/branch/master/package.json#L78 [4] https://opendev.org/openstack/tripleo-ui/src/commit/d1baef537f0746efceacfddd2fc671e3efa478d0/docs/translation.rst [5] https://lists.openstack.org/pipermail/openstack-i18n/2022-October/003559.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ulrich.Schwickerath at cern.ch Mon Jan 16 10:38:08 2023 From: Ulrich.Schwickerath at cern.ch (Ulrich Schwickerath) Date: Mon, 16 Jan 2023 11:38:08 +0100 Subject: =?UTF-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> Message-ID: Hi, all, just to add to the discussion, at CERN we have recently deployed a bunch of A100 GPUs in PCI passthrough mode, and are now looking into improving their usage by using MIG. From the NOVA point of view things seem to work OK, we can schedule VMs requesting a VGPU, the client starts up and gets a license token from our NVIDIA license server (distributing license keys is our private cloud is relatively easy in our case). It's a PoC only for the time being, and we're not ready to put that forward as we're facing issues with CUDA on the client (it fails immediately in memory operations with 'not supported', still investigating why this happens). Once we get that working it would be nice to be able to have a more fine grained scheduling so that people can ask for MIG devices of different size. 
The other challenge is how to set limits on GPU resources. Once the above issues have been sorted out we may want to look into cyborg as well thus we are quite interested in first experiences with this. Kind regards, Ulrich On 13.01.23 21:06, Dmitriy Rabotyagov wrote: > To have that said, deb/rpm packages they are providing doesn't help > much, as: > * There is no repo for them, so you need to download them manually > from enterprise portal > * They can't be upgraded anyway, as driver version is part of the > package name. And each package conflicts with any another one. So you > need to explicitly remove old package and only then install new one. > And yes, you must stop all VMs before upgrading driver and no, you > can't live migrate GPU mdev devices due to that now being implemented > in qemu. So deb/rpm/generic driver doesn't matter at the end tbh. > > > ??, 13 ???. 2023 ?., 20:56 Cedric : > > > Ended up with the very same conclusions than Dimitry regarding the > use of Nvidia Vgrid for the VGPU use case with Nova, it works > pretty well but: > > - respecting the licensing model as operationnal constraints, note > that guests need to reach a license server in order to get a token > (could be via the Nvidia SaaS service or on-prem) > - drivers for both guest and hypervisor are not easy to implement > and maintain on large scale. A year ago, hypervisors drivers were > not packaged to Debian/Ubuntu, but builded though a bash script, > thus requiering additional automatisation work and careful > attention regarding kernel update/reboot of Nova hypervisors. > > Cheers > > > On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov > wrote: > > > > You are saying that, like Nvidia GRID drivers are open-sourced while > > in fact they're super far from being that. In order to download > > drivers not only for hypervisors, but also for guest VMs you need to > > have an account in their Enterprise Portal. It took me roughly 6 > weeks > > of discussions with hardware vendors and Nvidia support to get a > > proper account there. And that happened only after applying for > their > > Partner Network (NPN). > > That still doesn't solve the issue of how to provide drivers to > > guests, except pre-build a series of images with these drivers > > pre-installed (we ended up with making a DIB element for that [1]). > > Not saying about the need to distribute license tokens for > guests and > > the whole mess with compatibility between hypervisor and guest > drivers > > (as guest driver can't be newer then host one, and HVs can't be too > > new either). > > > > It's not that I'm protecting AMD, but just saying that Nvidia is not > > that straightforward either, and at least on paper AMD vGPUs look > > easier both for operators and end-users. > > > > [1] https://github.com/citynetwork/dib-elements/tree/main/nvgrid > > > > > > > > As for AMD cards, AMD stated that some of their MI series card > supports SR-IOV for vGPUs. However, those drivers are never open > source or provided closed source to public, only large cloud > providers are able to get them. So I don't really recommend > getting AMD cards for vGPU unless you are able to get support from > them. > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Mon Jan 16 11:33:08 2023 From: smooney at redhat.com (Sean Mooney) Date: Mon, 16 Jan 2023 11:33:08 +0000 Subject: 答复: Experience with VGPUs In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> Message-ID: <0f9174bc14a6bdfb8838641d1f56647bb8054505.camel@redhat.com> On Mon, 2023-01-16 at 11:38 +0100, Ulrich Schwickerath wrote: > Hi, all, > > just to add to the discussion, at CERN we have recently deployed a bunch > of A100 GPUs in PCI passthrough mode, and are now looking into improving > their usage by using MIG. From the NOVA point of view things seem to > work OK, we can schedule VMs requesting a VGPU, the client starts up and > gets a license token from our NVIDIA license server (distributing > license keys is our private cloud is relatively easy in our case). It's > a PoC only for the time being, and we're not ready to put that forward > as we're facing issues with CUDA on the client (it fails immediately in > memory operations with 'not supported', still investigating why this > happens). > > Once we get that working it would be nice to be able to have a more fine > grained scheduling so that people can ask for MIG devices of different > size. The other challenge is how to set limits on GPU resources. Once > the above issues have been sorted out we may want to look into cyborg as > well thus we are quite interested in first experiences with this.

So those two use cases can more or less be fulfilled in Yoga. In Yoga we finally merged support for unified limits via Keystone: https://specs.openstack.org/openstack/nova-specs/specs/yoga/implemented/unified-limits-nova.html This allows you to create quotas/limits on any resource class, and that is our intended way for you to set limits on GPU resources, by leveraging the generic mdev support added in Xena to map different mdev types to different resource classes: https://specs.openstack.org/openstack/nova-specs/specs/xena/implemented/generic-mdevs.html

You can also use the provider configuration files https://specs.openstack.org/openstack/nova-specs/specs/victoria/implemented/provider-config-file.html to simplify adding traits to the GPU resources in a declarative way and enable better scheduling, for example adding traits for the CUDA version supported by a given vGPU on a host.

So you could do something like this, assuming you have two GPU types, Alice and Bob. Alice supports CUDA 3 and has a small amount of VRAM (i.e. your older generation of GPUs); Bob is the new kid on the block with CUDA 9000 support and all the VRAM you could ask for (the latest and greatest GPU). Using the nova generic mdev feature you can map the Alice GPUs to CUSTOM_VGPU_ALICE and the Bob GPUs to CUSTOM_VGPU_BOB, and using unified limits you can set a limit/quota of 10 CUSTOM_VGPU_ALICE resources and 1 CUSTOM_VGPU_BOB resource on a given project. Using provider.yaml you can tag the Alice GPUs with CUSTOM_CUDA_3 and the Bob GPUs with CUSTOM_CUDA_9000. In the flavors you can then create flavor definitions that request the different GPU types using resources:CUSTOM_VGPU_ALICE=1, and if you want to prevent images that need CUDA 9000 from being scheduled on the Alice GPUs, simply add a CUSTOM_CUDA_9000 trait requirement to the image.

So if you have Yoga you have all of the above features available. Xena does not give you the quota enforcement, but you can do all the scheduling bits; provider.yaml is entirely optional but has been around the longest.
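A rough, untested sketch of what those pieces could look like for the Alice/Bob example (the mdev type names, PCI addresses, provider names and limit values below are purely illustrative, and the exact option/CLI syntax should be double-checked against your release). First, map each mdev type to its custom resource class on the compute node, per the generic-mdevs spec:

  [devices]
  enabled_mdev_types = nvidia-111, nvidia-222
  [mdev_nvidia-111]
  device_addresses = 0000:84:00.0
  mdev_class = CUSTOM_VGPU_ALICE
  [mdev_nvidia-222]
  device_addresses = 0000:85:00.0
  mdev_class = CUSTOM_VGPU_BOB

Then tag the resource providers that actually hold those inventories with the CUDA traits via provider.yaml (check "openstack resource provider list" for the real child provider names on your hosts; the names below are made up):

  meta:
    schema_version: '1.0'
  providers:
    - identification:
        name: compute-1_pci_0000_84_00_0
      traits:
        additional:
          - CUSTOM_CUDA_3
    - identification:
        name: compute-1_pci_0000_85_00_0
      traits:
        additional:
          - CUSTOM_CUDA_9000

Finally, expose and limit the resources to users, roughly:

  openstack flavor create gpu.alice --vcpus 4 --ram 8192 --disk 40 \
    --property resources:CUSTOM_VGPU_ALICE=1
  openstack image set cuda9000-image --property trait:CUSTOM_CUDA_9000=required
  openstack registered limit create --service nova --default-limit 10 class:CUSTOM_VGPU_ALICE
  openstack registered limit create --service nova --default-limit 1 class:CUSTOM_VGPU_BOB

Again, treat everything above as placeholders rather than a copy-paste recipe.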
Some of this would also just work for Cyborg, since it should be using custom resource classes to model the GPUs in Placement already. We started adding generic PCI devices to Placement in Zed and we are completing it this cycle: https://specs.openstack.org/openstack/nova-specs/specs/2023.1/approved/pci-device-tracking-in-placement.html so the same unified limits approach will work for PCI passthrough going forward too.

Hopefully this helps you meet those use cases. We don't really have any good production examples of people combining all of the above features, so if you do use them as described, feedback is welcome. We designed these features to all work together in this way, but since they are relatively new additions we suspect many operators have not used them yet or don't know about their existence.

> > Kind regards, > > Ulrich > > On 13.01.23 21:06, Dmitriy Rabotyagov wrote: > > To have that said, deb/rpm packages they are providing doesn't help > > much, as: > > * There is no repo for them, so you need to download them manually > > from enterprise portal > > * They can't be upgraded anyway, as the driver version is part of the > > package name. And each package conflicts with any other one. So you > > need to explicitly remove the old package and only then install the new one. > > And yes, you must stop all VMs before upgrading the driver and no, you > > can't live migrate GPU mdev devices due to that now being implemented > > in qemu. So deb/rpm/generic driver doesn't matter at the end tbh. > > > > > > On Fri, 13 Jan 2023 at 20:56, Cedric wrote: > > > > > > Ended up with the very same conclusions as Dimitry regarding the > > use of Nvidia Vgrid for the VGPU use case with Nova, it works > > pretty well but: > > > > - respecting the licensing model as operational constraints, note > > that guests need to reach a license server in order to get a token > > (could be via the Nvidia SaaS service or on-prem) > > - drivers for both guest and hypervisor are not easy to implement > > and maintain at large scale. A year ago, hypervisor drivers were > > not packaged for Debian/Ubuntu, but built through a bash script, > > thus requiring additional automation work and careful > > attention regarding kernel update/reboot of Nova hypervisors. > > > > Cheers > > > > > > On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov > > wrote: > > > > > > You are saying that as if Nvidia GRID drivers are open-sourced, while > > > in fact they're super far from being that. In order to download > > > drivers not only for hypervisors, but also for guest VMs you need to > > > have an account in their Enterprise Portal. It took me roughly 6 > > weeks > > > of discussions with hardware vendors and Nvidia support to get a > > > proper account there. And that happened only after applying for > > their > > > Partner Network (NPN). > > > That still doesn't solve the issue of how to provide drivers to > > > guests, except pre-building a series of images with these drivers > > > pre-installed (we ended up with making a DIB element for that [1]). > > > Not to mention the need to distribute license tokens for > > guests and > > > the whole mess with compatibility between hypervisor and guest > > drivers > > > (as the guest driver can't be newer than the host one, and HVs can't be too > > > new either). > > > > > > It's not that I'm protecting AMD, but just saying that Nvidia is not > > > that straightforward either, and at least on paper AMD vGPUs look > > > easier both for operators and end-users.
> > > > > > [1] https://github.com/citynetwork/dib-elements/tree/main/nvgrid > > > > > > > > > > > As for AMD cards, AMD stated that some of their MI series card > > supports SR-IOV for vGPUs. However, those drivers are never open > > source or provided closed source to public, only large cloud > > providers are able to get them. So I don't really recommend > > getting AMD cards for vGPU unless you are able to get support from > > them. > > > > > > > From lucasagomes at gmail.com Mon Jan 16 11:45:24 2023 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 16 Jan 2023 08:45:24 -0300 Subject: [neutron] Bug Deputy Report January 9 - 15 Message-ID: Hi, This is the Neutron bug report from January 9th to 15th. *Critical:* * https://bugs.launchpad.net/neutron/+bug/2002800 - "Allow multiple IPv6 ports on router from same network on ml2/ovs+vxlan+dvr" - Assigned to: Fernando Royo *High:* * https://bugs.launchpad.net/neutron/+bug/2002417 - "DVR+HA routers all answering to ping on private interface" - Assigned to: Arnaud Morin *Needs further triage:* * https://bugs.launchpad.net/neutron/+bug/2002577 - "The neutron-keepalived-state-change.log log is not rotated and grows without bound until disk is full" - Unassigned *Low:* * https://bugs.launchpad.net/neutron/+bug/2002839 - "Remove compatibility with OVN<20.09" - Unassigned *RFE / Wishlist:* * https://bugs.launchpad.net/neutron/+bug/2002687 - "[RFE] Active-active L3 Gateway with Multihoming" - Assigned to: Dmitrii Shcherbakov Cheers, Lucas -------------- next part -------------- An HTML attachment was scrubbed... URL: From amonster369 at gmail.com Mon Jan 16 13:03:22 2023 From: amonster369 at gmail.com (A Monster) Date: Mon, 16 Jan 2023 14:03:22 +0100 Subject: Cinder LVM backend shows 100% space usage Message-ID: I deployed openstack using kolla ansible using LVM as a backend, but after using it for a while, I ended up creating multiple volume with different sizes, but now I can no longer create new volumes, after checking lvs I found out that the thin pool created by cinder displays a 100% space usage of 20TB , but the size used by the volumes created is less than 1TB. how can I fix this isse? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.rosser at rd.bbc.co.uk Mon Jan 16 15:28:01 2023 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Mon, 16 Jan 2023 15:28:01 +0000 Subject: =?UTF-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: <0f9174bc14a6bdfb8838641d1f56647bb8054505.camel@redhat.com> References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> <0f9174bc14a6bdfb8838641d1f56647bb8054505.camel@redhat.com> Message-ID: <07c3e121-b4b7-a7e1-85da-27230619eba5@rd.bbc.co.uk> On 16/01/2023 11:33, Sean Mooney wrote: > > you can also use the provider confugration files > https://specs.openstack.org/openstack/nova-specs/specs/victoria/implemented/provider-config-file.html > to simplfy adding traits to the gpu resouces in a declaritive way to enabel better schduling > for example adding traits for the CUDA version supported by a given vGPU on a host. Very interesting - I started to look at some ansible to deploy these provider config files. 
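For example, a deployment task along these lines might work (a rough, untested sketch: it assumes python3 with PyYAML and jsonschema installed on the target, and that the schema published in the nova docs has been saved locally as JSON - the paths and names are illustrative):

  - name: Deploy nova provider config (sketch)
    ansible.builtin.copy:
      src: provider.yaml
      dest: /etc/nova/provider_config/provider.yaml
      owner: nova
      group: nova
      mode: "0640"
      validate: >-
        python3 -c "import json, yaml, jsonschema;
        jsonschema.validate(yaml.safe_load(open('%s')),
        json.load(open('/etc/nova/provider_config.schema.json')))"

The stock jsonschema CLI only accepts JSON instances, hence the small python -c shim that loads the YAML first; %s is the temporary file path the copy module hands to the validate command.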
There is note at the end of the doc saying "it is recommended to use the schema provided by nova to validate the config using a simple jsonschema validator" - the natural place to do this with ansible would be here https://docs.ansible.com/ansible/latest/collections/ansible/builtin/copy_module.html#parameter-validate but I can't find a way to do that on a YAML file with a jsonschema CLI one-liner. What would the right way to validate the yaml with the ansible copy module? Thanks, Jon. From garcetto at gmail.com Mon Jan 16 15:35:34 2023 From: garcetto at gmail.com (garcetto) Date: Mon, 16 Jan 2023 16:35:34 +0100 Subject: [kolla] where to find docker compose files Message-ID: good afternoon, where can i find docker compose files for quay,io docker images used in kolla-ansible? need to undestand how are build, thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From liliueecg at gmail.com Mon Jan 16 15:36:45 2023 From: liliueecg at gmail.com (Li Liu) Date: Mon, 16 Jan 2023 10:36:45 -0500 Subject: =?UTF-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> Message-ID: Hi Ulrich, I believe this is a perfect use case for Cyborg which provides state-of-the-art heterogeneous hardware management and is easy to use. cc: Brin Zhang Thank you Regards Li Liu On Mon, Jan 16, 2023 at 5:39 AM Ulrich Schwickerath < Ulrich.Schwickerath at cern.ch> wrote: > Hi, all, > > just to add to the discussion, at CERN we have recently deployed a bunch > of A100 GPUs in PCI passthrough mode, and are now looking into improving > their usage by using MIG. From the NOVA point of view things seem to work > OK, we can schedule VMs requesting a VGPU, the client starts up and gets a > license token from our NVIDIA license server (distributing license keys is > our private cloud is relatively easy in our case). It's a PoC only for the > time being, and we're not ready to put that forward as we're facing issues > with CUDA on the client (it fails immediately in memory operations with > 'not supported', still investigating why this happens). > > Once we get that working it would be nice to be able to have a more fine > grained scheduling so that people can ask for MIG devices of different > size. The other challenge is how to set limits on GPU resources. Once the > above issues have been sorted out we may want to look into cyborg as well > thus we are quite interested in first experiences with this. > > Kind regards, > > Ulrich > On 13.01.23 21:06, Dmitriy Rabotyagov wrote: > > To have that said, deb/rpm packages they are providing doesn't help much, > as: > * There is no repo for them, so you need to download them manually from > enterprise portal > * They can't be upgraded anyway, as driver version is part of the package > name. And each package conflicts with any another one. So you need to > explicitly remove old package and only then install new one. And yes, you > must stop all VMs before upgrading driver and no, you can't live migrate > GPU mdev devices due to that now being implemented in qemu. So > deb/rpm/generic driver doesn't matter at the end tbh. > > > ??, 13 ???. 
2023 ?., 20:56 Cedric : > >> >> Ended up with the very same conclusions than Dimitry regarding the use of >> Nvidia Vgrid for the VGPU use case with Nova, it works pretty well but: >> >> - respecting the licensing model as operationnal constraints, note that >> guests need to reach a license server in order to get a token (could be via >> the Nvidia SaaS service or on-prem) >> - drivers for both guest and hypervisor are not easy to implement and >> maintain on large scale. A year ago, hypervisors drivers were not packaged >> to Debian/Ubuntu, but builded though a bash script, thus requiering >> additional automatisation work and careful attention regarding kernel >> update/reboot of Nova hypervisors. >> >> Cheers >> >> >> On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov < >> noonedeadpunk at gmail.com> wrote: >> > >> > You are saying that, like Nvidia GRID drivers are open-sourced while >> > in fact they're super far from being that. In order to download >> > drivers not only for hypervisors, but also for guest VMs you need to >> > have an account in their Enterprise Portal. It took me roughly 6 weeks >> > of discussions with hardware vendors and Nvidia support to get a >> > proper account there. And that happened only after applying for their >> > Partner Network (NPN). >> > That still doesn't solve the issue of how to provide drivers to >> > guests, except pre-build a series of images with these drivers >> > pre-installed (we ended up with making a DIB element for that [1]). >> > Not saying about the need to distribute license tokens for guests and >> > the whole mess with compatibility between hypervisor and guest drivers >> > (as guest driver can't be newer then host one, and HVs can't be too >> > new either). >> > >> > It's not that I'm protecting AMD, but just saying that Nvidia is not >> > that straightforward either, and at least on paper AMD vGPUs look >> > easier both for operators and end-users. >> > >> > [1] https://github.com/citynetwork/dib-elements/tree/main/nvgrid >> > >> > > >> > > As for AMD cards, AMD stated that some of their MI series card >> supports SR-IOV for vGPUs. However, those drivers are never open source or >> provided closed source to public, only large cloud providers are able to >> get them. So I don't really recommend getting AMD cards for vGPU unless you >> are able to get support from them. >> > > >> > >> > -- Thank you Regards Li -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Mon Jan 16 16:14:37 2023 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 16 Jan 2023 17:14:37 +0100 Subject: [kolla] where to find docker compose files In-Reply-To: References: Message-ID: Hello, I suppose you are referring to Dockerfiles rather than Docker Compose files, since Docker Compose is not used in Kolla. All the Dockerfiles can be found at https://opendev.org/openstack/kolla/src/branch/master/docker: look for Dockerfile.j2 in each subdirectory. On Mon, 16 Jan 2023 at 16:39, garcetto wrote: > good afternoon, > where can i find docker compose files for quay,io docker images used in > kolla-ansible? > > need to undestand how are build, thank you. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Mon Jan 16 21:28:36 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 16 Jan 2023 13:28:36 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2023 Jan 18 at 1600 UTC Message-ID: <185bc7bef17.f89c43dd822066.5887308196090367322@ghanshyammann.com> Hello Everyone, The technical Committee's next weekly meeting is scheduled for 2023 Jan 18, at 1600 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Tuesday, Jan 17 at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From johnsomor at gmail.com Tue Jan 17 00:52:26 2023 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 16 Jan 2023 16:52:26 -0800 Subject: [designate] Proposal to deprecate the agent framework and agent based backends Message-ID: TLDR: The Designate team would like to deprecate the backend agent framework and the agent based backends due to lack of development and design issues with the current implementation. The following backends would be deprecated: Bind9 (Agent), Denominator, Microsoft DNS (Agent), Djbdns (Agent), Gdnsd (Agent), and Knot2 (Agent). Designate includes many backend DNS server drivers[1], many of which are "native" (also known as xfr type backends) backend implementations. In addition to the "native" backends, Designate has an agent backend[2] that supports other backends via an agent process. To quote the agent backend documentation[2]: This backend uses an extension[3] of the DNS protocol itself to send management requests to the remote agent processes, where the requests will be actioned. The rpc traffic between designate and the agent is both unauthenticated and unencrypted. Do not run this traffic over unsecured networks. Here are the reasons we are proposing to deprecate the agent framework now: 1. The agent protocol used by Designate is using an "unassigned"[4][5] DNS opcode (14) that is causing problems with the dnspython library >= 2.3.0 which is now validating the opcode when building DNS messages. It is a bad practice to use "unassigned" values as they may be officially assigned at any time, likely with an incompatible message format. 2. The agent backends are not tested in the OpenStack jobs[1]. 3. Many of the agent backends have been marked as "Experimental" since 2016 with no additional contributions beyond general repository code maintenance. 4. The protocol between the Designate worker process and the agent process is unauthenticated and unencrypted. 5. We do not know of a development resource to rewrite the agent framework protocol to address issue #1 and #4 above. 6. The introduction of catalog zones[6] may eliminate the need for some of the agent based backend drivers. By marking the agent framework and agent based backends "deprecated" in the Antelope cycle, we would remove the code no earlier than the "C" release of OpenStack (per the OpenStack deprecation policy[7]). In the meantime, issue #1 has been worked around by overriding the dnspython opcode validation[7] as needed in the Designate code (similar to a monkey patch). This is not a sustainable long term solution. The following backend agent based drivers would be marked "deprecated" in addition to the agent framework itself: Bind9 (Agent) Denominator Microsoft DNS (Agent) Djbdns (Agent) Gdnsd (Agent) Knot2 (Agent) We plan to propose patches for the deprecation over the next week, but will not merge them until at least January 24th to allow time for comment from the community. 
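To make issue #1 above concrete: the failure boils down to strict enum membership checks. A toy illustration (deliberately not dnspython's actual code, just the behaviour that bites us when the agent protocol asks for the unassigned opcode 14):

  import enum

  # Toy stand-in for an opcode registry that validates its members.
  class Opcode(enum.IntEnum):
      QUERY = 0
      IQUERY = 1
      STATUS = 2
      NOTIFY = 4
      UPDATE = 5

  try:
      Opcode(14)  # the "unassigned" opcode the agent rpc traffic relies on
  except ValueError as exc:
      print(exc)  # 14 is not a valid Opcode

Any library that tightens validation this way will reject the private opcode, which is why overriding the dnspython validation can only ever be a stopgap.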
If you have concerns about this deprecation plan or are interested in rewriting the agent framework protocol to address the above issues, please reply to this announcement. Michael [1] https://docs.openstack.org/designate/latest/admin/support-matrix.html [2] https://docs.openstack.org/designate/latest/admin/backends/agent.html [3] https://github.com/openstack/designate/blob/master/designate/backend/private_codes.py [4] https://www.rfc-editor.org/rfc/rfc6895.html#section-2.2 [5] https://www.iana.org/assignments/dns-parameters/dns-parameters.xhtml#dns-parameters-5 [6] https://www.ietf.org/archive/id/draft-ietf-dnsop-dns-catalog-zones-08.txt [7] https://docs.openstack.org/project-team-guide/deprecation.html [8] https://review.opendev.org/c/openstack/designate/+/870678 From songwenping at inspur.com Tue Jan 17 02:30:51 2023 From: songwenping at inspur.com (=?utf-8?B?QWxleCBTb25nICjlrovmloflubMp?=) Date: Tue, 17 Jan 2023 02:30:51 +0000 Subject: =?utf-8?B?562U5aSNOiDnrZTlpI06IEV4cGVyaWVuY2Ugd2l0aCBWR1BVcw==?= In-Reply-To: <0f9174bc14a6bdfb8838641d1f56647bb8054505.camel@redhat.com> References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> <0f9174bc14a6bdfb8838641d1f56647bb8054505.camel@redhat.com> Message-ID: Hi, Ulrich: Sean is expert on VGPU management from nova side. I complete the usage steps if you are using Nova to manage MIGs for example: 1. divide the A100(80G) GPUs to 1g.10gb*1+2g.20gb*1+3g.40gb*1(one 1g.10gb, one 2g.20gb and one 3g.40gb) 2.add the device config in nova.conf: [devices] enabled_mdev_types = nvidia-699,nvidia-700,nvidia-701 [mdev_nvidia-699] device_addresses = 0000:84:00.1 [mdev_nvidia-700] device_addresses = 0000:84:00.2 [mdev_nvidia-701] device_addresses = 0000:84:00.3 3.config the flavor metadata with VGPU:1 and create vm use the flavor, the vm will randomly allocate one MIG from [1g.10gb,2g,20gb,3g.40gb] On step 2, if you have 2 A100(80G) GPUs on one node to use MIG, and the other GPU divide to 1g.10gb*3+4g.40gb*1, the config maybe like this: [devices] enabled_mdev_types = nvidia-699,nvidia-700,nvidia-701,nvidia-702 [mdev_nvidia-699] device_addresses = 0000:84:00.1, 0000:3b:00.1 [mdev_nvidia-700] device_addresses = 0000:84:00.2 [mdev_nvidia-701] device_addresses = 0000:84:00.3, [mdev_nvidia-702] device_addresses = 0000:3b:00.3 In our product, we use Cyborg to manage the MIGs, from the legacy style we also need config the mig like Nova, this is difficult to maintain, especially deploy openstack on k8s, so we remove these config and automatically discovery the MIGs and support divide MIG by cyborg api. By creating device profile with vgpu type traits(nvidia-699, nvidia-700), we can appoint MIG size to create VMs. Kind regards -----????----- ???: Sean Mooney [mailto:smooney at redhat.com] ????: 2023?1?16? 19:33 ???: Ulrich Schwickerath ; openstack-discuss at lists.openstack.org ??: Re: ??: Experience with VGPUs On Mon, 2023-01-16 at 11:38 +0100, Ulrich Schwickerath wrote: > Hi, all, > > just to add to the discussion, at CERN we have recently deployed a > bunch of A100 GPUs in PCI passthrough mode, and are now looking into > improving their usage by using MIG. From the NOVA point of view things > seem to work OK, we can schedule VMs requesting a VGPU, the client > starts up and gets a license token from our NVIDIA license server > (distributing license keys is our private cloud is relatively easy in > our case). 
It's a PoC only for the time being, and we're not ready to > put that forward as we're facing issues with CUDA on the client (it > fails immediately in memory operations with 'not supported', still > investigating why this happens). > > Once we get that working it would be nice to be able to have a more > fine grained scheduling so that people can ask for MIG devices of > different size. The other challenge is how to set limits on GPU > resources. Once the above issues have been sorted out we may want to > look into cyborg as well thus we are quite interested in first experiences with this. so those two usecasue can kind of be fulfilled in yoga. in yoga we finally merged supprot for unified limits via keystone https://specs.openstack.org/openstack/nova-specs/specs/yoga/implemented/unified-limits-nova.html this allow yout to create quotas/limits on any reslouce class. that is our intended way for you to set limits on GPU resources by leveraging the generic mdev support in xena to map differnt mdev types to differnt resouce classes. https://specs.openstack.org/openstack/nova-specs/specs/xena/implemented/generic-mdevs.html you can also use the provider confugration files https://specs.openstack.org/openstack/nova-specs/specs/victoria/implemented/provider-config-file.html to simplfy adding traits to the gpu resouces in a declaritive way to enabel better schduling for example adding traits for the CUDA version supported by a given vGPU on a host. so you coudl do something like this assuming you have 2 gpus types Alice and Bob Alice support CUDA 3 and has a small amount of vram (i.e. you older generate of gpus) Bob is the new kid on the block with CUDA 9000 support and all the vram you could ask for ( the latest and greates GPU) using the nova geneic mdev feature you can map the Alice GPUS to CUSTOM_VGPU_ALICE and BOB to CUSTOM_VGPU_BOB and using unifed limits you can set a limit/quota of 10 CUSTOM_VGPU_ALICE reoscues and 1 CUSTOM_VGPU_BOB resouces on a given project using provider.yaml you can tag the Alice gpus with CUSTOM_CUDA_3 and the BOB gpus with CUSTOM_CUDA_9000 in the useing flavors you can create flavor defintion that request the diferent GPU types using resouce:CUSTOM_VGPU_ALICE=1 but if you want to prevent images that need CUDA 9000 form being schduled using the ALICE GPU simply add traits:CUSTOM_CUDA_9000 to the image. so if you have yoga you have all of the above features avaiabel. xena does nto give you the quota enforcement but youc and do all the schduling bits provider.yaml is entirly optionalbut that has been aournd the longest. some of this would also just work for cyborg since it shoudl be using custom resocue classes to model the gpus in placment already. we started adding geneic pci devices to placemnt in zed and we are completeing it this cycle https://specs.openstack.org/openstack/nova-specs/specs/2023.1/approved/pci-device-tracking-in-placement.html so the same unified limits appoch will work for pci passthoguh going forward too. hopefully this helps you meet those usecasues. we dont really have any good produciton example of peopel combining all of the above featues so if you do use them as descibed feedback is welcome. we designed these features to all work together in this way but since they are relitivly new addtions we suspect may operators have not used them yet or know about there existance. 
> > Kind regards, > > Ulrich > > On 13.01.23 21:06, Dmitriy Rabotyagov wrote: > > To have that said, deb/rpm packages they are providing doesn't help > > much, as: > > * There is no repo for them, so you need to download them manually > > from enterprise portal > > * They can't be upgraded anyway, as driver version is part of the > > package name. And each package conflicts with any another one. So > > you need to explicitly remove old package and only then install new one. > > And yes, you must stop all VMs before upgrading driver and no, you > > can't live migrate GPU mdev devices due to that now being > > implemented in qemu. So deb/rpm/generic driver doesn't matter at the end tbh. > > > > > > ??, 13 ???. 2023 ?., 20:56 Cedric : > > > > > > Ended up with the very same conclusions than Dimitry regarding the > > use of Nvidia Vgrid for the VGPU use case with Nova, it works > > pretty well but: > > > > - respecting the licensing model as operationnal constraints, note > > that guests need to reach a license server in order to get a token > > (could be via the Nvidia SaaS service or on-prem) > > - drivers for both guest and hypervisor are not easy to implement > > and maintain on large scale. A year ago, hypervisors drivers were > > not packaged to Debian/Ubuntu, but builded though a bash script, > > thus requiering additional automatisation work and careful > > attention regarding kernel update/reboot of Nova hypervisors. > > > > Cheers > > > > > > On Fri, Jan 13, 2023 at 4:21 PM Dmitriy Rabotyagov > > wrote: > > > > > > You are saying that, like Nvidia GRID drivers are open-sourced while > > > in fact they're super far from being that. In order to download > > > drivers not only for hypervisors, but also for guest VMs you need to > > > have an account in their Enterprise Portal. It took me roughly 6 > > weeks > > > of discussions with hardware vendors and Nvidia support to get a > > > proper account there. And that happened only after applying for > > their > > > Partner Network (NPN). > > > That still doesn't solve the issue of how to provide drivers to > > > guests, except pre-build a series of images with these drivers > > > pre-installed (we ended up with making a DIB element for that [1]). > > > Not saying about the need to distribute license tokens for > > guests and > > > the whole mess with compatibility between hypervisor and guest > > drivers > > > (as guest driver can't be newer then host one, and HVs can't be too > > > new either). > > > > > > It's not that I'm protecting AMD, but just saying that Nvidia is not > > > that straightforward either, and at least on paper AMD vGPUs look > > > easier both for operators and end-users. > > > > > > [1] https://github.com/citynetwork/dib-elements/tree/main/nvgrid > > > > > > > > > > > As for AMD cards, AMD stated that some of their MI series card > > supports SR-IOV for vGPUs. However, those drivers are never open > > source or provided closed source to public, only large cloud > > providers are able to get them. So I don't really recommend > > getting AMD cards for vGPU unless you are able to get support from > > them. > > > > > > > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3774 bytes Desc: not available URL: From amonster369 at gmail.com Tue Jan 17 03:22:39 2023 From: amonster369 at gmail.com (A Monster) Date: Tue, 17 Jan 2023 04:22:39 +0100 Subject: Enable fstrim automatically on cinder thin lvm provisioning Message-ID: I deployed openstack using kolla ansible, and used LVM as storage backend for my cinder service, however I noticed that the lvm thin pool size keeps increasing even though the space used by instances volumes is the same, and after a bit of investigating I found out that I had to enable fstrim because the data deleted inside the logical volumes was still allocated from the thin pool perspective and I had to do fstrim on those volumes, how can I enable this automatically in openstack? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Tue Jan 17 04:41:27 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Tue, 17 Jan 2023 10:11:27 +0530 Subject: Enable fstrim automatically on cinder thin lvm provisioning In-Reply-To: References: Message-ID: Hi, We've a config option 'report_discard_supported'[1] which can be added to cinder.conf that will enable trim/unmap support. Also I would like to suggest not creating new openstack-discuss threads for the same issue and reuse the first one created. As I can see these are the 3 threads for the same issue[2][3][4]. [1] https://docs.openstack.org/cinder/latest/configuration/block-storage/config-options.html [2] https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031789.html [3] https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031797.html [4] https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031805.html Thanks Rajat Dhasmana On Tue, Jan 17, 2023 at 8:57 AM A Monster wrote: > I deployed openstack using kolla ansible, and used LVM as storage backend > for my cinder service, however I noticed that the lvm thin pool size keeps > increasing even though the space used by instances volumes is the same, and > after a bit of investigating I found out that I had to enable fstrim > because the data deleted inside the logical volumes was still allocated > from the thin pool perspective and I had to do fstrim on those volumes, > > how can I enable this automatically in openstack? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Tue Jan 17 08:54:03 2023 From: tobias.urdin at binero.com (Tobias Urdin) Date: Tue, 17 Jan 2023 08:54:03 +0000 Subject: Experience with VGPUs In-Reply-To: <0f9174bc14a6bdfb8838641d1f56647bb8054505.camel@redhat.com> References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> <0f9174bc14a6bdfb8838641d1f56647bb8054505.camel@redhat.com> Message-ID: <220CE3FB-C139-492E-ADD1-BC1ECBEAE65E@binero.com> Hello, We are using vGPUs with Nova on OpenStack Xena release and we?ve had a fairly good experience integration NVIDIA A10 GPUs into our cloud. As we see it there is some painpoints that just goes with mantaining the GPU feature. - There is a very tight coupling of the NVIDIA driver in the guest (instance) and on the compute node that needs to be managed. - Doing maintainance need more planning i.e powering off instances, NVIDIA driver on compute node needs to be rebuilt on hypervisor if kernel is upgraded unless you?ve implemented DKMS for that. 
- Because we?ve different flavor of GPU (we split the A10 cards into different flavors for maximum utilization of other compute resources) we added custom traits in the Placement service to handle that, handling that with a script since doing anything manually related to GPUs you will get confused quickly. [1] - Since Nova does not handle recreation of mdevs (or use the new libvirt autostart feature for mdevs) we have a systemd unit that executes before the nova-compute service that walks all the libvirt domains and does lookups in Placement to recreate the mdevs before nova-compute start. [2] [3] [4] Best regards Tobias DISCLAIMER: Below is provided without any warranty of actually working for you or your setup and does very specific things that we need and is only provided to give you some insight and help. Use at your own risk. [1] https://paste.opendev.org/show/b6FdfwDHnyJXR0G3XarE/ [2] https://paste.opendev.org/show/bGtO6aIE519uysvytWv0/ [3] https://paste.opendev.org/show/bftOEIPxlpLptkosxlL6/ [4] https://paste.opendev.org/show/bOYBV6lhRON4ntQKYPkb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Jan 17 09:11:44 2023 From: zigo at debian.org (Thomas Goirand) Date: Tue, 17 Jan 2023 10:11:44 +0100 Subject: [designate] Proposal to deprecate the agent framework and agent based backends In-Reply-To: References: Message-ID: <46a43b97-063d-ed46-6dc1-94f7e0d12e5e@debian.org> On 1/17/23 01:52, Michael Johnson wrote: > TLDR: The Designate team would like to deprecate the backend agent > framework and the agent based backends due to lack of development and > design issues with the current implementation. The following backends > would be deprecated: Bind9 (Agent), Denominator, Microsoft DNS > (Agent), Djbdns (Agent), Gdnsd (Agent), and Knot2 (Agent). Hi Michael, Thanks for this. Now, if we're going to get rid of the code soonish, can we just get rid of the unit tests, rather than attempting to monkey-patch dnspython? That feels safer, no? With Eventlet, I have the experience that monkey patching is dangerous and often leads to disaster. Cheers, Thomas Goirand (zigo) From sbauza at redhat.com Tue Jan 17 10:04:59 2023 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 17 Jan 2023 11:04:59 +0100 Subject: Experience with VGPUs In-Reply-To: <220CE3FB-C139-492E-ADD1-BC1ECBEAE65E@binero.com> References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> <0f9174bc14a6bdfb8838641d1f56647bb8054505.camel@redhat.com> <220CE3FB-C139-492E-ADD1-BC1ECBEAE65E@binero.com> Message-ID: Le mar. 17 janv. 2023 ? 10:00, Tobias Urdin a ?crit : > Hello, > > We are using vGPUs with Nova on OpenStack Xena release and we?ve had a > fairly good experience integration > NVIDIA A10 GPUs into our cloud. > > Great to hear, thanks for your feedback, much appreciated Tobias. > As we see it there is some painpoints that just goes with mantaining the > GPU feature. > > - There is a very tight coupling of the NVIDIA driver in the guest > (instance) and on the compute node that needs to > be managed. > > As nvidia provides proprietary drivers, there isn't much we can move on upstream, even for CI testing. Many participants in this thread explained this as a common concern and I understand their pain, but yeah you need third-party tooling for managing both the driver installation and the licensing servers. 
> - Doing maintenance needs more planning, i.e. powering off instances; the NVIDIA driver on the compute node needs to be rebuilt on the hypervisor if the kernel is upgraded, unless you've implemented DKMS for that.
>
Ditto, unfortunately I wish the driver could be less kernel-dependent, but I don't see a foreseeable future for this.

> - Because we've different flavors of GPU (we split the A10 cards into different flavors for maximum utilization of other compute resources) we added custom traits in the Placement service to handle that, and we handle it with a script, since doing anything manually related to GPUs will get you confused quickly. [1]
>
True, that's why you can also use generic mdevs which will create different resource classes (but ssssht) or use the placement.yaml file to manage your inventories.
https://specs.openstack.org/openstack/nova-specs/specs/xena/implemented/generic-mdevs.html

> - Since Nova does not handle recreation of mdevs (or use the new libvirt autostart feature for mdevs) we have a systemd unit that executes before the nova-compute service that walks all the libvirt domains and does lookups in Placement to recreate the mdevs before nova-compute starts. [2] [3] [4]
>
This is a known issue and we agreed on a direction at the last PTG. Patches are on review.
https://review.opendev.org/c/openstack/nova/+/864418

Thanks,
-Sylvain

> Best regards
> Tobias
>
> DISCLAIMER: Below is provided without any warranty of actually working for you or your setup and does very specific things that we need and is only provided to give you some insight and help. Use at your own risk.
>
> [1] https://paste.opendev.org/show/b6FdfwDHnyJXR0G3XarE/
> [2] https://paste.opendev.org/show/bGtO6aIE519uysvytWv0/
> [3] https://paste.opendev.org/show/bftOEIPxlpLptkosxlL6/
> [4] https://paste.opendev.org/show/bOYBV6lhRON4ntQKYPkb/

From noonedeadpunk at gmail.com Tue Jan 17 11:16:24 2023
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Tue, 17 Jan 2023 12:16:24 +0100
Subject: =?UTF-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?=
In-Reply-To:
References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> <0f9174bc14a6bdfb8838641d1f56647bb8054505.camel@redhat.com>
Message-ID:

Oh, wait a second, can you have multiple different types on 1 GPU? As I don't think you can, or maybe it's limited to MIG mode only - I'm using mostly vGPUs so not 100% sure about MIG mode.
But eventually on vGPU, once you create 1 type, all others become unavailable. So originally each command like

# cat /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/available_instances
1
# cat /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-699/available_instances
1
# cat /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-700/available_instances
1

BUT, once you create an mdev of specific type, rest will not report as available anymore.
# echo ${uuidgen} > /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/create # cat /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/available_instances 0 # cat /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-699/available_instances 1 # cat /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-700/available_instances 0 Please, correct me if I'm wrong here and Nvidia did some changes with recent drivers or it's applicable only for vGPUs and it's not a case for the MIG mode. ??, 17 ???. 2023 ?., 03:37 Alex Song (???) : > > > Hi, Ulrich: > > Sean is expert on VGPU management from nova side. I complete the usage steps if you are using Nova to manage MIGs for example: > 1. divide the A100(80G) GPUs to 1g.10gb*1+2g.20gb*1+3g.40gb*1(one 1g.10gb, one 2g.20gb and one 3g.40gb) > 2.add the device config in nova.conf: > [devices] > enabled_mdev_types = nvidia-699,nvidia-700,nvidia-701 > [mdev_nvidia-699] > device_addresses = 0000:84:00.1 > [mdev_nvidia-700] > device_addresses = 0000:84:00.2 > [mdev_nvidia-701] > device_addresses = 0000:84:00.3 > 3.config the flavor metadata with VGPU:1 and create vm use the flavor, the vm will randomly allocate one MIG from [1g.10gb,2g,20gb,3g.40gb] > On step 2, if you have 2 A100(80G) GPUs on one node to use MIG, and the other GPU divide to 1g.10gb*3+4g.40gb*1, the config maybe like this: > [devices] > enabled_mdev_types = nvidia-699,nvidia-700,nvidia-701,nvidia-702 > [mdev_nvidia-699] > device_addresses = 0000:84:00.1, 0000:3b:00.1 > [mdev_nvidia-700] > device_addresses = 0000:84:00.2 > [mdev_nvidia-701] > device_addresses = 0000:84:00.3, > [mdev_nvidia-702] > device_addresses = 0000:3b:00.3 > > In our product, we use Cyborg to manage the MIGs, from the legacy style we also need config the mig like Nova, this is difficult to maintain, especially deploy openstack on k8s, so we remove these config and automatically discovery the MIGs and support divide MIG by cyborg api. By creating device profile with vgpu type traits(nvidia-699, nvidia-700), we can appoint MIG size to create VMs. > > Kind regards > From sbauza at redhat.com Tue Jan 17 11:46:46 2023 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 17 Jan 2023 12:46:46 +0100 Subject: =?UTF-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> <0f9174bc14a6bdfb8838641d1f56647bb8054505.camel@redhat.com> Message-ID: Le mar. 17 janv. 2023 ? 12:22, Dmitriy Rabotyagov a ?crit : > Oh, wait a second, can you have multiple different types on 1 GPU? As > I don't think you can, or maybe it's limited to MIG mode only - I'm > using mostly vGPUs so not 100% sure about MIG mode. > But eventually on vGPU, once you create 1 type, all others become > unavailable. So originally each comand like > # cat > /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/available_instances > 1 > # cat > /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-699/available_instances > 1 > # cat > /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-700/available_instances > 1 > > BUT, once you create an mdev of specific type, rest will not report as > available anymore. 
> # echo ${uuidgen} > > /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/create > # cat > /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/available_instances > 0 > # cat > /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-699/available_instances > 1 > # cat > /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-700/available_instances > 0 > > Please, correct me if I'm wrong here and Nvidia did some changes with > recent drivers or it's applicable only for vGPUs and it's not a case > for the MIG mode. > > No, you're unfortunately right. https://docs.nvidia.com/grid/latest/grid-vgpu-user-guide/index.html#valid-vgpu-configurations-one-gpu For time-slices vGPUs, you need to use the same type for one pGPU. Of course, if a card has multiple pGPUs, you can have multiple types, one per PCI ID. Technically, nvidia says you need to use the same framebuffer size, but that eventually means the same. For MIG-backed vGPUs, surely you can mix types after creating MIG instances. -S ??, 17 ???. 2023 ?., 03:37 Alex Song (???) : > > > > > > Hi, Ulrich: > > > > Sean is expert on VGPU management from nova side. I complete the usage > steps if you are using Nova to manage MIGs for example: > > 1. divide the A100(80G) GPUs to 1g.10gb*1+2g.20gb*1+3g.40gb*1(one > 1g.10gb, one 2g.20gb and one 3g.40gb) > > 2.add the device config in nova.conf: > > [devices] > > enabled_mdev_types = nvidia-699,nvidia-700,nvidia-701 > > [mdev_nvidia-699] > > device_addresses = 0000:84:00.1 > > [mdev_nvidia-700] > > device_addresses = 0000:84:00.2 > > [mdev_nvidia-701] > > device_addresses = 0000:84:00.3 > > 3.config the flavor metadata with VGPU:1 and create vm use the flavor, > the vm will randomly allocate one MIG from [1g.10gb,2g,20gb,3g.40gb] > > On step 2, if you have 2 A100(80G) GPUs on one node to use MIG, and the > other GPU divide to 1g.10gb*3+4g.40gb*1, the config maybe like this: > > [devices] > > enabled_mdev_types = nvidia-699,nvidia-700,nvidia-701,nvidia-702 > > [mdev_nvidia-699] > > device_addresses = 0000:84:00.1, 0000:3b:00.1 > > [mdev_nvidia-700] > > device_addresses = 0000:84:00.2 > > [mdev_nvidia-701] > > device_addresses = 0000:84:00.3, > > [mdev_nvidia-702] > > device_addresses = 0000:3b:00.3 > > > > In our product, we use Cyborg to manage the MIGs, from the legacy style > we also need config the mig like Nova, this is difficult to maintain, > especially deploy openstack on k8s, so we remove these config and > automatically discovery the MIGs and support divide MIG by cyborg api. By > creating device profile with vgpu type traits(nvidia-699, nvidia-700), we > can appoint MIG size to create VMs. > > > > Kind regards > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Danny.Webb at thehutgroup.com Tue Jan 17 11:50:49 2023 From: Danny.Webb at thehutgroup.com (Danny Webb) Date: Tue, 17 Jan 2023 11:50:49 +0000 Subject: =?gb2312?B?UmU6ILTwuLQ6IEV4cGVyaWVuY2Ugd2l0aCBWR1BVcw==?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> <0f9174bc14a6bdfb8838641d1f56647bb8054505.camel@redhat.com> Message-ID: MIG allows for a limited variation of instance types on the same card unlike vGPU which requires a heterogenous implementation. see https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#supported-profiles for more details. 
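To make that concrete, a rough sketch of what a mixed MIG layout can look like on a single physical GPU, driven with nvidia-smi (the profile names and whether a given mix fits depend on the card and driver, so treat these values as placeholders):

    # enable MIG mode on GPU 0, then carve out three differently sized GPU instances
    nvidia-smi -i 0 -mig 1
    nvidia-smi mig -lgip                              # list the instance profiles this card supports
    nvidia-smi mig -cgi 1g.10gb,2g.20gb,3g.40gb -C    # create the GPU instances plus compute instances

With time-sliced vGPU there is no equivalent: as discussed above, every mdev on the same physical GPU has to use the same type.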
________________________________ From: Dmitriy Rabotyagov Sent: 17 January 2023 11:16 Cc: openstack-discuss Subject: Re: ??: Experience with VGPUs CAUTION: This email originates from outside THG Oh, wait a second, can you have multiple different types on 1 GPU? As I don't think you can, or maybe it's limited to MIG mode only - I'm using mostly vGPUs so not 100% sure about MIG mode. But eventually on vGPU, once you create 1 type, all others become unavailable. So originally each comand like # cat /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/available_instances 1 # cat /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-699/available_instances 1 # cat /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-700/available_instances 1 BUT, once you create an mdev of specific type, rest will not report as available anymore. # echo ${uuidgen} > /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/create # cat /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/available_instances 0 # cat /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-699/available_instances 1 # cat /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-700/available_instances 0 Please, correct me if I'm wrong here and Nvidia did some changes with recent drivers or it's applicable only for vGPUs and it's not a case for the MIG mode. ??, 17 ???. 2023 ?., 03:37 Alex Song (???) : > > > Hi, Ulrich: > > Sean is expert on VGPU management from nova side. I complete the usage steps if you are using Nova to manage MIGs for example: > 1. divide the A100(80G) GPUs to 1g.10gb*1+2g.20gb*1+3g.40gb*1(one 1g.10gb, one 2g.20gb and one 3g.40gb) > 2.add the device config in nova.conf: > [devices] > enabled_mdev_types = nvidia-699,nvidia-700,nvidia-701 > [mdev_nvidia-699] > device_addresses = 0000:84:00.1 > [mdev_nvidia-700] > device_addresses = 0000:84:00.2 > [mdev_nvidia-701] > device_addresses = 0000:84:00.3 > 3.config the flavor metadata with VGPU:1 and create vm use the flavor, the vm will randomly allocate one MIG from [1g.10gb,2g,20gb,3g.40gb] > On step 2, if you have 2 A100(80G) GPUs on one node to use MIG, and the other GPU divide to 1g.10gb*3+4g.40gb*1, the config maybe like this: > [devices] > enabled_mdev_types = nvidia-699,nvidia-700,nvidia-701,nvidia-702 > [mdev_nvidia-699] > device_addresses = 0000:84:00.1, 0000:3b:00.1 > [mdev_nvidia-700] > device_addresses = 0000:84:00.2 > [mdev_nvidia-701] > device_addresses = 0000:84:00.3, > [mdev_nvidia-702] > device_addresses = 0000:3b:00.3 > > In our product, we use Cyborg to manage the MIGs, from the legacy style we also need config the mig like Nova, this is difficult to maintain, especially deploy openstack on k8s, so we remove these config and automatically discovery the MIGs and support divide MIG by cyborg api. By creating device profile with vgpu type traits(nvidia-699, nvidia-700), we can appoint MIG size to create VMs. > > Kind regards > Danny Webb Principal OpenStack Engineer Danny.Webb at thehutgroup.com [THG Ingenuity Logo] www.thg.com [https://i.imgur.com/wbpVRW6.png] [https://i.imgur.com/c3040tr.png] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Danny.Webb at thehutgroup.com Tue Jan 17 11:52:04 2023 From: Danny.Webb at thehutgroup.com (Danny Webb) Date: Tue, 17 Jan 2023 11:52:04 +0000 Subject: =?gb2312?B?UmU6ILTwuLQ6IEV4cGVyaWVuY2Ugd2l0aCBWR1BVcw==?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> <0f9174bc14a6bdfb8838641d1f56647bb8054505.camel@redhat.com> Message-ID: sorry, meant to say vGPU requires a homogeneous implementation. ________________________________ From: Danny Webb Sent: 17 January 2023 11:50 To: Dmitriy Rabotyagov Cc: openstack-discuss Subject: Re: ??: Experience with VGPUs MIG allows for a limited variation of instance types on the same card unlike vGPU which requires a heterogenous implementation. see https://docs.nvidia.com/datacenter/tesla/mig-user-guide/#supported-profiles for more details. ________________________________ From: Dmitriy Rabotyagov Sent: 17 January 2023 11:16 Cc: openstack-discuss Subject: Re: ??: Experience with VGPUs CAUTION: This email originates from outside THG Oh, wait a second, can you have multiple different types on 1 GPU? As I don't think you can, or maybe it's limited to MIG mode only - I'm using mostly vGPUs so not 100% sure about MIG mode. But eventually on vGPU, once you create 1 type, all others become unavailable. So originally each comand like # cat /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/available_instances 1 # cat /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-699/available_instances 1 # cat /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-700/available_instances 1 BUT, once you create an mdev of specific type, rest will not report as available anymore. # echo ${uuidgen} > /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/create # cat /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/available_instances 0 # cat /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-699/available_instances 1 # cat /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-700/available_instances 0 Please, correct me if I'm wrong here and Nvidia did some changes with recent drivers or it's applicable only for vGPUs and it's not a case for the MIG mode. ??, 17 ???. 2023 ?., 03:37 Alex Song (???) : > > > Hi, Ulrich: > > Sean is expert on VGPU management from nova side. I complete the usage steps if you are using Nova to manage MIGs for example: > 1. 
divide the A100(80G) GPUs to 1g.10gb*1+2g.20gb*1+3g.40gb*1(one 1g.10gb, one 2g.20gb and one 3g.40gb) > 2.add the device config in nova.conf: > [devices] > enabled_mdev_types = nvidia-699,nvidia-700,nvidia-701 > [mdev_nvidia-699] > device_addresses = 0000:84:00.1 > [mdev_nvidia-700] > device_addresses = 0000:84:00.2 > [mdev_nvidia-701] > device_addresses = 0000:84:00.3 > 3.config the flavor metadata with VGPU:1 and create vm use the flavor, the vm will randomly allocate one MIG from [1g.10gb,2g,20gb,3g.40gb] > On step 2, if you have 2 A100(80G) GPUs on one node to use MIG, and the other GPU divide to 1g.10gb*3+4g.40gb*1, the config maybe like this: > [devices] > enabled_mdev_types = nvidia-699,nvidia-700,nvidia-701,nvidia-702 > [mdev_nvidia-699] > device_addresses = 0000:84:00.1, 0000:3b:00.1 > [mdev_nvidia-700] > device_addresses = 0000:84:00.2 > [mdev_nvidia-701] > device_addresses = 0000:84:00.3, > [mdev_nvidia-702] > device_addresses = 0000:3b:00.3 > > In our product, we use Cyborg to manage the MIGs, from the legacy style we also need config the mig like Nova, this is difficult to maintain, especially deploy openstack on k8s, so we remove these config and automatically discovery the MIGs and support divide MIG by cyborg api. By creating device profile with vgpu type traits(nvidia-699, nvidia-700), we can appoint MIG size to create VMs. > > Kind regards > Danny Webb Principal OpenStack Engineer Danny.Webb at thehutgroup.com [THG Ingenuity Logo] www.thg.com [https://i.imgur.com/wbpVRW6.png] [https://i.imgur.com/c3040tr.png] -------------- next part -------------- An HTML attachment was scrubbed... URL: From afaninthehouse at gmail.com Mon Jan 16 21:19:39 2023 From: afaninthehouse at gmail.com (Adisa Nicholson aka Tynamite) Date: Mon, 16 Jan 2023 21:19:39 +0000 Subject: When is client-side modified being added? Message-ID: Hello Openstack When is client-side modified being added as a new feature? Right now I'm paying for file hosting and when I upload a file, the "date modified" isn't respected for that file. It always takes the "date modified" for the file as the current date, not the one on the file I was uploading from my computer. I've contacted the company I brought it from, and they said that client-side modified isn't supported in the Openstack API . What API method would this be exactly? Would it be the "object tagging" method? Do you lot have any idea when this feature would be added in, so my file uploads will respect the "date modified" metadata? -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliette.tux at gmail.com Mon Jan 16 12:07:24 2023 From: juliette.tux at gmail.com (Juliette Tux) Date: Mon, 16 Jan 2023 15:07:24 +0300 Subject: [Blazar][Heat][OpenStack-I18n] Need help in making sense of a message in Heat In-Reply-To: References: Message-ID: I think I got it from here. Thanks a lot for your help! (: On Mon, 16 Jan 2023 at 06:23, Takashi Kajinami wrote: > > Let me add the Blazar tag. > > That text comes from the explanation of the "degrade" property of the OS::Blazar::Lease resource type. > > My basic understanding about Blazar is that it allows users to "reserve" some resources in advance. > So "the amount of reserved resources is less than the request" would indicate that Blazar could not reserve the amount > of resources users requested (for example a user tried to reserve 10 instances but Blazar could reserve only 8). > On the other hand, "reserved resources were changed." 
would indicate the situation where the user updated > the reservation request and Blazar is still processing the update. > > I'd appreciate any double-check from the Blazar team because I'm not really familiar with Blazar and > the team would have clear understanding about the feature (and possibly the better explanation) > > On Sat, Jan 14, 2023 at 8:07 PM Ian Y. Choi wrote: >> >> (Adding openstack-discuss mailing list and adding "[Heat]" on the subject) >> >> Hi, >> >> It would be so great if OpenStack Heat users or developers can help >> Juliette's question on i18n thread! >> >> >> Thank you all, >> >> /Ian >> >> On Fri, Jan 13, 2023 at 11:39 PM Juliette Tux wrote: >> > >> > Hello, >> > Could anybody kindly elaborate on the meaning of a condition in a message: >> > >> > "The flag which represents condition of reserved resources of the >> > lease. If it is true, the amount of reserved resources is less than >> > the request or reserved resources were changed." >> > >> > The obscure part for me is "less than the request or reserved >> > resources were changed." >> > >> > Context: heat/engine/resources/openstack/blazar/lease.py:233 >> > >> > TY! >> > >> > -- >> > ? ?????????, ??????? ???? >> > >> > _______________________________________________ >> > OpenStack-I18n mailing list >> > OpenStack-I18n at lists.openstack.org >> -- ? ?????????, ??????? ???? From kkloppenborg at rwts.com.au Tue Jan 17 10:18:16 2023 From: kkloppenborg at rwts.com.au (Karl Kloppenborg) Date: Tue, 17 Jan 2023 10:18:16 +0000 Subject: Experience with VGPUs (Tobias Urdin) In-Reply-To: References: Message-ID: Hi Tobias, I saw your message, interesting method to get around the transient mdev issue. Have you looked into implementing cyborg as a method to alleviate this? We are currently assessing it for a different project using nvidia A40?s. Would be keen to swap war stories and see if we can make a better solution than the current vGPU mdev support going on. Kind Regards, Karl. You can book a 30-minute meeting with me by clicking this link. -- Karl Kloppenborg, Systems Engineering (BCompSc, CNCF-[KCNA, CKA, CKAD], LFCE, CompTIA Linux+ XK0-004) Real World Technology Solutions - IT People you can trust Voice | Data | IT Procurement | Managed IT rwts.com.au | 1300 798 718 [uc%3fexport=download&id=1M0bR7j1-rXl-e7k5f1Rhwot6K_vfuAvn&revid=0B4fBbZ0cwq-1WFdQSExlR28rOEtUanJjOGcvQnJjMFhEMlEwPQ] Real World is a DellEMC Gold Partner This document should be read only by those persons to whom it is addressed and its content is not intended for use by any other persons. If you have received this message in error, please notify us immediately. Please also destroy and delete the message from your computer. Any unauthorised form of reproduction of this message is strictly prohibited. We are not liable for the proper and complete transmission of the information contained in this communication, nor for any delay in its receipt. Please consider the environment before printing this e-mail. 
From: openstack-discuss-request at lists.openstack.org Date: Tuesday, 17 January 2023 at 9:06 pm To: openstack-discuss at lists.openstack.org Subject: openstack-discuss Digest, Vol 51, Issue 51 Send openstack-discuss mailing list submissions to openstack-discuss at lists.openstack.org To subscribe or unsubscribe via the World Wide Web, visit https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss or, via email, send a message with subject or body 'help' to openstack-discuss-request at lists.openstack.org You can reach the person managing the list at openstack-discuss-owner at lists.openstack.org When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..." Today's Topics: 1. Re: Enable fstrim automatically on cinder thin lvm provisioning (Rajat Dhasmana) 2. Re: Experience with VGPUs (Tobias Urdin) 3. Re: [designate] Proposal to deprecate the agent framework and agent based backends (Thomas Goirand) 4. Re: Experience with VGPUs (Sylvain Bauza) ---------------------------------------------------------------------- Message: 1 Date: Tue, 17 Jan 2023 10:11:27 +0530 From: Rajat Dhasmana To: A Monster Cc: openstack-discuss Subject: Re: Enable fstrim automatically on cinder thin lvm provisioning Message-ID: Content-Type: text/plain; charset="utf-8" Hi, We've a config option 'report_discard_supported'[1] which can be added to cinder.conf that will enable trim/unmap support. Also I would like to suggest not creating new openstack-discuss threads for the same issue and reuse the first one created. As I can see these are the 3 threads for the same issue[2][3][4]. [1] https://docs.openstack.org/cinder/latest/configuration/block-storage/config-options.html [2] https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031789.html [3] https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031797.html [4] https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031805.html Thanks Rajat Dhasmana On Tue, Jan 17, 2023 at 8:57 AM A Monster wrote: > I deployed openstack using kolla ansible, and used LVM as storage backend > for my cinder service, however I noticed that the lvm thin pool size keeps > increasing even though the space used by instances volumes is the same, and > after a bit of investigating I found out that I had to enable fstrim > because the data deleted inside the logical volumes was still allocated > from the thin pool perspective and I had to do fstrim on those volumes, > > how can I enable this automatically in openstack? > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Tue, 17 Jan 2023 08:54:03 +0000 From: Tobias Urdin To: openstack-discuss Subject: Re: Experience with VGPUs Message-ID: <220CE3FB-C139-492E-ADD1-BC1ECBEAE65E at binero.com> Content-Type: text/plain; charset="utf-8" Hello, We are using vGPUs with Nova on OpenStack Xena release and we?ve had a fairly good experience integration NVIDIA A10 GPUs into our cloud. As we see it there is some painpoints that just goes with mantaining the GPU feature. - There is a very tight coupling of the NVIDIA driver in the guest (instance) and on the compute node that needs to be managed. - Doing maintainance need more planning i.e powering off instances, NVIDIA driver on compute node needs to be rebuilt on hypervisor if kernel is upgraded unless you?ve implemented DKMS for that. 
- Because we?ve different flavor of GPU (we split the A10 cards into different flavors for maximum utilization of other compute resources) we added custom traits in the Placement service to handle that, handling that with a script since doing anything manually related to GPUs you will get confused quickly. [1] - Since Nova does not handle recreation of mdevs (or use the new libvirt autostart feature for mdevs) we have a systemd unit that executes before the nova-compute service that walks all the libvirt domains and does lookups in Placement to recreate the mdevs before nova-compute start. [2] [3] [4] Best regards Tobias DISCLAIMER: Below is provided without any warranty of actually working for you or your setup and does very specific things that we need and is only provided to give you some insight and help. Use at your own risk. [1] https://paste.opendev.org/show/b6FdfwDHnyJXR0G3XarE/ [2] https://paste.opendev.org/show/bGtO6aIE519uysvytWv0/ [3] https://paste.opendev.org/show/bftOEIPxlpLptkosxlL6/ [4] https://paste.opendev.org/show/bOYBV6lhRON4ntQKYPkb/ -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 3 Date: Tue, 17 Jan 2023 10:11:44 +0100 From: Thomas Goirand To: openstack-discuss Subject: Re: [designate] Proposal to deprecate the agent framework and agent based backends Message-ID: <46a43b97-063d-ed46-6dc1-94f7e0d12e5e at debian.org> Content-Type: text/plain; charset=UTF-8; format=flowed On 1/17/23 01:52, Michael Johnson wrote: > TLDR: The Designate team would like to deprecate the backend agent > framework and the agent based backends due to lack of development and > design issues with the current implementation. The following backends > would be deprecated: Bind9 (Agent), Denominator, Microsoft DNS > (Agent), Djbdns (Agent), Gdnsd (Agent), and Knot2 (Agent). Hi Michael, Thanks for this. Now, if we're going to get rid of the code soonish, can we just get rid of the unit tests, rather than attempting to monkey-patch dnspython? That feels safer, no? With Eventlet, I have the experience that monkey patching is dangerous and often leads to disaster. Cheers, Thomas Goirand (zigo) ------------------------------ Message: 4 Date: Tue, 17 Jan 2023 11:04:59 +0100 From: Sylvain Bauza To: Tobias Urdin Cc: openstack-discuss Subject: Re: Experience with VGPUs Message-ID: Content-Type: text/plain; charset="utf-8" Le mar. 17 janv. 2023 ? 10:00, Tobias Urdin a ?crit : > Hello, > > We are using vGPUs with Nova on OpenStack Xena release and we?ve had a > fairly good experience integration > NVIDIA A10 GPUs into our cloud. > > Great to hear, thanks for your feedback, much appreciated Tobias. > As we see it there is some painpoints that just goes with mantaining the > GPU feature. > > - There is a very tight coupling of the NVIDIA driver in the guest > (instance) and on the compute node that needs to > be managed. > > As nvidia provides proprietary drivers, there isn't much we can move on upstream, even for CI testing. Many participants in this thread explained this as a common concern and I understand their pain, but yeah you need third-party tooling for managing both the driver installation and the licensing servers. > - Doing maintainance need more planning i.e powering off instances, NVIDIA > driver on compute node needs to be > rebuilt on hypervisor if kernel is upgraded unless you?ve implemented > DKMS for that. 
> > Ditto, unfortunately I wish the driver could be less kernel-dependent but I don't see a foreseenable future for this. > - Because we?ve different flavor of GPU (we split the A10 cards into > different flavors for maximum utilization of > other compute resources) we added custom traits in the Placement service > to handle that, handling that with > a script since doing anything manually related to GPUs you will get > confused quickly. [1] > True, that's why you can also use generic mdevs which will create different resource classes (but ssssht) or use the placement.yaml file to manage your inventories. https://specs.openstack.org/openstack/nova-specs/specs/xena/implemented/generic-mdevs.html > - Since Nova does not handle recreation of mdevs (or use the new libvirt > autostart feature for mdevs) we have > a systemd unit that executes before the nova-compute service that walks > all the libvirt domains and does lookups > in Placement to recreate the mdevs before nova-compute start. [2] [3] [4] > > This is a known issue and we agreed on the last PTG for a direction. Patches on review. https://review.opendev.org/c/openstack/nova/+/864418 Thanks, -Sylvain > Best regards > Tobias > > DISCLAIMER: Below is provided without any warranty of actually working for > you or your setup and does > very specific things that we need and is only provided to give you some > insight and help. Use at your own risk. > > [1] https://paste.opendev.org/show/b6FdfwDHnyJXR0G3XarE/ > [2] https://paste.opendev.org/show/bGtO6aIE519uysvytWv0/ > [3] https://paste.opendev.org/show/bftOEIPxlpLptkosxlL6/ > [4] https://paste.opendev.org/show/bOYBV6lhRON4ntQKYPkb/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Subject: Digest Footer _______________________________________________ openstack-discuss mailing list openstack-discuss at lists.openstack.org ------------------------------ End of openstack-discuss Digest, Vol 51, Issue 51 ************************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 21610 bytes Desc: image001.jpg URL: From johnsomor at gmail.com Tue Jan 17 15:31:11 2023 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 17 Jan 2023 07:31:11 -0800 Subject: [designate] Proposal to deprecate the agent framework and agent based backends In-Reply-To: <46a43b97-063d-ed46-6dc1-94f7e0d12e5e@debian.org> References: <46a43b97-063d-ed46-6dc1-94f7e0d12e5e@debian.org> Message-ID: To maintain compatibility and support for the agent framework for the two release cycles required of the deprecation process, we will need to do the monkey patching (it turns out it's an enum class, so a bit more than a simple monkey patch) for the two depreciation cycles. There is definitely a risk that dnspython will change again in a way that breaks the monkey patch, but we have a check job that runs with the most recent version of dnspython, so we should catch any RCx releases that cause us problems. In fact, that job did catch this issue, but unfortunately it was over the holiday so we didn't get on top of it as quickly as we'd hoped. 
Michael On Tue, Jan 17, 2023 at 1:16 AM Thomas Goirand wrote: > > On 1/17/23 01:52, Michael Johnson wrote: > > TLDR: The Designate team would like to deprecate the backend agent > > framework and the agent based backends due to lack of development and > > design issues with the current implementation. The following backends > > would be deprecated: Bind9 (Agent), Denominator, Microsoft DNS > > (Agent), Djbdns (Agent), Gdnsd (Agent), and Knot2 (Agent). > > Hi Michael, > > Thanks for this. > > Now, if we're going to get rid of the code soonish, can we just get rid > of the unit tests, rather than attempting to monkey-patch dnspython? > That feels safer, no? With Eventlet, I have the experience that monkey > patching is dangerous and often leads to disaster. > > Cheers, > > Thomas Goirand (zigo) > > From fungi at yuggoth.org Tue Jan 17 15:48:45 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 17 Jan 2023 15:48:45 +0000 Subject: [OSSA-2023-001] Swift: Arbitrary file access through custom S3 XML entities (CVE-2022-47950) Message-ID: <20230117154845.gkh62fbl2xuix6j3@yuggoth.org> =================================================================== OSSA-2023-001: Arbitrary file access through custom S3 XML entities =================================================================== :Date: January 17, 2023 :CVE: CVE-2022-47950 Affects ~~~~~~~ - Swift: <2.28.1, >=2.29.0 <2.29.2, ==2.30.0 Description ~~~~~~~~~~~ S?bastien Meriot (OVH) reported a vulnerability in Swift's S3 XML parser. By supplying specially crafted XML files an authenticated user may coerce the S3 API into returning arbitrary file contents from the host server resulting in unauthorized read access to potentially sensitive data; this impacts both s3api deployments (Rocky or later), and swift3 deployments (Queens and earlier, no longer actively developed). Only deployments with S3 compatibility enabled are affected. Patches ~~~~~~~ - https://review.opendev.org/870823 (2023.1/antelope) - https://review.opendev.org/870828 (Wallaby) - https://review.opendev.org/870827 (Xena) - https://review.opendev.org/870826 (Yoga) - https://review.opendev.org/870825 (Zed) Credits ~~~~~~~ - S?bastien Meriot from OVH (CVE-2022-47950) References ~~~~~~~~~~ - https://launchpad.net/bugs/1998625 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-47950 Notes ~~~~~ - The stable/wallaby branch is under extended maintenance and will receive no new point releases, but a patch for it is provided as a courtesy. -- Jeremy Stanley OpenStack Vulnerability Management Team -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ralonsoh at redhat.com Tue Jan 17 16:05:14 2023 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 17 Jan 2023 17:05:14 +0100 Subject: [neutron] The meetings "ping list" Message-ID: Hello Neutrinos: As suggested by some community members, we now have a "ping list" for our Neutron meetings. If you want to be pinged just before the meeting starts, you can add your IRC nickname to the meeting agenda, in the "ping list" paragraph. As commented during the Neutron meeting, this is only a courtesy ping. 
Please check the meeting agendas: * Neutron meeting: https://wiki.openstack.org/wiki/Network/Meetings * CI meeting: https://etherpad.opendev.org/p/neutron-ci-meetings * Drivers meeting: https://wiki.openstack.org/wiki/Meetings/NeutronDrivers If you can't edit the agenda meetings, you can request that to any Neutron core in the #openstack-neutron channel. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From adamcooper at uchicago.edu Tue Jan 17 20:00:28 2023 From: adamcooper at uchicago.edu (Adam Cooper) Date: Tue, 17 Jan 2023 14:00:28 -0600 Subject: Difficulties Developing a Horizon Customization Module Message-ID: <5165924ca83fdd4d8ce1d666210f5af987431310.camel@uchicago.edu> Hello! I sent a message to the #openstack IRC last week, but did not receive any replies. I am attempting to create a customization module for Horizon and having difficulties getting things to function. I'm on a fork of the Xena release, and following these docs: https://docs.openstack.org/horizon/9.1.1/topics/customizing.html. At the present, I'm trying to figure out some weird behavior where having the file `openstack_dashboard/local/local_settings.py` causes Horizon to respond to every request with a 400 error. There is no information regarding this in the logs or the HTTP response as far as I can see. It does this with an empty module, or if it just contains HORIZON_CONFIG = {...}. Is there something I'm missing? I would just like to be able to debug this issue so I can actually develop the module. If this is not the right way to ask for help, I would appreciate a pointer to the correct channels. Thanks! -- Adam Cooper Cloud Computing Engineer, Security Point of Contact Chameleon Cloud University of Chicago, Argonne National Laboratory -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part URL: From mikal at stillhq.com Tue Jan 17 20:50:57 2023 From: mikal at stillhq.com (Michael Still) Date: Wed, 18 Jan 2023 07:50:57 +1100 Subject: [DIB][diskimage-builder] Debian Testing boot issues Message-ID: Has anyone else tried building Debian Testing (bookworm) images with DIB? I'm having troubles with getting them to boot, with errors that look like missing components in the initrd to me. The command I am using to build is: export ELEMENTS_PATH=elements:diskimage-builder/diskimage_builder/elements export DIB_CLOUD_INIT_DATASOURCES="ConfigDrive, OpenStack, NoCloud" export DIB_APT_MINIMAL_CREATE_INTERFACES=0 export DIB_PYTHON_VERSION=3 export DIB_RELEASE=bookworm export DIB_SF_AGENT_PACKAGE=shakenfist-agent export build_args="cloud-init cloud-init-datasources sf-agent vm" disk-image-create utilities debian debian-systemd ${build_args} -o temp.qcow2 Where shakenfist-agent / sf-agent are a custom element not really relevant to this query (I've left it for completeness). When I boot the output image, I get this: ... [ 1.906949] virtio_blk virtio2: 2/0/0 default/read/poll queues [ 1.908318] virtio_blk virtio2: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) [ 1.916912] vda: vda1 [ 1.917646] virtio_blk virtio3: 2/0/0 default/read/poll queues [ 1.917676] virtio_net virtio1 ens11: renamed from eth0 [ 1.918965] virtio_blk virtio3: [vdb] 184 512-byte logical blocks (94.2 kB/92.0 KiB) Begin: Loading essential drivers ... done. Begin: Running /scripts/init-premount ... done. Begin: Mounting root file system ... Begin: Running /scripts/local-top ... 
done. Begin: Running /scripts/local-premount ... done. Warning: fsck not present, so skipping root file system [ 2.024351] EXT4-fs (vda1): mounted filesystem with ordered data mode. Quota mode: none. done. Begin: Running /scripts/local-bottom ... [ 2.041352] EXT4-fs (vda1): unmounting filesystem. GROWROOT: WARNING: resize failed: failed [flock:127] flock -x 9 /sbin/growpart: line 714: flock: not found FAILED: Error while obtaining exclusive lock on /dev/vda [ 2.057556] vda: vda1 /scripts/local-bottom/growroot: line 97: wait-for-root: not found done. Begin: Running /scripts/init-bottom ... mount: mounting /dev on /root/dev failed: No such file or directory mount: mounting /dev on /root/dev failed: No such file or directory done. mount: mounting /run on /root/run failed: No such file or directory BusyBox v1.35.0 (Debian 1:1.35.0-4+b1) multi-call binary. Usage: run-init [-d CAP,CAP...] [-n] [-c CONSOLE_DEV] NEW_ROOT NEW_INIT [ARGS] Free initramfs and switch to another root fs: chroot to NEW_ROOT, delete all in /, move NEW_ROOT to /, execute NEW_INIT. PID must be 1. NEW_ROOT must be a mountpoint. -c DEV Reopen stdio to DEV after switch -d CAPS Drop capabilities -n Dry run Target filesystem doesn't have requested /sbin/init. BusyBox v1.35.0 (Debian 1:1.35.0-4+b1) multi-call binary. Usage: run-init [-d CAP,CAP...] [-n] [-c CONSOLE_DEV] NEW_ROOT NEW_INIT [ARGS] Free initramfs and switch to another root fs: chroot to NEW_ROOT, delete all in /, move NEW_ROOT to /, execute NEW_INIT. PID must be 1. NEW_ROOT must be a mountpoint. ... Which looks to me like I am missing fsck, flock and wait-for-root, probably in the initrd? Adding cloud-init-growpart and growroot as elements did not help, neither did ensuring that the image contained cloud-guest-utils or util-linux. I wonder if anyone else has seen this and has some hints? Thanks, Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Jan 18 01:50:34 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 17 Jan 2023 17:50:34 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2023 Jan 18 at 1600 UTC In-Reply-To: <185bc7bef17.f89c43dd822066.5887308196090367322@ghanshyammann.com> References: <185bc7bef17.f89c43dd822066.5887308196090367322@ghanshyammann.com> Message-ID: <185c2921e41.c8061e0e921612.3130270724921200812@ghanshyammann.com> Hello Everyone, Below is the agenda for the TC meeting scheduled on Jan 18 at 1600 UTC. 
Location: IRC OFTC network in the #openstack-tc channel * Roll call * Follow up on past action items * Gate health check * TC 2023.1 tracker status checks ** https://etherpad.opendev.org/p/tc-2023.1-tracker * Cleanup of PyPI maintainer list for OpenStack Projects ** There are other maintainers present along with 'openstackci', A few examples: *** https://pypi.org/project/murano/ *** https://pypi.org/project/glance/ ** More new maintainers are being added without knowledge to OpenStack and by skipping our contribution process *** Example: https://github.com/openstack/xstatic-font-awesome/pull/2 * Less Active projects status: ** Zaqar *** Gate is broken due to MongoDB not present in ubuntu 22.04 **** https://review.opendev.org/c/openstack/zaqar/+/857924/comments/a0d5d45e_3008683c *** Release team is concerned on its release for 2023.1 ** Mistral situation *** Release team proposing it to mark its release deprecated **** https://review.opendev.org/c/openstack/governance/+/866562 *** Gate is fixed (except python-mistralclient) **** https://review.opendev.org/q/topic:gate-fix-mistral-repo **** Core members are actively fixing/merging the changes now *** Beta release patches **** https://review.opendev.org/c/openstack/releases/+/869470 **** https://review.opendev.org/c/openstack/releases/+/869448 ** Adjutant situation *** Proposal to remove it from Inactive projects list **** https://review.opendev.org/c/openstack/governance/+/869665 * Recurring tasks check ** Bare 'recheck' state *** https://etherpad.opendev.org/p/recheck-weekly-summary * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 16 Jan 2023 13:28:36 -0800 Ghanshyam Mann wrote --- > Hello Everyone, > > The technical Committee's next weekly meeting is scheduled for 2023 Jan 18, at 1600 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Tuesday, Jan 17 at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From sahid.ferdjaoui at industrialdiscipline.com Wed Jan 18 08:57:33 2023 From: sahid.ferdjaoui at industrialdiscipline.com (Sahid Orentino Ferdjaoui) Date: Wed, 18 Jan 2023 08:57:33 +0000 Subject: [neutron] The meetings "ping list" In-Reply-To: References: Message-ID: +1 Thank you Rodolfo ------- Original Message ------- On Tuesday, January 17th, 2023 at 17:05, Rodolfo Alonso Hernandez wrote: > Hello Neutrinos: > > As suggested by some community members, we now have a "ping list" for our Neutron meetings. If you want to be pinged just before the meeting starts, you can add your IRC nickname to the meeting agenda, in the "ping list" paragraph. As commented during the Neutron meeting, this is only a courtesy ping. > > Please check the meeting agendas: > * Neutron meeting: https://wiki.openstack.org/wiki/Network/Meetings > * CI meeting: https://etherpad.opendev.org/p/neutron-ci-meetings > * Drivers meeting: https://wiki.openstack.org/wiki/Meetings/NeutronDrivers > > If you can't edit the agenda meetings, you can request that to any Neutron core in the #openstack-neutron channel. > > Regards. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rdhasman at redhat.com Wed Jan 18 09:22:33 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Wed, 18 Jan 2023 14:52:33 +0530 Subject: [cinder] 2023.1 R-9 virtual mid cycle on 18th January, 2023 In-Reply-To: References: Message-ID: Hello Argonauts, This is a reminder that today is the 2nd Mid Cycle of 2023.1 (Antelope) with following details: Date: 18th January 2023 Time: 1400-1600 UTC Meeting link: https://bluejeans.com/556681290 Etherpad: https://etherpad.opendev.org/p/cinder-antelope-midcycles Thanks Rajat Dhasmana On Thu, Jan 12, 2023 at 12:55 PM Rajat Dhasmana wrote: > Hello Argonauts, > > As discussed in yesterday's cinder upstream meeting[1], we will be > conducting our second mid cycle on 18th January, 2023 (R-9 week). Following > are the details: > > Date: 18th January 2023 > Time: 1400-1600 UTC > Meeting link: https://bluejeans.com/556681290 > Etherpad: https://etherpad.opendev.org/p/cinder-antelope-midcycles > > Don't forget to add topics and see you there! > > [1] > https://meetings.opendev.org/meetings/cinder/2023/cinder.2023-01-11-14.00.log.html#l-47 > > Thanks > Rajat Dhasmana > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Wed Jan 18 14:03:22 2023 From: openinfradn at gmail.com (open infra) Date: Wed, 18 Jan 2023 19:33:22 +0530 Subject: =?UTF-8?B?UmU6IOetlOWkjTogRXhwZXJpZW5jZSB3aXRoIFZHUFVz?= In-Reply-To: References: <57b8eda3-274b-d2d9-0380-7bea6f9f1392@me.com> <07fea8fdf0e547ceb7a6c153a92c34d4@inspur.com> <0f9174bc14a6bdfb8838641d1f56647bb8054505.camel@redhat.com> Message-ID: On Tue, Jan 17, 2023 at 4:54 PM Dmitriy Rabotyagov wrote: > Oh, wait a second, can you have multiple different types on 1 GPU? As > I don't think you can, or maybe it's limited to MIG mode only - I'm > using mostly vGPUs so not 100% sure about MIG mode. > But eventually on vGPU, once you create 1 type, all others become > unavailable. So originally each comand like > # cat > /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/available_instances > 1 > # cat > /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-699/available_instances > 1 > # cat > /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-700/available_instances > 1 > > BUT, once you create an mdev of specific type, rest will not report as > available anymore. > # echo ${uuidgen} > > /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/create > # cat > /sys/bus/pci/devices/0000\:84\:00.1/mdev_supported_types/nvidia-699/available_instances > 0 > # cat > /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-699/available_instances > 1 > # cat > /sys/bus/pci/devices/0000\:84\:00.2/mdev_supported_types/nvidia-700/available_instances > 0 > > Please, correct me if I'm wrong here and Nvidia did some changes with > recent drivers or it's applicable only for vGPUs and it's not a case > for the MIG mode. > I have created A40-24Q instance out of A40 48GB GPU. But I experience the same. > > ??, 17 ???. 2023 ?., 03:37 Alex Song (???) : > > > > > > Hi, Ulrich: > > > > Sean is expert on VGPU management from nova side. I complete the usage > steps if you are using Nova to manage MIGs for example: > > 1. 
divide the A100(80G) GPUs to 1g.10gb*1+2g.20gb*1+3g.40gb*1(one > 1g.10gb, one 2g.20gb and one 3g.40gb) > > 2.add the device config in nova.conf: > > [devices] > > enabled_mdev_types = nvidia-699,nvidia-700,nvidia-701 > > [mdev_nvidia-699] > > device_addresses = 0000:84:00.1 > > [mdev_nvidia-700] > > device_addresses = 0000:84:00.2 > > [mdev_nvidia-701] > > device_addresses = 0000:84:00.3 > > 3.config the flavor metadata with VGPU:1 and create vm use the flavor, > the vm will randomly allocate one MIG from [1g.10gb,2g,20gb,3g.40gb] > > On step 2, if you have 2 A100(80G) GPUs on one node to use MIG, and the > other GPU divide to 1g.10gb*3+4g.40gb*1, the config maybe like this: > > [devices] > > enabled_mdev_types = nvidia-699,nvidia-700,nvidia-701,nvidia-702 > > [mdev_nvidia-699] > > device_addresses = 0000:84:00.1, 0000:3b:00.1 > > [mdev_nvidia-700] > > device_addresses = 0000:84:00.2 > > [mdev_nvidia-701] > > device_addresses = 0000:84:00.3, > > [mdev_nvidia-702] > > device_addresses = 0000:3b:00.3 > > > > In our product, we use Cyborg to manage the MIGs, from the legacy style > we also need config the mig like Nova, this is difficult to maintain, > especially deploy openstack on k8s, so we remove these config and > automatically discovery the MIGs and support divide MIG by cyborg api. By > creating device profile with vgpu type traits(nvidia-699, nvidia-700), we > can appoint MIG size to create VMs. > > > > Kind regards > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Jan 18 14:27:58 2023 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 18 Jan 2023 14:27:58 +0000 Subject: [cinder] Bug Report from 01-18-2023 Message-ID: This is a bug report from 01-11-2022 to 01-18-2023. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- High - https://bugs.launchpad.net/cinder/+bug/2003179 "Dell PowerFlex: password appears in plain text when creating a volume from an image." Unassigned. Medium - https://bugs.launchpad.net/cinder/+bug/2002535 "[NFS] Server resize failed when image volume cache enabled." Assigned to Jean Pierre Roquesalane. - https://bugs.launchpad.net/cinder/+bug/2002996 "storpool driver: drop the image-to-volume-and-back overridden methods." Unassigned. - https://bugs.launchpad.net/cinder/+bug/2002995 "storpool driver: fix the retype volume flow." Fix proposed to master. Cheers, -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amonster369 at gmail.com Wed Jan 18 14:37:09 2023 From: amonster369 at gmail.com (A Monster) Date: Wed, 18 Jan 2023 15:37:09 +0100 Subject: [kolla-ansible] [cinder] Setting up multiple LVM cinder backends located on different servers Message-ID: I have an openstack configuration, with 3 controller nodes and multiple compute nodes , one of the controllers has an LVM storage based on HDD drives, while another one has an SDD one, and when I tried to configure the two different types of storage as cinder backends I faced a dilemma since according to the documentation I have to specify the two different backends in the cinder configuration as it is explained here however and since I want to separate disks type when creating volumes, I had to specify different backend names, but I don't know if this configuration should be written in both the storage nodes, or should I specify for each one of these storage nodes the configuration related to its own type of disks. Now, I tried writing the same configuration for both nodes, but I found out that the volume service related to server1 concerning disks in server2 is down, and the volume service in server2 concerning disks in server1 is also down. $ openstack volume service list+------------------+---------------------+------+---------+-------+----------------------------+| Binary | Host | Zone | Status | State | Updated At |+------------------+---------------------+------+---------+-------+----------------------------+| cinder-scheduler | controller-01 | nova | enabled | up | 2023-01-18T14:27:51.000000 || cinder-scheduler | controller-02 | nova | enabled | up | 2023-01-18T14:27:41.000000 || cinder-scheduler | controller-03 | nova | enabled | up | 2023-01-18T14:27:50.000000 || cinder-volume | controller-03 at lvm-1 | nova | enabled | up | 2023-01-18T14:27:42.000000 || cinder-volume | controller-01 at lvm-1 | nova | enabled | down | 2023-01-18T14:10:00.000000 || cinder-volume | controller-01 at lvm-3 | nova | enabled | down | 2023-01-18T14:09:42.000000 || cinder-volume | controller-03 at lvm-3 | nova | enabled | down | 2023-01-18T12:12:19.000000|+------------------+---------------------+------+---------+-------+----------------------------+ This is the configuration I have written on the configuration files for cinder_api _cinder_scheduler and cinder_volume for both servers. enabled_backends= lvm-1,lvm-3 [lvm-1] volume_group = cinder-volumes volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver volume_backend_name = lvm-1 target_helper = lioadm target_protocol = iscsi report_discard_supported = true [lvm-3] volume_group=cinder-volumes-ssd volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver volume_backend_name=lvm-3 target_helper = lioadm target_protocol = iscsi report_discard_supported = true -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristin at openinfra.dev Wed Jan 18 19:51:11 2023 From: kristin at openinfra.dev (Kristin Barrientos) Date: Wed, 18 Jan 2023 13:51:11 -0600 Subject: OpenInfra Live - Jan. 19 at 9am CT / 15:00 UTC Message-ID: <4C7FFC16-4CC1-4AA4-9414-D4DCF8CDA4F5@openinfra.dev> Hi everyone, This week?s OpenInfra Live episode is brought to you by members of the OpenInfra Triangle user group. Episode: Distributing OpenStack Architecture with BGP and Kubernetes Integration This episode will discuss the shortcomings of layer-2 networks and how layer-3 network protocols help address those. 
We will take a in depth look at Red Hat OpenStack Platform 17 and its integration with FRRouting (FRR) to implement dynamic routing using BGP protocol as well as use of BFD (Bidirectional Forwarding Detection) protocol which is used for detecting network failures. We will also take a look at how ECMP is used to provide both high availability and load balancing in OpenStack control and dataplane networks. Date and time: Jan. 19 at 9 a.m. CT (15:00 UTC) You can watch us live on: YouTube: https://youtu.be/r8WLM9TM6w4 LinkedIn: https://www.linkedin.com/video/event/urn:li:ugcPost:7004177126690017280/ Facebook: www.facebook.com/events/862505974941677 WeChat: recording will be posted on OpenStack WeChat after the live stream Speakers: Emilien Macchi, Chris Janiszewski, Maciej Lecki, and Luis Bolivar Have an idea for a future episode? Share it now at ideas.openinfra.live. Thanks, Kristin Barrientos Marketing Coordinator OpenInfra Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Wed Jan 18 22:19:42 2023 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Wed, 18 Jan 2023 23:19:42 +0100 Subject: [designate] Proposal to deprecate the agent framework and agent based backends In-Reply-To: References: Message-ID: Hey Michael, openstack-discuss, On 17/01/2023 01:52, Michael Johnson wrote: > 6. The introduction of catalog zones[6] may eliminate the need for > some of the agent based backend drivers. I proposed the spec to have catalog zone support added to Designate, see https://review.opendev.org/c/openstack/designate-specs/+/849109. Thanks for taking the time to discuss this at the PTG and for all your work that went into refining the spec. May I kindly as how you feel about potential acceptance / merge of this spec in the near future? 1)? I did leave a few remarks in regards to the implementation though which we could discuss. 2)? I certainly am still a strong promoter of this standardized approach to distribute the list of zones to secondary servers as there already is native support in major DNS servers. But I also believe having support for catalog zones already available while still working through the deprecation phase of the agent based backends would provide people with a new option to adapt their setups to. And there is the option to not only use catalog zones within the actual secondary DNS server software, but to use it as source for some platform-specific provisioning code or agent to access the zone catalog. Regards Christian From johnsomor at gmail.com Wed Jan 18 22:50:02 2023 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 18 Jan 2023 14:50:02 -0800 Subject: [designate] Proposal to deprecate the agent framework and agent based backends In-Reply-To: References: Message-ID: Hi Christian, I replied in the spec comments this week that I will be reviewing your feedback. It dropped off my radar over the holiday unfortunately. I also completed my followup to your comments today and posted those as well (we were probably typing at the same time, lol). Personally I think the specification is very close to being complete, I think we just need to agree on these last few design items and we can push for reviews. As I was looking at these agent based drivers, it was very obvious that the catalog zones may eliminate the need for some of the agent based drivers. That is awesome and your proposal to add catalog zone support is perfectly timed. 
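For anyone following along, a minimal sketch of what a catalog zone looks like on the wire (RFC 9432 style; the names below are made up). The catalog is just an ordinary zone that secondaries transfer, with one PTR record per member zone:

    catalog.example.               IN SOA invalid. admin.example. 1 3600 600 2419200 3600
    catalog.example.               IN NS  invalid.
    version.catalog.example.       IN TXT "2"
    abc123.zones.catalog.example.  IN PTR zone-one.example.com.
    def456.zones.catalog.example.  IN PTR zone-two.example.com.

A secondary that supports catalog zones watches this zone and automatically provisions or removes the member zones it lists, which is what makes some of the agent-based backends unnecessary.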
Michael On Wed, Jan 18, 2023 at 2:19 PM Christian Rohmann wrote: > > Hey Michael, openstack-discuss, > > On 17/01/2023 01:52, Michael Johnson wrote: > > 6. The introduction of catalog zones[6] may eliminate the need for > > some of the agent based backend drivers. > > I proposed the spec to have catalog zone support added to Designate, see > https://review.opendev.org/c/openstack/designate-specs/+/849109. > > Thanks for taking the time to discuss this at the PTG and for all your > work that went into refining the spec. > May I kindly as how you feel about potential acceptance / merge of this > spec in the near future? > > 1) I did leave a few remarks in regards to the implementation though > which we could discuss. > > 2) I certainly am still a strong promoter of this standardized approach > to distribute the list of zones to secondary servers as there already is > native support in major DNS servers. > But I also believe having support for catalog zones already available > while still working through the deprecation phase of the agent based > backends would provide people with a new option to adapt their setups to. > And there is the option to not only use catalog zones within the actual > secondary DNS server software, but to use it as source for some > platform-specific provisioning code or agent to access the zone catalog. > > > Regards > > Christian > From igene at igene.tw Thu Jan 19 07:04:57 2023 From: igene at igene.tw (Gene Kuo) Date: Thu, 19 Jan 2023 07:04:57 +0000 Subject: [neutron][ovn] Metadata service failed after creating k8s cluster with kops Message-ID: <6Rqv160E4L168Cos6-_OT7h1JPmUHrcYy4LiHYXab4U8NIEud81Sv0blQZAZIlvqGy4pfTVhMMJfQ-ALogoSEC6MWkepLFrZ4UOzWY4bUlY=@igene.tw> Hi all, I'm recently trying to create Kubernetes cluster on top of OpenStack with kops. However, I found out that instances created kops are unable to get metadata from ovn-metadata-agent. I've tried creating a private network and creating instances manually and the instances are able to get metadata. After I use the same private network for kops, the whole metadata service seems to fail on that network, and new instances created manually also couldn't get metadata. From ovn-metadata-agent logs I can see that the haproxy for serving metadata is created but didn't see any logs related to instances connecting and getting metadata. Any ideas on what may happened and directions for debugging/solving the issues? Thanks! Background: OpenStack Version: Zed Deployment Method: Kolla-Ansible, Ubuntu source build Regards, Gene Kuo -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Thu Jan 19 09:11:45 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Thu, 19 Jan 2023 14:41:45 +0530 Subject: [cinder] Cinder Midcycle - 2 (R-9) Summary Message-ID: Hello Argonauts, The summary for midcycle-2 held on 18th January, 2023 between 1400-1600 UTC is available here[1]. Please go through the etherpad and recordings for the discussion. [1] https://wiki.openstack.org/wiki/CinderAntelopeMidCycleSummary#Session_Two:_R-9:_18_January_2023 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Thu Jan 19 09:22:52 2023 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 19 Jan 2023 10:22:52 +0100 Subject: Proposed 2023.2 "Bobcat" release schedule Message-ID: <7b09de86-2dbc-f74f-8bc7-49ed8bb441d3@openstack.org> Hi everyone, Here is the proposed schedule for the 2023.2 "Bobcat" release: https://review.opendev.org/c/openstack/releases/+/869976 For easier review, you can access the HTML rendering at: https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_280/869976/1/check/openstack-tox-docs/28016f8/docs/bobcat/schedule.html It's a 28-week-long cycle that places final release on Oct 4, 2023. Please comment on the review if you have objections to this or would like to make a case for targeting another week for the 2023.2 release. Thanks! -- Thierry Carrez (ttx) From ralonsoh at redhat.com Thu Jan 19 17:58:53 2023 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Thu, 19 Jan 2023 18:58:53 +0100 Subject: [neutron] Neutron drivers meeting Message-ID: Hello Neutrinos: Due to the lack of agenda [1], the drivers meeting is cancelled tomorrow. See you online. [1]https://wiki.openstack.org/wiki/Meetings/NeutronDrivers -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Fri Jan 20 05:05:03 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Fri, 20 Jan 2023 10:35:03 +0530 Subject: [cinder] festival of XS reviews 20th January 2023 Message-ID: Hello Argonauts, We will be having our monthly festival of XS reviews today i.e. 20th January (Friday) from 1400-1600 UTC. Following are some additional details: Date: 20th January, 2023 Time: 1400-1600 UTC Meeting link: https://bluejeans.com/556681290 etherpad: https://etherpad.opendev.org/p/cinder-festival-of-reviews See you there! Thanks Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From root.mch at gmail.com Fri Jan 20 07:16:14 2023 From: root.mch at gmail.com (=?UTF-8?Q?=C4=B0zzettin_Erdem?=) Date: Fri, 20 Jan 2023 10:16:14 +0300 Subject: [cinder] [glance] Image certificate validation when booting from volume Message-ID: Hello everyone, I have a problem about booting signed images from cinder volumes. I am currently working on OpenStack Ussuri and I have Ceph storage as cinder backend. I have completed the necessary steps to enable glance image verification according to this document [1]. Now, I can create VMs from signed images -if I do not choose the *create new volume* option-. If I try to boot from volume, it throws an error message: "Image certificate validation is not supported when booting from volume". According to [2], Cinder already has an option to use signed images and it is enabled by default, but it seems it does not work. As opposed to this, [3] explains that Cinder has no ability to verify trusted images: *"As of the 18.0.0 Rocky release, trusted image certification validation is not supported with volume-backed (boot from volume) instances. The block storage service support may be available in a future release"* Is there any way to use trusted/signed images when booting from volume? Thanks. 1 - https://docs.openstack.org/nova/ussuri/user/certificate-validation.html 2 - https://docs.openstack.org/cinder/latest/configuration/block-storage/samples/cinder.conf.html 3 - https://docs.openstack.org/nova/ussuri/user/certificate-validation.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From smooney at redhat.com  Fri Jan 20 10:04:05 2023
From: smooney at redhat.com (Sean Mooney)
Date: Fri, 20 Jan 2023 10:04:05 +0000
Subject: [cinder] [glance] Image certificate validation when booting from volume
In-Reply-To: 
References: 
Message-ID: <773e20b3318ba897223067b2c1efe16002164ead.camel@redhat.com>

On Fri, 2023-01-20 at 10:16 +0300, İzzettin Erdem wrote:
> Hello everyone,
>
> I have a problem about booting signed images from cinder volumes. I am
> currently working on OpenStack Ussuri and I have Ceph storage as cinder
> backend. I have completed the necessary steps to enable glance image
> verification according to this document [1]. Now, I can create VMs from
> signed images -if I do not choose the *create new volume* option-.
>
> If I try to boot from volume, it throws an error message: "Image
> certificate validation is not supported when booting from volume".
> According to [2], Cinder already has an option to use signed images and it
> is enabled by default, but it seems it does not work. As opposed to this,
> [3] explains that Cinder has no ability to verify trusted images: *"As of
> the 18.0.0 Rocky release, trusted image certification validation is not
> supported with volume-backed (boot from volume) instances. The block
> storage service support may be available in a future release"*
>
> Is there any way to use trusted/signed images when booting from volume?

Not as far as I am aware. Nova cannot verify the signature of the image
when [libvirt]/images_type=rbd because a rogue admin could have gone to the
Ceph cluster after it was uploaded and modified the base image in some way.
We don't actually download the image from Glance or the Ceph cluster in
that configuration, so there is no point at which we can loop over it,
calculate the hash and compare it to the one in Glance.

The boot-from-volume case is similar: Nova cannot verify the content of the
volume itself. At the time support was added to Nova in Rocky, Cinder did
not yet support doing the signature verification when it created the
volume.

The cinder docs say:

#
# Enable image signature verification.
#
# Cinder uses the image signature metadata from Glance and
# verifies the signature of a signed image while downloading
# that image. There are two options here.
#
# 1. ``enabled``: verify when image has signature metadata.
# 2. ``disabled``: verification is turned off.
#
# If the image signature cannot be verified or if the image
# signature metadata is incomplete when required, then Cinder
# will not create the volume and update it into an error
# state. This provides end users with stronger assurances
# of the integrity of the image data they are using to
# create volumes.
# (string value)
# Possible values:
# disabled -
# enabled -
#verify_glance_signatures = enabled

But if Cinder and Glance are using the same storage backend, e.g. both are
using rbd, or Glance is using Cinder as the backend, then I don't know if
Cinder actually supports verifying the signature when doing a thin clone.
For volume-backed Glance images, an admin could have mounted the Cinder
volume to a guest and modified it, and you would not have a way to tell.

I'm not saying you can protect against a malicious admin, you can't; that
is not really what this feature is for. But the point I'm making is that
when a volume is cloned we do not, to my knowledge, re-verify that the
signature still matches, so if there has been bitrot or any tampering
there is no point at which the signature can be re-verified.

>
> Thanks.
> > > 1 - https://docs.openstack.org/nova/ussuri/user/certificate-validation.html > 2 - > https://docs.openstack.org/cinder/latest/configuration/block-storage/samples/cinder.conf.html > 3 - https://docs.openstack.org/nova/ussuri/user/certificate-validation.html From wassilij.kaiser at dhbw-mannheim.de Fri Jan 20 12:09:29 2023 From: wassilij.kaiser at dhbw-mannheim.de (Kaiser Wassilij) Date: Fri, 20 Jan 2023 13:09:29 +0100 (CET) Subject: upgrade from Yoda to Zed according to the documentation Message-ID: <1492055586.6794.1674216569247@ox.dhbw-mannheim.de> Hallo, I'm trying the openstack system upgrade from Yoda to Zed according to the documentation https://docs.openstack.org/openstack-ansible/zed/admin/upgrades/major-upgrades.html But I get the following errors: TASK [Clone git repos (parallel)] *********************************************************************************************************************************************************************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: Value of unknown type: , [["Failed to fetch /etc/ansible/roles/etcd\nCmd('git') failed due to: exit code(128)\n cmdline: git fetch --force --shallow-since=2022-05-27\n stderr: 'fatal: error in object: unshallow 29996b0d15ebfaacb7626ce889e26b209ed53434\nfatal: the remote end hung up unexpectedly'"], ["Failed to fetch /etc/ansible/roles/etcd\nCmd('git') failed due to: exit code(128)\n cmdline: git fetch --force --shallow-since=2022-05-27\n stderr: 'fatal: error in object: unshallow 29996b0d15ebfaacb7626ce889e26b209ed53434\nfatal: the remote end hung up unexpectedly'"], ["Role {'name': 'etcd', 'scm': 'git', 'src': 'https://github.com/noonedeadpunk/ansible-etcd' https://github.com/noonedeadpunk/ansible-etcd' , 'version': 'master', 'trackbranch': 'master', 'shallow_since': '2022-05-27', 'path': '/etc/ansible/roles', 'refspec': None, 'depth': 20, 'dest': '/etc/ansible/roles/etcd'} failed after 2 retries\n"]] fatal: [localhost]: FAILED! 
=> {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"\", line 107, in \n File \"\", line 99, in _ansiballz_main\n File \"\", line 47, in invoke_module\n File \"/usr/lib/python3.8/runpy.py\", line 207, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.8/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/modules/openstack/osa/git_requirements.py\", line 333, in \n File \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/modules/openstack/osa/git_requirements.py\", line 329, in main\n File \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/module_utils/basic.py\", line 1533, in fail_json\n File \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/module_utils/basic.py\", line 1506, in _return_formatted\n File \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/module_utils/common/parameters.py\", line 887, in remove_values\n File \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/module_utils/common/parameters.py\", line 461, in _remove_values_conditions\nTypeError: Value of unknown type: , [[\"Failed to fetch /etc/ansible/roles/etcd\\nCmd('git') failed due to: exit code(128)\\n cmdline: git fetch --force --shallow-since=2022-05-27\\n stderr: 'fatal: error in object: unshallow 29996b0d15ebfaacb7626ce889e26b209ed53434\\nfatal: the remote end hung up unexpectedly'\"], [\"Failed to fetch /etc/ansible/roles/etcd\\nCmd('git') failed due to: exit code(128)\\n cmdline: git fetch --force --shallow-since=2022-05-27\\n stderr: 'fatal: error in object: unshallow 29996b0d15ebfaacb7626ce889e26b209ed53434\\nfatal: the remote end hung up unexpectedly'\"], [\"Role {'name': 'etcd', 'scm': 'git', 'src': 'https://github.com/noonedeadpunk/ansible-etcd' https://github.com/noonedeadpunk/ansible-etcd' , 'version': 'master', 'trackbranch': 'master', 'shallow_since': '2022-05-27', 'path': '/etc/ansible/roles', 'refspec': None, 'depth': 20, 'dest': '/etc/ansible/roles/etcd'} failed after 2 retries\\n\"]]\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} TASK [Clone git repos (with git)]******** failed: [localhost] (item={'name': 'etcd', 'scm': 'git', 'src': 'https://github.com/noonedeadpunk/ansible-etcd' https://github.com/noonedeadpunk/ansible-etcd' , 'version': 'master', 'trackbranch': 'master', 'shallow_since': '2022-05-27'}) => {"ansible_loop_var": "item", "attempts": 2, "changed": false, "cmd": ["/usr/bin/git", "fetch", "--depth", "20", "--force", "origin", "+refs/heads/master:refs/remotes/origin/master"], "item": {"name": "etcd", "scm": "git", "shallow_since": "2022-05-27", "src": "https://github.com/noonedeadpunk/ansible-etcd", "trackbranch": "master", "version": "master"}, "msg": "Failed to download remote objects and refs: fatal: error in object: unshallow 29996b0d15ebfaacb7626ce889e26b209ed53434\nfatal: the remote end hung up unexpectedly\n"} Does anyone of you have the idea? 
What should I do? Because of this, no further playbooks can be run.

Kind regards

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From oliver.weinmann at me.com  Fri Jan 20 12:24:49 2023
From: oliver.weinmann at me.com (Oliver Weinmann)
Date: Fri, 20 Jan 2023 12:24:49 -0000
Subject: [Kolla-ansible] upgrade from yoga to zed on Rocky Linux
In-Reply-To: 
References: 
Message-ID: <35bd03cd-2e92-4e3d-99a0-e7bf640475ac@me.com>

Hi,

I was just about to ask again whether someone has info on this topic and
found the answer on the Kayobe page:
https://docs.openstack.org/kayobe/latest/upgrading.html

So there will be Rocky 9 support for Yoga in a future release. For now I
will just stick with Yoga then. :)

On Jan 8, 2023, at 9:56 PM, Oliver Weinmann wrote:

Hi,

That is a good question. I'm also running Yoga on Rocky 8 and, due to some
problems with Yoga, I would like to upgrade to Zed soon too. I have created
a very simple staging deployment on a single ESXi host with 3 controllers
and 2 compute nodes, with the same config that I use in the production
cluster. This lets me try the upgrade path. I assume that while there is
the possibility to upgrade from Rocky 8 to 9, I wouldn't do that. Instead I
would do a fresh install of Rocky 9. I can only think of the docs not being
100% accurate and that you can run Yoga on Rocky 9 too. I will give it a
try.

Cheers,
Oliver

Sent from my iPhone

On 08.01.2023 at 10:25, wodel youchi wrote:

Hi,

Reading the kolla documentation, I saw that Yoga is supported on Rocky 8
only and Zed is supported on Rocky 9 only. How do we do the upgrade from
Yoga to Zed, since we also have to do an OS upgrade?

Regards.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jmarcelo.alencar at gmail.com  Fri Jan 20 12:37:34 2023
From: jmarcelo.alencar at gmail.com (jmarcelo.alencar at gmail.com)
Date: Fri, 20 Jan 2023 09:37:34 -0300
Subject: [openstack-ansible] Installing OpenStack with Ansible fails during Keystone playbook on TASK openstack.osa.db_setup
Message-ID: 

Hello Community,

I am trying to create a two-machine deployment following the
OpenStack-Ansible Deployment Guide
(https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/).
The two machines are named targethost01 and targethost02, and I am running
Ansible from deploymenthost. Every machine has 4-Core CPUs, 8 GB of RAM,
and 240 GB SSD. I am using Ubuntu 22.04.1 LTS.
The machine targethost01 has the following network configuration: network: version: 2 ethernets: enp5s0: dhcp4: true enp6s0: {} enp7s0: {} enp8s0: {} enp9s0: {} vlans: vlan.10: id: 10 link: enp6s0 addresses: [ ] vlan.20: id: 20 link: enp7s0 addresses: [ ] vlan.30: id: 30 link: enp8s0 addresses: [ ] vlan.40: id: 40 link: enp9s0 addresses: [ ] bridges: br-mgmt: addresses: [ 172.29.236.101/22 ] mtu: 1500 interfaces: - vlan.10 br-storage: addresses: [ 172.29.244.101/22 ] mtu: 1500 interfaces: - vlan.20 br-vlan: addresses: [] mtu: 1500 interfaces: - vlan.30 br-vxlan: addresses: [ 172.29.240.101/22 ] mtu: 1500 interfaces: - vlan.40 And targethost02 has the following network configuration: network: version: 2 ethernets: enp5s0: dhcp4: true enp6s0: {} enp7s0: {} enp8s0: {} enp9s0: {} vlans: vlan.10: id: 10 link: enp6s0 addresses: [ ] vlan.20: id: 20 link: enp7s0 addresses: [ ] vlan.30: id: 30 link: enp8s0 addresses: [ ] vlan.40: id: 40 link: enp9s0 addresses: [ ] bridges: br-mgmt: addresses: [ 172.29.236.102/22 ] mtu: 1500 interfaces: - vlan.10 br-storage: addresses: [ 172.29.244.102/22 ] mtu: 1500 interfaces: - vlan.20 br-vlan: addresses: [] mtu: 1500 interfaces: - vlan.30 br-vxlan: addresses: [ 172.29.240.102/22 ] mtu: 1500 interfaces: - vlan.40 On the deploymenthost, /etc/openstack_deploy/openstack_user_config.yml has the following: --- cidr_networks: container: 172.29.236.0/22 tunnel: 172.29.240.0/22 storage: 172.29.244.0/22 used_ips: - 172.29.236.1 - "172.29.236.100,172.29.236.200" - "172.29.240.100,172.29.240.200" - "172.29.244.100,172.29.244.200" global_overrides: internal_lb_vip_address: 172.29.236.101 external_lb_vip_address: "{{ bootstrap_host_public_address | default(ansible_facts['default_ipv4']['address']) }}" management_bridge: "br-mgmt" provider_networks: - network: group_binds: - all_containers - hosts type: "raw" container_bridge: "br-mgmt" container_interface: "eth1" container_type: "veth" ip_from_q: "container" is_container_address: true - network: group_binds: - glance_api - cinder_api - cinder_volume - nova_compute type: "raw" container_bridge: "br-storage" container_type: "veth" container_interface: "eth2" container_mtu: "9000" ip_from_q: "storage" - network: group_binds: - neutron_linuxbridge_agent container_bridge: "br-vxlan" container_type: "veth" container_interface: "eth10" container_mtu: "9000" ip_from_q: "tunnel" type: "vxlan" range: "1:1000" net_name: "vxlan" - network: group_binds: - neutron_linuxbridge_agent container_bridge: "br-vlan" container_type: "veth" container_interface: "eth11" type: "vlan" range: "101:200,301:400" net_name: "vlan" - network: group_binds: - neutron_linuxbridge_agent container_bridge: "br-vlan" container_type: "veth" container_interface: "eth12" host_bind_override: "eth12" type: "flat" net_name: "flat" shared-infra_hosts: targethost01: ip: 172.29.236.101 repo-infra_hosts: targethost01: ip: 172.29.236.101 coordination_hosts: targethost01: ip: 172.29.236.101 os-infra_hosts: targethost01: ip: 172.29.236.101 identity_hosts: targethost01: ip: 172.29.236.101 network_hosts: targethost01: ip: 172.29.236.101 compute_hosts: targethost01: ip: 172.29.236.101 targethost02: ip: 172.29.236.102 storage-infra_hosts: targethost01: ip: 172.29.236.101 storage_hosts: targethost01: ip: 172.29.236.101 Also on the deploymenthost, /etc/openstack_deploy/conf.d/haproxy.yml has the following: haproxy_hosts: targethost01: ip: 172.29.236.101 At the Run Playbooks step of the guide, the following two Ansible commands return with unreachable=0 failed=0: # openstack-ansible 
setup-hosts.yml # openstack-ansible setup-infrastructure.yml And verifying the database also returns no error: root at deploymenthost:/opt/openstack-ansible/playbooks# ansible galera_container -m shell \ -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'" Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e @/etc/openstack_deploy/user_variables.yml " [WARNING]: Unable to parse /etc/openstack_deploy/inventory.ini as an inventory source targethost01_galera_container-5aa8474a | CHANGED | rc=0 >> Variable_name Value wsrep_cluster_weight 1 wsrep_cluster_capabilities wsrep_cluster_conf_id 1 wsrep_cluster_size 1 wsrep_cluster_state_uuid e7a0c332-97fe-11ed-b0d4-26b30049826d wsrep_cluster_status Primary But when I execute openstack-ansible setup-openstack.yml, I get this: TASK [os_keystone : Fact for apache module mod_auth_openidc to be installed] *** ok: [targethost01_keystone_container-76e9b31b] TASK [include_role : openstack.osa.db_setup] *********************************** TASK [openstack.osa.db_setup : Create database for service] ******************** failed: [targethost01_keystone_container-76e9b31b -> targethost01_utility_container-dc05dc90(172.29.238.59)] (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} fatal: [targethost01_keystone_container-76e9b31b -> {{ _oslodb_setup_host }}]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} PLAY RECAP ********************************************************************* targethost01_keystone_container-76e9b31b : ok=33 changed=0 unreachable=0 failed=1 skipped=8 rescued=0 ignored=0 targethost01_utility_container-dc05dc90 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 EXIT NOTICE [Playbook execution failure] ************************************** =============================================================================== First, how can I disable the "censored" warning? I wonder if the uncensored running could give me more clues. Second, it appears to be a problem creating the database (keystone db sync?) How can I test the database execution inside the LXC containers? I tried to log into one of the containers and ping the hosts IP and it works, so they have connectivity. I set up the passwords with: # cd /opt/openstack-ansible # ./scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml Any help? Best Regards. -- __________________________________ Jo?o Marcelo Uch?a de Alencar jmarcelo.alencar(at)gmail.com __________________________________ From james.denton at rackspace.com Fri Jan 20 14:18:49 2023 From: james.denton at rackspace.com (James Denton) Date: Fri, 20 Jan 2023 14:18:49 +0000 Subject: [openstack-ansible] Installing OpenStack with Ansible fails during Keystone playbook on TASK openstack.osa.db_setup In-Reply-To: References: Message-ID: Hi ? The ansible command to test the DB hits the Galera container directly, while the Ansible playbooks are likely using the VIP managed by HAproxy. I suspect that HAproxy has not started properly or is otherwise not serving traffic directed toward the internal_lb_vip_address. My suggestion at the moment is to check out the logs on the haproxy node to see if it?s working properly, and try testing connectivity from the deploy node via 172.29.236.101:3306. The haproxy logs will likely provide some insight here. 
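If it helps, a rough way to check both layers from the haproxy node is
something like the sketch below. The stats socket path and credentials are
assumptions on my part: confirm the socket path in /etc/haproxy/haproxy.cfg,
take the DB root password from your /etc/openstack_deploy/user_secrets.yml,
and run the mysql check from wherever the mysql client is installed (for
example the utility container).

  # Ask haproxy itself which backends/servers it currently considers UP or DOWN
  echo "show stat" | socat stdio UNIX-CONNECT:/var/run/haproxy.stat

  # Attempt a real MySQL login through the internal VIP, not just a bare TCP connect
  mysql -h 172.29.236.101 -P 3306 -u root -p -e 'SHOW STATUS LIKE "wsrep_ready";'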
-- James Denton Principal Architect Rackspace Private Cloud - OpenStack james.denton at rackspace.com From: jmarcelo.alencar at gmail.com Date: Friday, January 20, 2023 at 6:45 AM To: openstack-discuss at lists.openstack.org Subject: [openstack-ansible] Installing OpenStack with Ansible fails during Keystone playbook on TASK openstack.osa.db_setup CAUTION: This message originated externally, please use caution when clicking on links or opening attachments! Hello Community, I am trying to create a two machine deployment following Openstack Ansible Deployment Guide (https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.openstack.org%2Fproject-deploy-guide%2Fopenstack-ansible%2Flatest%2F&data=05%7C01%7Cjames.denton%40rackspace.com%7C2030b246126f4b053abd08dafae42aba%7C570057f473ef41c8bcbb08db2fc15c2b%7C0%7C0%7C638098155124685217%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=jBqnF439N%2BD4e05ZoWzz11rMrtu1gxA7fxYStBnRXnw%3D&reserved=0). The two machines are named targethost01 and targethost02, and I am running Ansible from deploymenthost. Every machine has 4-Core CPUs, 8 GB of RAM, and 240 GB SSD. I am using Ubuntu 22.04.1 LTS. The machine targethost01 has the following network configuration: network: version: 2 ethernets: enp5s0: dhcp4: true enp6s0: {} enp7s0: {} enp8s0: {} enp9s0: {} vlans: vlan.10: id: 10 link: enp6s0 addresses: [ ] vlan.20: id: 20 link: enp7s0 addresses: [ ] vlan.30: id: 30 link: enp8s0 addresses: [ ] vlan.40: id: 40 link: enp9s0 addresses: [ ] bridges: br-mgmt: addresses: [ 172.29.236.101/22 ] mtu: 1500 interfaces: - vlan.10 br-storage: addresses: [ 172.29.244.101/22 ] mtu: 1500 interfaces: - vlan.20 br-vlan: addresses: [] mtu: 1500 interfaces: - vlan.30 br-vxlan: addresses: [ 172.29.240.101/22 ] mtu: 1500 interfaces: - vlan.40 And targethost02 has the following network configuration: network: version: 2 ethernets: enp5s0: dhcp4: true enp6s0: {} enp7s0: {} enp8s0: {} enp9s0: {} vlans: vlan.10: id: 10 link: enp6s0 addresses: [ ] vlan.20: id: 20 link: enp7s0 addresses: [ ] vlan.30: id: 30 link: enp8s0 addresses: [ ] vlan.40: id: 40 link: enp9s0 addresses: [ ] bridges: br-mgmt: addresses: [ 172.29.236.102/22 ] mtu: 1500 interfaces: - vlan.10 br-storage: addresses: [ 172.29.244.102/22 ] mtu: 1500 interfaces: - vlan.20 br-vlan: addresses: [] mtu: 1500 interfaces: - vlan.30 br-vxlan: addresses: [ 172.29.240.102/22 ] mtu: 1500 interfaces: - vlan.40 On the deploymenthost, /etc/openstack_deploy/openstack_user_config.yml has the following: --- cidr_networks: container: 172.29.236.0/22 tunnel: 172.29.240.0/22 storage: 172.29.244.0/22 used_ips: - 172.29.236.1 - "172.29.236.100,172.29.236.200" - "172.29.240.100,172.29.240.200" - "172.29.244.100,172.29.244.200" global_overrides: internal_lb_vip_address: 172.29.236.101 external_lb_vip_address: "{{ bootstrap_host_public_address | default(ansible_facts['default_ipv4']['address']) }}" management_bridge: "br-mgmt" provider_networks: - network: group_binds: - all_containers - hosts type: "raw" container_bridge: "br-mgmt" container_interface: "eth1" container_type: "veth" ip_from_q: "container" is_container_address: true - network: group_binds: - glance_api - cinder_api - cinder_volume - nova_compute type: "raw" container_bridge: "br-storage" container_type: "veth" container_interface: "eth2" container_mtu: "9000" ip_from_q: "storage" - network: group_binds: - neutron_linuxbridge_agent container_bridge: "br-vxlan" container_type: "veth" container_interface: 
"eth10" container_mtu: "9000" ip_from_q: "tunnel" type: "vxlan" range: "1:1000" net_name: "vxlan" - network: group_binds: - neutron_linuxbridge_agent container_bridge: "br-vlan" container_type: "veth" container_interface: "eth11" type: "vlan" range: "101:200,301:400" net_name: "vlan" - network: group_binds: - neutron_linuxbridge_agent container_bridge: "br-vlan" container_type: "veth" container_interface: "eth12" host_bind_override: "eth12" type: "flat" net_name: "flat" shared-infra_hosts: targethost01: ip: 172.29.236.101 repo-infra_hosts: targethost01: ip: 172.29.236.101 coordination_hosts: targethost01: ip: 172.29.236.101 os-infra_hosts: targethost01: ip: 172.29.236.101 identity_hosts: targethost01: ip: 172.29.236.101 network_hosts: targethost01: ip: 172.29.236.101 compute_hosts: targethost01: ip: 172.29.236.101 targethost02: ip: 172.29.236.102 storage-infra_hosts: targethost01: ip: 172.29.236.101 storage_hosts: targethost01: ip: 172.29.236.101 Also on the deploymenthost, /etc/openstack_deploy/conf.d/haproxy.yml has the following: haproxy_hosts: targethost01: ip: 172.29.236.101 At the Run Playbooks step of the guide, the following two Ansible commands return with unreachable=0 failed=0: # openstack-ansible setup-hosts.yml # openstack-ansible setup-infrastructure.yml And verifying the database also returns no error: root at deploymenthost:/opt/openstack-ansible/playbooks# ansible galera_container -m shell \ -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'" Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e @/etc/openstack_deploy/user_variables.yml " [WARNING]: Unable to parse /etc/openstack_deploy/inventory.ini as an inventory source targethost01_galera_container-5aa8474a | CHANGED | rc=0 >> Variable_name Value wsrep_cluster_weight 1 wsrep_cluster_capabilities wsrep_cluster_conf_id 1 wsrep_cluster_size 1 wsrep_cluster_state_uuid e7a0c332-97fe-11ed-b0d4-26b30049826d wsrep_cluster_status Primary But when I execute openstack-ansible setup-openstack.yml, I get this: TASK [os_keystone : Fact for apache module mod_auth_openidc to be installed] *** ok: [targethost01_keystone_container-76e9b31b] TASK [include_role : openstack.osa.db_setup] *********************************** TASK [openstack.osa.db_setup : Create database for service] ******************** failed: [targethost01_keystone_container-76e9b31b -> targethost01_utility_container-dc05dc90(172.29.238.59)] (item=None) => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} fatal: [targethost01_keystone_container-76e9b31b -> {{ _oslodb_setup_host }}]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": false} PLAY RECAP ********************************************************************* targethost01_keystone_container-76e9b31b : ok=33 changed=0 unreachable=0 failed=1 skipped=8 rescued=0 ignored=0 targethost01_utility_container-dc05dc90 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 EXIT NOTICE [Playbook execution failure] ************************************** =============================================================================== First, how can I disable the "censored" warning? I wonder if the uncensored running could give me more clues. Second, it appears to be a problem creating the database (keystone db sync?) How can I test the database execution inside the LXC containers? 
I tried to log into one of the containers and ping the hosts IP and it works, so they have connectivity. I set up the passwords with: # cd /opt/openstack-ansible # ./scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml Any help? Best Regards. -- __________________________________ Jo?o Marcelo Uch?a de Alencar jmarcelo.alencar(at)gmail.com __________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jmarcelo.alencar at gmail.com Fri Jan 20 15:19:48 2023 From: jmarcelo.alencar at gmail.com (jmarcelo.alencar at gmail.com) Date: Fri, 20 Jan 2023 12:19:48 -0300 Subject: [openstack-ansible] Installing OpenStack with Ansible fails during Keystone playbook on TASK openstack.osa.db_setup In-Reply-To: References: Message-ID: Hi James Denton, Thanks for your quick response!!! So as far as I understand, running "openstack-ansible setup-openstack.yml" will start a keystone installation TASK that connects to HAProxy, which in turn sends the connection to the galera container. The machine targethost01 runs both the containers and HAProxy. From deploymenthost, there is some connectivity to HAProxy: root at deploymenthost:/opt/openstack-ansible/playbooks# telnet 172.29.236.101 3306 Trying 172.29.236.101... Connected to 172.29.236.101. Escape character is '^]'. Connection closed by foreign host. It appears that HAProxy is listening, but cannot provide a proper reply, so the connection closes. Following your suggestion, on targethost01, HAProxy is running, but complains about no galera backend: root at targethost01:~# systemctl status haproxy.service ? haproxy.service - HAProxy Load Balancer Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled) Active: active (running) since Fri 2023-01-20 11:35:40 -03; 33min ago Docs: man:haproxy(1) file:/usr/share/doc/haproxy/configuration.txt.gz Process: 276870 ExecStartPre=/usr/sbin/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS) Main PID: 276873 (haproxy) Tasks: 5 (limit: 8192) Memory: 13.1M CPU: 2.165s CGroup: /system.slice/haproxy.service ??276873 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock ??276875 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock Jan 20 11:35:48 targethost01 haproxy[276875]: Server nova_console-back/targethost01_nova_api_container-56e92564 is DOWN, reason: Layer4 connection problem, info: "Conn> Jan 20 11:35:48 targethost01 haproxy[276875]: backend nova_console-back has no server available! Jan 20 11:35:49 targethost01 haproxy[276875]: [WARNING] (276875) : Server placement-back/targethost01_placement_container-90ccebb6 is DOWN, reason: Layer4 connection > Jan 20 11:35:49 targethost01 haproxy[276875]: Server placement-back/targethost01_placement_container-90ccebb6 is DOWN, reason: Layer4 connection problem, info: "Connec> Jan 20 11:35:49 targethost01 haproxy[276875]: [ALERT] (276875) : backend 'placement-back' has no server available! Jan 20 11:35:49 targethost01 haproxy[276875]: backend placement-back has no server available! Jan 20 11:35:53 targethost01 haproxy[276875]: [WARNING] (276875) : Server galera-back/targethost01_galera_container-5aa8474a is DOWN, reason: Layer4 timeout, check du> Jan 20 11:35:53 targethost01 haproxy[276875]: [ALERT] (276875) : backend 'galera-back' has no server available! 
Jan 20 11:35:53 targethost01 haproxy[276875]: Server galera-back/targethost01_galera_container-5aa8474a is DOWN, reason: Layer4 timeout, check duration: 12001ms. 0 act> Jan 20 11:35:53 targethost01 haproxy[276875]: backend galera-back has no server available! It also warns about the other services, but since they are not installed yet, I believe that it is the expected behavior. But galera should have a functional backend, right? The container is running: root at targethost01:~# lxc-ls targethost01_cinder_api_container-b7ec9bdd targethost01_galera_container-5aa8474a targethost01_glance_container-b3ce5a33 targethost01_heat_api_container-57ec2a00 targethost01_horizon_container-c99d168e targethost01_keystone_container-76e9b31b targethost01_memcached_container-8edca03c targethost01_neutron_server_container-fba7cb77 targethost01_nova_api_container-56e92564 targethost01_placement_container-90ccebb6 targethost01_rabbit_mq_container-2e5c5470 targethost01_repo_container-00531c23 targethost01_utility_container-dc05dc90 targethost01_zookeeper_container-294429e8 ubuntu-22-amd64 root at targethost01:~# lxc-info targethost01_galera_container-5aa8474a Name: targethost01_galera_container-5aa8474a State: RUNNING PID: 102446 IP: 10.0.3.53 IP: 172.29.238.177 Link: 5aa8474a_eth0 TX bytes: 811.30 KiB RX bytes: 57.49 MiB Total bytes: 58.28 MiB Link: 5aa8474a_eth1 TX bytes: 84.35 KiB RX bytes: 1.06 MiB Total bytes: 1.14 MiB I can establish a connection and the server waits for a password: root at targethost01:~# telnet 172.29.238.177 3306 Trying 172.29.238.177... Connected to 172.29.238.177. Escape character is '^]'. u 5.5.5-10.6.10-MariaDB-1:10.6.10+maria~ubu2204-log:8PmS7Y:W'Yn=#6%Vbjmcmysql_native_password Any hints? Best regards. On Fri, Jan 20, 2023 at 11:18 AM James Denton wrote: > > Hi ? > > > > The ansible command to test the DB hits the Galera container directly, while the Ansible playbooks are likely using the VIP managed by HAproxy. I suspect that HAproxy has not started properly or is otherwise not serving traffic directed toward the internal_lb_vip_address. > > > > My suggestion at the moment is to check out the logs on the haproxy node to see if it?s working properly, and try testing connectivity from the deploy node via 172.29.236.101:3306. The haproxy logs will likely provide some insight here. > > > > -- > > James Denton > > Principal Architect > > Rackspace Private Cloud - OpenStack > > james.denton at rackspace.com > > > > From: jmarcelo.alencar at gmail.com > Date: Friday, January 20, 2023 at 6:45 AM > To: openstack-discuss at lists.openstack.org > Subject: [openstack-ansible] Installing OpenStack with Ansible fails during Keystone playbook on TASK openstack.osa.db_setup > > CAUTION: This message originated externally, please use caution when clicking on links or opening attachments! > > > Hello Community, > > I am trying to create a two machine deployment following Openstack > Ansible Deployment Guide > (https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.openstack.org%2Fproject-deploy-guide%2Fopenstack-ansible%2Flatest%2F&data=05%7C01%7Cjames.denton%40rackspace.com%7C2030b246126f4b053abd08dafae42aba%7C570057f473ef41c8bcbb08db2fc15c2b%7C0%7C0%7C638098155124685217%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=jBqnF439N%2BD4e05ZoWzz11rMrtu1gxA7fxYStBnRXnw%3D&reserved=0). > The two machines are named targethost01 and targethost02, and I am > running Ansible from deploymenthost. 
Every machine has 4-Core CPUs, 8 > GB of RAM, and 240 GB SSD. I am using Ubuntu 22.04.1 LTS. > > The machine targethost01 has the following network configuration: > > network: > version: 2 > ethernets: > enp5s0: > dhcp4: true > enp6s0: {} > enp7s0: {} > enp8s0: {} > enp9s0: {} > vlans: > vlan.10: > id: 10 > link: enp6s0 > addresses: [ ] > vlan.20: > id: 20 > link: enp7s0 > addresses: [ ] > vlan.30: > id: 30 > link: enp8s0 > addresses: [ ] > vlan.40: > id: 40 > link: enp9s0 > addresses: [ ] > bridges: > br-mgmt: > addresses: [ 172.29.236.101/22 ] > mtu: 1500 > interfaces: > - vlan.10 > br-storage: > addresses: [ 172.29.244.101/22 ] > mtu: 1500 > interfaces: > - vlan.20 > br-vlan: > addresses: [] > mtu: 1500 > interfaces: > - vlan.30 > br-vxlan: > addresses: [ 172.29.240.101/22 ] > mtu: 1500 > interfaces: > - vlan.40 > > > And targethost02 has the following network configuration: > > > network: > version: 2 > ethernets: > enp5s0: > dhcp4: true > enp6s0: {} > enp7s0: {} > enp8s0: {} > enp9s0: {} > vlans: > vlan.10: > id: 10 > link: enp6s0 > addresses: [ ] > vlan.20: > id: 20 > link: enp7s0 > addresses: [ ] > vlan.30: > id: 30 > link: enp8s0 > addresses: [ ] > vlan.40: > id: 40 > link: enp9s0 > addresses: [ ] > bridges: > br-mgmt: > addresses: [ 172.29.236.102/22 ] > mtu: 1500 > interfaces: > - vlan.10 > br-storage: > addresses: [ 172.29.244.102/22 ] > mtu: 1500 > interfaces: > - vlan.20 > br-vlan: > addresses: [] > mtu: 1500 > interfaces: > - vlan.30 > br-vxlan: > addresses: [ 172.29.240.102/22 ] > mtu: 1500 > interfaces: > - vlan.40 > > > On the deploymenthost, /etc/openstack_deploy/openstack_user_config.yml > has the following: > > > --- > cidr_networks: > container: 172.29.236.0/22 > tunnel: 172.29.240.0/22 > storage: 172.29.244.0/22 > used_ips: > - 172.29.236.1 > - "172.29.236.100,172.29.236.200" > - "172.29.240.100,172.29.240.200" > - "172.29.244.100,172.29.244.200" > global_overrides: > internal_lb_vip_address: 172.29.236.101 > external_lb_vip_address: "{{ bootstrap_host_public_address | > default(ansible_facts['default_ipv4']['address']) }}" > management_bridge: "br-mgmt" > provider_networks: > - network: > group_binds: > - all_containers > - hosts > type: "raw" > container_bridge: "br-mgmt" > container_interface: "eth1" > container_type: "veth" > ip_from_q: "container" > is_container_address: true > - network: > group_binds: > - glance_api > - cinder_api > - cinder_volume > - nova_compute > type: "raw" > container_bridge: "br-storage" > container_type: "veth" > container_interface: "eth2" > container_mtu: "9000" > ip_from_q: "storage" > - network: > group_binds: > - neutron_linuxbridge_agent > container_bridge: "br-vxlan" > container_type: "veth" > container_interface: "eth10" > container_mtu: "9000" > ip_from_q: "tunnel" > type: "vxlan" > range: "1:1000" > net_name: "vxlan" > - network: > group_binds: > - neutron_linuxbridge_agent > container_bridge: "br-vlan" > container_type: "veth" > container_interface: "eth11" > type: "vlan" > range: "101:200,301:400" > net_name: "vlan" > - network: > group_binds: > - neutron_linuxbridge_agent > container_bridge: "br-vlan" > container_type: "veth" > container_interface: "eth12" > host_bind_override: "eth12" > type: "flat" > net_name: "flat" > shared-infra_hosts: > targethost01: > ip: 172.29.236.101 > repo-infra_hosts: > targethost01: > ip: 172.29.236.101 > coordination_hosts: > targethost01: > ip: 172.29.236.101 > os-infra_hosts: > targethost01: > ip: 172.29.236.101 > identity_hosts: > targethost01: > ip: 172.29.236.101 > network_hosts: > 
targethost01: > ip: 172.29.236.101 > compute_hosts: > targethost01: > ip: 172.29.236.101 > targethost02: > ip: 172.29.236.102 > storage-infra_hosts: > targethost01: > ip: 172.29.236.101 > storage_hosts: > targethost01: > ip: 172.29.236.101 > > > Also on the deploymenthost, /etc/openstack_deploy/conf.d/haproxy.yml > has the following: > > > haproxy_hosts: > targethost01: > ip: 172.29.236.101 > > > At the Run Playbooks step of the guide, the following two Ansible > commands return with unreachable=0 failed=0: > > # openstack-ansible setup-hosts.yml > # openstack-ansible setup-infrastructure.yml > > And verifying the database also returns no error: > > > root at deploymenthost:/opt/openstack-ansible/playbooks# ansible > galera_container -m shell \ > -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'" > Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e > @/etc/openstack_deploy/user_variables.yml " > [WARNING]: Unable to parse /etc/openstack_deploy/inventory.ini as an > inventory source > targethost01_galera_container-5aa8474a | CHANGED | rc=0 >> > Variable_name Value > wsrep_cluster_weight 1 > wsrep_cluster_capabilities > wsrep_cluster_conf_id 1 > wsrep_cluster_size 1 > wsrep_cluster_state_uuid e7a0c332-97fe-11ed-b0d4-26b30049826d > wsrep_cluster_status Primary > > > But when I execute openstack-ansible setup-openstack.yml, I get this: > > > TASK [os_keystone : Fact for apache module mod_auth_openidc to be installed] *** > ok: [targethost01_keystone_container-76e9b31b] > TASK [include_role : openstack.osa.db_setup] *********************************** > TASK [openstack.osa.db_setup : Create database for service] ******************** > failed: [targethost01_keystone_container-76e9b31b -> > targethost01_utility_container-dc05dc90(172.29.238.59)] (item=None) => > {"censored": "the output has been hidden due to the fact that 'no_log: > true' was specified for this result", "changed": false} > fatal: [targethost01_keystone_container-76e9b31b -> {{ > _oslodb_setup_host }}]: FAILED! => {"censored": "the output has been > hidden due to the fact that 'no_log: true' was specified for this > result", "changed": false} > PLAY RECAP ********************************************************************* > targethost01_keystone_container-76e9b31b : ok=33 changed=0 > unreachable=0 failed=1 skipped=8 rescued=0 ignored=0 > targethost01_utility_container-dc05dc90 : ok=3 changed=0 > unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 > EXIT NOTICE [Playbook execution failure] ************************************** > =============================================================================== > > > First, how can I disable the "censored" warning? I wonder if the > uncensored running could give me more clues. Second, it appears to be > a problem creating the database (keystone db sync?) How can I test the > database execution inside the LXC containers? I tried to log into one > of the containers and ping the hosts IP and it works, so they have > connectivity. I set up the passwords with: > > # cd /opt/openstack-ansible > # ./scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml > > > Any help? > > Best Regards. 
> > > > > -- > __________________________________ > > Jo?o Marcelo Uch?a de Alencar > jmarcelo.alencar(at)gmail.com > __________________________________ -- __________________________________ Jo?o Marcelo Uch?a de Alencar jmarcelo.alencar(at)gmail.com __________________________________ From james.denton at rackspace.com Fri Jan 20 17:21:17 2023 From: james.denton at rackspace.com (James Denton) Date: Fri, 20 Jan 2023 17:21:17 +0000 Subject: upgrade from Yoda to Zed according to the documentation In-Reply-To: <1492055586.6794.1674216569247@ox.dhbw-mannheim.de> References: <1492055586.6794.1674216569247@ox.dhbw-mannheim.de> Message-ID: Hi, This is a known issue that I ran into myself yesterday. The good news is that there?s a patch available for testing that should merge soon, which will hopefully resolve the issue for you. https://review.opendev.org/c/openstack/openstack-ansible/+/871296 -- James Denton Principal Architect Rackspace Private Cloud - OpenStack james.denton at rackspace.com From: Kaiser Wassilij Date: Friday, January 20, 2023 at 6:18 AM To: openstack-discuss at lists.openstack.org Subject: upgrade from Yoda to Zed according to the documentation CAUTION: This message originated externally, please use caution when clicking on links or opening attachments! Hallo, I'm trying the openstack system upgrade from Yoda to Zed according to the documentation https://docs.openstack.org/openstack-ansible/zed/admin/upgrades/major-upgrades.html But I get the following errors: TASK [Clone git repos (parallel)] *********************************************************************************************************************************************************************** An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: Value of unknown type: , [["Failed to fetch /etc/ansible/roles/etcd\nCmd('git') failed due to: exit code(128)\n cmdline: git fetch --force --shallow-since=2022-05-27\n stderr: 'fatal: error in object: unshallow 29996b0d15ebfaacb7626ce889e26b209ed53434\nfatal: the remote end hung up unexpectedly'"], ["Failed to fetch /etc/ansible/roles/etcd\nCmd('git') failed due to: exit code(128)\n cmdline: git fetch --force --shallow-since=2022-05-27\n stderr: 'fatal: error in object: unshallow 29996b0d15ebfaacb7626ce889e26b209ed53434\nfatal: the remote end hung up unexpectedly'"], ["Role {'name': 'etcd', 'scm': 'git', 'src': 'https://github.com/noonedeadpunk/ansible-etcd', 'version': 'master', 'trackbranch': 'master', 'shallow_since': '2022-05-27', 'path': '/etc/ansible/roles', 'refspec': None, 'depth': 20, 'dest': '/etc/ansible/roles/etcd'} failed after 2 retries\n"]] fatal: [localhost]: FAILED! 
=> {"changed": false, "module_stderr": "Traceback (most recent call last):\n File \"\", line 107, in \n File \"\", line 99, in _ansiballz_main\n File \"\", line 47, in invoke_module\n File \"/usr/lib/python3.8/runpy.py\", line 207, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib/python3.8/runpy.py\", line 97, in _run_module_code\n _run_code(code, mod_globals, init_globals,\n File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/modules/openstack/osa/git_requirements.py\", line 333, in \n File \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/modules/openstack/osa/git_requirements.py\", line 329, in main\n File \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/module_utils/basic.py\", line 1533, in fail_json\n File \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/module_utils/basic.py\", line 1506, in _return_formatted\n File \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/module_utils/common/parameters.py\", line 887, in remove_values\n File \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/module_utils/common/parameters.py\", line 461, in _remove_values_conditions\nTypeError: Value of unknown type: , [[\"Failed to fetch /etc/ansible/roles/etcd\\nCmd('git') failed due to: exit code(128)\\n cmdline: git fetch --force --shallow-since=2022-05-27\\n stderr: 'fatal: error in object: unshallow 29996b0d15ebfaacb7626ce889e26b209ed53434\\nfatal: the remote end hung up unexpectedly'\"], [\"Failed to fetch /etc/ansible/roles/etcd\\nCmd('git') failed due to: exit code(128)\\n cmdline: git fetch --force --shallow-since=2022-05-27\\n stderr: 'fatal: error in object: unshallow 29996b0d15ebfaacb7626ce889e26b209ed53434\\nfatal: the remote end hung up unexpectedly'\"], [\"Role {'name': 'etcd', 'scm': 'git', 'src': 'https://github.com/noonedeadpunk/ansible-etcd', 'version': 'master', 'trackbranch': 'master', 'shallow_since': '2022-05-27', 'path': '/etc/ansible/roles', 'refspec': None, 'depth': 20, 'dest': '/etc/ansible/roles/etcd'} failed after 2 retries\\n\"]]\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} TASK [Clone git repos (with git)]******** failed: [localhost] (item={'name': 'etcd', 'scm': 'git', 'src': 'https://github.com/noonedeadpunk/ansible-etcd', 'version': 'master', 'trackbranch': 'master', 'shallow_since': '2022-05-27'}) => {"ansible_loop_var": "item", "attempts": 2, "changed": false, "cmd": ["/usr/bin/git", "fetch", "--depth", "20", "--force", "origin", "+refs/heads/master:refs/remotes/origin/master"], "item": {"name": "etcd", "scm": "git", "shallow_since": "2022-05-27", "src": "https://github.com/noonedeadpunk/ansible-etcd", "trackbranch": "master", "version": "master"}, "msg": "Failed to download remote objects and refs: fatal: error in object: unshallow 29996b0d15ebfaacb7626ce889e26b209ed53434\nfatal: the remote end hung up unexpectedly\n"} Does anyone of you have the idea? What should I do? For this reason, no further playbooks will be released. 
Kind regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.denton at rackspace.com Fri Jan 20 17:17:34 2023 From: james.denton at rackspace.com (James Denton) Date: Fri, 20 Jan 2023 17:17:34 +0000 Subject: [openstack-ansible] Installing OpenStack with Ansible fails during Keystone playbook on TASK openstack.osa.db_setup In-Reply-To: References: Message-ID: Hi, Thanks for the details. The MariaDB/Galera healthcheck occurs on port 9200, which may not be functioning. You can verify that in the /etc/haproxy/haproxy.cfg file. In the Galera container, there is a file, /etc/systemd/system/mariadbcheck.socket, which has the details, including the ?allow? list. Might be worth looking at that to ensure the haproxy node IP is allowed. -- James Denton Principal Architect Rackspace Private Cloud - OpenStack james.denton at rackspace.com From: jmarcelo.alencar at gmail.com Date: Friday, January 20, 2023 at 9:20 AM To: James Denton , openstack-discuss at lists.openstack.org Subject: Re: [openstack-ansible] Installing OpenStack with Ansible fails during Keystone playbook on TASK openstack.osa.db_setup CAUTION: This message originated externally, please use caution when clicking on links or opening attachments! Hi James Denton, Thanks for your quick response!!! So as far as I understand, running "openstack-ansible setup-openstack.yml" will start a keystone installation TASK that connects to HAProxy, which in turn sends the connection to the galera container. The machine targethost01 runs both the containers and HAProxy. From deploymenthost, there is some connectivity to HAProxy: root at deploymenthost:/opt/openstack-ansible/playbooks# telnet 172.29.236.101 3306 Trying 172.29.236.101... Connected to 172.29.236.101. Escape character is '^]'. Connection closed by foreign host. It appears that HAProxy is listening, but cannot provide a proper reply, so the connection closes. Following your suggestion, on targethost01, HAProxy is running, but complains about no galera backend: root at targethost01:~# systemctl status haproxy.service ? haproxy.service - HAProxy Load Balancer Loaded: loaded (/lib/systemd/system/haproxy.service; enabled; vendor preset: enabled) Active: active (running) since Fri 2023-01-20 11:35:40 -03; 33min ago Docs: man:haproxy(1) file:/usr/share/doc/haproxy/configuration.txt.gz Process: 276870 ExecStartPre=/usr/sbin/haproxy -Ws -f $CONFIG -c -q $EXTRAOPTS (code=exited, status=0/SUCCESS) Main PID: 276873 (haproxy) Tasks: 5 (limit: 8192) Memory: 13.1M CPU: 2.165s CGroup: /system.slice/haproxy.service ??276873 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock ??276875 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -S /run/haproxy-master.sock Jan 20 11:35:48 targethost01 haproxy[276875]: Server nova_console-back/targethost01_nova_api_container-56e92564 is DOWN, reason: Layer4 connection problem, info: "Conn> Jan 20 11:35:48 targethost01 haproxy[276875]: backend nova_console-back has no server available! Jan 20 11:35:49 targethost01 haproxy[276875]: [WARNING] (276875) : Server placement-back/targethost01_placement_container-90ccebb6 is DOWN, reason: Layer4 connection > Jan 20 11:35:49 targethost01 haproxy[276875]: Server placement-back/targethost01_placement_container-90ccebb6 is DOWN, reason: Layer4 connection problem, info: "Connec> Jan 20 11:35:49 targethost01 haproxy[276875]: [ALERT] (276875) : backend 'placement-back' has no server available! 
Jan 20 11:35:49 targethost01 haproxy[276875]: backend placement-back has no server available! Jan 20 11:35:53 targethost01 haproxy[276875]: [WARNING] (276875) : Server galera-back/targethost01_galera_container-5aa8474a is DOWN, reason: Layer4 timeout, check du> Jan 20 11:35:53 targethost01 haproxy[276875]: [ALERT] (276875) : backend 'galera-back' has no server available! Jan 20 11:35:53 targethost01 haproxy[276875]: Server galera-back/targethost01_galera_container-5aa8474a is DOWN, reason: Layer4 timeout, check duration: 12001ms. 0 act> Jan 20 11:35:53 targethost01 haproxy[276875]: backend galera-back has no server available! It also warns about the other services, but since they are not installed yet, I believe that it is the expected behavior. But galera should have a functional backend, right? The container is running: root at targethost01:~# lxc-ls targethost01_cinder_api_container-b7ec9bdd targethost01_galera_container-5aa8474a targethost01_glance_container-b3ce5a33 targethost01_heat_api_container-57ec2a00 targethost01_horizon_container-c99d168e targethost01_keystone_container-76e9b31b targethost01_memcached_container-8edca03c targethost01_neutron_server_container-fba7cb77 targethost01_nova_api_container-56e92564 targethost01_placement_container-90ccebb6 targethost01_rabbit_mq_container-2e5c5470 targethost01_repo_container-00531c23 targethost01_utility_container-dc05dc90 targethost01_zookeeper_container-294429e8 ubuntu-22-amd64 root at targethost01:~# lxc-info targethost01_galera_container-5aa8474a Name: targethost01_galera_container-5aa8474a State: RUNNING PID: 102446 IP: 10.0.3.53 IP: 172.29.238.177 Link: 5aa8474a_eth0 TX bytes: 811.30 KiB RX bytes: 57.49 MiB Total bytes: 58.28 MiB Link: 5aa8474a_eth1 TX bytes: 84.35 KiB RX bytes: 1.06 MiB Total bytes: 1.14 MiB I can establish a connection and the server waits for a password: root at targethost01:~# telnet 172.29.238.177 3306 Trying 172.29.238.177... Connected to 172.29.238.177. Escape character is '^]'. u 5.5.5-10.6.10-MariaDB-1:10.6.10+maria~ubu2204-log:8PmS7Y:W'Yn=#6%Vbjmcmysql_native_password Any hints? Best regards. On Fri, Jan 20, 2023 at 11:18 AM James Denton wrote: > > Hi ? > > > > The ansible command to test the DB hits the Galera container directly, while the Ansible playbooks are likely using the VIP managed by HAproxy. I suspect that HAproxy has not started properly or is otherwise not serving traffic directed toward the internal_lb_vip_address. > > > > My suggestion at the moment is to check out the logs on the haproxy node to see if it?s working properly, and try testing connectivity from the deploy node via 172.29.236.101:3306. The haproxy logs will likely provide some insight here. > > > > -- > > James Denton > > Principal Architect > > Rackspace Private Cloud - OpenStack > > james.denton at rackspace.com > > > > From: jmarcelo.alencar at gmail.com > Date: Friday, January 20, 2023 at 6:45 AM > To: openstack-discuss at lists.openstack.org > Subject: [openstack-ansible] Installing OpenStack with Ansible fails during Keystone playbook on TASK openstack.osa.db_setup > > CAUTION: This message originated externally, please use caution when clicking on links or opening attachments! 
> > > Hello Community, > > I am trying to create a two machine deployment following Openstack > Ansible Deployment Guide > (https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fdocs.openstack.org%2Fproject-deploy-guide%2Fopenstack-ansible%2Flatest%2F&data=05%7C01%7Cjames.denton%40rackspace.com%7Ca0d5435aeb294d38bbcb08dafaf9ccd7%7C570057f473ef41c8bcbb08db2fc15c2b%7C0%7C0%7C638098248039916228%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=9Guhh4n3xlExA0biSyHR5iXrxmzrkZyF0xJh2cf8zrk%3D&reserved=0). > The two machines are named targethost01 and targethost02, and I am > running Ansible from deploymenthost. Every machine has 4-Core CPUs, 8 > GB of RAM, and 240 GB SSD. I am using Ubuntu 22.04.1 LTS. > > The machine targethost01 has the following network configuration: > > network: > version: 2 > ethernets: > enp5s0: > dhcp4: true > enp6s0: {} > enp7s0: {} > enp8s0: {} > enp9s0: {} > vlans: > vlan.10: > id: 10 > link: enp6s0 > addresses: [ ] > vlan.20: > id: 20 > link: enp7s0 > addresses: [ ] > vlan.30: > id: 30 > link: enp8s0 > addresses: [ ] > vlan.40: > id: 40 > link: enp9s0 > addresses: [ ] > bridges: > br-mgmt: > addresses: [ 172.29.236.101/22 ] > mtu: 1500 > interfaces: > - vlan.10 > br-storage: > addresses: [ 172.29.244.101/22 ] > mtu: 1500 > interfaces: > - vlan.20 > br-vlan: > addresses: [] > mtu: 1500 > interfaces: > - vlan.30 > br-vxlan: > addresses: [ 172.29.240.101/22 ] > mtu: 1500 > interfaces: > - vlan.40 > > > And targethost02 has the following network configuration: > > > network: > version: 2 > ethernets: > enp5s0: > dhcp4: true > enp6s0: {} > enp7s0: {} > enp8s0: {} > enp9s0: {} > vlans: > vlan.10: > id: 10 > link: enp6s0 > addresses: [ ] > vlan.20: > id: 20 > link: enp7s0 > addresses: [ ] > vlan.30: > id: 30 > link: enp8s0 > addresses: [ ] > vlan.40: > id: 40 > link: enp9s0 > addresses: [ ] > bridges: > br-mgmt: > addresses: [ 172.29.236.102/22 ] > mtu: 1500 > interfaces: > - vlan.10 > br-storage: > addresses: [ 172.29.244.102/22 ] > mtu: 1500 > interfaces: > - vlan.20 > br-vlan: > addresses: [] > mtu: 1500 > interfaces: > - vlan.30 > br-vxlan: > addresses: [ 172.29.240.102/22 ] > mtu: 1500 > interfaces: > - vlan.40 > > > On the deploymenthost, /etc/openstack_deploy/openstack_user_config.yml > has the following: > > > --- > cidr_networks: > container: 172.29.236.0/22 > tunnel: 172.29.240.0/22 > storage: 172.29.244.0/22 > used_ips: > - 172.29.236.1 > - "172.29.236.100,172.29.236.200" > - "172.29.240.100,172.29.240.200" > - "172.29.244.100,172.29.244.200" > global_overrides: > internal_lb_vip_address: 172.29.236.101 > external_lb_vip_address: "{{ bootstrap_host_public_address | > default(ansible_facts['default_ipv4']['address']) }}" > management_bridge: "br-mgmt" > provider_networks: > - network: > group_binds: > - all_containers > - hosts > type: "raw" > container_bridge: "br-mgmt" > container_interface: "eth1" > container_type: "veth" > ip_from_q: "container" > is_container_address: true > - network: > group_binds: > - glance_api > - cinder_api > - cinder_volume > - nova_compute > type: "raw" > container_bridge: "br-storage" > container_type: "veth" > container_interface: "eth2" > container_mtu: "9000" > ip_from_q: "storage" > - network: > group_binds: > - neutron_linuxbridge_agent > container_bridge: "br-vxlan" > container_type: "veth" > container_interface: "eth10" > container_mtu: "9000" > ip_from_q: "tunnel" > type: "vxlan" > range: "1:1000" > net_name: "vxlan" > - network: > group_binds: > - 
neutron_linuxbridge_agent > container_bridge: "br-vlan" > container_type: "veth" > container_interface: "eth11" > type: "vlan" > range: "101:200,301:400" > net_name: "vlan" > - network: > group_binds: > - neutron_linuxbridge_agent > container_bridge: "br-vlan" > container_type: "veth" > container_interface: "eth12" > host_bind_override: "eth12" > type: "flat" > net_name: "flat" > shared-infra_hosts: > targethost01: > ip: 172.29.236.101 > repo-infra_hosts: > targethost01: > ip: 172.29.236.101 > coordination_hosts: > targethost01: > ip: 172.29.236.101 > os-infra_hosts: > targethost01: > ip: 172.29.236.101 > identity_hosts: > targethost01: > ip: 172.29.236.101 > network_hosts: > targethost01: > ip: 172.29.236.101 > compute_hosts: > targethost01: > ip: 172.29.236.101 > targethost02: > ip: 172.29.236.102 > storage-infra_hosts: > targethost01: > ip: 172.29.236.101 > storage_hosts: > targethost01: > ip: 172.29.236.101 > > > Also on the deploymenthost, /etc/openstack_deploy/conf.d/haproxy.yml > has the following: > > > haproxy_hosts: > targethost01: > ip: 172.29.236.101 > > > At the Run Playbooks step of the guide, the following two Ansible > commands return with unreachable=0 failed=0: > > # openstack-ansible setup-hosts.yml > # openstack-ansible setup-infrastructure.yml > > And verifying the database also returns no error: > > > root at deploymenthost:/opt/openstack-ansible/playbooks# ansible > galera_container -m shell \ > -a "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'" > Variable files: "-e @/etc/openstack_deploy/user_secrets.yml -e > @/etc/openstack_deploy/user_variables.yml " > [WARNING]: Unable to parse /etc/openstack_deploy/inventory.ini as an > inventory source > targethost01_galera_container-5aa8474a | CHANGED | rc=0 >> > Variable_name Value > wsrep_cluster_weight 1 > wsrep_cluster_capabilities > wsrep_cluster_conf_id 1 > wsrep_cluster_size 1 > wsrep_cluster_state_uuid e7a0c332-97fe-11ed-b0d4-26b30049826d > wsrep_cluster_status Primary > > > But when I execute openstack-ansible setup-openstack.yml, I get this: > > > TASK [os_keystone : Fact for apache module mod_auth_openidc to be installed] *** > ok: [targethost01_keystone_container-76e9b31b] > TASK [include_role : openstack.osa.db_setup] *********************************** > TASK [openstack.osa.db_setup : Create database for service] ******************** > failed: [targethost01_keystone_container-76e9b31b -> > targethost01_utility_container-dc05dc90(172.29.238.59)] (item=None) => > {"censored": "the output has been hidden due to the fact that 'no_log: > true' was specified for this result", "changed": false} > fatal: [targethost01_keystone_container-76e9b31b -> {{ > _oslodb_setup_host }}]: FAILED! => {"censored": "the output has been > hidden due to the fact that 'no_log: true' was specified for this > result", "changed": false} > PLAY RECAP ********************************************************************* > targethost01_keystone_container-76e9b31b : ok=33 changed=0 > unreachable=0 failed=1 skipped=8 rescued=0 ignored=0 > targethost01_utility_container-dc05dc90 : ok=3 changed=0 > unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 > EXIT NOTICE [Playbook execution failure] ************************************** > =============================================================================== > > > First, how can I disable the "censored" warning? I wonder if the > uncensored running could give me more clues. Second, it appears to be > a problem creating the database (keystone db sync?) 
How can I test the > database execution inside the LXC containers? I tried to log into one > of the containers and ping the hosts IP and it works, so they have > connectivity. I set up the passwords with: > > # cd /opt/openstack-ansible > # ./scripts/pw-token-gen.py --file /etc/openstack_deploy/user_secrets.yml > > > Any help? > > Best Regards. > > > > > -- > __________________________________ > > Jo?o Marcelo Uch?a de Alencar > jmarcelo.alencar(at)gmail.com > __________________________________ -- __________________________________ Jo?o Marcelo Uch?a de Alencar jmarcelo.alencar(at)gmail.com __________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Fri Jan 20 18:42:43 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 20 Jan 2023 19:42:43 +0100 Subject: upgrade from Yoda to Zed according to the documentation In-Reply-To: References: <1492055586.6794.1674216569247@ox.dhbw-mannheim.de> Message-ID: Hi, As a fast workaround for the issue, you can create a file /etc/openstack_deploy/user-role-requirements.yml with following content: - name: etcd scm: git src: https://github.com/noonedeadpunk/ansible-etcd version: dc35da415cd908f85bbbd733a2f554e00ba0e1d4 trackbranch: master shallow_since: '2022-06-22' - name: pacemaker_corosync scm: git src: https://github.com/noonedeadpunk/ansible-pacemaker-corosync version: dacff1ed6ede207b8afcbfff5e990d875580893b trackbranch: master shallow_since: '2022-06-14' ``` and re-run bootstrap-ansible.sh after that. Also we expect new Zed tag to be released withing couple of days, which will include important bugfixes and also will improve upgrade process, which will be reflected in documentation and upgrade script. ??, 20 ???. 2023 ?., 18:23 James Denton : > Hi, > > > > This is a known issue that I ran into myself yesterday. The good news is > that there?s a patch available for testing that should merge soon, which > will hopefully resolve the issue for you. > > > > https://review.opendev.org/c/openstack/openstack-ansible/+/871296 > > > > -- > > James Denton > > Principal Architect > > Rackspace Private Cloud - OpenStack > > james.denton at rackspace.com > > > > *From: *Kaiser Wassilij > *Date: *Friday, January 20, 2023 at 6:18 AM > *To: *openstack-discuss at lists.openstack.org < > openstack-discuss at lists.openstack.org> > *Subject: *upgrade from Yoda to Zed according to the documentation > > *CAUTION:* This message originated externally, please use caution when > clicking on links or opening attachments! > > > > Hallo, > > > > I'm trying the openstack system upgrade from Yoda to Zed according to the > documentation > > > https://docs.openstack.org/openstack-ansible/zed/admin/upgrades/major-upgrades.html > > > But I get the following errors: > TASK [Clone git repos (parallel)] > *********************************************************************************************************************************************************************** > An exception occurred during task execution. To see the full traceback, > use -vvv. 
The error was: TypeError: Value of unknown type: 'multiprocessing.managers.ListProxy'>, [["Failed to fetch > /etc/ansible/roles/etcd\nCmd('git') failed due to: exit code(128)\n > cmdline: git fetch --force --shallow-since=2022-05-27\n stderr: 'fatal: > error in object: unshallow 29996b0d15ebfaacb7626ce889e26b209ed53434\nfatal: > the remote end hung up unexpectedly'"], ["Failed to fetch > /etc/ansible/roles/etcd\nCmd('git') failed due to: exit code(128)\n > cmdline: git fetch --force --shallow-since=2022-05-27\n stderr: 'fatal: > error in object: unshallow 29996b0d15ebfaacb7626ce889e26b209ed53434\nfatal: > the remote end hung up unexpectedly'"], ["Role {'name': 'etcd', 'scm': > 'git', 'src': 'https://github.com/noonedeadpunk/ansible-etcd' > , > 'version': 'master', 'trackbranch': 'master', 'shallow_since': > '2022-05-27', 'path': '/etc/ansible/roles', 'refspec': None, 'depth': 20, > 'dest': '/etc/ansible/roles/etcd'} failed after 2 retries\n"]] > fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": > "Traceback (most recent call last):\n File \"\", line 107, in > \n File \"\", line 99, in _ansiballz_main\n File > \"\", line 47, in invoke_module\n File > \"/usr/lib/python3.8/runpy.py\", line 207, in run_module\n return > _run_module_code(code, init_globals, run_name, mod_spec)\n File > \"/usr/lib/python3.8/runpy.py\", line 97, in _run_module_code\n > _run_code(code, mod_globals, init_globals,\n File > \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\n exec(code, > run_globals)\n File > \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/modules/openstack/osa/git_requirements.py\", > line 333, in \n File > \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/modules/openstack/osa/git_requirements.py\", > line 329, in main\n File > \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/module_utils/basic.py\", > line 1533, in fail_json\n File > \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/module_utils/basic.py\", > line 1506, in _return_formatted\n File > \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/module_utils/common/parameters.py\", > line 887, in remove_values\n File > \"/tmp/ansible_openstack.osa.git_requirements_payload_m0tl1pcb/ansible_openstack.osa.git_requirements_payload.zip/ansible/module_utils/common/parameters.py\", > line 461, in _remove_values_conditions\nTypeError: Value of unknown type: > , [[\"Failed to fetch > /etc/ansible/roles/etcd\\nCmd('git') failed due to: exit code(128)\\n > cmdline: git fetch --force --shallow-since=2022-05-27\\n stderr: 'fatal: > error in object: unshallow > 29996b0d15ebfaacb7626ce889e26b209ed53434\\nfatal: the remote end hung up > unexpectedly'\"], [\"Failed to fetch /etc/ansible/roles/etcd\\nCmd('git') > failed due to: exit code(128)\\n cmdline: git fetch --force > --shallow-since=2022-05-27\\n stderr: 'fatal: error in object: unshallow > 29996b0d15ebfaacb7626ce889e26b209ed53434\\nfatal: the remote end hung up > unexpectedly'\"], [\"Role {'name': 'etcd', 'scm': 'git', 'src': ' > https://github.com/noonedeadpunk/ansible-etcd' > , > 'version': 'master', 'trackbranch': 'master', 'shallow_since': > '2022-05-27', 'path': '/etc/ansible/roles', 'refspec': None, 'depth': 20, > 
'dest': '/etc/ansible/roles/etcd'} failed after 2 retries\\n\"]]\n", > "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the > exact error", "rc": 1} > > TASK [Clone git repos (with git)]******** > > failed: [localhost] (item={'name': 'etcd', 'scm': 'git', 'src': ' > https://github.com/noonedeadpunk/ansible-etcd' > , > 'version': 'master', 'trackbranch': 'master', 'shallow_since': > '2022-05-27'}) => {"ansible_loop_var": "item", "attempts": 2, "changed": > false, "cmd": ["/usr/bin/git", "fetch", "--depth", "20", "--force", > "origin", "+refs/heads/master:refs/remotes/origin/master"], "item": > {"name": "etcd", "scm": "git", "shallow_since": "2022-05-27", "src": " > https://github.com/noonedeadpunk/ansible-etcd > ", > "trackbranch": "master", "version": "master"}, "msg": "Failed to download > remote objects and refs: fatal: error in object: unshallow > 29996b0d15ebfaacb7626ce889e26b209ed53434\nfatal: the remote end hung up > unexpectedly\n"} > > Does anyone of you have the idea? > What should I do? > > For this reason, no further playbooks will be released. > > Kind regards > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Fri Jan 20 19:50:53 2023 From: abishop at redhat.com (Alan Bishop) Date: Fri, 20 Jan 2023 11:50:53 -0800 Subject: [kolla-ansible] [cinder] Setting up multiple LVM cinder backends located on different servers In-Reply-To: References: Message-ID: On Wed, Jan 18, 2023 at 6:38 AM A Monster wrote: > I have an openstack configuration, with 3 controller nodes and multiple > compute nodes , one of the controllers has an LVM storage based on HDD > drives, while another one has an SDD one, and when I tried to configure the > two different types of storage as cinder backends I faced a dilemma since > according to the documentation I have to specify the two different backends > in the cinder configuration as it is explained here > > however and since I want to separate disks type when creating volumes, I > had to specify different backend names, but I don't know if this > configuration should be written in both the storage nodes, or should I > specify for each one of these storage nodes the configuration related to > its own type of disks. > The key factor in understanding how to configure the cinder-volume services for your use case is knowing how the volume services operate and how they interact with the other cinder services. In short, you only define backends in the cinder-volume service that "owns" that backend. If controller-X only handles lvm-X, then you only define that backend on that controller. Don't include any mention of lvm-Y if that one is handled by another controller. The other services (namely the api and schedulers) learn about the backends when each of them reports its status via cinder's internal RPC framework. This means your lvm-1 service running on one controller should only have the one lvm-1 backend (with enabled_backends=lvm-1), and NO mention at all to the lvm-3 backend on the other controller. Likewise, the other controller should only contain the lvm-3 backend, with its enabled_backends=lvm-3. > Now, I tried writing the same configuration for both nodes, but I found > out that the volume service related to server1 concerning disks in server2 > is down, and the volume service in server2 concerning disks in server1 is > also down. 
>
> $ openstack volume service list
> +------------------+---------------------+------+---------+-------+----------------------------+
> | Binary           | Host                | Zone | Status  | State | Updated At                 |
> +------------------+---------------------+------+---------+-------+----------------------------+
> | cinder-scheduler | controller-01       | nova | enabled | up    | 2023-01-18T14:27:51.000000 |
> | cinder-scheduler | controller-02       | nova | enabled | up    | 2023-01-18T14:27:41.000000 |
> | cinder-scheduler | controller-03       | nova | enabled | up    | 2023-01-18T14:27:50.000000 |
> | cinder-volume    | controller-03 at lvm-1 | nova | enabled | up    | 2023-01-18T14:27:42.000000 |
> | cinder-volume    | controller-01 at lvm-1 | nova | enabled | down  | 2023-01-18T14:10:00.000000 |
> | cinder-volume    | controller-01 at lvm-3 | nova | enabled | down  | 2023-01-18T14:09:42.000000 |
> | cinder-volume    | controller-03 at lvm-3 | nova | enabled | down  | 2023-01-18T12:12:19.000000 |
> +------------------+---------------------+------+---------+-------+----------------------------+
>

Unless you do a fresh deployment, you will need to remove the invalid services that will always be down. Those would be the ones on controller-X where the backend is actually on controller-Y. You'll use the cinder-manage command to do that. From the data you supplied, it seems the lvm-1 backend is up on controller03, and the lvm-3 backend on that controller is down. The numbering seems backwards, but I stick with this example. To delete the lvm-3 backend, which is down because that backend is actually on another controller, you'd issue this command:

$ cinder-manage service remove cinder-volume controller-03 at lvm-3

Don't worry if you accidentally delete a "good" service. The list will be refreshed each time the cinder-volume services refresh their status.

> This is the configuration I have written on the configuration files for
> cinder_api _cinder_scheduler and cinder_volume for both servers.
>
> enabled_backends= lvm-1,lvm-3
> [lvm-1]
> volume_group = cinder-volumes
> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
> volume_backend_name = lvm-1
> target_helper = lioadm
> target_protocol = iscsi
> report_discard_supported = true
> [lvm-3]
> volume_group=cinder-volumes-ssd
> volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
> volume_backend_name=lvm-3
> target_helper = lioadm
> target_protocol = iscsi
> report_discard_supported = true

At a minimum, on each controller you need to remove all references to the backend that's actually on the other controller. The cinder-api and cinder-scheduler services don't need any backend configuration. That's because the backend sections and enabled_backends options are only relevant to the cinder-volume service, and are ignored by the other services.

Alan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From christian.rohmann at inovex.de Fri Jan 20 21:22:44 2023
From: christian.rohmann at inovex.de (Christian Rohmann)
Date: Fri, 20 Jan 2023 22:22:44 +0100
Subject: openstack client integration to fetch and provide OIDC access tokens (v3oidcaccesstoken)?
Message-ID: <08949303-bfbf-3a15-1a62-78bcfffcb90b@inovex.de>

Hey openstack-discuss,

while there is support for OpenID Connect and its various flows in the openstack client (https://docs.openstack.org/python-openstackclient/latest/cli/man/openstack.html#envvar-OS_AUTH_TYPE).
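For instance, the v3oidcaccesstoken auth type can consume a token that was obtained out of band and simply hand it to Keystone. A minimal sketch of that flow, with the endpoint, identity provider, protocol and project names being purely illustrative:

    export OS_AUTH_TYPE=v3oidcaccesstoken
    export OS_AUTH_URL=https://keystone.example.org:5000/v3
    export OS_IDENTITY_PROVIDER=myidp              # as registered in Keystone federation
    export OS_PROTOCOL=openid                      # federation protocol name in Keystone
    export OS_ACCESS_TOKEN="$(oidc-token mydemo)"  # token fetched externally, e.g. via oidc-agent
    export OS_PROJECT_NAME=demo
    export OS_PROJECT_DOMAIN_NAME=Default
    openstack token issue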
I would like to have the user authenticate only with central IdP login via a web page and then receive an access token and not have each user's openstack cli be a full OIDC client handling credentials and authenticating against the IdP via the users password itself. The tricky bit here is having good tooling for users to authenticate via the existing SSO and then to get and refresh tokens which are then fed to the openstack CLI. I was wondering if anybody knows of some nice integrations / plugins / hooks to make it easy for users to deal with the authentication (usually via some web site) and then to inject the token (v3oidcaccesstoken) into openstack-cli? I found that Fedcloud.eu (https://www.fedcloud.eu/) does something like this (see https://fedcloudclient.fedcloud.eu/usage.html#authentication) via OIDC-Agent. But most platforms making use of OIDC seem to configure the openstack client with client_id and secret and have it authenticate directly with the IdP. Regards, Christian From gmann at ghanshyammann.com Fri Jan 20 23:36:08 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 20 Jan 2023 15:36:08 -0800 Subject: [ptl][tc] OpenStack packages PyPi additional external maintainers audit & cleanup Message-ID: <185d18a20aa.1206b91ad115363.5205111285046207324@ghanshyammann.com> Hi PTLs, As you might know or have seen for your project package on PyPi, OpenStack deliverables on PyPi have additional maintainers, For example, https://pypi.org/project/murano/, https://pypi.org/project/glance/ We should keep only 'openstackci' as a maintainer in PyPi so that releases of OpenStack deliverables can be managed in a single place. Otherwise, we might face the two sets of maintainers' places and packages might get released in PyPi by additional maintainers without the OpenStack project team knowing about it. One such case is in Horizon repo 'xstatic-font-awesome' where a new maintainer is added by an existing additional maintainer and this package was released without the Horizon team knowing about the changes and release. - https://github.com/openstack/xstatic-font-awesome/pull/2 To avoid the 'xstatic-font-awesome' case for other packages, TC discussed it in their weekly meetings[1] and agreed to audit all the OpenStack packages and then clean up the additional maintainers in PyPi (keep only 'openstackci' as maintainers). To help in this task, TC requests project PTL to perform the audit for their project's repo and add comments in the below etherpad. - https://etherpad.opendev.org/p/openstack-pypi-maintainers-cleanup Thanks to knikolla to automate the listing of the OpenStack packages with additional maintainers in PyPi which you can find the result in output.txt at the bottom of this link. I have added the project list of who needs to check their repo in etherpad. - https://gist.github.com/knikolla/7303a65a5ddaa2be553fc6e54619a7a1 Please complete the audit for your project before March 15 so that TC can discuss the next step in vPTG. [1] https://meetings.opendev.org/meetings/tc/2023/tc.2023-01-11-16.00.log.html#l-41 -gmann From gmann at ghanshyammann.com Sat Jan 21 00:58:57 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 20 Jan 2023 16:58:57 -0800 Subject: [all][tc] What's happening in Technical Committee: summary 2023 Jan 20: Reading: 5 min Message-ID: <185d1d5f193.116275b1e116229.107235622525165880@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on Jan 18. 
Most of the meeting discussions are summarized in this email. Meeting logs are available @ https://meetings.opendev.org/meetings/tc/2023/tc.2023-01-18-16.00.log.html * The next TC weekly meeting will be on Jan 25 Wed at 16:00 UTC, Feel free to add the topic to the agenda[1] by Jan 24. 2. What we completed this week: ========================= * Removed Adjutant from inactive project list[2] 3. Activities In progress: ================== TC Tracker for the 2023.1 cycle ------------------------------------- * Current cycle working items and their progress are present in the 2023.1 tracker etherpad[3]. Open Reviews ----------------- * Four open reviews for ongoing activities[4]. Cleanup of PyPI maintainer list for OpenStack Projects ---------------------------------------------------------------- The horizon team discussed the 'xstatic-font-awesome' repo PyPi maintainer topic[5] in their weekly meetings and decided to hand over the maintenance to additional external maintainers. The horizon team will retire the repo from OpenStack. To clean up other OpenStack packages, I have sent a separate email[6] to audit first and then we can cleanup based on the audit results. Requesting every PTL to look and do the required step. Project updates ------------------- * Add Cinder Huawei charm[7] * Add the woodpecker charm to Openstack charms[8] Less Active/Inactive projects: ~~~~~~~~~~~~~~~~~~~~~~ * Zaqar status TC discussed and set a deadline of Jan 25th (next week's TC meeting) to check the status and take a final call on Zaqar to be included in this cycle release or not. Zaqar PTL is trying to fix the gate[9]. * Mistral status: python-mistralclient gate is also green now and their beta releases are in progress[10][11]. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[12]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15:00 UTC [13] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions [2] https://review.opendev.org/c/openstack/governance/+/869665 [3] https://etherpad.opendev.org/p/tc-2023.1-tracker [4] https://review.opendev.org/q/projects:openstack/governance+status:open [5] https://github.com/openstack/xstatic-font-awesome/pull/2 [6] https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031848.html [7] https://review.opendev.org/c/openstack/governance/+/867588 [8] https://review.opendev.org/c/openstack/governance/+/869752 [9] https://review.opendev.org/c/openstack/zaqar/+/857924/ [10] https://review.opendev.org/c/openstack/releases/+/869470 [11] https://review.opendev.org/c/openstack/releases/+/869448 [12] hhttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [13] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From vincentlee676 at gmail.com Sat Jan 21 07:07:50 2023 From: vincentlee676 at gmail.com (vincent lee) Date: Sat, 21 Jan 2023 01:07:50 -0600 Subject: Pulling plugins from my repository Message-ID: Hi all, I would like to ask where exactly I can change the default path in which plugins such as zun_ui, blazar_dashboard, etc... can be pulled from my GitHub instead of the default repository. Best, Vincent -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From noonedeadpunk at gmail.com Sat Jan 21 07:31:52 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sat, 21 Jan 2023 08:31:52 +0100 Subject: Pulling plugins from my repository In-Reply-To: References: Message-ID: Hi, Vincent I assume you're asking about replacing path for git repos in some specific deployment tool, but you haven't mentioned what you use for your deployment. So it's quite hard to answer your question not having that kind of information. ??, 21 ???. 2023 ?., 08:11 vincent lee : > Hi all, I would like to ask where exactly I can change the default path in > which plugins such as zun_ui, blazar_dashboard, etc... can be pulled from > my GitHub instead of the default repository. > > Best, > Vincent > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amonster369 at gmail.com Sat Jan 21 12:39:24 2023 From: amonster369 at gmail.com (A Monster) Date: Sat, 21 Jan 2023 13:39:24 +0100 Subject: [kolla-ansible] [cinder] Setting up multiple LVM cinder backends located on different servers In-Reply-To: References: Message-ID: First of all thank you for your answer, it's exactly what I was looking for, What is still ambiguous for me is the name of the volume group I specified in globals.yml file before running the deployment, the default value is cinder-volumes, however after I added the second lvm backend, I kept the same volume group for lvm-1 but chooses another name for lvm-2, was it possible to keep the same nomination for both ? If not how can I specify the different backends directly from globals.yml file if possible. On Fri, Jan 20, 2023, 20:51 Alan Bishop wrote: > > > On Wed, Jan 18, 2023 at 6:38 AM A Monster wrote: > >> I have an openstack configuration, with 3 controller nodes and multiple >> compute nodes , one of the controllers has an LVM storage based on HDD >> drives, while another one has an SDD one, and when I tried to configure the >> two different types of storage as cinder backends I faced a dilemma since >> according to the documentation I have to specify the two different backends >> in the cinder configuration as it is explained here >> >> however and since I want to separate disks type when creating volumes, I >> had to specify different backend names, but I don't know if this >> configuration should be written in both the storage nodes, or should I >> specify for each one of these storage nodes the configuration related to >> its own type of disks. >> > > The key factor in understanding how to configure the cinder-volume > services for your use case is knowing how the volume services operate and > how they interact with the other cinder services. In short, you only define > backends in the cinder-volume service that "owns" that backend. If > controller-X only handles lvm-X, then you only define that backend on that > controller. Don't include any mention of lvm-Y if that one is handled by > another controller. The other services (namely the api and schedulers) > learn about the backends when each of them reports its status via cinder's > internal RPC framework. > > This means your lvm-1 service running on one controller should only have > the one lvm-1 backend (with enabled_backends=lvm-1), and NO mention at all > to the lvm-3 backend on the other controller. Likewise, the other > controller should only contain the lvm-3 backend, with its > enabled_backends=lvm-3. 
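To make the quoted advice concrete, a minimal sketch of the two cinder-volume configurations it implies, using the backend and volume group names from this thread (everything else illustrative):

    # cinder.conf fragment on the controller with the HDD volume group
    [DEFAULT]
    enabled_backends = lvm-1
    [lvm-1]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    volume_backend_name = lvm-1
    target_helper = lioadm
    target_protocol = iscsi

    # cinder.conf fragment on the controller with the SSD volume group
    [DEFAULT]
    enabled_backends = lvm-3
    [lvm-3]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes-ssd
    volume_backend_name = lvm-3
    target_helper = lioadm
    target_protocol = iscsi

Each node's cinder-volume lists only the backend that is physically present on it; the scheduler and API learn about both backends from the service status reports, so neither file needs to reference the other node's backend.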
> > >> Now, I tried writing the same configuration for both nodes, but I found >> out that the volume service related to server1 concerning disks in server2 >> is down, and the volume service in server2 concerning disks in server1 is >> also down. >> >> $ openstack volume service >> list+------------------+---------------------+------+---------+-------+----------------------------+| >> Binary | Host | Zone | Status | State | Updated At >> |+------------------+---------------------+------+---------+-------+----------------------------+| >> cinder-scheduler | controller-01 | nova | enabled | up | >> 2023-01-18T14:27:51.000000 || cinder-scheduler | controller-02 | nova | >> enabled | up | 2023-01-18T14:27:41.000000 || cinder-scheduler | >> controller-03 | nova | enabled | up | 2023-01-18T14:27:50.000000 || >> cinder-volume | controller-03 at lvm-1 | nova | enabled | up | >> 2023-01-18T14:27:42.000000 || cinder-volume | controller-01 at lvm-1 | nova >> | enabled | down | 2023-01-18T14:10:00.000000 || cinder-volume | >> controller-01 at lvm-3 | nova | enabled | down | 2023-01-18T14:09:42.000000 >> || cinder-volume | controller-03 at lvm-3 | nova | enabled | down | >> 2023-01-18T12:12:19.000000|+------------------+---------------------+------+---------+-------+----------------------------+ >> >> > Unless you do a fresh deployment, you will need to remove the invalid > services that will always be down. Those would be the ones on controller-X > where the backend is actually on controller-Y. You'll use the cinder-manage > command to do that. From the data you supplied, it seems the lvm-1 backend > is up on controller03, and the lvm-3 backend on that controller is down. > The numbering seems backwards, but I stick with this example. To delete the > lvm-3 backend, which is down because that backend is actually on another > controller, you'd issue this command: > > $ cinder-manage service remove cinder-volume controller-03 at lvm-3 > > Don't worry if you accidentally delete a "good" service. The list will be > refreshed each time the cinder-volume services refresh their status. > > >> This is the configuration I have written on the configuration files for >> cinder_api _cinder_scheduler and cinder_volume for both servers. >> >> enabled_backends= lvm-1,lvm-3 >> [lvm-1] >> volume_group = cinder-volumes >> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver >> volume_backend_name = lvm-1 >> target_helper = lioadm >> target_protocol = iscsi >> report_discard_supported = true >> [lvm-3] >> volume_group=cinder-volumes-ssd >> volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver >> volume_backend_name=lvm-3 >> target_helper = lioadm >> target_protocol = iscsi >> report_discard_supported = true >> > > At a minimum, on each controller you need to remove all references to the > backend that's actually on the other controller. The cinder-api and > cinder-scheduler services don't need any backend configuration. That's > because the backend sections and enabled_backends options are only relevant > to the cinder-volume service, and are ignored by the other services. > > Alan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Sun Jan 22 14:54:24 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Sun, 22 Jan 2023 21:54:24 +0700 Subject: [magnum] Other Distro for k8s Message-ID: Hello guys. I know that Magnum is using Fedora Coreos for k8s. 
Why don't we use a long-term distro such as Ubuntu for this project? I will be more stable. and this project seems obsolete with the old version for k8s. Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Sun Jan 22 14:59:17 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Sun, 22 Jan 2023 21:59:17 +0700 Subject: [Magnum]enable cluster user trust Message-ID: Hello guys. I am going to use Magnum for production but I see that https://nvd.nist.gov/vuln/detail/CVE-2016-7404 if I want to use cinder for k8s cluster. Is there any way to fix or minimize this problem? Thanks. Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... URL: From gajuambi at gmail.com Sat Jan 21 23:07:26 2023 From: gajuambi at gmail.com (Gajendra D Ambi) Date: Sun, 22 Jan 2023 04:37:26 +0530 Subject: Reddit query for openstack magnum for enterprise core component maker Message-ID: https://www.reddit.com/r/openstack/comments/10hu68s/container_orchestrator_for_openstack/ . Hi team, request anyone of you from this project to please help us out. We also mean to contribute to the project because we know that we will need to add a lot more features to it that what api endpoints are already providing to us. When we do, it will all be contributed to the project after it is being tested for months in production. I am leaning towards openstack magnum and I do not have a lot of time to convince others of the same. Thanks and Regards, https://ambig.one/2/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincentlee676 at gmail.com Mon Jan 23 06:31:01 2023 From: vincentlee676 at gmail.com (vincent lee) Date: Mon, 23 Jan 2023 00:31:01 -0600 Subject: Pulling plugins from my repository Message-ID: Hi everyone, I would like to know where exactly I can replace the path for the GitHub repository. For example, I want to pull some plugins, such as zun_ui, blazar_dashboard, etc., from my own GitHub repository instead of the default GitHub repository. I am currently using Kolla-ansible for deploying OpenStack in the yoga version. Best, Vincent -------------- next part -------------- An HTML attachment was scrubbed... URL: From bshephar at redhat.com Mon Jan 23 07:35:34 2023 From: bshephar at redhat.com (Brendan Shephard) Date: Mon, 23 Jan 2023 17:35:34 +1000 Subject: [heat][release] Proposing to EOL Train, Ussuri and Victoria Message-ID: <5AE17A78-C140-4362-BD95-B3C267E904FD@redhat.com> Hi folks, We?re looking to move Train, Ussuri and Victoria branches to EOL for Heat projects. Just reaching out to see if there are any objections, if I don?t hear anything back I?ll move forward with that this week. Cheers, Brendan Shephard Senior Software Engineer Red Hat Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.rosser at rd.bbc.co.uk Mon Jan 23 10:09:49 2023 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Mon, 23 Jan 2023 10:09:49 +0000 Subject: openstack client integration to fetch and provide OIDC access tokens (v3oidcaccesstoken)? 
In-Reply-To: <08949303-bfbf-3a15-1a62-78bcfffcb90b@inovex.de> References: <08949303-bfbf-3a15-1a62-78bcfffcb90b@inovex.de> Message-ID: <1006056f-b3d1-a649-93f7-09b13d6a0012@rd.bbc.co.uk> On 20/01/2023 21:22, Christian Rohmann wrote: > > I found that Fedcloud.eu (https://www.fedcloud.eu/) does something > like this (see > https://fedcloudclient.fedcloud.eu/usage.html#authentication) via > OIDC-Agent. But most platforms making use of OIDC seem to configure > the openstack client with client_id and secret and have it > authenticate directly with the IdP. > My team contributed patches to https://github.com/IFCA/keystoneauth-oidc to use PKCE so that a client ID and client secret do not need to be given to users. Hope this is useful, Jon. From sahid.ferdjaoui at industrialdiscipline.com Mon Jan 23 10:32:01 2023 From: sahid.ferdjaoui at industrialdiscipline.com (Sahid Orentino Ferdjaoui) Date: Mon, 23 Jan 2023 10:32:01 +0000 Subject: [osprofiler] Regarding contribution process Message-ID: Hello, We would like to make some contributions on OSprofiler but not sure of the path that we are taking. For the first one we would like to make a contribution that is adding an option for Jaeger drivers. The point is to add a prefix for the service name. A bug report has been opened for it [0]. Another contribution would be to introduce a set of tags to jaeger spans, still from an option. And finally we would like to add support of Open Telemetry driver. Any suggestions regarding the process that we should use? Spec, blueprint, bug report. Also any blockers that the community may think about? Thanks, s. From christian.rohmann at inovex.de Mon Jan 23 12:19:05 2023 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Mon, 23 Jan 2023 13:19:05 +0100 Subject: openstack client integration to fetch and provide OIDC access tokens (v3oidcaccesstoken)? In-Reply-To: <1006056f-b3d1-a649-93f7-09b13d6a0012@rd.bbc.co.uk> References: <08949303-bfbf-3a15-1a62-78bcfffcb90b@inovex.de> <1006056f-b3d1-a649-93f7-09b13d6a0012@rd.bbc.co.uk> Message-ID: <23e1d227-807c-8ef1-a861-deef17aaa1f0@inovex.de> Thanks Jonathan for your response! On 23/01/2023 11:09, Jonathan Rosser wrote: > My team contributed patches to > https://github.com/IFCA/keystoneauth-oidc to use PKCE so that a client > ID and client secret do not need to be given to users. That sounds interesting - I suppose this patch would extend the auth plugins listed at https://docs.openstack.org/keystoneauth/latest/plugin-options.html#available-plugins ? Could you elaborate a little more on the architecture and auth workflow you have using this patch? Do you have any plans to push this upstream to become part of the standard plugins by any chance? Thanks again and with kind regards, Christian From jonathan.rosser at rd.bbc.co.uk Mon Jan 23 12:59:28 2023 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Mon, 23 Jan 2023 12:59:28 +0000 Subject: openstack client integration to fetch and provide OIDC access tokens (v3oidcaccesstoken)? In-Reply-To: <23e1d227-807c-8ef1-a861-deef17aaa1f0@inovex.de> References: <08949303-bfbf-3a15-1a62-78bcfffcb90b@inovex.de> <1006056f-b3d1-a649-93f7-09b13d6a0012@rd.bbc.co.uk> <23e1d227-807c-8ef1-a861-deef17aaa1f0@inovex.de> Message-ID: Hi Christian, We deploy openstack with keystone behind Apache and mod_oidc, using Keycloak as an IdP with the client set as 'public' to enable PKCE. We provide a 'helper' git repo to setup a correctly configured virtualenv for users which also installs keystoneauth-oidc. 
A script in that repo lets a user trigger the login flow (essentially openstack token issue) which launches a local browser window to complete the SSO / 2FA process. Environment vars including OS_TOKEN are exported by the script. If my memory serves correctly I did approach the Keystone team in IRC to have one of my developers contribute better support for OIDC in keystoneauth, but there was a preference for a much more significant rewrite of parts of keystone. Unfortunately time has passed and I think that an external plugin is still needed for a secure OIDC cli experience using a modern auth flow. Jon. On 23/01/2023 12:19, Christian Rohmann wrote: > Thanks Jonathan for your response! > > On 23/01/2023 11:09, Jonathan Rosser wrote: >> My team contributed patches to >> https://github.com/IFCA/keystoneauth-oidc to use PKCE so that a >> client ID and client secret do not need to be given to users. > > That sounds interesting - I suppose this patch would extend the auth > plugins listed at > https://docs.openstack.org/keystoneauth/latest/plugin-options.html#available-plugins > ? > Could you elaborate a little more on the architecture and auth > workflow you have using this patch? > > Do you have any plans to push this upstream to become part of the > standard plugins by any chance? > > > > Thanks again and with kind regards, > > > Christian > > > > From tonykarera at gmail.com Mon Jan 23 13:18:06 2023 From: tonykarera at gmail.com (Karera Tony) Date: Mon, 23 Jan 2023 15:18:06 +0200 Subject: Snapshots disappear during saving Message-ID: Dear Team, I am using Openstack Wallaby deployed using kolla-ansible. I installed Glance with the ceph backend and all was well. However when I create snapshots, they disappear when they are saved. Any idea on how to resolve this? Regards Tony Karera -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Mon Jan 23 13:59:09 2023 From: senrique at redhat.com (Sofia Enriquez) Date: Mon, 23 Jan 2023 13:59:09 +0000 Subject: Snapshots disappear during saving In-Reply-To: References: Message-ID: Hi Karera, hope this email finds you well We need more information in order to reproduce this issue. - Do you mind sharing c-vol logs of the operation to see if there's any errors? - How do you create the snapshot? Do you mind sharing the steps to reproduce this? Thanks in advance, Sofia On Mon, Jan 23, 2023 at 1:20 PM Karera Tony wrote: > Dear Team, > > I am using Openstack Wallaby deployed using kolla-ansible. > > I installed Glance with the ceph backend and all was well. > However when I create snapshots, they disappear when they are saved. > > Any idea on how to resolve this? > > Regards > > Tony Karera > > > -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From wassilij.kaiser at dhbw-mannheim.de Mon Jan 23 14:39:39 2023 From: wassilij.kaiser at dhbw-mannheim.de (Kaiser Wassilij) Date: Mon, 23 Jan 2023 15:39:39 +0100 (CET) Subject: Snapshot error Message-ID: <260436956.18866.1674484779580@ox.dhbw-mannheim.de> Hello, I upgraded from Victoria to Yoga. 
DISTRIB_ID="OSA" DISTRIB_RELEASE="25.2.0" DISTRIB_CODENAME="Yoga" DISTRIB_DESCRIPTION="OpenStack-Ansible" I have error infra1-glance-container-dc13a04b glance-wsgi-api[282311]: 2023-01-23 13:25:06.745 282311 INFO glance.api.v2.image_data [req-10359154-0be4-4da1-9e85-2d94079c17b4 b2dca74976034d5b9925bdcb03470603 021ce436ab004cde851055bac66370bc - default default] Unable to create trust: no such option collect_timing in group [keystone_authtoken] Use the existing user token. 2023-01-23 14:09:28.890 282298 DEBUG glance.api.v2.images [req-bf237e2c-9aca-4089-a56f-dfa245afdcc6 b2dca74976034d5b9925bdcb03470603 021ce436ab004cde851055bac66370bc - default default] The 'locations' list of image 2995602a-b5da-4758-a1f9-f6b083815f9b is empty _format_image /openstack/venvs/glance-25.2.0/lib/python3.8/site-packages/glance/api/v2/images.py https://bugs.launchpad.net/glance/+bug/1916052 Do any have the same error and how did you solve it Kind regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Mon Jan 23 14:56:16 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Mon, 23 Jan 2023 15:56:16 +0100 Subject: Snapshot error In-Reply-To: <260436956.18866.1674484779580@ox.dhbw-mannheim.de> References: <260436956.18866.1674484779580@ox.dhbw-mannheim.de> Message-ID: Hi there, Can you kindly describe what the error actually is? Also what backend driver do you use for glance? As the mentioned bug should be already covered by 25.2.0 release and it was limited to Swift backend from what I got. ??, 23 ???. 2023 ?. ? 15:42, Kaiser Wassilij : > > > > Hello, > > I upgraded from Victoria to Yoga. > > DISTRIB_ID="OSA" > DISTRIB_RELEASE="25.2.0" > DISTRIB_CODENAME="Yoga" > DISTRIB_DESCRIPTION="OpenStack-Ansible" > > I have error > > infra1-glance-container-dc13a04b glance-wsgi-api[282311]: 2023-01-23 13:25:06.745 282311 INFO glance.api.v2.image_data [req-10359154-0be4-4da1-9e85-2d94079c17b4 b2dca74976034d5b9925bdcb03470603 021ce436ab004cde851055bac66370bc - default default] Unable to create trust: no such option collect_timing in group [keystone_authtoken] Use the existing user token. > > 2023-01-23 14:09:28.890 282298 DEBUG glance.api.v2.images [req-bf237e2c-9aca-4089-a56f-dfa245afdcc6 b2dca74976034d5b9925bdcb03470603 021ce436ab004cde851055bac66370bc - default default] The 'locations' list of image 2995602a-b5da-4758-a1f9-f6b083815f9b is empty _format_image /openstack/venvs/glance-25.2.0/lib/python3.8/site-packages/glance/api/v2/images.py > > https://bugs.launchpad.net/glance/+bug/1916052 > > > > Do any have the same error and how did you solve it > > > Kind regards > > From tonykarera at gmail.com Mon Jan 23 14:56:31 2023 From: tonykarera at gmail.com (Karera Tony) Date: Mon, 23 Jan 2023 16:56:31 +0200 Subject: Snapshots disappear during saving In-Reply-To: References: Message-ID: Hello Sofia, It is actually Instance snapshot not Volume snapshot. I click on create Snapshot on the Instance options. Regards Tony Karera On Mon, Jan 23, 2023 at 3:59 PM Sofia Enriquez wrote: > Hi Karera, hope this email finds you well > > We need more information in order to reproduce this issue. > > - Do you mind sharing c-vol logs of the operation to see if there's any > errors? > - How do you create the snapshot? Do you mind sharing the steps to > reproduce this? > > Thanks in advance, > Sofia > > On Mon, Jan 23, 2023 at 1:20 PM Karera Tony wrote: > >> Dear Team, >> >> I am using Openstack Wallaby deployed using kolla-ansible. 
>> >> I installed Glance with the ceph backend and all was well. >> However when I create snapshots, they disappear when they are saved. >> >> Any idea on how to resolve this? >> >> Regards >> >> Tony Karera >> >> >> > > -- > > Sof?a Enriquez > > she/her > > Software Engineer > > Red Hat PnT > > IRC: @enriquetaso > @RedHat Red Hat > Red Hat > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Mon Jan 23 15:29:12 2023 From: abishop at redhat.com (Alan Bishop) Date: Mon, 23 Jan 2023 07:29:12 -0800 Subject: [kolla-ansible] [cinder] Setting up multiple LVM cinder backends located on different servers In-Reply-To: References: Message-ID: On Sat, Jan 21, 2023 at 4:39 AM A Monster wrote: > First of all thank you for your answer, it's exactly what I was looking > for, > What is still ambiguous for me is the name of the volume group I specified > in globals.yml file before running the deployment, the default value is > cinder-volumes, however after I added the second lvm backend, I kept the > same volume group for lvm-1 but chooses another name for lvm-2, was it > possible to keep the same nomination for both ? If not how can I specify > the different backends directly from globals.yml file if possible. > The LVM driver's volume_group option is significant to each LVM backend, but only to the LVM backends on that controller. In other words, two controllers can each have an LVM backend using the same "cinder-volumes" volume group. But if a controller is configured with multiple LVM backends, each backend must be configured with a unique volume_group. So, the answer to your question, "was it possible to keep the same nomination for both?" is yes. I'm not familiar with kolla-ansible and its globals.yml file, so I don't know if that file can be leveraged to provide a different volume_group value to each controller. The file name suggests it contains global settings that would be common to every node. You'll need to find a way to specify the value for the lvm-2 backend (the one that doesn't use "cinder-volumes"). Also bear in mind that "cinder-volumes" is the default value [1], so you don't even need to specify that for the backend that *is* using that value. [1] https://github.com/openstack/cinder/blob/4c9b76b9373a85f8dfae28f240bb130525e777af/cinder/volume/drivers/lvm.py#L48 Alan On Fri, Jan 20, 2023, 20:51 Alan Bishop wrote: > >> >> >> On Wed, Jan 18, 2023 at 6:38 AM A Monster wrote: >> >>> I have an openstack configuration, with 3 controller nodes and multiple >>> compute nodes , one of the controllers has an LVM storage based on HDD >>> drives, while another one has an SDD one, and when I tried to configure the >>> two different types of storage as cinder backends I faced a dilemma since >>> according to the documentation I have to specify the two different backends >>> in the cinder configuration as it is explained here >>> >>> however and since I want to separate disks type when creating volumes, I >>> had to specify different backend names, but I don't know if this >>> configuration should be written in both the storage nodes, or should I >>> specify for each one of these storage nodes the configuration related to >>> its own type of disks. >>> >> >> The key factor in understanding how to configure the cinder-volume >> services for your use case is knowing how the volume services operate and >> how they interact with the other cinder services. In short, you only define >> backends in the cinder-volume service that "owns" that backend. 
If >> controller-X only handles lvm-X, then you only define that backend on that >> controller. Don't include any mention of lvm-Y if that one is handled by >> another controller. The other services (namely the api and schedulers) >> learn about the backends when each of them reports its status via cinder's >> internal RPC framework. >> >> This means your lvm-1 service running on one controller should only have >> the one lvm-1 backend (with enabled_backends=lvm-1), and NO mention at all >> to the lvm-3 backend on the other controller. Likewise, the other >> controller should only contain the lvm-3 backend, with its >> enabled_backends=lvm-3. >> >> >>> Now, I tried writing the same configuration for both nodes, but I found >>> out that the volume service related to server1 concerning disks in server2 >>> is down, and the volume service in server2 concerning disks in server1 is >>> also down. >>> >>> $ openstack volume service >>> list+------------------+---------------------+------+---------+-------+----------------------------+| >>> Binary | Host | Zone | Status | State | Updated At >>> |+------------------+---------------------+------+---------+-------+----------------------------+| >>> cinder-scheduler | controller-01 | nova | enabled | up | >>> 2023-01-18T14:27:51.000000 || cinder-scheduler | controller-02 | nova | >>> enabled | up | 2023-01-18T14:27:41.000000 || cinder-scheduler | >>> controller-03 | nova | enabled | up | 2023-01-18T14:27:50.000000 || >>> cinder-volume | controller-03 at lvm-1 | nova | enabled | up | >>> 2023-01-18T14:27:42.000000 || cinder-volume | controller-01 at lvm-1 | >>> nova | enabled | down | 2023-01-18T14:10:00.000000 || cinder-volume | >>> controller-01 at lvm-3 | nova | enabled | down | >>> 2023-01-18T14:09:42.000000 || cinder-volume | controller-03 at lvm-3 | >>> nova | enabled | down | >>> 2023-01-18T12:12:19.000000|+------------------+---------------------+------+---------+-------+----------------------------+ >>> >>> >> Unless you do a fresh deployment, you will need to remove the invalid >> services that will always be down. Those would be the ones on controller-X >> where the backend is actually on controller-Y. You'll use the cinder-manage >> command to do that. From the data you supplied, it seems the lvm-1 backend >> is up on controller03, and the lvm-3 backend on that controller is down. >> The numbering seems backwards, but I stick with this example. To delete the >> lvm-3 backend, which is down because that backend is actually on another >> controller, you'd issue this command: >> >> $ cinder-manage service remove cinder-volume controller-03 at lvm-3 >> >> Don't worry if you accidentally delete a "good" service. The list will be >> refreshed each time the cinder-volume services refresh their status. >> >> >>> This is the configuration I have written on the configuration files for >>> cinder_api _cinder_scheduler and cinder_volume for both servers. 
>>> >>> enabled_backends= lvm-1,lvm-3 >>> [lvm-1] >>> volume_group = cinder-volumes >>> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver >>> volume_backend_name = lvm-1 >>> target_helper = lioadm >>> target_protocol = iscsi >>> report_discard_supported = true >>> [lvm-3] >>> volume_group=cinder-volumes-ssd >>> volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver >>> volume_backend_name=lvm-3 >>> target_helper = lioadm >>> target_protocol = iscsi >>> report_discard_supported = true >>> >> >> At a minimum, on each controller you need to remove all references to the >> backend that's actually on the other controller. The cinder-api and >> cinder-scheduler services don't need any backend configuration. That's >> because the backend sections and enabled_backends options are only relevant >> to the cinder-volume service, and are ignored by the other services. >> >> Alan >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jlibosva at redhat.com Mon Jan 23 15:46:19 2023 From: jlibosva at redhat.com (Jakub Libosvar) Date: Mon, 23 Jan 2023 10:46:19 -0500 Subject: [Neutron] Bug Deputy Report January 16 - 23 Message-ID: <89679253-7779-4576-A27B-1120F9096048@redhat.com> Hi, I was bug deputy last week. There is one critical bug but Slawek is on top of it. Here is the rest of the report: Critical - tempest slow jobs fails https://bugs.launchpad.net/neutron/+bug/2003063 Proposed patch: https://review.opendev.org/c/openstack/neutron/+/871272 Assigned to Slawek High - [FT] Error in ?test_arp_spoof_doesnt_block_ipv6? https://bugs.launchpad.net/neutron/+bug/2003196 Patch to gather more information: https://review.opendev.org/c/openstack/neutron/+/871101 Assigned to Rodolfo - [ovn] MTU issues due to centralized vlan provider networks https://bugs.launchpad.net/neutron/+bug/2003455 Assigned to Luis Medium - DVR HA router gets stuck in backup state https://bugs.launchpad.net/neutron/+bug/2003359 Needs an assignee - Floating IP stuck in snat-ns after binding host to associated fixed IP https://bugs.launchpad.net/neutron/+bug/2003532 Needs an assignee but Rodolfo attempts to reproduce - Some port attributes are ignored in bulk port create: allowed_address_pairs, extra_dhcp_opts https://bugs.launchpad.net/neutron/+bug/2003553 Proposed patch: https://review.opendev.org/c/openstack/neutron/+/871294 Assigned to Bence - [OVN] Security group logging only logs half of the connection https://bugs.launchpad.net/neutron/+bug/2003706 Assigned to Elvira Low - neutron-keepalived-state-change unconditional debug mode https://bugs.launchpad.net/neutron/+bug/2003534 Proposed fix: https://review.opendev.org/c/openstack/neutron/+/871274 Assigned to Rodolfo RFEs: - [RFE] Provide Port Binding Information for Manila Share Server Live Migration https://bugs.launchpad.net/neutron/+bug/2003095 From senrique at redhat.com Mon Jan 23 15:52:18 2023 From: senrique at redhat.com (Sofia Enriquez) Date: Mon, 23 Jan 2023 15:52:18 +0000 Subject: [outreachy] Call for Outreachy mentors and mentoring organizations for May 2023 internships Message-ID: Outreachy is seeking open source and open science communities to mentor interns! Outreachy is hosting a live streamed chat with past Outreachy mentors. If you know someone interested in mentoring, please encourage them to attend! See the "Mentor Chats" section below for details. Important dates: - Jan. 17 at 3pm UTC - Mentor chat on YouTube and PeerTube - Feb. 6 at 3pm UTC - Mentor chat on YouTube and PeerTube - Feb. 
10 at 4pm UTC - Deadline for open source and open science communities to sign up to be an Outreachy mentoring organization - Feb. 24 at 4pm UTC - Deadline for mentors to submit project descriptions for May 2023 interns to work on What is Outreachy? --- Outreachy is a paid, remote internship program. Outreachy promotes diversity in open source and open science. Our internships are for people who face under-representation, and discrimination or systemic bias in the technology industry of their country. Open source and open science communities can apply to be an Outreachy mentoring organization here: https://www.outreachy.org/communities/cfp/ Please see our blog post for additional details: https://www.outreachy.org/blog/2023-01-05/may-2023-call-for-mentoring-organizations/ Schedule -------- Jan. 17, 2023 at 3pm UTC - Mentor chat on YouTube and PeerTube Feb. 6, 2023 at 3pm UTC - Mentor chat on YouTube and PeerTube Feb. 10, 2023 - last day for mentoring organizations to sign up Feb. 24, 2023 - last day for mentors to submit projects Mar. 6, 2023 at 4pm UTC - contribution period opens Apr. 3, 2023 at 4pm UTC - contribution period closes May 4, 2023 - interns announced May 29, 2023 to Aug. 25, 2023 - internship -- Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From wassilij.kaiser at dhbw-mannheim.de Mon Jan 23 16:36:23 2023 From: wassilij.kaiser at dhbw-mannheim.de (Kaiser Wassilij) Date: Mon, 23 Jan 2023 17:36:23 +0100 (CET) Subject: Snapshot error In-Reply-To: References: Message-ID: <949381210.20111.1674491784040@ox.dhbw-mannheim.de> Hi Dmitriy Rabotyagov, glance --version reports 3.6.0. Where do I find this information, "backend driver do you use for glance"? Did you mean the glance-api service? > openstack-discuss-request at lists.openstack.org wrote on 23 January 2023 at 16:29: > > > Send openstack-discuss mailing list submissions to > openstack-discuss at lists.openstack.org > > To subscribe or unsubscribe via the World Wide Web, visit > https://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss > > or, via email, send a message with subject or body 'help' to > openstack-discuss-request at lists.openstack.org > > You can reach the person managing the list at > openstack-discuss-owner at lists.openstack.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of openstack-discuss digest..." > > > Today's Topics: > > 1. Re: Snapshot error (Dmitriy Rabotyagov) > 2. Re: Snapshots disappear during saving (Karera Tony) > 3. Re: [kolla-ansible] [cinder] Setting up multiple LVM cinder > backends located on different servers (Alan Bishop) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Mon, 23 Jan 2023 15:56:16 +0100 > From: Dmitriy Rabotyagov > Cc: openstack-discuss at lists.openstack.org > Subject: Re: Snapshot error > Message-ID: > > Content-Type: text/plain; charset="UTF-8" > > Hi there, > > Can you kindly describe what the error actually is? Also what backend > driver do you use for glance? > > As the mentioned bug should be already covered by 25.2.0 release and > it was limited to Swift backend from what I got. > > On Mon, 23 Jan 2023 at 15:42, Kaiser Wassilij > : > > > > > > > > Hello, > > > > I upgraded from Victoria to Yoga.
> > > > DISTRIB_ID="OSA" > > DISTRIB_RELEASE="25.2.0" > > DISTRIB_CODENAME="Yoga" > > DISTRIB_DESCRIPTION="OpenStack-Ansible" > > > > I have error > > > > infra1-glance-container-dc13a04b glance-wsgi-api[282311]: 2023-01-23 > > 13:25:06.745 282311 INFO glance.api.v2.image_data > > [req-10359154-0be4-4da1-9e85-2d94079c17b4 b2dca74976034d5b9925bdcb03470603 > > 021ce436ab004cde851055bac66370bc - default default] Unable to create trust: > > no such option collect_timing in group [keystone_authtoken] Use the existing > > user token. > > > > 2023-01-23 14:09:28.890 282298 DEBUG glance.api.v2.images > > [req-bf237e2c-9aca-4089-a56f-dfa245afdcc6 b2dca74976034d5b9925bdcb03470603 > > 021ce436ab004cde851055bac66370bc - default default] The 'locations' list of > > image 2995602a-b5da-4758-a1f9-f6b083815f9b is empty _format_image > > /openstack/venvs/glance-25.2.0/lib/python3.8/site-packages/glance/api/v2/images.py > > > > https://bugs.launchpad.net/glance/+bug/1916052 > > > > > > > > Do any have the same error and how did you solve it > > > > > > Kind regards > > > > > > > > ------------------------------ > > Message: 2 > Date: Mon, 23 Jan 2023 16:56:31 +0200 > From: Karera Tony > To: Sofia Enriquez > Cc: openstack-discuss > Subject: Re: Snapshots disappear during saving > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > Hello Sofia, > > It is actually Instance snapshot not Volume snapshot. > I click on create Snapshot on the Instance options. > > Regards > > Tony Karera > > > > > On Mon, Jan 23, 2023 at 3:59 PM Sofia Enriquez wrote: > > > Hi Karera, hope this email finds you well > > > > We need more information in order to reproduce this issue. > > > > - Do you mind sharing c-vol logs of the operation to see if there's any > > errors? > > - How do you create the snapshot? Do you mind sharing the steps to > > reproduce this? > > > > Thanks in advance, > > Sofia > > > > On Mon, Jan 23, 2023 at 1:20 PM Karera Tony wrote: > > > >> Dear Team, > >> > >> I am using Openstack Wallaby deployed using kolla-ansible. > >> > >> I installed Glance with the ceph backend and all was well. > >> However when I create snapshots, they disappear when they are saved. > >> > >> Any idea on how to resolve this? > >> > >> Regards > >> > >> Tony Karera > >> > >> > >> > > > > -- > > > > Sof?a Enriquez > > > > she/her > > > > Software Engineer > > > > Red Hat PnT > > > > IRC: @enriquetaso > > @RedHat Red Hat > > Red Hat > > > > > > > > > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: > > > ------------------------------ > > Message: 3 > Date: Mon, 23 Jan 2023 07:29:12 -0800 > From: Alan Bishop > To: A Monster > Cc: openstack-discuss > Subject: Re: [kolla-ansible] [cinder] Setting up multiple LVM cinder > backends located on different servers > Message-ID: > > Content-Type: text/plain; charset="utf-8" > > On Sat, Jan 21, 2023 at 4:39 AM A Monster wrote: > > > First of all thank you for your answer, it's exactly what I was looking > > for, > > What is still ambiguous for me is the name of the volume group I specified > > in globals.yml file before running the deployment, the default value is > > cinder-volumes, however after I added the second lvm backend, I kept the > > same volume group for lvm-1 but chooses another name for lvm-2, was it > > possible to keep the same nomination for both ? If not how can I specify > > the different backends directly from globals.yml file if possible. 
> > > > The LVM driver's volume_group option is significant to each LVM backend, > but only to the LVM backends on that controller. In other words, two > controllers can each have an LVM backend using the same "cinder-volumes" > volume group. But if a controller is configured with multiple LVM backends, > each backend must be configured with a unique volume_group. So, the answer > to your question, "was it possible to keep the same nomination for both?" > is yes. > > I'm not familiar with kolla-ansible and its globals.yml file, so I don't > know if that file can be leveraged to provide a different volume_group > value to each controller. The file name suggests it contains global > settings that would be common to every node. You'll need to find a way to > specify the value for the lvm-2 backend (the one that doesn't use > "cinder-volumes"). Also bear in mind that "cinder-volumes" is the default > value [1], so you don't even need to specify that for the backend that *is* > using that value. > > [1] > https://github.com/openstack/cinder/blob/4c9b76b9373a85f8dfae28f240bb130525e777af/cinder/volume/drivers/lvm.py#L48 > > Alan > > On Fri, Jan 20, 2023, 20:51 Alan Bishop wrote: > > > >> > >> > >> On Wed, Jan 18, 2023 at 6:38 AM A Monster wrote: > >> > >>> I have an openstack configuration, with 3 controller nodes and multiple > >>> compute nodes , one of the controllers has an LVM storage based on HDD > >>> drives, while another one has an SDD one, and when I tried to configure > >>> the > >>> two different types of storage as cinder backends I faced a dilemma since > >>> according to the documentation I have to specify the two different > >>> backends > >>> in the cinder configuration as it is explained here > >>> > >>> however and since I want to separate disks type when creating volumes, I > >>> had to specify different backend names, but I don't know if this > >>> configuration should be written in both the storage nodes, or should I > >>> specify for each one of these storage nodes the configuration related to > >>> its own type of disks. > >>> > >> > >> The key factor in understanding how to configure the cinder-volume > >> services for your use case is knowing how the volume services operate and > >> how they interact with the other cinder services. In short, you only define > >> backends in the cinder-volume service that "owns" that backend. If > >> controller-X only handles lvm-X, then you only define that backend on that > >> controller. Don't include any mention of lvm-Y if that one is handled by > >> another controller. The other services (namely the api and schedulers) > >> learn about the backends when each of them reports its status via cinder's > >> internal RPC framework. > >> > >> This means your lvm-1 service running on one controller should only have > >> the one lvm-1 backend (with enabled_backends=lvm-1), and NO mention at all > >> to the lvm-3 backend on the other controller. Likewise, the other > >> controller should only contain the lvm-3 backend, with its > >> enabled_backends=lvm-3. > >> > >> > >>> Now, I tried writing the same configuration for both nodes, but I found > >>> out that the volume service related to server1 concerning disks in server2 > >>> is down, and the volume service in server2 concerning disks in server1 is > >>> also down. 
> >>> > >>> $ openstack volume service > >>> list+------------------+---------------------+------+---------+-------+----------------------------+| > >>> Binary | Host | Zone | Status | State | Updated At > >>> |+------------------+---------------------+------+---------+-------+----------------------------+| > >>> cinder-scheduler | controller-01 | nova | enabled | up | > >>> 2023-01-18T14:27:51.000000 || cinder-scheduler | controller-02 | nova | > >>> enabled | up | 2023-01-18T14:27:41.000000 || cinder-scheduler | > >>> controller-03 | nova | enabled | up | 2023-01-18T14:27:50.000000 || > >>> cinder-volume | controller-03 at lvm-1 | nova | enabled | up | > >>> 2023-01-18T14:27:42.000000 || cinder-volume | controller-01 at lvm-1 | > >>> nova | enabled | down | 2023-01-18T14:10:00.000000 || cinder-volume | > >>> controller-01 at lvm-3 | nova | enabled | down | > >>> 2023-01-18T14:09:42.000000 || cinder-volume | controller-03 at lvm-3 | > >>> nova | enabled | down | > >>> 2023-01-18T12:12:19.000000|+------------------+---------------------+------+---------+-------+----------------------------+ > >>> > >>> > >> Unless you do a fresh deployment, you will need to remove the invalid > >> services that will always be down. Those would be the ones on controller-X > >> where the backend is actually on controller-Y. You'll use the cinder-manage > >> command to do that. From the data you supplied, it seems the lvm-1 backend > >> is up on controller03, and the lvm-3 backend on that controller is down. > >> The numbering seems backwards, but I stick with this example. To delete the > >> lvm-3 backend, which is down because that backend is actually on another > >> controller, you'd issue this command: > >> > >> $ cinder-manage service remove cinder-volume controller-03 at lvm-3 > >> > >> Don't worry if you accidentally delete a "good" service. The list will be > >> refreshed each time the cinder-volume services refresh their status. > >> > >> > >>> This is the configuration I have written on the configuration files for > >>> cinder_api _cinder_scheduler and cinder_volume for both servers. > >>> > >>> enabled_backends= lvm-1,lvm-3 > >>> [lvm-1] > >>> volume_group = cinder-volumes > >>> volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver > >>> volume_backend_name = lvm-1 > >>> target_helper = lioadm > >>> target_protocol = iscsi > >>> report_discard_supported = true > >>> [lvm-3] > >>> volume_group=cinder-volumes-ssd > >>> volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver > >>> volume_backend_name=lvm-3 > >>> target_helper = lioadm > >>> target_protocol = iscsi > >>> report_discard_supported = true > >>> > >> > >> At a minimum, on each controller you need to remove all references to the > >> backend that's actually on the other controller. The cinder-api and > >> cinder-scheduler services don't need any backend configuration. That's > >> because the backend sections and enabled_backends options are only relevant > >> to the cinder-volume service, and are ignored by the other services. > >> > >> Alan > >> > >> > > > -------------- next part -------------- > An HTML attachment was scrubbed... 
> URL: > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > openstack-discuss mailing list > openstack-discuss at lists.openstack.org > > > ------------------------------ > > End of openstack-discuss Digest, Vol 51, Issue 69 > ************************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: image/png Size: 3626 bytes Desc: not available URL: From opensrloo at gmail.com Mon Jan 23 16:51:53 2023 From: opensrloo at gmail.com (Ruby Loo) Date: Mon, 23 Jan 2023 11:51:53 -0500 Subject: [ironic] moving on Message-ID: Hi ironic'ers, I'm moving on to a non-Ironic, non-OpenStack world. I'm glad to have been on this journey. The ironic community has been, and continues to be, great -- welcoming and helpful. Keep up the great work and the big visions [1]. I look forward to Ironic taking over the world ;) Thanks to everyone that I encountered; YOU are what made it so special and fun! --ruby [1] https://docs.openstack.org/ironic/latest/contributor/vision.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.matulis at canonical.com Mon Jan 23 19:14:52 2023 From: peter.matulis at canonical.com (Peter Matulis) Date: Mon, 23 Jan 2023 14:14:52 -0500 Subject: [charms] OpenStack Charms Zed release is now available Message-ID: The Zed release of the OpenStack Charms is now available. Please see the Release notes for full details: https://docs.openstack.org/charm-guide/latest/release-notes/zed.html -- This is the second release since the OpenStack Charms project changed from a time-based release to one based on charm payload versions. Individual charms are no longer developed to work with every supported OpenStack release. Charms now leverage Charmhub tracks, each of which determines a supported payload. See the following resource for details: https://docs.openstack.org/charm-guide/latest/project/charm-delivery.html == Highlights == * Channel migrations Much effort in this cycle was directed at migrating previous OpenStack releases (Queens through Xena) to channels in the Charmhub store. * OpenStack Zed OpenStack Zed is now supported on Ubuntu 22.04 LTS (via UCA) and Ubuntu 22.10 natively. * OVN 22.03 on Focal There is better support for OVN 22.03 on Focal via a new OVN specific UCA pocket. * COS Lite support in the ceph-mon charm Support for sending metrics to the prometheus-k8s charm in the COS Lite observability stack has been added to the ceph-mon charm. Consequently, support for the prometheus2 charm is now deprecated. * NVIDIA vGPU Virtual Workstation The Nova vGPU features in the Nova Compute charms were validated for use as the graphical display driver for Virtual Workstation usage. * Documentation updates Documentation highlights include: - Completion of Deploy Guide to Charm Guide content migration - Improvements to: - Getting Started tutorial - Documentation contributor content - Upgrade pages - OVN pages - SR-IOV page - Charm delivery page == OpenStack Charms team == The OpenStack Charms team can be contacted in the Juju user forum [0] (tag 'openstack') or by chat in the OpenStack Charms Mattermost channel [1]. 
[0]: https://discourse.charmhub.io/tags/c/juju/6/openstack [1]: https://chat.charmhub.io/charmhub/channels/openstack-charms == Thank you == Lots of thanks to the 56 contributors below who squashed bugs, enabled new features, and improved the documentation! Alex Kavanagh Aliaksandr Vasiuk Billy Olsen Brett Milford Chi Wai, Chan Chris MacNaughton Connor Chamberlain Corey Bryant David Andersson Dmitrii Shcherbakov Edin Sarajlic Edward Hope-Morley Erhan Sunar Ethan Myers Felipe Reyes Francesco de Simone Frode Nordahl Gabriel Adrian Samfira Gokhan Cetinkaya Guilherme Maluf Balzana Hemanth Nakkina Hicham El Gharbi James Page John P Lettman Jorge Merlino Juan Pablo Norena Ksawery Dzieko?ski Liam Young Luciano Lo Giudice Marcus Boden Martin Kalcok Mert K?rp?c? Muhammad Ahmad Mustafa Kemal Gilor Nicholas Malacarne Nobuto Murata NucciTheBoss Pedro Castillo Peter Matulis Peter Sabaini Rodrigo Barbieri Samuel Walladge Simon Dodsley Sudeep Bhandari Tiago Pasqualini Tianqi Xiao Tilman Baumann Utkarsh Bhatt dongdong tao fdesi jneo8 ljhuang niuke peppepetra86 sudeephb tushargite96 -- OpenStack Charms Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Mon Jan 23 20:10:40 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Mon, 23 Jan 2023 12:10:40 -0800 Subject: [ironic] moving on In-Reply-To: References: Message-ID: Thank you for your years and years of service to OpenStack in general, and Ironic specifically. You've helped shape both the software and many of the people (including me!) who will keep working on it. As mentioned in the Ironic meeting this morning ( https://meetings.opendev.org/meetings/ironic/2023/ironic.2023-01-23-15.01.html), we are moving Ruby to core emeritus status; removing her core contributor access. It will be restored if/when Ruby begins working on Ironic again. --Jay On Mon, Jan 23, 2023 at 9:02 AM Ruby Loo wrote: > Hi ironic'ers, > > I'm moving on to a non-Ironic, non-OpenStack world. I'm glad to have been > on this journey. The ironic community has been, and continues to be, great > -- welcoming and helpful. Keep up the great work and the big visions [1]. I > look forward to Ironic taking over the world ;) > > Thanks to everyone that I encountered; YOU are what made it so special and > fun! > > --ruby > > [1] https://docs.openstack.org/ironic/latest/contributor/vision.html > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Tue Jan 24 01:18:27 2023 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 23 Jan 2023 17:18:27 -0800 Subject: [ptl][tc] OpenStack packages PyPi additional external maintainers audit & cleanup In-Reply-To: <185d18a20aa.1206b91ad115363.5205111285046207324@ghanshyammann.com> References: <185d18a20aa.1206b91ad115363.5205111285046207324@ghanshyammann.com> Message-ID: Hi Ghanshyam and TC, This process seems a bit uncomfortable to me and I think we should have a wider discussion on this topic. Full disclosure: I am the creator and maintainer of some projects on PyPi that openstackci releases packages to. Over the years since I created those projects and added openstackci to them, there have been multiple occasions where maintenance was required directly on PyPi (or via twine). This includes updating product descriptions, links, and as of last year enabling mandatory 2FA. As you probably know, not all of that has been possible (or just worked) via the setup.cfg/Readme in the code repository. 
Historically, I don't think anyone in infra or the release team has monitored the PyPi projects and maintained those settings on a regular basis. We pretty much leave it to the automated release tools and poke at it if something goes wrong. Historically part of the project creation steps required us to already have the PyPi projects setup[1] prior to attempting to become an OpenStack project. The "Project Creator Guide" (Which is no longer part of or linked from the OpenStack documentation[2], so maybe we aren't accepting new projects to OpenStack?) then had us add "openstackci" to the project if we were opting to have the release team release our packages. This is not a documented requirement that I am aware of and may be a gap caused by the openinfra split. It also seems odd that we would remove the project creator from their own project just for contributing it to OpenStack. We don't celebrate the effort and history of contributors or projects much anymore. I think there is value in having more than one account have access to the projects on PyPi. For one, if the openstackci account is compromised (via an insider or other), there is another account that can quickly disable the compromised account and yank a compromised release. Likewise, given the limited availability of folks with access to the openstackci account, there is value in having the project owner be able to yank a compromised release without waiting for folks to return from vacation, etc. All of that said, I see the security implications of having abandoned accounts or excessively wide access (the horizon case) to projects published on PyPi. I'm just not sure removing the project creator's access will really solve all of the issues around software traceability and OpenStack. Releases can still be pushed to PyPi maliciously via openstackci or PyPi compromise. I think we should also discuss the following improvements: 1. We PGP sign these releases with an OpenStack key, but we don't upload the .asc file with the packages to PyPi. Why don't we do this to help folks have an easy way to validate that the package came from the OpenStack releases process? 2. With these signatures, we can automate tools to validate that releases were signed by the OpenStack release process and raise an alert if they are invalid. 3. Maybe we should have a system that subscribes to the PyPi release history RSS feed for each managed OpenStack project and validates the RSS list against the releases repository information. This could then notify a release-team email list that an unexpected release has been posted to PyPi. Anyone should be able to subscribe to this list. 4. If we decide that removing maintainer access to projects is a barrier to adding them to OpenStack, we should document this clearly. I think we have some options to consider beyond the "remove everyone but openstackci from the project" or "kick the project out of OpenStack"[3]. 
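To make #3 a bit more concrete, here is a rough, untested sketch of the kind of check I have in mind. The project name and expected-version set below are placeholders; in practice the expected versions would be read from the openstack/releases deliverable data, and the result would be mailed to a release-team list rather than printed:

import urllib.request
import xml.etree.ElementTree as ET

def pypi_release_versions(project):
    # PyPI publishes a per-project release feed; each <item><title> is a version.
    url = f"https://pypi.org/rss/project/{project}/releases.xml"
    with urllib.request.urlopen(url) as resp:
        return {item.findtext("title") for item in ET.parse(resp).iter("item")}

def unexpected_releases(project, expected_versions):
    # Anything PyPI advertises that our releases data does not know about is suspect.
    return pypi_release_versions(project) - set(expected_versions)

# Placeholder values, for illustration only.
suspects = unexpected_releases("octavia", {"12.0.0"})
if suspects:
    print("Unexpected PyPI releases: %s" % sorted(suspects))

Something along those lines, run on a schedule for every managed deliverable, would give us an independent alarm that does not depend on who holds PyPi account access.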
Michael [1] https://github.com/openstack-archive/infra-manual/blob/caa430c1345f1c1aef17919f1c8d228dc652758b/doc/source/creators.rst#give-openstack-permission-to-publish-releases [2] https://docs.openstack.org/zed/ [3] https://etherpad.opendev.org/p/openstack-pypi-maintainers-cleanup#L17 On Fri, Jan 20, 2023 at 3:36 PM Ghanshyam Mann wrote: > > Hi PTLs, > > As you might know or have seen for your project package on PyPi, OpenStack deliverables on PyPi have > additional maintainers, For example, https://pypi.org/project/murano/, https://pypi.org/project/glance/ > > We should keep only 'openstackci' as a maintainer in PyPi so that releases of OpenStack deliverables > can be managed in a single place. Otherwise, we might face the two sets of maintainers' places and > packages might get released in PyPi by additional maintainers without the OpenStack project team > knowing about it. One such case is in Horizon repo 'xstatic-font-awesome' where a new maintainer is > added by an existing additional maintainer and this package was released without the Horizon team > knowing about the changes and release. > - https://github.com/openstack/xstatic-font-awesome/pull/2 > > To avoid the 'xstatic-font-awesome' case for other packages, TC discussed it in their weekly meetings[1] > and agreed to audit all the OpenStack packages and then clean up the additional maintainers in PyPi > (keep only 'openstackci' as maintainers). > > To help in this task, TC requests project PTL to perform the audit for their project's repo and add comments > in the below etherpad. > > - https://etherpad.opendev.org/p/openstack-pypi-maintainers-cleanup > > Thanks to knikolla to automate the listing of the OpenStack packages with additional maintainers in PyPi which > you can find the result in output.txt at the bottom of this link. I have added the project list of who needs to check > their repo in etherpad. > > - https://gist.github.com/knikolla/7303a65a5ddaa2be553fc6e54619a7a1 > > Please complete the audit for your project before March 15 so that TC can discuss the next step in vPTG. > > [1] https://meetings.opendev.org/meetings/tc/2023/tc.2023-01-11-16.00.log.html#l-41 > > > -gmann > From gmann at ghanshyammann.com Tue Jan 24 04:07:52 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 23 Jan 2023 20:07:52 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2023 Jan 25 at 1600 UTC Message-ID: <185e1f5fa51.11f7e8c59257921.1166182092977069185@ghanshyammann.com> Hello Everyone, The technical Committee's next weekly meeting is scheduled for 2023 Jan 25, at 1600 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Tuesday, Jan 24 at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From elfosardo at gmail.com Tue Jan 24 08:52:52 2023 From: elfosardo at gmail.com (Riccardo Pittau) Date: Tue, 24 Jan 2023 09:52:52 +0100 Subject: [ironic] moving on In-Reply-To: References: Message-ID: Very sad news, thanks for all your work for the ironic community, you will be missed! Riccardo On Mon, Jan 23, 2023 at 9:16 PM Jay Faulkner wrote: > Thank you for your years and years of service to OpenStack in general, and > Ironic specifically. You've helped shape both the software and many of the > people (including me!) who will keep working on it. 
> > As mentioned in the Ironic meeting this morning ( > https://meetings.opendev.org/meetings/ironic/2023/ironic.2023-01-23-15.01.html), > we are moving Ruby to core emeritus status; removing her core contributor > access. It will be restored if/when Ruby begins working on Ironic again. > > --Jay > > On Mon, Jan 23, 2023 at 9:02 AM Ruby Loo wrote: > >> Hi ironic'ers, >> >> I'm moving on to a non-Ironic, non-OpenStack world. I'm glad to have been >> on this journey. The ironic community has been, and continues to be, great >> -- welcoming and helpful. Keep up the great work and the big visions [1]. I >> look forward to Ironic taking over the world ;) >> >> Thanks to everyone that I encountered; YOU are what made it so special >> and fun! >> >> --ruby >> >> [1] https://docs.openstack.org/ironic/latest/contributor/vision.html >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wassilij.kaiser at dhbw-mannheim.de Tue Jan 24 09:42:24 2023 From: wassilij.kaiser at dhbw-mannheim.de (Kaiser Wassilij) Date: Tue, 24 Jan 2023 10:42:24 +0100 (CET) Subject: Error Nova Compute during snapshot Message-ID: <1373764481.24691.1674553344599@ox.dhbw-mannheim.de> Hi, I start creating snapshot. Wherever the instance is, I start journalctl -f . The following error occurs in libvirtd # Ansible managed DISTRIB_ID="OSA" DISTRIB_RELEASE="25.2.0" DISTRIB_CODENAME="Yoga" DISTRIB_DESCRIPTION="OpenStack-Ansible" internal error: cannot update AppArmor profile 'libvirt-c6aa0368-8ae5-4fe4-8ae5-93a92329aa74' Jan 23 16:04:42 bc2bl13 libvirtd[1321612]: Unable to restore security label on /var/lib/nova/instances/snapshots/tmpyhucdu8x/37698cb66b8a44599601c5166902552c.delta Jan 23 16:09:37 bc2bl13 libvirtd[1321612]: invalid argument: disk vda does not have an active block job Jan 23 16:09:46 bc2bl13 libvirtd[1321612]: internal error: Child process (LIBVIRT_LOG_OUTPUTS=3:stderr /usr/lib/libvirt/virt-aa-helper -r -u libvirt-c6aa0368-8ae5-4fe4-8ae5-93a92329aa74) unexpected exit status 1: 2023-01-23 16:09:46.640+0000: 3744159: info : libvirt version: 8.0.0, package: 1ubuntu7.1~cloud0 (Openstack Ubuntu Testing Bot Wed, 25 May 2023 libvirt version: 8.0.0, package: 1ubuntu7.1~cloud0 (Openstack Ubuntu Testing Bot Wed, 25 May 2022 14:51:12 +0000) 2023-01-23 16:09:46.640+0000: 3744159: info : hostname: bc2bl13 2023-01-23 16:09:46.640+0000: 3744159: error : virDomainDiskDefMirrorParse:8800 : unsupported configuration: unknown mirror job type '' virt-aa-helper: error: could not parse XML virt-aa-helper: error: could not get VM definition Jan 23 16:09:46 bc2bl13 libvirtd[1321612]: internal error: cannot update AppArmor profile 'libvirt-c6aa0368-8ae5-4fe4-8ae5-93a92329aa74' Jan 23 16:09:46 bc2bl13 libvirtd[1321612]: Unable to restore security label on /var/lib/nova/instances/snapshots/tmp_a9upl2m/d3d15a89bd6a4dcb8255cfe50d2adb68.delta Jan 24 09:17:56 bc2bl13 libvirtd[1321612]: invalid argument: disk vda does not have an active block job Jan 24 09:18:09 bc2bl13 libvirtd[1321612]: internal error: Child process (LIBVIRT_LOG_OUTPUTS=3:stderr /usr/lib/libvirt/virt-aa-helper -r -u libvirt-c6aa0368-8ae5-4fe4-8ae5-93a92329aa74) unexpected exit status 1: 2023-01-24 09:18:09.865+0000: 4081179: info : info : libvirt version: 8.0.0, package: 1ubuntu7.1~cloud0 (Openstack Ubuntu Testing Bot Wed, 25 May 2023 libvirt version: 8.0.0, package: 1ubuntu7.1~cloud0 (Openstack Ubuntu Testing Bot Wed, 25 May 2022 14:51:12 +0000) 2023-01-24 09:18:09.865+0000: 4081179: info : hostname: bc2bl13 2023-01-24 09:18:09.865+0000: 
4081179: error : virDomainDiskDefMirrorParse:8800 : unsupported configuration: unknown mirror job type '' virt-aa-helper: error: could not parse XML virt-aa-helper: error: could not get VM definition Jan 24 09:18:09 bc2bl13 libvirtd[1321612]: internal error: cannot update AppArmor profile 'libvirt-c6aa0368-8ae5-4fe4-8ae5-93a92329aa74' Jan 24 09:18:09 bc2bl13 libvirtd[1321612]: Unable to restore security label on /var/lib/nova/instances/snapshots/tmpv4gdu8fe/b0b8a1ac3ea64e3ca74363175cc17bd4.delta has anyone had the same error. What have you done? -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Jan 24 11:29:18 2023 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 24 Jan 2023 12:29:18 +0100 Subject: [neutron] CI meeting 24.01.2023 cancelled Message-ID: <1847316.rDAACeFtYD@p1> Hi, Due to overlapping internal meeting this week I will not be able to chair neutron ci meeting. There is nothing critical to discuss there so with Rodolfo we decided to cancel that meeting this week. See You all on the CI meeting next week. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From alex.kavanagh at canonical.com Tue Jan 24 12:16:44 2023 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Tue, 24 Jan 2023 12:16:44 +0000 Subject: [charms] Release of charmhub Openstack dedicated ussuri, victoria, wallaby for Ubuntu 20.04 (focal) LTS Message-ID: Hello As the charm-guide [1] explains, OpenStack, Ceph, OVN and supporting charms moved from a single stable charm (for each OpenStack component) that supported multiple OpenStack releases, to individual tracks that each targets a specific release [2]. The current Zed release also recently [6] became stable. Today, we have (re)released updated individual charms to the ussuri/stable, victoria/stable and wallaby/stable tracks. These are derived from the stable 21.10 charms (that supported queens -> xena). We have also released the Ceph octopus/stable track to support ussuri and victoria, and the existing Ceph pacific (which was already stable) that supports wallaby (and xena). The full list of charms for the Xena, Victoria and Wallaby OpenStack systems, includiong Ceph, OVN, mysql8, hacluster, rabbitmq-server, vault and others can be seen in the docs at [3]. The ussuri, victoria, wallaby, and xena tracks have been designed to be upgradable from the 21.10 charms if you are running the associated OpenStack version, but please do read the docs [4] about how to go about upgrading charms. The relevant track (e.g. victoria/stable) is also able to upgrade from the previous version. Please again consult the docs [5] for upgrading advice. This was a significant piece of work by all those involved, so many, many thanks to all who contributed and made this happen! Thanks! 
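For anyone who wants to try the new tracks straight away, the switch is a single channel change per application, along these lines (illustrative only; it assumes the application is already running the matching OpenStack release on the 21.10 charms, so please still read the charm upgrade guide [4] first and substitute your own application names and target track):

  juju refresh cinder --channel victoria/stable

and a new deployment can name its track directly:

  juju deploy keystone --channel wallaby/stable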
[1] https://docs.openstack.org/charm-guide/latest/project/charm-delivery.html [2] https://juju.is/docs/olm/deploy-a-charm-from-charmhub#heading--specify-a-charmed-operator-channel [3] https://docs.openstack.org/charm-guide/latest/project/charm-delivery.html#channels-and-tracks-for-openstack-charms [4] https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/upgrade-charms.html [5] https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/upgrade-openstack.html [6] https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031872.html -- Alex Kavanagh OpenStack Engineering - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Jan 24 13:04:23 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 24 Jan 2023 13:04:23 +0000 Subject: [ptl][tc] OpenStack packages PyPi additional external maintainers audit & cleanup In-Reply-To: References: <185d18a20aa.1206b91ad115363.5205111285046207324@ghanshyammann.com> Message-ID: <20230124130423.l3cikmmdnij4r3my@yuggoth.org> On 2023-01-23 17:18:27 -0800 (-0800), Michael Johnson wrote: [...] > Historically part of the project creation steps required us to > already have the PyPi projects setup[1] prior to attempting to > become an OpenStack project. The "Project Creator Guide" (Which is > no longer part of or linked from the OpenStack documentation[2], > so maybe we aren't accepting new projects to OpenStack?) then had > us add "openstackci" to the project if we were opting to have the > release team release our packages. This is not a documented > requirement that I am aware of and may be a gap caused by the > openinfra split. [...] It was removed because it became increasingly impossible to describe reliably. The maintainers for Warehouse (the software which currently implements PyPI) removed the old registration Web form and API methods which allowed pre-creation of projects in order to try to curb name squatting, but also made it so new projects are created automatically at initial upload. This means that in order to pre-create a project on PyPI these days, you have to manually create a minimal package and upload it. This became a significant blocker to people trying to add release jobs, so we made the decision to rely on release automation for project creation and advise new projects to tag or request an alpha release as early as possible in their formation. > 1. We PGP sign these releases with an OpenStack key, but we don't > upload the .asc file with the packages to PyPi. Why don't we do this > to help folks have an easy way to validate that the package came from > the OpenStack releases process? [...] I wanted to do this from the very beginning, but the (then Cheeseshop, later Warehouse) maintainers repeatedly insisted that their opinion was the signature uploads provided no security benefit and they kept saying they were planning to remove that feature any day. Also during the transition from Cheeseshop to Warehouse, there was a span of several years where you could upload signatures but the WebUI didn't link to them anywhere so users couldn't easily find them anyway. When it became clear that work on PEP 458 had stalled out, they relented and made signatures accessible through Warehouse, but kept saying that was only a temporary measure which would be removed as soon as TUF was in place. > 2. 
With these signatures, we can automate tools to validate that > releases were signed by the OpenStack release process and raise an > alert if they are invalid. [...] We already upload them to tarballs.openstack.org and link them from the pages on releases.openstack.org, which should be sufficient to enable what you describe anyway without needing to also publish signatures to PyPI (the insistence that PyPI was removing signature uploading was a primary factor in our choice to continue hosting our own copies of release artifacts in the first place, for precisely this purpose). > I think we have some options to consider beyond the "remove everyone > but openstackci from the project" or "kick the project out of > OpenStack"[3]. [...] In the case of the project which triggered this discussion, it wasn't so much kicked out of OpenStack as the people in OpenStack with joint access to upload releases for it acknowledged that not everyone who was publishing releases wanted to do so from within OpenStack, so it's being relinquished to the other maintainers and OpenStack will carry a fork instead if it becomes necessary to do so in order to not have two different "official" sources of truth for one package. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tonykarera at gmail.com Tue Jan 24 06:28:40 2023 From: tonykarera at gmail.com (Karera Tony) Date: Tue, 24 Jan 2023 08:28:40 +0200 Subject: Snapshots disappear during saving In-Reply-To: References: Message-ID: Hello Sofia, Below are the logs 24/Jan/2023 06:25:10] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 0.058022 2023-01-24 06:25:12.381 50 INFO glance.api.v2.image_data [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Unable to create trust: no such option collect_timing in group [keystone_authtoken] Use the existing user token. 
2023-01-24 06:25:12.470 50 WARNING glance_store._drivers.rbd [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Since image size is zero we will be doing resize-before-write which will be slower than normal 2023-01-24 06:25:13.010 51 INFO eventlet.wsgi.server [req-b91e54ba-f3fa-4fad-ab7a-1ef1ad2750fb f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:13] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008496 2023-01-24 06:25:15.657 52 INFO eventlet.wsgi.server [req-c9fa16c8-232d-45ff-b425-55504f332597 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:15] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008760 2023-01-24 06:25:16.404 50 ERROR glance_store._drivers.rbd [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Failed to store image 57e5c3ee-6576-4cf7-a72a-2038c86456bc Store Exception invalid literal for int() with base 16: b'': eventlet.wsgi.ChunkReadError: invalid literal for int() with base 16: b'' 2023-01-24 06:25:16.629 50 ERROR glance.api.v2.image_data [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Failed to upload image data due to HTTP error: webob.exc.HTTPBadRequest: invalid literal for int() with base 16: b'' 2023-01-24 06:25:16.697 50 INFO eventlet.wsgi.server [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Traceback (most recent call last): File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 604, in handle_one_response write(b''.join(towrite)) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 538, in write wfile.flush() File "/usr/lib/python3.8/socket.py", line 687, in write return self._sock.send(b) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", line 396, in send return self._send_loop(self.fd.send, data, flags) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", line 383, in _send_loop return send_method(data, *args) BrokenPipeError: [Errno 32] Broken pipe 2023-01-24 06:25:16.697 50 INFO eventlet.wsgi.server [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.30,10.10.13.28 - - [24/Jan/2023 06:25:16] "PUT /v2/images/57e5c3ee-6576-4cf7-a72a-2038c86456bc/file HTTP/1.1" 400 0 4.384073 2023-01-24 06:25:18.281 49 INFO eventlet.wsgi.server [req-dcd70adf-5dd4-4a15-abe0-f1e074030346 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:18] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 0.067056 2023-01-24 06:25:20.935 52 INFO eventlet.wsgi.server [req-065021ef-2fa4-456f-bad8-22e4a06fec9a f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:20] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 0.064581 2023-01-24 06:25:23.367 50 INFO eventlet.wsgi.server [-] 10.10.13.27 - - [24/Jan/2023 06:25:23] "GET / HTTP/1.1" 300 1517 0.003469 2023-01-24 06:25:23.583 51 INFO eventlet.wsgi.server 
[req-a0a51dc6-01c5-4dff-afe9-6c67a279e7fa f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:23] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 0.072727 2023-01-24 06:25:26.211 49 INFO eventlet.wsgi.server [req-052bc425-90c3-4e4e-94e3-15aba795c96b f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:26] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 0.055309 2023-01-24 06:25:28.039 49 INFO glance.api.v2.image_data [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Unable to create trust: no such option collect_timing in group [keystone_authtoken] Use the existing user token. 2023-01-24 06:25:28.168 49 WARNING glance_store._drivers.rbd [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Since image size is zero we will be doing resize-before-write which will be slower than normal 2023-01-24 06:25:28.887 51 INFO eventlet.wsgi.server [req-4664a441-2fa6-41fa-b985-b6290712e597 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:28] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008411 2023-01-24 06:25:31.541 52 INFO eventlet.wsgi.server [req-31a7d4fc-a266-40a7-9b0d-6ba1e8c0cf47 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:31] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008032 2023-01-24 06:25:34.168 52 INFO eventlet.wsgi.server [req-d8e4513e-9945-4f0c-9b2d-a1e191982f0b f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:34] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.009528 2023-01-24 06:25:36.798 52 INFO eventlet.wsgi.server [req-02c6e354-6137-4f31-9cf3-a2e9c0258938 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:36] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008272 2023-01-24 06:25:39.435 50 INFO eventlet.wsgi.server [req-a30d0de8-9b34-414f-81ef-9a3457cf2758 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:39] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008886 2023-01-24 06:25:42.059 52 INFO eventlet.wsgi.server [req-409a24f7-7d8e-4bb8-8cab-ef730cf35478 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:42] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008929 2023-01-24 06:25:44.709 52 INFO eventlet.wsgi.server [req-ebee0818-8afe-488e-a9f2-52bcdfb65983 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:44] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008267 2023-01-24 06:25:47.346 52 INFO eventlet.wsgi.server [req-71ad7a1a-acee-40f1-a3ed-70fe64428644 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:47] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008255 2023-01-24 06:25:50.005 50 INFO eventlet.wsgi.server [req-a69f1a44-7689-494c-9e84-c434a1ea9838 f75f32a3c1fd4cf68cdb7b76d70ee9a8 
bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:50] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008565 2023-01-24 06:25:52.631 50 INFO eventlet.wsgi.server [req-08c7e2db-c60e-4cfe-ab0e-78269924f60d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:52] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008409 2023-01-24 06:25:53.521 48 INFO eventlet.wsgi.server [-] 10.10.13.27 - - [24/Jan/2023 06:25:53] "GET / HTTP/1.1" 300 1517 0.001898 2023-01-24 06:25:55.271 51 INFO eventlet.wsgi.server [req-63d5645d-c9dc-4ec4-86b6-48305b05ebe8 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:55] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.005122 2023-01-24 06:25:57.911 50 INFO eventlet.wsgi.server [req-67b0f1e9-89a1-4e34-8f95-132f32f9c478 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:57] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.009235 2023-01-24 06:26:00.554 48 INFO eventlet.wsgi.server [req-cc7033b2-698b-4bd5-96cf-589d864485ff f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:00] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008637 2023-01-24 06:26:03.181 50 INFO eventlet.wsgi.server [req-26a2b823-8a26-49b8-9d79-af6bf36c0e2a f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:03] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.007718 2023-01-24 06:26:05.835 50 INFO eventlet.wsgi.server [req-1be8a06e-6c3a-4966-9031-a1548b6b0f7a f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:05] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008469 2023-01-24 06:26:08.483 50 INFO eventlet.wsgi.server [req-b24990c6-319d-4837-8128-406f49d38530 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:08] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008319 2023-01-24 06:26:11.134 50 INFO eventlet.wsgi.server [req-5393b80e-5069-47ee-a1f0-ef7174c1e09b f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:11] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008237 2023-01-24 06:26:13.765 50 INFO eventlet.wsgi.server [req-fcbc3f21-ddb5-4b3f-869c-b489ace9a453 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:13] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008331 2023-01-24 06:26:16.405 52 INFO eventlet.wsgi.server [req-20d98cec-4e1e-4582-aaa2-68043927ab7f f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:16] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008366 2023-01-24 06:26:19.028 50 INFO eventlet.wsgi.server [req-2f1bc312-00b8-4e92-91ac-a3ba1f951742 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:19] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008365 2023-01-24 06:26:20.032 49 ERROR glance_store._drivers.rbd [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff 
- default default] Failed to store image 57e5c3ee-6576-4cf7-a72a-2038c86456bc Store Exception unexpected end of file while parsing chunked data: OSError: unexpected end of file while parsing chunked data 2023-01-24 06:26:20.443 49 ERROR glance.api.v2.image_data [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Failed to upload image data due to internal error: OSError: unexpected end of file while parsing chunked data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Caught error: unexpected end of file while parsing chunked data: OSError: unexpected end of file while parsing chunked data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi Traceback (most recent call last): 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/wsgi.py", line 1353, in __call__ 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi action_result = self.dispatch(self.controller, action, 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/wsgi.py", line 1397, in dispatch 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return method(*args, **kwargs) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/utils.py", line 416, in wrapped 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return func(self, req, *args, **kwargs) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/api/v2/image_data.py", line 300, in upload 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self._restore(image_repo, image) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 227, in __exit__ 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self.force_reraise() 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 200, in force_reraise 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi raise self.value 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/api/v2/image_data.py", line 165, in upload 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi image.set_data(data, size, backend=backend) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/domain/proxy.py", line 208, in set_data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self.base.set_data(data, size, backend=backend, set_active=set_active) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/notifier.py", line 501, in set_data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi _send_notification(notify_error, 'image.upload', msg) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 227, in __exit__ 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self.force_reraise() 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 200, in force_reraise 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi raise 
self.value 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/notifier.py", line 447, in set_data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self.repo.set_data(data, size, backend=backend, 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/api/policy.py", line 273, in set_data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return self.image.set_data(*args, **kwargs) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/quota/__init__.py", line 322, in set_data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self.image.set_data(data, size=size, backend=backend, 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/location.py", line 567, in set_data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self._upload_to_store(data, verifier, backend, size) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/location.py", line 458, in _upload_to_store 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi multihash, loc_meta) = self.store_api.add_with_multihash( 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/multi_backend.py", line 398, in add_with_multihash 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return store_add_to_backend_with_multihash( 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/multi_backend.py", line 480, in store_add_to_backend_with_multihash 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi (location, size, checksum, multihash, metadata) = store.add( 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/driver.py", line 279, in add_adapter 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi metadata_dict) = store_add_fun(*args, **kwargs) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/capabilities.py", line 176, in op_checker 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return store_op_fun(store, *args, **kwargs) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/_drivers/rbd.py", line 629, in add 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi raise exc 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/_drivers/rbd.py", line 574, in add 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi for chunk in chunks: 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/common/utils.py", line 73, in chunkiter 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi chunk = fp.read(chunk_size) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/utils.py", line 294, in read 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi result = self.data.read(i) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/utils.py", line 121, in readfn 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi result = fd.read(*args) 2023-01-24 06:26:20.563 49 ERROR 
glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/format_inspector.py", line 658, in read 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi chunk = self._source.read(size) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 221, in read 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return self._chunked_read(self.rfile, length) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 192, in _chunked_read 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi raise IOError("unexpected end of file while parsing chunked data") 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi OSError: unexpected end of file while parsing chunked data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi 2023-01-24 06:26:20.570 49 INFO eventlet.wsgi.server [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Traceback (most recent call last): File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 604, in handle_one_response write(b''.join(towrite)) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 538, in write wfile.flush() File "/usr/lib/python3.8/socket.py", line 687, in write return self._sock.send(b) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", line 396, in send return self._send_loop(self.fd.send, data, flags) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", line 383, in _send_loop return send_method(data, *args) BrokenPipeError: [Errno 32] Broken pipe 2023-01-24 06:26:20.571 49 INFO eventlet.wsgi.server [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.30,10.10.13.28 - - [24/Jan/2023 06:26:20] "PUT /v2/images/57e5c3ee-6576-4cf7-a72a-2038c86456bc/file HTTP/1.1" 500 0 52.610723 2023-01-24 06:26:21.649 51 INFO eventlet.wsgi.server [req-9eea9898-f4e7-4cb1-b02c-019b56fa00a9 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:21] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 4774 0.068947 2023-01-24 06:26:23.635 50 INFO eventlet.wsgi.server [-] 10.10.13.27 - - [24/Jan/2023 06:26:23] "GET / HTTP/1.1" 300 1517 0.001586 ^C root at controller1:/home/stack# Regards Tony Karera On Mon, Jan 23, 2023 at 4:56 PM Karera Tony wrote: > Hello Sofia, > > It is actually Instance snapshot not Volume snapshot. > I click on create Snapshot on the Instance options. > > Regards > > Tony Karera > > > > > On Mon, Jan 23, 2023 at 3:59 PM Sofia Enriquez > wrote: > >> Hi Karera, hope this email finds you well >> >> We need more information in order to reproduce this issue. >> >> - Do you mind sharing c-vol logs of the operation to see if there's any >> errors? >> - How do you create the snapshot? Do you mind sharing the steps to >> reproduce this? >> >> Thanks in advance, >> Sofia >> >> On Mon, Jan 23, 2023 at 1:20 PM Karera Tony wrote: >> >>> Dear Team, >>> >>> I am using Openstack Wallaby deployed using kolla-ansible. >>> >>> I installed Glance with the ceph backend and all was well. >>> However when I create snapshots, they disappear when they are saved. >>> >>> Any idea on how to resolve this? 
>>> >>> Regards >>> >>> Tony Karera >>> >>> >>> >> >> -- >> >> Sofía Enriquez >> >> she/her >> >> Software Engineer >> >> Red Hat PnT >> >> IRC: @enriquetaso >> @RedHat Red Hat >> Red Hat >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonykarera at gmail.com Tue Jan 24 06:32:01 2023 From: tonykarera at gmail.com (Karera Tony) Date: Tue, 24 Jan 2023 08:32:01 +0200 Subject: Instance Snapshots disappear while being saved Message-ID: Dear Team, I have deployed OpenStack Wallaby using kolla-ansible. I faced no challenge during the installation. However, when I create an instance snapshot, it disappears while being saved, and below are the glance logs that seem to show errors. The Glance_backend_storage is ceph. Please assist. 24/Jan/2023 06:25:10] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 0.058022 2023-01-24 06:25:12.381 50 INFO glance.api.v2.image_data [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Unable to create trust: no such option collect_timing in group [keystone_authtoken] Use the existing user token. 2023-01-24 06:25:12.470 50 WARNING glance_store._drivers.rbd [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Since image size is zero we will be doing resize-before-write which will be slower than normal 2023-01-24 06:25:13.010 51 INFO eventlet.wsgi.server [req-b91e54ba-f3fa-4fad-ab7a-1ef1ad2750fb f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:13] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008496 2023-01-24 06:25:15.657 52 INFO eventlet.wsgi.server [req-c9fa16c8-232d-45ff-b425-55504f332597 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:15] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008760 2023-01-24 06:25:16.404 50 ERROR glance_store._drivers.rbd [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Failed to store image 57e5c3ee-6576-4cf7-a72a-2038c86456bc Store Exception invalid literal for int() with base 16: b'': eventlet.wsgi.ChunkReadError: invalid literal for int() with base 16: b'' 2023-01-24 06:25:16.629 50 ERROR glance.api.v2.image_data [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Failed to upload image data due to HTTP error: webob.exc.HTTPBadRequest: invalid literal for int() with base 16: b'' 2023-01-24 06:25:16.697 50 INFO eventlet.wsgi.server [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Traceback (most recent call last): File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 604, in handle_one_response write(b''.join(towrite)) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 538, in write wfile.flush() File "/usr/lib/python3.8/socket.py", line 687, in write return self._sock.send(b) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", line 396, in send return self._send_loop(self.fd.send, data, flags) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", line 383, in _send_loop return send_method(data, *args) BrokenPipeError:
[Errno 32] Broken pipe 2023-01-24 06:25:16.697 50 INFO eventlet.wsgi.server [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.30,10.10.13.28 - - [24/Jan/2023 06:25:16] "PUT /v2/images/57e5c3ee-6576-4cf7-a72a-2038c86456bc/file HTTP/1.1" 400 0 4.384073 2023-01-24 06:25:18.281 49 INFO eventlet.wsgi.server [req-dcd70adf-5dd4-4a15-abe0-f1e074030346 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:18] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 0.067056 2023-01-24 06:25:20.935 52 INFO eventlet.wsgi.server [req-065021ef-2fa4-456f-bad8-22e4a06fec9a f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:20] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 0.064581 2023-01-24 06:25:23.367 50 INFO eventlet.wsgi.server [-] 10.10.13.27 - - [24/Jan/2023 06:25:23] "GET / HTTP/1.1" 300 1517 0.003469 2023-01-24 06:25:23.583 51 INFO eventlet.wsgi.server [req-a0a51dc6-01c5-4dff-afe9-6c67a279e7fa f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:23] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 0.072727 2023-01-24 06:25:26.211 49 INFO eventlet.wsgi.server [req-052bc425-90c3-4e4e-94e3-15aba795c96b f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:26] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 0.055309 2023-01-24 06:25:28.039 49 INFO glance.api.v2.image_data [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Unable to create trust: no such option collect_timing in group [keystone_authtoken] Use the existing user token. 
2023-01-24 06:25:28.168 49 WARNING glance_store._drivers.rbd [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Since image size is zero we will be doing resize-before-write which will be slower than normal 2023-01-24 06:25:28.887 51 INFO eventlet.wsgi.server [req-4664a441-2fa6-41fa-b985-b6290712e597 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:28] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008411 2023-01-24 06:25:31.541 52 INFO eventlet.wsgi.server [req-31a7d4fc-a266-40a7-9b0d-6ba1e8c0cf47 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:31] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008032 2023-01-24 06:25:34.168 52 INFO eventlet.wsgi.server [req-d8e4513e-9945-4f0c-9b2d-a1e191982f0b f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:34] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.009528 2023-01-24 06:25:36.798 52 INFO eventlet.wsgi.server [req-02c6e354-6137-4f31-9cf3-a2e9c0258938 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:36] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008272 2023-01-24 06:25:39.435 50 INFO eventlet.wsgi.server [req-a30d0de8-9b34-414f-81ef-9a3457cf2758 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:39] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008886 2023-01-24 06:25:42.059 52 INFO eventlet.wsgi.server [req-409a24f7-7d8e-4bb8-8cab-ef730cf35478 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:42] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008929 2023-01-24 06:25:44.709 52 INFO eventlet.wsgi.server [req-ebee0818-8afe-488e-a9f2-52bcdfb65983 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:44] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008267 2023-01-24 06:25:47.346 52 INFO eventlet.wsgi.server [req-71ad7a1a-acee-40f1-a3ed-70fe64428644 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:47] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008255 2023-01-24 06:25:50.005 50 INFO eventlet.wsgi.server [req-a69f1a44-7689-494c-9e84-c434a1ea9838 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:50] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008565 2023-01-24 06:25:52.631 50 INFO eventlet.wsgi.server [req-08c7e2db-c60e-4cfe-ab0e-78269924f60d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:52] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008409 2023-01-24 06:25:53.521 48 INFO eventlet.wsgi.server [-] 10.10.13.27 - - [24/Jan/2023 06:25:53] "GET / HTTP/1.1" 300 1517 0.001898 2023-01-24 06:25:55.271 51 INFO eventlet.wsgi.server [req-63d5645d-c9dc-4ec4-86b6-48305b05ebe8 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:55] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.005122 2023-01-24 06:25:57.911 50 INFO 
eventlet.wsgi.server [req-67b0f1e9-89a1-4e34-8f95-132f32f9c478 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:25:57] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.009235 2023-01-24 06:26:00.554 48 INFO eventlet.wsgi.server [req-cc7033b2-698b-4bd5-96cf-589d864485ff f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:00] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008637 2023-01-24 06:26:03.181 50 INFO eventlet.wsgi.server [req-26a2b823-8a26-49b8-9d79-af6bf36c0e2a f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:03] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.007718 2023-01-24 06:26:05.835 50 INFO eventlet.wsgi.server [req-1be8a06e-6c3a-4966-9031-a1548b6b0f7a f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:05] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008469 2023-01-24 06:26:08.483 50 INFO eventlet.wsgi.server [req-b24990c6-319d-4837-8128-406f49d38530 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:08] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008319 2023-01-24 06:26:11.134 50 INFO eventlet.wsgi.server [req-5393b80e-5069-47ee-a1f0-ef7174c1e09b f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:11] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008237 2023-01-24 06:26:13.765 50 INFO eventlet.wsgi.server [req-fcbc3f21-ddb5-4b3f-869c-b489ace9a453 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:13] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008331 2023-01-24 06:26:16.405 52 INFO eventlet.wsgi.server [req-20d98cec-4e1e-4582-aaa2-68043927ab7f f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:16] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008366 2023-01-24 06:26:19.028 50 INFO eventlet.wsgi.server [req-2f1bc312-00b8-4e92-91ac-a3ba1f951742 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:19] "GET /v2/schemas/image HTTP/1.1" 200 6259 0.008365 2023-01-24 06:26:20.032 49 ERROR glance_store._drivers.rbd [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Failed to store image 57e5c3ee-6576-4cf7-a72a-2038c86456bc Store Exception unexpected end of file while parsing chunked data: OSError: unexpected end of file while parsing chunked data 2023-01-24 06:26:20.443 49 ERROR glance.api.v2.image_data [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Failed to upload image data due to internal error: OSError: unexpected end of file while parsing chunked data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Caught error: unexpected end of file while parsing chunked data: OSError: unexpected end of file while parsing chunked data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi Traceback (most recent call 
last): 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/wsgi.py", line 1353, in __call__ 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi action_result = self.dispatch(self.controller, action, 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/wsgi.py", line 1397, in dispatch 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return method(*args, **kwargs) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/utils.py", line 416, in wrapped 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return func(self, req, *args, **kwargs) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/api/v2/image_data.py", line 300, in upload 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self._restore(image_repo, image) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 227, in __exit__ 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self.force_reraise() 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 200, in force_reraise 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi raise self.value 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/api/v2/image_data.py", line 165, in upload 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi image.set_data(data, size, backend=backend) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/domain/proxy.py", line 208, in set_data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self.base.set_data(data, size, backend=backend, set_active=set_active) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/notifier.py", line 501, in set_data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi _send_notification(notify_error, 'image.upload', msg) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 227, in __exit__ 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self.force_reraise() 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", line 200, in force_reraise 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi raise self.value 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/notifier.py", line 447, in set_data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self.repo.set_data(data, size, backend=backend, 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/api/policy.py", line 273, in set_data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return self.image.set_data(*args, **kwargs) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/quota/__init__.py", line 322, in set_data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self.image.set_data(data, size=size, backend=backend, 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/glance/location.py", line 567, in set_data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi self._upload_to_store(data, verifier, backend, size) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/location.py", line 458, in _upload_to_store 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi multihash, loc_meta) = self.store_api.add_with_multihash( 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/multi_backend.py", line 398, in add_with_multihash 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return store_add_to_backend_with_multihash( 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/multi_backend.py", line 480, in store_add_to_backend_with_multihash 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi (location, size, checksum, multihash, metadata) = store.add( 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/driver.py", line 279, in add_adapter 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi metadata_dict) = store_add_fun(*args, **kwargs) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/capabilities.py", line 176, in op_checker 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return store_op_fun(store, *args, **kwargs) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/_drivers/rbd.py", line 629, in add 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi raise exc 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/_drivers/rbd.py", line 574, in add 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi for chunk in chunks: 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/common/utils.py", line 73, in chunkiter 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi chunk = fp.read(chunk_size) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/utils.py", line 294, in read 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi result = self.data.read(i) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/utils.py", line 121, in readfn 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi result = fd.read(*args) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/format_inspector.py", line 658, in read 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi chunk = self._source.read(size) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 221, in read 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return self._chunked_read(self.rfile, length) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 192, in _chunked_read 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi raise IOError("unexpected end of file while parsing chunked data") 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi OSError: unexpected end of file while parsing chunked data 
2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi 2023-01-24 06:26:20.570 49 INFO eventlet.wsgi.server [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Traceback (most recent call last): File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 604, in handle_one_response write(b''.join(towrite)) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 538, in write wfile.flush() File "/usr/lib/python3.8/socket.py", line 687, in write return self._sock.send(b) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", line 396, in send return self._send_loop(self.fd.send, data, flags) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", line 383, in _send_loop return send_method(data, *args) BrokenPipeError: [Errno 32] Broken pipe 2023-01-24 06:26:20.571 49 INFO eventlet.wsgi.server [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.30,10.10.13.28 - - [24/Jan/2023 06:26:20] "PUT /v2/images/57e5c3ee-6576-4cf7-a72a-2038c86456bc/file HTTP/1.1" 500 0 52.610723 2023-01-24 06:26:21.649 51 INFO eventlet.wsgi.server [req-9eea9898-f4e7-4cb1-b02c-019b56fa00a9 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:21] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 4774 0.068947 2023-01-24 06:26:23.635 50 INFO eventlet.wsgi.server [-] 10.10.13.27 - - [24/Jan/2023 06:26:23] "GET / HTTP/1.1" 300 1517 0.001586 ^C root at controller1:/home/stack#
glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/format_inspector.py", line 658, in read 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi chunk = self._source.read(size) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 221, in read 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return self._chunked_read(self.rfile, length) 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 192, in _chunked_read 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi raise IOError("unexpected end of file while parsing chunked data") 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi OSError: unexpected end of file while parsing chunked data 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi 2023-01-24 06:26:20.570 49 INFO eventlet.wsgi.server [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] Traceback (most recent call last): File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 604, in handle_one_response write(b''.join(towrite)) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line 538, in write wfile.flush() File "/usr/lib/python3.8/socket.py", line 687, in write return self._sock.send(b) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", line 396, in send return self._send_loop(self.fd.send, data, flags) File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", line 383, in _send_loop return send_method(data, *args) BrokenPipeError: [Errno 32] Broken pipe 2023-01-24 06:26:20.571 49 INFO eventlet.wsgi.server [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.30,10.10.13.28 - - [24/Jan/2023 06:26:20] "PUT /v2/images/57e5c3ee-6576-4cf7-a72a-2038c86456bc/file HTTP/1.1" 500 0 52.610723 2023-01-24 06:26:21.649 51 INFO eventlet.wsgi.server [req-9eea9898-f4e7-4cb1-b02c-019b56fa00a9 f75f32a3c1fd4cf68cdb7b76d70ee9a8 bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 - - [24/Jan/2023 06:26:21] "GET /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 4774 0.068947 2023-01-24 06:26:23.635 50 INFO eventlet.wsgi.server [-] 10.10.13.27 - - [24/Jan/2023 06:26:23] "GET / HTTP/1.1" 300 1517 0.001586 ^C root at controller1:/home/stack# Regards Tony Karera -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonykarera at gmail.com Tue Jan 24 07:38:09 2023 From: tonykarera at gmail.com (Karera Tony) Date: Tue, 24 Jan 2023 09:38:09 +0200 Subject: Snapshots disappear during saving In-Reply-To: References: Message-ID: Hello Team, Issue has been fixed. Check the nova logs and I realized nova didnt have permission to image pool in ceph Regards Tony Karera On Tue, Jan 24, 2023 at 8:28 AM Karera Tony wrote: > Hello Sofia, > > Below are the logs > > > > 24/Jan/2023 06:25:10] "GET > /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 > 0.058022 > 2023-01-24 06:25:12.381 50 INFO glance.api.v2.image_data > [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] Unable to create trust: > no such option collect_timing in group [keystone_authtoken] Use the > existing user token. 
> 2023-01-24 06:25:12.470 50 WARNING glance_store._drivers.rbd > [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] Since image size is > zero we will be doing resize-before-write which will be slower than normal > 2023-01-24 06:25:13.010 51 INFO eventlet.wsgi.server > [req-b91e54ba-f3fa-4fad-ab7a-1ef1ad2750fb f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:13] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008496 > 2023-01-24 06:25:15.657 52 INFO eventlet.wsgi.server > [req-c9fa16c8-232d-45ff-b425-55504f332597 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:15] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008760 > 2023-01-24 06:25:16.404 50 ERROR glance_store._drivers.rbd > [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] Failed to store image > 57e5c3ee-6576-4cf7-a72a-2038c86456bc Store Exception invalid literal for > int() with base 16: b'': eventlet.wsgi.ChunkReadError: invalid literal for > int() with base 16: b'' > 2023-01-24 06:25:16.629 50 ERROR glance.api.v2.image_data > [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] Failed to upload image > data due to HTTP error: webob.exc.HTTPBadRequest: invalid literal for int() > with base 16: b'' > 2023-01-24 06:25:16.697 50 INFO eventlet.wsgi.server > [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] Traceback (most recent > call last): > File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", > line 604, in handle_one_response > write(b''.join(towrite)) > File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", > line 538, in write > wfile.flush() > File "/usr/lib/python3.8/socket.py", line 687, in write > return self._sock.send(b) > File > "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", > line 396, in send > return self._send_loop(self.fd.send, data, flags) > File > "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", > line 383, in _send_loop > return send_method(data, *args) > BrokenPipeError: [Errno 32] Broken pipe > > 2023-01-24 06:25:16.697 50 INFO eventlet.wsgi.server > [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.30,10.10.13.28 > - - [24/Jan/2023 06:25:16] "PUT > /v2/images/57e5c3ee-6576-4cf7-a72a-2038c86456bc/file HTTP/1.1" 400 0 > 4.384073 > 2023-01-24 06:25:18.281 49 INFO eventlet.wsgi.server > [req-dcd70adf-5dd4-4a15-abe0-f1e074030346 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:18] "GET > /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 > 0.067056 > 2023-01-24 06:25:20.935 52 INFO eventlet.wsgi.server > [req-065021ef-2fa4-456f-bad8-22e4a06fec9a f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:20] "GET > /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 > 0.064581 > 2023-01-24 06:25:23.367 50 INFO eventlet.wsgi.server [-] 10.10.13.27 - - > 
[24/Jan/2023 06:25:23] "GET / HTTP/1.1" 300 1517 0.003469 > 2023-01-24 06:25:23.583 51 INFO eventlet.wsgi.server > [req-a0a51dc6-01c5-4dff-afe9-6c67a279e7fa f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:23] "GET > /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 > 0.072727 > 2023-01-24 06:25:26.211 49 INFO eventlet.wsgi.server > [req-052bc425-90c3-4e4e-94e3-15aba795c96b f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:26] "GET > /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 5995 > 0.055309 > 2023-01-24 06:25:28.039 49 INFO glance.api.v2.image_data > [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] Unable to create trust: > no such option collect_timing in group [keystone_authtoken] Use the > existing user token. > 2023-01-24 06:25:28.168 49 WARNING glance_store._drivers.rbd > [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] Since image size is > zero we will be doing resize-before-write which will be slower than normal > 2023-01-24 06:25:28.887 51 INFO eventlet.wsgi.server > [req-4664a441-2fa6-41fa-b985-b6290712e597 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:28] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008411 > 2023-01-24 06:25:31.541 52 INFO eventlet.wsgi.server > [req-31a7d4fc-a266-40a7-9b0d-6ba1e8c0cf47 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:31] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008032 > 2023-01-24 06:25:34.168 52 INFO eventlet.wsgi.server > [req-d8e4513e-9945-4f0c-9b2d-a1e191982f0b f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:34] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.009528 > 2023-01-24 06:25:36.798 52 INFO eventlet.wsgi.server > [req-02c6e354-6137-4f31-9cf3-a2e9c0258938 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:36] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008272 > 2023-01-24 06:25:39.435 50 INFO eventlet.wsgi.server > [req-a30d0de8-9b34-414f-81ef-9a3457cf2758 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:39] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008886 > 2023-01-24 06:25:42.059 52 INFO eventlet.wsgi.server > [req-409a24f7-7d8e-4bb8-8cab-ef730cf35478 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:42] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008929 > 2023-01-24 06:25:44.709 52 INFO eventlet.wsgi.server > [req-ebee0818-8afe-488e-a9f2-52bcdfb65983 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:44] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008267 > 2023-01-24 06:25:47.346 52 INFO eventlet.wsgi.server > [req-71ad7a1a-acee-40f1-a3ed-70fe64428644 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 
10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:47] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008255 > 2023-01-24 06:25:50.005 50 INFO eventlet.wsgi.server > [req-a69f1a44-7689-494c-9e84-c434a1ea9838 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:50] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008565 > 2023-01-24 06:25:52.631 50 INFO eventlet.wsgi.server > [req-08c7e2db-c60e-4cfe-ab0e-78269924f60d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:52] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008409 > 2023-01-24 06:25:53.521 48 INFO eventlet.wsgi.server [-] 10.10.13.27 - - > [24/Jan/2023 06:25:53] "GET / HTTP/1.1" 300 1517 0.001898 > 2023-01-24 06:25:55.271 51 INFO eventlet.wsgi.server > [req-63d5645d-c9dc-4ec4-86b6-48305b05ebe8 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:55] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.005122 > 2023-01-24 06:25:57.911 50 INFO eventlet.wsgi.server > [req-67b0f1e9-89a1-4e34-8f95-132f32f9c478 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:25:57] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.009235 > 2023-01-24 06:26:00.554 48 INFO eventlet.wsgi.server > [req-cc7033b2-698b-4bd5-96cf-589d864485ff f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:26:00] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008637 > 2023-01-24 06:26:03.181 50 INFO eventlet.wsgi.server > [req-26a2b823-8a26-49b8-9d79-af6bf36c0e2a f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:26:03] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.007718 > 2023-01-24 06:26:05.835 50 INFO eventlet.wsgi.server > [req-1be8a06e-6c3a-4966-9031-a1548b6b0f7a f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:26:05] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008469 > 2023-01-24 06:26:08.483 50 INFO eventlet.wsgi.server > [req-b24990c6-319d-4837-8128-406f49d38530 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:26:08] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008319 > 2023-01-24 06:26:11.134 50 INFO eventlet.wsgi.server > [req-5393b80e-5069-47ee-a1f0-ef7174c1e09b f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:26:11] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008237 > 2023-01-24 06:26:13.765 50 INFO eventlet.wsgi.server > [req-fcbc3f21-ddb5-4b3f-869c-b489ace9a453 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:26:13] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008331 > 2023-01-24 06:26:16.405 52 INFO eventlet.wsgi.server > [req-20d98cec-4e1e-4582-aaa2-68043927ab7f f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:26:16] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008366 > 2023-01-24 06:26:19.028 50 INFO eventlet.wsgi.server > [req-2f1bc312-00b8-4e92-91ac-a3ba1f951742 
f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:26:19] "GET /v2/schemas/image HTTP/1.1" 200 6259 > 0.008365 > 2023-01-24 06:26:20.032 49 ERROR glance_store._drivers.rbd > [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] Failed to store image > 57e5c3ee-6576-4cf7-a72a-2038c86456bc Store Exception unexpected end of file > while parsing chunked data: OSError: unexpected end of file while parsing > chunked data > 2023-01-24 06:26:20.443 49 ERROR glance.api.v2.image_data > [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] Failed to upload image > data due to internal error: OSError: unexpected end of file while parsing > chunked data > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi > [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] Caught error: > unexpected end of file while parsing chunked data: OSError: unexpected end > of file while parsing chunked data > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi Traceback (most recent > call last): > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/wsgi.py", > line 1353, in __call__ > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi action_result = > self.dispatch(self.controller, action, > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/wsgi.py", > line 1397, in dispatch > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return > method(*args, **kwargs) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/utils.py", > line 416, in wrapped > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return func(self, > req, *args, **kwargs) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/api/v2/image_data.py", > line 300, in upload > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi > self._restore(image_repo, image) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", > line 227, in __exit__ > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi > self.force_reraise() > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", > line 200, in force_reraise > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi raise self.value > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/api/v2/image_data.py", > line 165, in upload > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi > image.set_data(data, size, backend=backend) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/domain/proxy.py", > line 208, in set_data > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi > self.base.set_data(data, size, backend=backend, set_active=set_active) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/notifier.py", line > 501, in set_data > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi > 
_send_notification(notify_error, 'image.upload', msg) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", > line 227, in __exit__ > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi > self.force_reraise() > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_utils/excutils.py", > line 200, in force_reraise > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi raise self.value > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/notifier.py", line > 447, in set_data > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi > self.repo.set_data(data, size, backend=backend, > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/api/policy.py", > line 273, in set_data > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return > self.image.set_data(*args, **kwargs) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/quota/__init__.py", > line 322, in set_data > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi > self.image.set_data(data, size=size, backend=backend, > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/location.py", line > 567, in set_data > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi > self._upload_to_store(data, verifier, backend, size) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/location.py", line > 458, in _upload_to_store > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi multihash, > loc_meta) = self.store_api.add_with_multihash( > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/multi_backend.py", > line 398, in add_with_multihash > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return > store_add_to_backend_with_multihash( > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/multi_backend.py", > line 480, in store_add_to_backend_with_multihash > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi (location, size, > checksum, multihash, metadata) = store.add( > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/driver.py", > line 279, in add_adapter > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi metadata_dict) = > store_add_fun(*args, **kwargs) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/capabilities.py", > line 176, in op_checker > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return > store_op_fun(store, *args, **kwargs) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/_drivers/rbd.py", > line 629, in add > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi raise exc > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/_drivers/rbd.py", > line 574, in add > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi for chunk in > chunks: > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > 
"/var/lib/kolla/venv/lib/python3.8/site-packages/glance_store/common/utils.py", > line 73, in chunkiter > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi chunk = > fp.read(chunk_size) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/utils.py", > line 294, in read > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi result = > self.data.read(i) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/utils.py", > line 121, in readfn > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi result = > fd.read(*args) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/glance/common/format_inspector.py", > line 658, in read > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi chunk = > self._source.read(size) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line > 221, in read > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi return > self._chunked_read(self.rfile, length) > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi File > "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", line > 192, in _chunked_read > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi raise > IOError("unexpected end of file while parsing chunked data") > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi OSError: unexpected > end of file while parsing chunked data > 2023-01-24 06:26:20.563 49 ERROR glance.common.wsgi > 2023-01-24 06:26:20.570 49 INFO eventlet.wsgi.server > [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] Traceback (most recent > call last): > File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", > line 604, in handle_one_response > write(b''.join(towrite)) > File "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/wsgi.py", > line 538, in write > wfile.flush() > File "/usr/lib/python3.8/socket.py", line 687, in write > return self._sock.send(b) > File > "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", > line 396, in send > return self._send_loop(self.fd.send, data, flags) > File > "/var/lib/kolla/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", > line 383, in _send_loop > return send_method(data, *args) > BrokenPipeError: [Errno 32] Broken pipe > > 2023-01-24 06:26:20.571 49 INFO eventlet.wsgi.server > [req-7eebf051-b8b0-4dff-9b7e-8d5085d77b5d f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.30,10.10.13.28 > - - [24/Jan/2023 06:26:20] "PUT > /v2/images/57e5c3ee-6576-4cf7-a72a-2038c86456bc/file HTTP/1.1" 500 0 > 52.610723 > 2023-01-24 06:26:21.649 51 INFO eventlet.wsgi.server > [req-9eea9898-f4e7-4cb1-b02c-019b56fa00a9 f75f32a3c1fd4cf68cdb7b76d70ee9a8 > bf5353fb17604156b492d5a7f00992ff - default default] 10.10.13.27,10.10.13.28 > - - [24/Jan/2023 06:26:21] "GET > /v2/images?limit=1000&sort_key=created_at&sort_dir=desc HTTP/1.1" 200 4774 > 0.068947 > 2023-01-24 06:26:23.635 50 INFO eventlet.wsgi.server [-] 10.10.13.27 - - > [24/Jan/2023 06:26:23] "GET / HTTP/1.1" 300 1517 0.001586 > ^C > root at controller1:/home/stack# > > Regards > > Tony Karera > > > > > On Mon, Jan 23, 2023 at 4:56 PM Karera Tony wrote: > >> Hello Sofia, >> >> It is actually Instance snapshot not Volume snapshot. 
>> I click on create Snapshot on the Instance options. >> >> Regards >> >> Tony Karera >> >> >> >> >> On Mon, Jan 23, 2023 at 3:59 PM Sofia Enriquez >> wrote: >> >>> Hi Karera, hope this email finds you well >>> >>> We need more information in order to reproduce this issue. >>> >>> - Do you mind sharing c-vol logs of the operation to see if there's any >>> errors? >>> - How do you create the snapshot? Do you mind sharing the steps to >>> reproduce this? >>> >>> Thanks in advance, >>> Sofia >>> >>> On Mon, Jan 23, 2023 at 1:20 PM Karera Tony >>> wrote: >>> >>>> Dear Team, >>>> >>>> I am using Openstack Wallaby deployed using kolla-ansible. >>>> >>>> I installed Glance with the ceph backend and all was well. >>>> However when I create snapshots, they disappear when they are saved. >>>> >>>> Any idea on how to resolve this? >>>> >>>> Regards >>>> >>>> Tony Karera >>>> >>>> >>>> >>> >>> -- >>> >>> Sof?a Enriquez >>> >>> she/her >>> >>> Software Engineer >>> >>> Red Hat PnT >>> >>> IRC: @enriquetaso >>> @RedHat Red Hat >>> Red Hat >>> >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdopiera at redhat.com Tue Jan 24 13:54:12 2023 From: rdopiera at redhat.com (Radomir Dopieralski) Date: Tue, 24 Jan 2023 14:54:12 +0100 Subject: [horizon][xstatic] xstatic-font-awesome and xstatic-jquery.tablesorter leaving OpenStack Message-ID: Hi everyone, a few years ago we have a adopted under the OpenStack umbrella a number of xstatic packages used by Horizon and its plugins, because they seemed to be unmaintained at the time, and we needed to keep them secure. A few weeks ago one of those packages, xstatic-font-awesome, got a new release by its original authors ? the good people at the MoinMoin Wiki project, where the XStatic mechanism originated. That surprised us a little, and raised some concerns, since the release wasn't done using the OpenStack process, so we reached out to clear this situation. After discussion, we have decided to remove two xstatic packages: xstatic-font-awesome and xstatic-jquery.tablesorter from the OpenStack repositories and let them be maintained by the MoinMoin Wiki contributors instead, together with many other xstatic packages they use, so that there is no more confusion about who is responsible for them. We will continue to use pinned versions of those packages in Horizon, but we will no longer keep them as part of the OpenStack project. We will start the retirement process on the repositories for those packages shortly. -- Radomir Dopieralski -------------- next part -------------- An HTML attachment was scrubbed... URL: From pdeore at redhat.com Tue Jan 24 14:47:45 2023 From: pdeore at redhat.com (Pranali Deore) Date: Tue, 24 Jan 2023 20:17:45 +0530 Subject: [Glance] No weekly meeting this week Message-ID: Hello, Thursday, 26th Jan is public holiday in India and almost half of the team will not be around, so cancelling glance weekly meeting for this week. Thanks & Regards, Pranali -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Tue Jan 24 16:02:20 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 24 Jan 2023 16:02:20 +0000 Subject: [OSSA-2023-002] Cinder, Glance, Nova: Arbitrary file access through custom VMDK flat descriptor (CVE-2022-47951) Message-ID: <20230124160219.kbqypjgvrjog334c@yuggoth.org> ======================================================================== OSSA-2023-002: Arbitrary file access through custom VMDK flat descriptor ======================================================================== :Date: January 24, 2023 :CVE: CVE-2022-47951 Affects ~~~~~~~ - Cinder, glance, nova: Cinder <19.1.2, >=20.0.0 <20.0.2, ==21.0.0; Glance <23.0.1, >=24.0.0 <24.1.1, ==25.0.0; Nova <24.1.2, >=25.0.0 <25.0.2, ==26.0.0 Description ~~~~~~~~~~~ Guillaume Espanel, Pierre Libeau, Arnaud Morin and Damien Rannou (OVH) reported a vulnerability in VMDK image processing for Cinder, Glance and Nova. By supplying a specially created VMDK flat image which references a specific backing file path, an authenticated user may convince systems to return a copy of that file's contents from the server resulting in unauthorized access to potentially sensitive data. All Cinder deployments are affected; only Glance deployments with image conversion enabled are affected; all Nova deployments are affected. Patches ~~~~~~~ - https://review.opendev.org/871631 (Train(cinder)) - https://review.opendev.org/871630 (Train(glance)) - https://review.opendev.org/871629 (Ussuri(cinder)) - https://review.opendev.org/871626 (Ussuri(glance)) - https://review.opendev.org/871628 (Victoria(cinder)) - https://review.opendev.org/871623 (Victoria(glance)) - https://review.opendev.org/871627 (Wallaby(cinder)) - https://review.opendev.org/871621 (Wallaby(glance)) - https://review.opendev.org/871625 (Xena(cinder)) - https://review.opendev.org/871619 (Xena(glance)) - https://review.opendev.org/871622 (Xena(nova)) - https://review.opendev.org/871620 (Yoga(cinder)) - https://review.opendev.org/871617 (Yoga(glance)) - https://review.opendev.org/871624 (Yoga(nova)) - https://review.opendev.org/871618 (Zed(cinder)) - https://review.opendev.org/871614 (Zed(glance)) - https://review.opendev.org/871616 (Zed(nova)) - https://review.opendev.org/871615 (2023.1/antelope(cinder)) - https://review.opendev.org/871613 (2023.1/antelope(glance)) - https://review.opendev.org/871612 (2023.1/antelope(nova)) Credits ~~~~~~~ - Guillaume Espanel from OVH (CVE-2022-47951) - Pierre Libeau from OVH (CVE-2022-47951) - Arnaud Morin from OVH (CVE-2022-47951) - Damien Rannou from OVH (CVE-2022-47951) References ~~~~~~~~~~ - https://launchpad.net/bugs/1996188 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-47951 Notes ~~~~~ - The stable/wallaby, stable/victoria, stable/ussuri, and stable/train branches are under extended maintenance and will receive no new point releases, but patches for them are provided as a courtesy where possible. -- Jeremy Stanley OpenStack Vulnerability Management Team -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From swogatpradhan22 at gmail.com Tue Jan 24 18:06:52 2023 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 24 Jan 2023 23:36:52 +0530 Subject: podman manifest error | status 404 | openstack wallaby Message-ID: Hi, I am currently trying DCN for my openstack and when running the deployment command for the edge site i am getting the following error: FATAL | Pre-fetch all the containers | dcn01-hci-0 | item= 172.25.201.68:8787/tripleomaster/openstack-etcd:current-tripleo | error={"ansible_loop_var": "prefetch_image", "changed": false, "msg": "Failed to pull image 172.25.201.68:8787/tripleomaster/openstack-etcd:current-tripleo", "prefetch_image": " 172.25.201.68:8787/tripleomaster/openstack-etcd:current-tripleo"} Can someone please guide me on how to fix this issue? With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Tue Jan 24 20:45:07 2023 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Tue, 24 Jan 2023 21:45:07 +0100 Subject: [keystone] Re: openstack client integration to fetch and provide OIDC access tokens (v3oidcaccesstoken)? In-Reply-To: References: <08949303-bfbf-3a15-1a62-78bcfffcb90b@inovex.de> <1006056f-b3d1-a649-93f7-09b13d6a0012@rd.bbc.co.uk> <23e1d227-807c-8ef1-a861-deef17aaa1f0@inovex.de> Message-ID: Hey Jon, all, Jose, Nikolla, Francois: You did discuss about the current state of using OIDC with keystone and about a secure flow to use existing SSO and only provide tokens to the openstack cli in https://lists.openstack.org/pipermail/openstack-discuss/2022-February/027313.html, sorry I did not find this prior to me posting and asking about this. I took the liberty to CC you.? Alvaro you did apparently write up the below referenced spec about improving on the OIDC support in keystone so I CCed you as well. 1) On 16/02/2022 15:45, Jose Castro Leon wrote: > Hi, > We are preparing something based on keystoneauth1 that uses an > authorization code grant in OIDC that will send you an url address to > the client so they can do the SSO there and receive a validation code. > Then you input the validation code in the CLI and receive an OIDC. > > Once it receives the OIDC access token and refresh token, we cache > them on the filesystem for subsequent calls. > > The idea was to contribute it upstream once we clean it up a bit > > Cheers > Jose Jose, could you maybe give an update on your endeavors? Do you have your code public anywhere? Do you still plan to upstream this code? 2) On 23/01/2023 13:59, Jonathan Rosser wrote: > If my memory serves correctly I did approach the Keystone team in IRC > to have one of my developers contribute better support for OIDC in > keystoneauth, but there was a preference for a much more significant > rewrite of parts of keystone. Unfortunately time has passed and I > think that an external plugin is still needed for a secure OIDC cli > experience using a modern auth flow. That is exactly where we ended up when diving deeper into the existing OIDC capabilities :-) Would you then consider contributing your code upstream? 3) There likely would have to be a spec first do do any major change / addition to keystone auth capabilties. But there already are some specs / ideas discussing the OIDC integration: ?* https://opendev.org/openstack/keystone-specs/src/branch/master/specs/keystone/backlog/oidc-improved-support.rst ?*? 
less related, but quite recent: https://opendev.org/openstack/keystone-specs/src/branch/master/specs/keystone/2023.1/support-oauth2-mtls.rst 4) I certainly understand that my naive initial question about fetching a v3oidcaccesstoken and use it comes way short of the actually intended? authentication flows, such as using existing SSO (via PKCE) and then receiving the callback. But also making use of refresh tokens, handling expired tokens, ... My intention is simply to revive the discussion around this topic and to potentially join forces / code to make keystone, keystoneauth1 and the openstack clients integrate nicely and securely with (existing) OIDC infrastructure and flows Regards Christian From aloga at ifca.unican.es Tue Jan 24 23:26:29 2023 From: aloga at ifca.unican.es (=?utf-8?B?w4FsdmFybyBMw7NwZXogR2FyY8OtYQ==?=) Date: Wed, 25 Jan 2023 00:26:29 +0100 Subject: [keystone] Re: openstack client integration to fetch and provide OIDC access tokens (v3oidcaccesstoken)? In-Reply-To: References: <08949303-bfbf-3a15-1a62-78bcfffcb90b@inovex.de> <1006056f-b3d1-a649-93f7-09b13d6a0012@rd.bbc.co.uk> <23e1d227-807c-8ef1-a861-deef17aaa1f0@inovex.de> Message-ID: <20230124232629.v7kkydciwpxjbbpf@cea.ifca.unican.es> Dear all. This has been a long time ago since we implemented this, so I had to refresh my mind. Also, long time without contributing to OpenStack. See my responses inline. > On 16/02/2022 15:45, Jose Castro Leon wrote: > > > We are preparing something based on keystoneauth1 that uses an > > authorization code grant in OIDC that will send you an url address to > > the client so they can do the SSO there and receive a validation code. > > Then you input the validation code in the CLI and receive an OIDC. > > > > Once it receives the OIDC access token and refresh token, we cache them > > on the filesystem for subsequent calls. > > > > The idea was to contribute it upstream once we clean it up a bit > > > > Cheers > > Jose > > Jose, could you maybe give an update on your endeavors? Do you have your > code public anywhere? > Do you still plan to upstream this code? So far the first part is already implemented, using the Client Credentials grant type: https://github.com/openstack/keystoneauth/commit/e5fd66ca35424108ca0c1234119d57dca85c93f7 The part about storing the access and refresh tokens on disk was never addressed though. > There likely would have to be a spec first do do any major change / addition > to keystone auth capabilties. > But there already are some specs / ideas discussing the OIDC integration: > > ?* https://opendev.org/openstack/keystone-specs/src/branch/master/specs/keystone/backlog/oidc-improved-support.rst We implemented a prototype plugin for the Keystone server here: https://github.com/IFCA/keystone-oidc-auth-plugin And the client part here: https://github.com/IFCA/keystone-oidc-auth-plugin However, this was blocked due to this issue, that IIRC was introduced when Keystone removed the custom WSGI stack. https://bugs.launchpad.net/keystone/+bug/1854041 https://review.opendev.org/c/openstack/keystone/+/754694 > I certainly understand that my naive initial question about fetching a > v3oidcaccesstoken and use it comes way short of the actually intended? > authentication flows, > such as using existing SSO (via PKCE) and then receiving the callback. But > also making use of refresh tokens, handling expired tokens, ... We had that interest too, but to be honest then we quit. 
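For reference, the plain v3oidcaccesstoken flow that this thread keeps coming back to looks roughly like the following from a shell. This is only a sketch: the Keystone endpoint, identity provider, protocol and project names are placeholders, and it assumes the access token is obtained out of band by some external helper (for example the OpenID Connect agent discussed just below), since in this mode keystoneauth does not fetch or refresh the token itself.

  export OS_AUTH_TYPE=v3oidcaccesstoken
  export OS_AUTH_URL=https://keystone.example.org/v3        # placeholder endpoint
  export OS_IDENTITY_PROVIDER=myidp                         # as registered in Keystone federation
  export OS_PROTOCOL=openid                                  # federation protocol name
  export OS_PROJECT_NAME=myproject
  export OS_PROJECT_DOMAIN_NAME=Default
  export OS_ACCESS_TOKEN="$(some-sso-helper --print-access-token)"   # hypothetical helper
  openstack token issue

The awkward part is exactly that second-to-last line: something has to produce, cache and refresh the token, which is what the proposals in this thread try to solve.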
However, I think that there is still a better approach, that is to use an OpenID Connect agent (that handles all the nasty handling of tokens) and then using the keystonauth1 v3oidcaccesstoken plugin, modifying it to get the token from the agent: https://github.com/indigo-dc/oidc-agent We have implemented this internally, and it has been a long time since we implemented it, but I think that I can test it (tomorrow CEST) and try to prepare a patch, also writing some documentation, if that helps. If there is some movement arount it will be easier to get things merged. Best, -- ?lvaro L?pez Garc?a Advanced Computing and e-Science Group Instituto de F?sica de Cantabria (IFCA) - CSIC - UC Ed. Juan Jord?, Avda. de los Castros s/n - 39005 Santander (SPAIN) phone: (+34) 942 201 537 | skype: aloga.csic | keybase.io: aloga http://alvarolopez.github.io == I understand. > Because it reverses the logical flow of conversation. >> Why is top posting frowned upon? >>> Please do not top-post in email replies. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Wed Jan 25 03:30:46 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 24 Jan 2023 19:30:46 -0800 Subject: [ptl][tc] OpenStack packages PyPi additional external maintainers audit & cleanup In-Reply-To: References: <185d18a20aa.1206b91ad115363.5205111285046207324@ghanshyammann.com> Message-ID: <185e6fa60b4.111bcf05138908.7245531441451352372@ghanshyammann.com> ---- On Mon, 23 Jan 2023 17:18:27 -0800 Michael Johnson wrote --- > Hi Ghanshyam and TC, > > This process seems a bit uncomfortable to me and I think we should > have a wider discussion on this topic. > Full disclosure: I am the creator and maintainer of some projects on > PyPi that openstackci releases packages to. > > Over the years since I created those projects and added openstackci to > them, there have been multiple occasions where maintenance was > required directly on PyPi (or via twine). This includes updating > product descriptions, links, and as of last year enabling mandatory > 2FA. As you probably know, not all of that has been possible (or just > worked) via the setup.cfg/Readme in the code repository. Historically, > I don't think anyone in infra or the release team has monitored the > PyPi projects and maintained those settings on a regular basis. We > pretty much leave it to the automated release tools and poke at it if > something goes wrong. Thanks, Michael for all those good points. This will be great if we can automate those manual updates. But for now, I will say we can have PTL can manage such updates if needed by pinging 'openstackci' which is handled by opendev team. > > Historically part of the project creation steps required us to already > have the PyPi projects setup[1] prior to attempting to become an > OpenStack project. The "Project Creator Guide" (Which is no longer > part of or linked from the OpenStack documentation[2], so maybe we > aren't accepting new projects to OpenStack?) then had us add > "openstackci" to the project if we were opting to have the release > team release our packages. This is not a documented requirement that I > am aware of and may be a gap caused by the openinfra split. > > It also seems odd that we would remove the project creator from their > own project just for contributing it to OpenStack. 
We don't celebrate > the effort and history of contributors or projects much anymore. We do appreciate their effort whether it is for the initial setup of PyPi or any previous PTL/contributors who helped in OpenStack in anyways. I do not think cleanup and centralizing the PyPi maintainer for all OpenStack packages will delete/forget their work/effort. > > I think there is value in having more than one account have access to > the projects on PyPi. For one, if the openstackci account is > compromised (via an insider or other), there is another account that > can quickly disable the compromised account and yank a compromised > release. Likewise, given the limited availability of folks with access > to the openstackci account, there is value in having the project owner > be able to yank a compromised release without waiting for folks to > return from vacation, etc. Well, it can happen but having additional maintainers in PyPi is riskier and we have encountered this for xstatic-font-awesome[1]. The horizon team was not at all aware of changes and new releases of this package. The challenge in maintaining additional maintainers is that we as the OpenStack project governance, TC, release team, and infra team might lose control of deliverables releases which are supposed to be handled by OpenStack. Those additional maintainers who are active in OpenStack might not be in future and we always forget to change the PyPi maintainers to someone actively maintaining it in OpenStack. One option is to add PTL there but again this needs updates in every cycle and also the same risk when PTL moves out of OpenStack. To avoid those risks, we should centralize the PyPi maintainers list too like other things (ML, code, release management etc) > > All of that said, I see the security implications of having abandoned > accounts or excessively wide access (the horizon case) to projects > published on PyPi. > I'm just not sure removing the project creator's access will really > solve all of the issues around software traceability and OpenStack. > Releases can still be pushed to PyPi maliciously via openstackci or > PyPi compromise. > > I think we should also discuss the following improvements: > 1. We PGP sign these releases with an OpenStack key, but we don't > upload the .asc file with the packages to PyPi. Why don't we do this > to help folks have an easy way to validate that the package came from > the OpenStack releases process? > 2. With these signatures, we can automate tools to validate that > releases were signed by the OpenStack release process and raise an > alert if they are invalid. > 3. Maybe we should have a system that subscribes to the PyPi release > history RSS feed for each managed OpenStack project and validates the > RSS list against the releases repository information. This could then > notify a release-team email list that an unexpected release has been > posted to PyPi. Anyone should be able to subscribe to this list. > 4. If we decide that removing maintainer access to projects is a > barrier to adding them to OpenStack, we should document this clearly. > > I think we have some options to consider beyond the "remove everyone > but openstackci from the project" or "kick the project out of > OpenStack"[3]. "kick the project out of OpenStack", This is not like this. 2nd option means the project team can discuss it with the additional maintainers to join the OpenStack team to maintain the package in a single place. 
But if those additional maintainers are not ready to join OpenStack for any reason we do not want to steal their effort and handover maintenance outside of OpenStack is the right way to proceed in the open-source ecosystem. It needs to be done with mutual understanding of the project team and those additional maintainers. That way we can respect their effort and decision and build a good relationship in the open-source world. I think it is clear that nobody wants to keep any software package maintenance/release in two different places and this is what we are trying to solve for OpenStack. [1] https://github.com/openstack/xstatic-font-awesome/pull/2 -gmann > > Michael > > [1] https://github.com/openstack-archive/infra-manual/blob/caa430c1345f1c1aef17919f1c8d228dc652758b/doc/source/creators.rst#give-openstack-permission-to-publish-releases > [2] https://docs.openstack.org/zed/ > [3] https://etherpad.opendev.org/p/openstack-pypi-maintainers-cleanup#L17 > > On Fri, Jan 20, 2023 at 3:36 PM Ghanshyam Mann gmann at ghanshyammann.com> wrote: > > > > Hi PTLs, > > > > As you might know or have seen for your project package on PyPi, OpenStack deliverables on PyPi have > > additional maintainers, For example, https://pypi.org/project/murano/, https://pypi.org/project/glance/ > > > > We should keep only 'openstackci' as a maintainer in PyPi so that releases of OpenStack deliverables > > can be managed in a single place. Otherwise, we might face the two sets of maintainers' places and > > packages might get released in PyPi by additional maintainers without the OpenStack project team > > knowing about it. One such case is in Horizon repo 'xstatic-font-awesome' where a new maintainer is > > added by an existing additional maintainer and this package was released without the Horizon team > > knowing about the changes and release. > > - https://github.com/openstack/xstatic-font-awesome/pull/2 > > > > To avoid the 'xstatic-font-awesome' case for other packages, TC discussed it in their weekly meetings[1] > > and agreed to audit all the OpenStack packages and then clean up the additional maintainers in PyPi > > (keep only 'openstackci' as maintainers). > > > > To help in this task, TC requests project PTL to perform the audit for their project's repo and add comments > > in the below etherpad. > > > > - https://etherpad.opendev.org/p/openstack-pypi-maintainers-cleanup > > > > Thanks to knikolla to automate the listing of the OpenStack packages with additional maintainers in PyPi which > > you can find the result in output.txt at the bottom of this link. I have added the project list of who needs to check > > their repo in etherpad. > > > > - https://gist.github.com/knikolla/7303a65a5ddaa2be553fc6e54619a7a1 > > > > Please complete the audit for your project before March 15 so that TC can discuss the next step in vPTG. > > > > [1] https://meetings.opendev.org/meetings/tc/2023/tc.2023-01-11-16.00.log.html#l-41 > > > > > > -gmann > > > > From gmann at ghanshyammann.com Wed Jan 25 03:55:49 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 24 Jan 2023 19:55:49 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2023 Jan 25 at 1600 UTC In-Reply-To: <185e1f5fa51.11f7e8c59257921.1166182092977069185@ghanshyammann.com> References: <185e1f5fa51.11f7e8c59257921.1166182092977069185@ghanshyammann.com> Message-ID: <185e7114fd4.10779c57c39097.7161019408909644362@ghanshyammann.com> Hello Everyone, Below is the agenda for the TC meeting scheduled on Jan 25 at 1600 UTC. 
Location: IRC OFTC network in the #openstack-tc channel * Roll call * Follow up on past action items * Gate health check * Cleanup of PyPI maintainer list for OpenStack Projects ** Etherpad for audit and cleanup of additional PyPi maintainers *** https://etherpad.opendev.org/p/openstack-pypi-maintainers-cleanup ** ML discussion *** https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031848.html * Less Active projects status: ** Zaqar *** Zaqar Gate is green, bete version is released **** https://review.opendev.org/c/openstack/zaqar/+/857924 **** https://review.opendev.org/c/openstack/releases/+/871399 *** Zaqar-ui, python-zaqarclient tox4 issue fixes are up but not yet merged **** https://review.opendev.org/q/topic:zaqar-gate-fix ** Mistral situation *** Gate is green, Beta version is released and all good now **** https://review.opendev.org/c/openstack/releases/+/869470 **** https://review.opendev.org/c/openstack/releases/+/869448 *** Governance patch to deprecate Mistral release is abandon **** https://review.opendev.org/c/openstack/governance/+/866562 * Recurring tasks check ** Bare 'recheck' state *** https://etherpad.opendev.org/p/recheck-weekly-summary * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 23 Jan 2023 20:07:52 -0800 Ghanshyam Mann wrote --- > Hello Everyone, > > The technical Committee's next weekly meeting is scheduled for 2023 Jan 25, at 1600 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Tuesday, Jan 24 at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > > From tkajinam at redhat.com Wed Jan 25 08:32:43 2023 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 25 Jan 2023 17:32:43 +0900 Subject: Retiring untested and unmaintained modules (murano, rally and tacker) Message-ID: Hello, In Puppet OpenStack projects we have multiple modules to support multiple OpenStack components. However unfortunately some of these have not been attracting enough interest from developers and have been unmaintained. During the past few cycles we retired a few incomplete modules but I'm wondering if we can retire a few unmaintained modules now, to reduce our maintenance/release effort. I checked the modules we have currently, and I think the following three can be first candidates. - puppet-murano - puppet-rally - puppet-tacker We haven't seen any feedback from users about these modules for a long time. Most of the changes for the past 2~3 years are proposed by me but I am not really using these components. These modules do not have proper test coverage and it's quite difficult for us to catch any breakage and honestly I'm not quite sure these modules can work properly with the latest code. Actually we've often caught up with the latest requirements several years after the change was made in the software side, and I'm afraid these are not well-maintained. eg. - support for tacker-conductor was added 4 years after the service was added - we didn't noticed that the openstack plugin was split out from the core rally package for several years If anybody has concerns with retiring these modules, then please let us know. If we don't hear any objections for a while, then I'll start proposing changes for project retirement. Thank you, Takashi -- ---------- Takashi Kajinami -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tkajinam at redhat.com Wed Jan 25 08:33:52 2023 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 25 Jan 2023 17:33:52 +0900 Subject: Retiring untested and unmaintained modules (murano, rally and tacker) In-Reply-To: References: Message-ID: I didn't notice I didn't include tags in the title until I hit the send button... I'll start a different thread with the appropriate tag included in the title. Sorry for the noise ! On Wed, Jan 25, 2023 at 5:32 PM Takashi Kajinami wrote: > Hello, > > > In Puppet OpenStack projects we have multiple modules to support multiple > OpenStack components. > However unfortunately some of these have not been attracting enough > interest from developers and > have been unmaintained. > > During the past few cycles we retired a few incomplete modules but I'm > wondering if we can retire > a few unmaintained modules now, to reduce our maintenance/release effort. > > I checked the modules we have currently, and I think the following three > can be first candidates. > - puppet-murano > - puppet-rally > - puppet-tacker > > We haven't seen any feedback from users about these modules for a long > time. Most of the changes > for the past 2~3 years are proposed by me but I am not really using these > components. > > These modules do not have proper test coverage and it's quite difficult > for us to catch any breakage and > honestly I'm not quite sure these modules can work properly with the > latest code. Actually we've often > caught up with the latest requirements several years after the change was > made in the software side, > and I'm afraid these are not well-maintained. > > eg. > - support for tacker-conductor was added 4 years after the service was > added > - we didn't noticed that the openstack plugin was split out from the core > rally package for several years > > If anybody has concerns with retiring these modules, then please let us > know. If we don't hear any objections > for a while, then I'll start proposing changes for project retirement. > > Thank you, > Takashi > -- > ---------- > Takashi Kajinami > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Wed Jan 25 08:34:55 2023 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 25 Jan 2023 17:34:55 +0900 Subject: [puppet] Retiring untested and unmaintained modules (murano, rally and tacker) Message-ID: Hello, In Puppet OpenStack projects we have multiple modules to support multiple OpenStack components. However unfortunately some of these have not been attracting enough interest from developers and have been unmaintained. During the past few cycles we retired a few incomplete modules but I'm wondering if we can retire a few unmaintained modules now, to reduce our maintenance/release effort. I checked the modules we have currently, and I think the following three can be first candidates. - puppet-murano - puppet-rally - puppet-tacker We haven't seen any feedback from users about these modules for a long time. Most of the changes for the past 2~3 years are proposed by me but I am not really using these components. These modules do not have proper test coverage and it's quite difficult for us to catch any breakage and honestly I'm not quite sure these modules can work properly with the latest code. Actually we've often caught up with the latest requirements several years after the change was made in the software side, and I'm afraid these are not well-maintained. eg. 
- support for tacker-conductor was added 4 years after the service was added - we didn't noticed that the openstack plugin was split out from the core rally package for several years If anybody has concerns with retiring these modules, then please let us know. If we don't hear any objections for a while, then I'll start proposing changes for project retirement. Thank you, Takashi -- ---------- Takashi Kajinami -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Wed Jan 25 09:32:56 2023 From: tobias.urdin at binero.com (Tobias Urdin) Date: Wed, 25 Jan 2023 09:32:56 +0000 Subject: [puppet] Retiring untested and unmaintained modules (murano, rally and tacker) In-Reply-To: References: Message-ID: +1 to retiring unmaintained modules mentioned to relief maintenance burden. Thanks Takashi! Best regards Tobias On 25 Jan 2023, at 09:34, Takashi Kajinami wrote: Hello, In Puppet OpenStack projects we have multiple modules to support multiple OpenStack components. However unfortunately some of these have not been attracting enough interest from developers and have been unmaintained. During the past few cycles we retired a few incomplete modules but I'm wondering if we can retire a few unmaintained modules now, to reduce our maintenance/release effort. I checked the modules we have currently, and I think the following three can be first candidates. - puppet-murano - puppet-rally - puppet-tacker We haven't seen any feedback from users about these modules for a long time. Most of the changes for the past 2~3 years are proposed by me but I am not really using these components. These modules do not have proper test coverage and it's quite difficult for us to catch any breakage and honestly I'm not quite sure these modules can work properly with the latest code. Actually we've often caught up with the latest requirements several years after the change was made in the software side, and I'm afraid these are not well-maintained. eg. - support for tacker-conductor was added 4 years after the service was added - we didn't noticed that the openstack plugin was split out from the core rally package for several years If anybody has concerns with retiring these modules, then please let us know. If we don't hear any objections for a while, then I'll start proposing changes for project retirement. Thank you, Takashi -- ---------- Takashi Kajinami -------------- next part -------------- An HTML attachment was scrubbed... URL: From wassilij.kaiser at dhbw-mannheim.de Wed Jan 25 11:01:54 2023 From: wassilij.kaiser at dhbw-mannheim.de (Kaiser Wassilij) Date: Wed, 25 Jan 2023 12:01:54 +0100 (CET) Subject: Live Snapshot Message-ID: <1840765424.35830.1674644514543@ox.dhbw-mannheim.de> Hi, I want to take a Live-snapshot. The instances are not switched off. 
Ubuntu 20.04 # Ansible managed DISTRIB_ID="OSA" DISTRIB_RELEASE="25.2.0" DISTRIB_CODENAME="Yoga" DISTRIB_DESCRIPTION="OpenStack-Ansible" nova-25.0.2.dev8.dist-info Compiled against library: libvirt 8.0.0 Using library: libvirt 8.0.0 Using API: QEMU 8.0.0 Running hypervisor: QEMU 4.2.1 ii apparmor 2.13.3-7ubuntu5.1 amd64 user-space parser utility for AppArmor I've also Adjusted virt-aa-helper: #include profile virt-aa-helper /usr/lib/libvirt/virt-aa-helper flags=(complain) { #include #include # needed for searching directories capability dac_override, capability dac_read_search, # needed for when disk is on a network filesystem network inet, network inet6, deny @{PROC}/[0-9]*/mounts r, @{PROC}/[0-9]*/net/psched r, owner @{PROC}/[0-9]*/status r, @{PROC}/filesystems r, # Used when internally running another command (namely apparmor_parser) @{PROC}/@{pid}/fd/ r, # allow reading libnl's classid file /etc/libnl{,-3}/classid r, # for gl enabled graphics /dev/dri/{,*} r, # for hostdev /sys/devices/ r, /sys/devices/** r, /sys/bus/usb/devices/ r, deny /dev/sd* r, deny /dev/vd* r, deny /dev/dm-* r, deny /dev/drbd[0-9]* r, deny /dev/dasd* r, deny /dev/nvme* r, deny /dev/zd[0-9]* r, deny /dev/mapper/ r, deny /dev/mapper/* r, /usr/lib/libvirt/virt-aa-helper mr, /{usr/,}sbin/apparmor_parser Ux, /etc/apparmor.d/libvirt/* r, /etc/apparmor.d/libvirt/libvirt-[0-9a-f]*-[0-9a-f]*-[0-9a-f]*-[0-9a-f]*-[0-9a-f]* rw, # for backingstore -- allow access to non-hidden files in @{HOME} as well # as storage pools audit deny @{HOME}/.* mrwkl, audit deny @{HOME}/.*/ rw, audit deny @{HOME}/.*/** mrwkl, audit deny @{HOME}/bin/ rw, audit deny @{HOME}/bin/** mrwkl, @{HOME}/ r, @{HOME}/** r, /var/lib/libvirt/images/ rw, /var/lib/libvirt/images/** rw, # nova base images (LP: #907269 https://bugs.launchpad.net/bugs/907269 ) /var/lib/nova/images/** rw, /var/lib/nova/instances/_base/** rw, # nova snapshots (LP: #1244694 https://bugs.launchpad.net/bugs/1244694 ) /var/lib/nova/instances/snapshots/** rw, } Filesystem: OCFS2 [keystone_authtoken] insecure = False auth_type = password auth_url = www_authenticate_uri = project_domain_id = default user_domain_id = default project_name = service username = nova password = region_name = RegionOne service_token_roles_required = False service_token_roles = service service_type = compute memcached_servers = token_cache_time = 300 [libvirt] inject_partition = -2 inject_password = False inject_key = False virt_type = kvm live_migration_with_native_tls = true live_migration_scheme = tls live_migration_inbound_addr = xxx.xxx.xxx.xxx hw_disk_discard = ignore disk_cachemodes = iscsi_use_multipath = True Jan 25 09:46:07 bc2bl13 libvirtd[154472]: internal error: Child process (LIBVIRT_LOG_OUTPUTS=3:stderr /usr/lib/libvirt/virt-aa-helper -r -u libvirt-c6aa0368-8ae5-4fe4-8ae5-93a92329aa74) unexpected exit status 1: 2023-01-25 09:46:07.871+0000: 376129: info : libvirt version: 8.0.0, package: 1ubuntu7.1~cloud0 (Openstack Ubuntu Testing Bot Wed, 25 May 2022 14:51:12 +0000) 2023-01-25 09:46:07.871+0000: 376129: info : hostname: bc2bl13 2023-01-25 09:46:07.871+0000: 376129: error : virDomainDiskDefMirrorParse:8800 : unsupported configuration: unknown mirror job type '' virt-aa-helper: error: could not parse XML virt-aa-helper: error: could not get VM definition Jan 25 09:46:07 bc2bl13 libvirtd[154472]: internal error: cannot update AppArmor profile 'libvirt-c6aa0368-8ae5-4fe4-8ae5-93a92329aa74' Jan 25 09:46:07 bc2bl13 libvirtd[154472]: Unable to restore security label on 
/var/lib/nova/instances/snapshots/tmpej9y72fr/c8d4bb94296746d6bff6b747386b4a90.delta -------------- next part -------------- An HTML attachment was scrubbed... URL: From wassilij.kaiser at dhbw-mannheim.de Wed Jan 25 11:11:17 2023 From: wassilij.kaiser at dhbw-mannheim.de (Kaiser Wassilij) Date: Wed, 25 Jan 2023 12:11:17 +0100 (CET) Subject: Live Snapshot ERROR Message-ID: <1303159769.35985.1674645077113@ox.dhbw-mannheim.de> Hi, I want to take a Live-snapshot. The instances are not switched off. Ubuntu 20.04 # Ansible managed DISTRIB_ID="OSA" DISTRIB_RELEASE="25.2.0" DISTRIB_CODENAME="Yoga" DISTRIB_DESCRIPTION="OpenStack-Ansible" nova-25.0.2.dev8.dist-info Compiled against library: libvirt 8.0.0 Using library: libvirt 8.0.0 Using API: QEMU 8.0.0 Running hypervisor: QEMU 4.2.1 ii apparmor 2.13.3-7ubuntu5.1 amd64 user-space parser utility for AppArmor I've also Adjusted virt-aa-helper: #include profile virt-aa-helper /usr/lib/libvirt/virt-aa-helper flags=(complain) { #include #include # needed for searching directories capability dac_override, capability dac_read_search, # needed for when disk is on a network filesystem network inet, network inet6, deny @{PROC}/[0-9]*/mounts r, @{PROC}/[0-9]*/net/psched r, owner @{PROC}/[0-9]*/status r, @{PROC}/filesystems r, # Used when internally running another command (namely apparmor_parser) @{PROC}/@{pid}/fd/ r, # allow reading libnl's classid file /etc/libnl{,-3}/classid r, # for gl enabled graphics /dev/dri/{,*} r, # for hostdev /sys/devices/ r, /sys/devices/** r, /sys/bus/usb/devices/ r, deny /dev/sd* r, deny /dev/vd* r, deny /dev/dm-* r, deny /dev/drbd[0-9]* r, deny /dev/dasd* r, deny /dev/nvme* r, deny /dev/zd[0-9]* r, deny /dev/mapper/ r, deny /dev/mapper/* r, /usr/lib/libvirt/virt-aa-helper mr, /{usr/,}sbin/apparmor_parser Ux, /etc/apparmor.d/libvirt/* r, /etc/apparmor.d/libvirt/libvirt-[0-9a-f]*-[0-9a-f]*-[0-9a-f]*-[0-9a-f]*-[0-9a-f]* rw, # for backingstore -- allow access to non-hidden files in @{HOME} as well # as storage pools audit deny @{HOME}/.* mrwkl, audit deny @{HOME}/.*/ rw, audit deny @{HOME}/.*/** mrwkl, audit deny @{HOME}/bin/ rw, audit deny @{HOME}/bin/** mrwkl, @{HOME}/ r, @{HOME}/** r, /var/lib/libvirt/images/ rw, /var/lib/libvirt/images/** rw, # nova base images (LP: #907269 https://bugs.launchpad.net/bugs/907269 ) /var/lib/nova/images/** rw, /var/lib/nova/instances/_base/** rw, # nova snapshots (LP: #1244694 https://bugs.launchpad.net/bugs/1244694 ) /var/lib/nova/instances/snapshots/** rw, } Filesystem: OCFS2 [keystone_authtoken] insecure = False auth_type = password auth_url = www_authenticate_uri = project_domain_id = default user_domain_id = default project_name = service username = nova password = region_name = RegionOne service_token_roles_required = False service_token_roles = service service_type = compute memcached_servers = token_cache_time = 300 [libvirt] inject_partition = -2 inject_password = False inject_key = False virt_type = kvm live_migration_with_native_tls = true live_migration_scheme = tls live_migration_inbound_addr = xxx.xxx.xxx.xxx hw_disk_discard = ignore disk_cachemodes = iscsi_use_multipath = True Jan 25 09:46:07 bc2bl13 libvirtd[154472]: internal error: Child process (LIBVIRT_LOG_OUTPUTS=3:stderr /usr/lib/libvirt/virt-aa-helper -r -u libvirt-c6aa0368-8ae5-4fe4-8ae5-93a92329aa74) unexpected exit status 1: 2023-01-25 09:46:07.871+0000: 376129: info : libvirt version: 8.0.0, package: 1ubuntu7.1~cloud0 (Openstack Ubuntu Testing Bot Wed, 25 May 2022 14:51:12 +0000) 2023-01-25 09:46:07.871+0000: 376129: 
info : hostname: bc2bl13 2023-01-25 09:46:07.871+0000: 376129: error : virDomainDiskDefMirrorParse:8800 : unsupported configuration: unknown mirror job type '' virt-aa-helper: error: could not parse XML virt-aa-helper: error: could not get VM definition Jan 25 09:46:07 bc2bl13 libvirtd[154472]: internal error: cannot update AppArmor profile 'libvirt-c6aa0368-8ae5-4fe4-8ae5-93a92329aa74' Jan 25 09:46:07 bc2bl13 libvirtd[154472]: Unable to restore security label on /var/lib/nova/instances/snapshots/tmpej9y72fr/c8d4bb94296746d6bff6b747386b4a90.delta -------------- next part -------------- An HTML attachment was scrubbed... URL: From amonster369 at gmail.com Wed Jan 25 12:31:33 2023 From: amonster369 at gmail.com (A Monster) Date: Wed, 25 Jan 2023 13:31:33 +0100 Subject: [kolla-ansible] TASK [Link kolla_logs volume to /var/log/kolla] error after changing docker directory Message-ID: THis occurred after I changed the default docker directory from /var/lib/docker to /custom_path because the /var partition's size is not sufficient TASK [common : Link kolla_logs volume to /var/log/kolla] ********************************************************************************************************************************************************** fatal: [Storage]: FAILED! => {"changed": false, "msg": "src file does not exist, use \"force=yes\" if you really want to create the link: /var/lib/docker/volumes/kolla_logs/_data", "path": "/var/log/kolla", "src": "/var/lib/docker/volumes/kolla_logs/_data"} knowing that the changes I made are for only this specific node, the default docker directory is used for the others, so is there a custom configuration I could do in globals.yml file, or add in a custom config file or is it that it's not possible to use custom working directories for docker? -------------- next part -------------- An HTML attachment was scrubbed... URL: From uday.dikshit at myrealdata.in Wed Jan 25 09:04:59 2023 From: uday.dikshit at myrealdata.in (Uday Dikshit) Date: Wed, 25 Jan 2023 09:04:59 +0000 Subject: How to create a dynamic pollster subsystem to create a pollster for senlin cluster Message-ID: Hello Team We are a public cloud provider based on Openstack. We are working to create Autoscaling with aodh and senlin in Kolla-ansible Openstack Wallaby release. We are facing an issue as ceilometer does not support metrics for senlin cluster as a resource. Our aim is to use https://docs.openstack.org/ceilometer/wallaby/admin/telemetry-dynamic-pollster.html to generate a pollster to collect data for senlin. We were looking if anybody in the community has ever used this feature. OpenStack Docs: Introduction to dynamic pollster subsystem Current limitations of the dynamic pollster system?. Currently, the following types of APIs are not supported by the dynamic pollster system: Tenant APIs: Tenant APIs are the ones that need to be polled in a tenant fashion. docs.openstack.org Thanks & Regards, [https://acefone.com/email-signature/logo-new.png] [https://acefone.com/email-signature/facebook.png] [https://acefone.com/email-signature/linkedin.png] [https://acefone.com/email-signature/twitter.png] [https://acefone.com/email-signature/youtube.png] [https://acefone.com/email-signature/glassdoor.png] Uday Dikshit Cloud DevOps Engineer, Product Development uday.dikshit at myrealdata.in www.myrealdata.in 809-A Udyog Vihar, Phase 5, Gurugram - 122015, Haryana -------------- next part -------------- An HTML attachment was scrubbed... 
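For anyone exploring the same idea, a dynamic pollster definition for Senlin clusters could look roughly like the untested sketch below. The metric name and value mapping are arbitrary choices, and the endpoint type, URL path and response key are assumptions based on the Senlin API ("clustering" service type, GET /v1/clusters returning a "clusters" list) that should be double-checked against the dynamic pollster documentation linked above. The file would live under /etc/ceilometer/pollsters.d/ and the pollster name also has to be enabled in polling.yaml.

  ---
  - name: "dynamic.senlin.cluster.status"
    sample_type: "gauge"
    unit: "cluster"
    endpoint_type: "clustering"
    url_path: "v1/clusters"
    response_entries_key: "clusters"
    resource_id_attribute: "id"
    value_attribute: "status"
    value_mapping:
      ACTIVE: "1"
      ERROR: "0"
    default_value: 0
    metadata_fields:
      - "name"
      - "profile_id"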
URL: From senrique at redhat.com Wed Jan 25 13:47:29 2023 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 25 Jan 2023 13:47:29 +0000 Subject: [cinder] Bug Report from 01-25-2023 Message-ID: This is a bug report from 01-18-2022 to 01-25-2023. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Medium - https://bugs.launchpad.net/cinder/+bug/2003804 "Group actions enable, disable and failover replication can leave volume's replication status in transient states enabling, disabling and failing-over respectively." Fix proposed to master. Wishlist - https://bugs.launchpad.net/cinder/+bug/2003300 "[SVf] : Enable support for replication volume with mirror pool option." Fix proposed to master. Opinion - https://bugs.launchpad.net/cinder/+bug/2003245 "[HPE] cinder 3par FC driver not connecting to 3PAR storage." -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Wed Jan 25 15:57:24 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Wed, 25 Jan 2023 07:57:24 -0800 Subject: [oslo][ironic] oslo.service (and IPA) TLS v1.3 Message-ID: Hey all, Ironic Python Agent uses oslo.service's wsgi module as a wsgi server, with the built in TLS support from sslutils.py. This sslutils.py support only works up to TLS v1.2. It needs some enhancement. It was indicated to me in #openstack-oslo that there's nobody working on this module currently. I know that Ironic can't be the only consumer of this across OpenStack, so this is a call for interested parties and help. We have to update this to support modern TLS. It's not an option. I'd rather not do it alone -- who wants to help? I was tempted to put something up about this at the PTG; but I'm not sure it's significant enough to be worth that discussion so I'm starting here :). Thanks, Jay Faulkner Ironic PTL -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Jan 25 16:06:29 2023 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 25 Jan 2023 17:06:29 +0100 Subject: [oslo][ironic] oslo.service (and IPA) TLS v1.3 In-Reply-To: References: Message-ID: Hi all! We did some further investigation on IRC, results inline. On Wed, Jan 25, 2023 at 5:03 PM Jay Faulkner wrote: > Hey all, > > Ironic Python Agent uses oslo.service's wsgi module as a wsgi server, with > the built in TLS support from sslutils.py. This sslutils.py support only > works up to TLS v1.2. It needs some enhancement. > A correction: sslutils only supports *limiting* TLS version to 1.2 or older. You cannot use its configuration to limit the TLS version to 1.3. I just tried built-in TLS in Ironic locally and got 1.3: $ openssl s_client -connect 127.0.0.1:6385 2>&1 | grep TLS New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384 > > It was indicated to me in #openstack-oslo that there's nobody working on > this module currently. I know that Ironic can't be the only consumer of > this across OpenStack, so this is a call for interested parties and help. > I do agree that we need to solve the question of maintaining oslo.service. We use it very extensively in all parts of Ironic. Dmitry > > We have to update this to support modern TLS. It's not an option. I'd > rather not do it alone -- who wants to help? 
> > I was tempted to put something up about this at the PTG; but I'm not sure > it's significant enough to be worth that discussion so I'm starting here :). > > > Thanks, > Jay Faulkner > Ironic PTL > -- Red Hat GmbH , Registered seat: Werner von Siemens Ring 12, D-85630 Grasbrunn, Germany Commercial register: Amtsgericht Muenchen/Munich, HRB 153243,Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross -------------- next part -------------- An HTML attachment was scrubbed... URL: From kristin at openinfra.dev Wed Jan 25 22:35:08 2023 From: kristin at openinfra.dev (Kristin Barrientos) Date: Wed, 25 Jan 2023 16:35:08 -0600 Subject: OpenInfra Live - Jan. 26 at 9 am CT / 15:00 UTC Message-ID: <7BAA3018-AE3B-441C-9055-A5F2066BB3F3@openinfra.dev> Hi everyone, This week?s OpenInfra Live episode is brought to you by the OpenStack Large Scale SIG. Episode: In the "Large Scale Ops Deep Dive" series, a panel of OpenStack operators invites special guests to talk about their deployment and discuss their operations. For this episode, our guests will be Benjamin Fuhrmann and Stanislav Dmitriev from Ubisoft, the famous video game publisher. Date and time: Jan. 26 at 9 a.m. CT (15:00 UTC) You can watch us live on: YouTube: https://youtu.be/H1DunJM1zoc LinkedIn: https://www.linkedin.com/video/event/urn:li:ugcPost:7017187840652427264/ Facebook: https://www.facebook.com/events/835737927534901 WeChat: recording will be posted on OpenStack WeChat after the live stream Speakers: Benjamin Fuhrmann, Stanislav Dmitriev, Felix Huettner, Arnaud Morin and Thierry Carrez Have an idea for a future episode? Share it now at ideas.openinfra.live. Thanks, Kristin Barrientos Marketing Coordinator OpenInfra Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Thu Jan 26 00:46:03 2023 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 25 Jan 2023 16:46:03 -0800 Subject: [ptl][tc] OpenStack packages PyPi additional external maintainers audit & cleanup In-Reply-To: References: <185d18a20aa.1206b91ad115363.5205111285046207324@ghanshyammann.com> Message-ID: On Mon, Jan 23, 2023, at 5:18 PM, Michael Johnson wrote: > Hi Ghanshyam and TC, > > This process seems a bit uncomfortable to me and I think we should > have a wider discussion on this topic. > Full disclosure: I am the creator and maintainer of some projects on > PyPi that openstackci releases packages to. > > Over the years since I created those projects and added openstackci to > them, there have been multiple occasions where maintenance was > required directly on PyPi (or via twine). This includes updating > product descriptions, links, and as of last year enabling mandatory > 2FA. As you probably know, not all of that has been possible (or just > worked) via the setup.cfg/Readme in the code repository. Historically, > I don't think anyone in infra or the release team has monitored the > PyPi projects and maintained those settings on a regular basis. We > pretty much leave it to the automated release tools and poke at it if > something goes wrong. > > Historically part of the project creation steps required us to already > have the PyPi projects setup[1] prior to attempting to become an > OpenStack project. The "Project Creator Guide" (Which is no longer > part of or linked from the OpenStack documentation[2], so maybe we > aren't accepting new projects to OpenStack?) 
then had us add > "openstackci" to the project if we were opting to have the release > team release our packages. This is not a documented requirement that I > am aware of and may be a gap caused by the openinfra split. > > It also seems odd that we would remove the project creator from their > own project just for contributing it to OpenStack. We don't celebrate > the effort and history of contributors or projects much anymore. > > I think there is value in having more than one account have access to > the projects on PyPi. For one, if the openstackci account is > compromised (via an insider or other), there is another account that > can quickly disable the compromised account and yank a compromised > release. Likewise, given the limited availability of folks with access > to the openstackci account, there is value in having the project owner > be able to yank a compromised release without waiting for folks to > return from vacation, etc. It is probably worth describing the two possible roles an account can have on a Pypi package: Owner and Maintainer [4]. Owners have full control, they can add and remove other owners/maintainers, publish and delete releases, and delete the project itself. Maintainers can only upload packages. What this means is that depending on whether or not openstackci is an owner it may not be able to remove files or releases. In those cases we would depend on the owner(s) to do so. For example openstackci cannot do these actions against octavia packages as openstackci is only a maintainer now. Having backup accounts seems reasonable, but you need to configure them properly to be able to do what you describe. This also means that for some packages openstackci cannot remove the other owners as proposed as it may not have sufficient permissions to do so. Side note the pypi web UI seems to call both owners and maintainers "maintainers" when you view the main package page. > > All of that said, I see the security implications of having abandoned > accounts or excessively wide access (the horizon case) to projects > published on PyPi. > I'm just not sure removing the project creator's access will really > solve all of the issues around software traceability and OpenStack. > Releases can still be pushed to PyPi maliciously via openstackci or > PyPi compromise. > Maybe there is a middle ground where active maintainers can/should keep their pypi package permissions, but we clean up those who have moved on and aren't paying attention? > I think we should also discuss the following improvements: > 1. We PGP sign these releases with an OpenStack key, but we don't > upload the .asc file with the packages to PyPi. Why don't we do this > to help folks have an easy way to validate that the package came from > the OpenStack releases process? > 2. With these signatures, we can automate tools to validate that > releases were signed by the OpenStack release process and raise an > alert if they are invalid. My main concern with doing this is that it requires users to opt into checking it because pip itself is never going to check the gpg signatures. It is better than nothing, but the vast majority of people running a pip install and pulling in random libraries from openstack as dependencies will never validate the signatures. Another incomplete, but complementary tool, may be to use lockfiles that enforce more than our current constraints system does. I believe you can require specific package hashes on top of the versions. 
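As a rough illustration of what that gives us with plain pip (the pin and the digest below are placeholders, not real values), a hash-pinned requirements file is enforced like this:

# requirements-locked.txt
oslo.config==9.1.0 --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000

$ pip install --require-hashes -r requirements-locked.txt

With --require-hashes, pip refuses to install anything whose downloaded artifact does not match the pinned digest, which is the property a lockfile would add on top of our constraints.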
I'm not sure what is involved in converting from constraints to a lockfile though. > 3. Maybe we should have a system that subscribes to the PyPi release > history RSS feed for each managed OpenStack project and validates the > RSS list against the releases repository information. This could then > notify a release-team email list that an unexpected release has been > posted to PyPi. Anyone should be able to subscribe to this list. > 4. If we decide that removing maintainer access to projects is a > barrier to adding them to OpenStack, we should document this clearly. > > I think we have some options to consider beyond the "remove everyone > but openstackci from the project" or "kick the project out of > OpenStack"[3]. > > Michael > > [1] > https://github.com/openstack-archive/infra-manual/blob/caa430c1345f1c1aef17919f1c8d228dc652758b/doc/source/creators.rst#give-openstack-permission-to-publish-releases > [2] https://docs.openstack.org/zed/ > [3] > https://etherpad.opendev.org/p/openstack-pypi-maintainers-cleanup#L17 [4] https://pypi.org/help/#collaborator-roles From fungi at yuggoth.org Thu Jan 26 03:03:00 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 26 Jan 2023 03:03:00 +0000 Subject: [ptl][tc] OpenStack packages PyPi additional external maintainers audit & cleanup In-Reply-To: References: <185d18a20aa.1206b91ad115363.5205111285046207324@ghanshyammann.com> Message-ID: <20230126030300.kmmc7aq6u5q3at6i@yuggoth.org> On 2023-01-25 16:46:03 -0800 (-0800), Clark Boylan wrote: > On Mon, Jan 23, 2023, at 5:18 PM, Michael Johnson wrote: [...] > > I think we should also discuss the following improvements: > > > > 1. We PGP sign these releases with an OpenStack key, but we don't > > upload the .asc file with the packages to PyPi. Why don't we do this > > to help folks have an easy way to validate that the package came from > > the OpenStack releases process? > > > > 2. With these signatures, we can automate tools to validate that > > releases were signed by the OpenStack release process and raise an > > alert if they are invalid. > > My main concern with doing this is that it requires users to opt > into checking it because pip itself is never going to check the > gpg signatures. It is better than nothing, but the vast majority > of people running a pip install and pulling in random libraries > from openstack as dependencies will never validate the signatures. [...] I read this suggestion as having automation or some periodic task performed by the release managers or similar group, whereby our community checks new releases against available signatures rather than at install time. Worth noting, the release team already periodically runs a script which audits all project tags to make sure we have all intended packages and signatures in the expected locations. It would theoretically be possible to just double check that there aren't any extra packages/releases on PyPI that don't correspond to release tags in our repositories or are otherwise anomalous (extra platform wheels, post versions, et cetera) or which differ from the ones on our tarballs site in some way. That should be sufficient to catch most possibilities without needing to actually retrieve every package so that the signatures for them can be validated directly. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jay at gr-oss.io Thu Jan 26 17:50:57 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Thu, 26 Jan 2023 09:50:57 -0800 Subject: [ptl][tc] OpenStack packages PyPi additional external maintainers audit & cleanup In-Reply-To: <20230126030300.kmmc7aq6u5q3at6i@yuggoth.org> References: <185d18a20aa.1206b91ad115363.5205111285046207324@ghanshyammann.com> <20230126030300.kmmc7aq6u5q3at6i@yuggoth.org> Message-ID: I'm going to be honest, I'm a little frustrated that this isn't being treated as a severe security issue; because it is. Regardless of what people anecdotally say, there are a large number of operators who are installing and running openstack from pypi. Even larger still are developers -- like you and I -- installing openstack packages as our user anytime we run tox tests. I'm not personally worried too much about the cases where an existing core or PTL is listed. I'm happy to have a policy argument in that context. That's not what's happening in the vast majority of these cases. For Ironic, of the 18 pypi artifacts we have; 14 have maintainers who are not currently core reviewers or openstack contributors on them -- and a handful of them have maintainers who I personally have never even heard of (we even have a `login.launchpad.net_154` as a maintainer on IPA). At any point in time, one of these people could[1] upload a malicious version of the package up to pypi and compromise any one of us who are unlucky enough to run `tox -r` locally before it's caught and rolled back. I don't think packages that are managed by the OpenStack governance process should ever have a backdoor open to humans of any kind; but that's not the case we're arguing here. Right now, we have a large group of untrusted people with the ability to compromise our developers and users if they are not securing their pypi accounts well. This is especially true in an open source climate that is becoming more and more focused on supply chain risks. That is where my focus lies, and voting to remove those erroneous maintainers hastily is exactly the sort of decision I'd expect to be made by the Technical Committee -- even before I was elected to it (and if you disagree; I'm happy to only serve one term). We have to protect our users and our contributors. Our focus must remain on closing this security hole. It does raise policy questions, but right now I'd rather us put out the fire rather than arguing about fire prevention tactics. Thanks, Jay Faulkner TC Member Ironic PTL Footnote: 1 - Arguably this has already happened once, in the case of xstatic-font-awesome -- we were just lucky that it was a good-faith release and not an actual compromised. On Wed, Jan 25, 2023 at 7:12 PM Jeremy Stanley wrote: > On 2023-01-25 16:46:03 -0800 (-0800), Clark Boylan wrote: > > On Mon, Jan 23, 2023, at 5:18 PM, Michael Johnson wrote: > [...] > > > I think we should also discuss the following improvements: > > > > > > 1. We PGP sign these releases with an OpenStack key, but we don't > > > upload the .asc file with the packages to PyPi. Why don't we do this > > > to help folks have an easy way to validate that the package came from > > > the OpenStack releases process? > > > > > > 2. With these signatures, we can automate tools to validate that > > > releases were signed by the OpenStack release process and raise an > > > alert if they are invalid. 
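For reference, checking a published artifact against the signature the release process already produces looks roughly like this; the project, version and URLs below are illustrative, and the matching OpenStack cycle signing key is assumed to be imported already:

$ curl -sO https://tarballs.opendev.org/openstack/ironic/ironic-21.3.0.tar.gz
$ curl -sO https://tarballs.opendev.org/openstack/ironic/ironic-21.3.0.tar.gz.asc
$ gpg --verify ironic-21.3.0.tar.gz.asc ironic-21.3.0.tar.gz

What is missing is automation that runs this kind of comparison against whatever actually lands on PyPI.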
> > > > My main concern with doing this is that it requires users to opt > > into checking it because pip itself is never going to check the > > gpg signatures. It is better than nothing, but the vast majority > > of people running a pip install and pulling in random libraries > > from openstack as dependencies will never validate the signatures. > [...] > > I read this suggestion as having automation or some periodic task > performed by the release managers or similar group, whereby our > community checks new releases against available signatures rather > than at install time. > > Worth noting, the release team already periodically runs a script > which audits all project tags to make sure we have all intended > packages and signatures in the expected locations. It would > theoretically be possible to just double check that there aren't any > extra packages/releases on PyPI that don't correspond to release > tags in our repositories or are otherwise anomalous (extra platform > wheels, post versions, et cetera) or which differ from the ones on > our tarballs site in some way. That should be sufficient to catch > most possibilities without needing to actually retrieve every > package so that the signatures for them can be validated directly. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Jan 26 18:31:30 2023 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 26 Jan 2023 19:31:30 +0100 Subject: [ptl][tc] OpenStack packages PyPi additional external maintainers audit & cleanup In-Reply-To: References: <185d18a20aa.1206b91ad115363.5205111285046207324@ghanshyammann.com> <20230126030300.kmmc7aq6u5q3at6i@yuggoth.org> Message-ID: I agree with Jay's summary below wholeheartedly. It is good that this issue has also highlighted other issues with the current maintainers&owners lists. The bottom line anyways is that the current state of affairs *must* change and a policy be created and kept in check going forward. Radek -yoctozepto On Thu, 26 Jan 2023 at 18:53, Jay Faulkner wrote: > > > > I'm going to be honest, I'm a little frustrated that this isn't being treated as a severe security issue; because it is. Regardless of what people anecdotally say, there are a large number of operators who are installing and running openstack from pypi. Even larger still are developers -- like you and I -- installing openstack packages as our user anytime we run tox tests. > > I'm not personally worried too much about the cases where an existing core or PTL is listed. I'm happy to have a policy argument in that context. That's not what's happening in the vast majority of these cases. For Ironic, of the 18 pypi artifacts we have; 14 have maintainers who are not currently core reviewers or openstack contributors on them -- and a handful of them have maintainers who I personally have never even heard of (we even have a `login.launchpad.net_154` as a maintainer on IPA). At any point in time, one of these people could[1] upload a malicious version of the package up to pypi and compromise any one of us who are unlucky enough to run `tox -r` locally before it's caught and rolled back. > > I don't think packages that are managed by the OpenStack governance process should ever have a backdoor open to humans of any kind; but that's not the case we're arguing here. 
Right now, we have a large group of untrusted people with the ability to compromise our developers and users if they are not securing their pypi accounts well. This is especially true in an open source climate that is becoming more and more focused on supply chain risks. That is where my focus lies, and voting to remove those erroneous maintainers hastily is exactly the sort of decision I'd expect to be made by the Technical Committee -- even before I was elected to it (and if you disagree; I'm happy to only serve one term). We have to protect our users and our contributors. > > Our focus must remain on closing this security hole. It does raise policy questions, but right now I'd rather us put out the fire rather than arguing about fire prevention tactics. > > Thanks, > Jay Faulkner > TC Member > Ironic PTL > > Footnote: > 1 - Arguably this has already happened once, in the case of xstatic-font-awesome -- we were just lucky that it was a good-faith release and not an actual compromised. From fungi at yuggoth.org Thu Jan 26 18:40:08 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 26 Jan 2023 18:40:08 +0000 Subject: [security-sig] Polls in preparation to revive our meetings Message-ID: <20230126183907.tiamhukqq6ixpp43@yuggoth.org> As discussed at the last PTG, the present meeting time (15:00 UTC on the first Thursday of each month) is inconvenient for some attendees, and that combined with year-end holidays and general busy weeks recently have led to skipping them entirely. In order to start narrowing down the potential meeting schedule, I have two initial polls. The first is to determine what frequency we should meet. If you have an opinion on that, please fill out this poll before Thursday, February 9 (two weeks from today): https://framadate.org/wz7GioqmgyWeILkr The second poll is to hopefully determine what day of the week is optimal for potential attendees. If you have a preference for which day of the week to meet, please complete this one by the same date as the first: https://framadate.org/CyxKgZPT8PWxcCnJ Once I can analyze the results, I'll put together a more specific poll for choosing a time of day as well as possibly choosing which week(s) of the month (if we don't settle on weekly frequency). In the meantime, let's plan to hold February's meeting on Thursday the 2nd at 15:00-16:00 UTC as usual[*] for anyone who is able to attend, and I'll get an agenda together in preparation for that. [*] https://meetings.opendev.org/#OpenStack_Security_SIG_meeting -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From satish.txt at gmail.com Thu Jan 26 19:20:01 2023 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 26 Jan 2023 14:20:01 -0500 Subject: [cinder] cinder-backup volume stuck in creating Message-ID: Folks, I have configured nova and cinder with ceph storage. VMs running on ceph storage but now when i am trying to create a backup of cinder volume its getting stuck on creating and doing nothing. Logs also do not give any indication of bad. 
My cinder.conf [DEFAULT] enabled_backends = rbd-1 backup_driver = cinder.backup.drivers.ceph.CephBackupDriver backup_ceph_conf = /etc/ceph/ceph.conf backup_ceph_user = cinder-backup backup_ceph_chunk_size = 134217728 backup_ceph_pool = backups backup_ceph_stripe_unit = 0 backup_ceph_stripe_count = 0 restore_discard_excess_bytes = true osapi_volume_listen = 10.73.0.181 osapi_volume_listen_port = 8776 Output of "openstack volume service list" showing cinder-backup service is up but when i create a backup it's getting stuck in this stage and no activity. I am not seeing anything getting transferred to the ceph backups pool also. Any clue? or method to debug? # openstack volume backup list --all +--------------------------------------+------+-------------+----------+------+ | ID | Name | Description | Status | Size | +--------------------------------------+------+-------------+----------+------+ | bc844d55-8c5a-4bd3-b0e9-7c4c780c95ad | foo1 | | creating | 20 | +--------------------------------------+------+-------------+----------+------+ -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Fri Jan 27 00:09:28 2023 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 26 Jan 2023 16:09:28 -0800 Subject: [ptl][tc] OpenStack packages PyPi additional external maintainers audit & cleanup In-Reply-To: References: <185d18a20aa.1206b91ad115363.5205111285046207324@ghanshyammann.com> <20230126030300.kmmc7aq6u5q3at6i@yuggoth.org> Message-ID: Thank you to everyone that has participated in this discussion. My hope was that the community could have a discussion around the issues and come up with some ideas beyond the "single maintainer" or "out of OpenStack" specified in the etherpad[1]. As I mentioned, and was highlighted by a few responses, removing additional maintainers from the projects doesn't fully solve the problem for our community. In fact it raises some new problems that were not previously discussed. Let me summarize some of the issues/proposals for easier comment/discussion (please add/correct if I missed something): 1. Issue: There is currently no way to validate packages on PyPi (tar or wheel) are official OpenStack packages. a) PEP 458[2][3] - TUF for PyPi/pip does not exist yet. Development has slowed. b) PyPi has removed the capability to associate signatures with packages(beyond uploading an asc file) and does not display them in any way via the web UI. c) Some packagers include the pgp signature as ".asc" files associated with each package file uploaded. i) OpenStack does not do this today. Currently OpenStack only publishes pgp signatures on tarballs.opendev.org. ii) The tooling for working with PyPi doesn't use these signature files, so they are of minimal value. d) Question raised: Should we as the OpenStack community get involved in the PEP458 development to help move this forward? e) Question raised: Should we stop using PyPi as it does not provide software traceability? PyPi has had a lot of press recently about issues with look-alike packes, etc.[4][5][6] f) Question raised: Should we develop some OpenStack tooling that provides the required packaged validation? Either via pulling signatures from tarballs.o.o or pulling OpenStack packages from tarballs.o.o directly. g) Question raised: Should we use lockfiles (pipfile/pipfile.lock)[4] to provide hash validation of packages? h) Question raised: Should we start providing hashes in requirements.txt to allow "hash-checking mode"[5]? 2. 
Issue: There is no documentation for how to add a service to OpenStack on the OpenStack website any longer and no policy on what it means for artifacts transitioning to OpenStack management. a) Historically not all OpenStack services were required to be managed via the release team process. It was not a problem for PTLs to push tags and publish releases via alternate processes. b) Question raised: Should this change and all OpenStack services be required to use the release process? c) Question raised: Should we clearly call out all associated services (PyPi, github, snyk, twitter, grafana, docker hub, quay, etc.) will become solely managed by OpenStack? d) Question raised: Should we bring back a link to the "project creator" section of the docs.opendev.org or do we need an OpenStack specific guide? 3. Issue: Current and historical documentation[9] for project teams management of PyPi projects has stated that "openstackci" should not have "Owner" access to the project, just "Maintainer". a) The "Maintainer" role allows "openstackci" to publish packages.[10[ b) The "Maintainer" role does not allow the deletion of packages, the project, or the removal of other maintainers.[10] c) Question raised: Does "openstackci" even have the required permission to remove the other maintainers? d) Question raised: Do we need to create/correct proper documentation for project management on PyPi? 4. Issue: Removing maintainers other than "openstackci" creates a single point of management. a) Resources available in OpenStack are declining and response times for infrastructure issues are rising[11][12]. b) Even with current openstackci access, many of these PyPi projects have not been maintained by "openstackci" resources, but at least some have been maintained by the other maintainers. i) Some projects still have "openstackjenkins" maintainers[13] that were never cleaned up. ii) Have all projects with "openstackci" access been updated to require 2FA? (I turned this on for the Octavia projects I maintain long ago) iii) The issue with another group having access to the project highlights that we might not have resources to actively manage all of these projects[14]. c) If (when if you are a security person) the "openstackci" account is compromised (insider or other), we would be locked out of all of these projects. d) In OpenStack we distribute the work and responsibility to trusted parties (PTLs, core reviews, liaisons, etc.), moving to a single account for PyPi would not allow this pattern. e) Question raised: Could we provide an option 3 where we leave access for active maintainers if the PTL approves? f) Question raised: Should we have secondary accounts for services like this that are held in escrow in case we have a compromise of the primary account? g) Question raised: Can we setup an automated system where the current PTL can get access to the project on PyPi for a period of time? h) Why isn't the release/"openstackci" account a lower permission account used for the release process only instead of a "root" style account? (related to question "f") above. i) The good news is, even with the above issues, someone did see a notification of a maintainer list change to one of these projects. We don't get notifications of releases, but we do get notifications of maintainer list changes. 5. Issue: We have no audit checks in place to identify rogue releases on PyPi. a) Even with the change to only allowing "openstackci" as a maintainer, a compromised "openstackci" or PyPi could still release compromised packages. 
b) Proposal: Update the existing release team audit script to also audit the PyPi packages for unexpected packages (tar, wheel, etc.). There are a lot of questions here to work through and I hope they are a topic for future TC/community discussion. Here is what I would do: 1. Give the PTLs the option to leave some maintainers on the packages as appropriate until we have additional infrastructure in place. Remove the other "maintainers/owners" from the projects. 2. Immediately audit the "openstackci" access to see what level of permission we have on the projects. We may need to find people or escalate with the PyPi admins to fix this. It's not a simple "maintainer" or not. 3. Create OpenStack documentation for how access to PyPi projects is expected to work (either in the release team docs, or a new section to replace the project creator guide). If we want to use the opendev.org docs, we need to relink those into the OpenStack docs and correct the issues with the instructions. 4. We should create another account in all of the OpenStack managed projects that will assume the "owner" role. Access to this account should be separate (escrowed?) from the other maintainer accounts. 5. The "openstackci" accounts should all be dropped to "maintainer" level permissions, allowing releases, but no destructive actions. 6. Create a documented action plan for who/how a package can be pulled from PyPi in the case of a compromised package was released or the automated release process had a failure. 7. Update the release team audit script to also check PyPi for rogue releases, preferably by also checking the hash. This may uncover existing problem releases we do not know about yet. 8. Automate the above audit script to run regularly or document a schedule it will be run by the release team. 9. Audit that all of the OpenStack projects on PyPi have 2FA required. This is a per-project setting on PyPi. 10. Start work on developing an automated system where PTLs can get owner level access, for a period of time, to address the issues the individual maintainers have done historically. This would at least provide an audit log and allow us to spread the work. 11. Form a task team to evaluate if we should get involved in the PEP 458 work or if using one of the hash validation tools would be feasible as part of our requirements process. Michael [1] https://etherpad.opendev.org/p/openstack-pypi-maintainers-cleanup#L17 [2] https://peps.python.org/pep-0458/ [3] https://github.com/pypi/warehouse/issues/10672 [4] https://thenewstack.io/poisoned-lolip0p-pypi-packages/ [5] https://www.theregister.com/2023/01/09/pypi_aws_malware_key/ [6] https://www.securityweek.com/security-firms-find-over-20-malicious-pypi-packages-designed-data-theft/ [7] https://pypi.org/project/pipfile/ [8] https://pip.pypa.io/en/stable/topics/secure-installs/#hash-checking-mode [9] https://docs.opendev.org/opendev/infra-manual/latest/creators.html#give-opendev-permission-to-publish-releases [10] https://pypi.org/help/#collaborator-roles [11] https://review.opendev.org/c/opendev/system-config/+/795596 [12] https://review.opendev.org/c/openinfra/openstack-map/+/840774 [13] https://pypi.org/user/openstackjenkins/ [14] https://github.com/openstack/xstatic-font-awesome/pull/2 On Thu, Jan 26, 2023 at 10:31 AM Rados?aw Piliszek wrote: > > I agree with Jay's summary below wholeheartedly. > > It is good that this issue has also highlighted other issues with the > current maintainers&owners lists. 
The bottom line anyways is that the > current state of affairs *must* change and a policy be created and > kept in check going forward. > > Radek > -yoctozepto > > On Thu, 26 Jan 2023 at 18:53, Jay Faulkner wrote: > > > > > > > > I'm going to be honest, I'm a little frustrated that this isn't being treated as a severe security issue; because it is. Regardless of what people anecdotally say, there are a large number of operators who are installing and running openstack from pypi. Even larger still are developers -- like you and I -- installing openstack packages as our user anytime we run tox tests. > > > > I'm not personally worried too much about the cases where an existing core or PTL is listed. I'm happy to have a policy argument in that context. That's not what's happening in the vast majority of these cases. For Ironic, of the 18 pypi artifacts we have; 14 have maintainers who are not currently core reviewers or openstack contributors on them -- and a handful of them have maintainers who I personally have never even heard of (we even have a `login.launchpad.net_154` as a maintainer on IPA). At any point in time, one of these people could[1] upload a malicious version of the package up to pypi and compromise any one of us who are unlucky enough to run `tox -r` locally before it's caught and rolled back. > > > > I don't think packages that are managed by the OpenStack governance process should ever have a backdoor open to humans of any kind; but that's not the case we're arguing here. Right now, we have a large group of untrusted people with the ability to compromise our developers and users if they are not securing their pypi accounts well. This is especially true in an open source climate that is becoming more and more focused on supply chain risks. That is where my focus lies, and voting to remove those erroneous maintainers hastily is exactly the sort of decision I'd expect to be made by the Technical Committee -- even before I was elected to it (and if you disagree; I'm happy to only serve one term). We have to protect our users and our contributors. > > > > Our focus must remain on closing this security hole. It does raise policy questions, but right now I'd rather us put out the fire rather than arguing about fire prevention tactics. > > > > Thanks, > > Jay Faulkner > > TC Member > > Ironic PTL > > > > Footnote: > > 1 - Arguably this has already happened once, in the case of xstatic-font-awesome -- we were just lucky that it was a good-faith release and not an actual compromised. > From ralonsoh at redhat.com Fri Jan 27 08:22:30 2023 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 27 Jan 2023 09:22:30 +0100 Subject: [neutron] Neutron drivers meeting Message-ID: Hello Neutrinos: This is just a reminder of the meeting we have today at 1400UTC. Please check today's agenda: https://wiki.openstack.org/wiki/Meetings/NeutronDrivers See you in IRC. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Jan 27 08:34:30 2023 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 27 Jan 2023 09:34:30 +0100 Subject: [ptl][tc] OpenStack packages PyPi additional external maintainers audit & cleanup In-Reply-To: References: <185d18a20aa.1206b91ad115363.5205111285046207324@ghanshyammann.com> <20230126030300.kmmc7aq6u5q3at6i@yuggoth.org> Message-ID: <04e7e620-4f63-8c7c-7a06-1353432d35c5@openstack.org> Michael Johnson wrote: > [...] > 2. 
Issue: There is no documentation for how to add a service to > OpenStack on the OpenStack website any longer and no policy on what it > means for artifacts transitioning to OpenStack management. > a) Historically not all OpenStack services were required to be > managed via the release team process. It was not a problem for PTLs to > push tags and publish releases via alternate processes. > b) Question raised: Should this change and all OpenStack services be > required to use the release process? It is actually a documented requirement for OpenStack projects that "releases of OpenStack deliverables are handled by the OpenStack Release Management team through the openstack/releases repository. Official projects are expected to relinquish direct tagging (and branch creation) rights in their Gerrit ACLs once their release jobs are functional." See https://governance.openstack.org/tc/reference/new-projects-requirements.html The release team delegates release management for some deliverables that use different toolchains (tagged "release-management: external" in governance) like for example OpenStack Charms. And some deliverables are just not released (tagged "release-management: none"). But the default is already to have all services follow the release management process. -- Thierry Carrez (ttx) From jake.yip at ardc.edu.au Fri Jan 27 10:05:53 2023 From: jake.yip at ardc.edu.au (Jake Yip) Date: Fri, 27 Jan 2023 21:05:53 +1100 Subject: [magnum] Other Distro for k8s In-Reply-To: References: Message-ID: <499d7e4d-bb98-3769-afaf-46387bd26d9c@ardc.edu.au> Hi Nguyen, The Magnum team is looking to move to ClusterAPI [1]; one of the advantages is that we can leverage off upstream effort and support Ubuntu. This work is still in the very early stages, but yes, we do have plans for it. [1] https://cluster-api.sigs.k8s.io/ Regards, Jake On 23/1/2023 1:54 am, Nguy?n H?u Kh?i wrote: > Hello guys. > > I know that Magnum is using Fedora Coreos for k8s. Why don't we?use a > long-term distro such as Ubuntu for this project? > I will be more stable. and this project seems obsolete with the old > version for k8s. > > Nguyen Huu Khoi From jake.yip at ardc.edu.au Fri Jan 27 10:15:56 2023 From: jake.yip at ardc.edu.au (Jake Yip) Date: Fri, 27 Jan 2023 21:15:56 +1100 Subject: [Magnum]enable cluster user trust In-Reply-To: References: Message-ID: <6040b621-38c1-33d2-2f1a-2b44ca384c87@ardc.edu.au> Hi Nguyen, This is quite an old (2016) CVE, and I see that there have been a patch for it already. On why Trust is needed - the Kubernetes cluster needs to have OpenStack credentials to be able to spin up OpenStack resources like Cinder Volumes and Octavia Loadbalancers. You should use [trust]/roles in magnum config to limit the amount of roles that the trust is created with. Typically only Member is necessary but this can vary from cloud to cloud, depending on whether your cloud have custom policies. Regards, Jake On 23/1/2023 1:59 am, Nguy?n H?u Kh?i wrote: > Hello guys. > I am going to use Magnum for production but I see that > https://nvd.nist.gov/vuln/detail/CVE-2016-7404 > if I want to use cinder > for k8s cluster. Is there any way to fix or minimize this problem? > Thanks. > Nguyen Huu Khoi From jake.yip at ardc.edu.au Fri Jan 27 10:35:52 2023 From: jake.yip at ardc.edu.au (Jake Yip) Date: Fri, 27 Jan 2023 21:35:52 +1100 Subject: Reddit query for openstack magnum for enterprise core component maker In-Reply-To: References: Message-ID: Hi, Read your post on Reddit. 
From an operator's perspective, Magnum allows us to easily let users provision a Kubernetes cluster with a few clicks and go straight into k8s. We are already operating an OpenStack cloud so Magnum with the Openstack integration was a great choice. What is best for you will depend a lot on a few factors: - Your familiarity with OpenStack and size/features of your current cloud - Size / Number of clusters and Day 1 operations - What kind of service you want provide It is difficult to give a great answer without knowing more details about your situation. Feel free to ping me on this email with more information if you are not comfortable with sharing internal details on the internet. Regards, Jake On 22/1/2023 10:07 am, Gajendra D Ambi wrote: > https://www.reddit.com/r/openstack/comments/10hu68s/container_orchestrator_for_openstack/ . > Hi team, > request anyone of you from this project to please help us out. We also > mean to contribute to the project because we know that we will need to > add a lot more features to it that what api endpoints are already > providing to us. When we do, it will all be contributed to the project > after it is being tested for months in production. I am leaning towards > openstack magnum and I do not have a lot of time to convince others of > the same. > > Thanks and Regards, > https://ambig.one/2/ > From senrique at redhat.com Fri Jan 27 10:38:53 2023 From: senrique at redhat.com (Sofia Enriquez) Date: Fri, 27 Jan 2023 10:38:53 +0000 Subject: [cinder] cinder-backup volume stuck in creating In-Reply-To: References: Message-ID: Hello Satish, Are you able to track the API request in c-api logs? Does the c-bak logs show that it is creating a backup at least or nothing at all? Regards, On Thu, Jan 26, 2023 at 7:22 PM Satish Patel wrote: > Folks, > > I have configured nova and cinder with ceph storage. VMs running on ceph > storage but now when i am trying to create a backup of cinder volume its > getting stuck on creating and doing nothing. Logs also do not give any > indication of bad. > > My cinder.conf > > [DEFAULT] > > enabled_backends = rbd-1 > backup_driver = cinder.backup.drivers.ceph.CephBackupDriver > backup_ceph_conf = /etc/ceph/ceph.conf > backup_ceph_user = cinder-backup > backup_ceph_chunk_size = 134217728 > backup_ceph_pool = backups > backup_ceph_stripe_unit = 0 > backup_ceph_stripe_count = 0 > restore_discard_excess_bytes = true > osapi_volume_listen = 10.73.0.181 > osapi_volume_listen_port = 8776 > > > Output of "openstack volume service list" showing cinder-backup service is > up but when i create a backup it's getting stuck in this stage and no > activity. I am not seeing anything getting transferred to the ceph backups > pool also. Any clue? or method to debug? > > # openstack volume backup list --all > > +--------------------------------------+------+-------------+----------+------+ > | ID | Name | Description | Status | > Size | > > +--------------------------------------+------+-------------+----------+------+ > | bc844d55-8c5a-4bd3-b0e9-7c4c780c95ad | foo1 | | creating | > 20 | > > +--------------------------------------+------+-------------+----------+------+ > -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From swogatpradhan22 at gmail.com Fri Jan 27 11:15:51 2023 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Fri, 27 Jan 2023 16:45:51 +0530 Subject: cloud dcn01 (edge) not found in External Deployment Post deployment Tasks | tripleo | wallaby | centos 8 stream Message-ID: Hi, I am trying to deploy DCN and in the final step getting the following error: 2023-01-27 18:31:10.843012 | 48d539a1-1679-1cbd-45d5-0000000000fe | TASK | Nova: Manage aggregate and availability zone and add hosts to the zone Using module file /usr/lib/python3.6/site-packages/ansible/modules/cloud/openstack/os_nova_host_aggregate.py Pipelining is enabled. ESTABLISH LOCAL CONNECTION FOR USER: stack EXEC /bin/sh -c 'sudo -H -S -n -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-bhcwyaeoxhaczfutgazdvosyznxmddwq ; OS_CLOUD=dcn01 /usr/bin/python3'"'"' && sleep 0' The full traceback is: File "/tmp/ansible_os_nova_host_aggregate_payload_uur0b1qu/ansible_os_nova_host_aggregate_payload.zip/ansible/module_utils/openstack.py", line 159, in openstack_cloud_from_module interface=module.params['interface'], File "/usr/lib/python3.6/site-packages/openstack/__init__.py", line 63, in connect options=options, **kwargs) File "/usr/lib/python3.6/site-packages/openstack/config/__init__.py", line 36, in get_cloud_region return config.get_one(options=parsed_options, **kwargs) File "/usr/lib/python3.6/site-packages/openstack/config/loader.py", line 1107, in get_one config = self._get_base_cloud_config(cloud, profile) File "/usr/lib/python3.6/site-packages/openstack/config/loader.py", line 509, in _get_base_cloud_config name=name)) 2023-01-27 18:31:14.137975 | 48d539a1-1679-1cbd-45d5-0000000000fe | FATAL | Nova: Manage aggregate and availability zone and add hosts to the zone | undercloud | error={ "changed": false, "invocation": { "module_args": { "api_timeout": null, "auth": null, "auth_type": null, "availability_zone": "dcn01", "ca_cert": null, "client_cert": null, "client_key": null, "hosts": [ "dcn01-hci-0.bdxworld.com", "dcn01-hci-1.bdxworld.com", "dcn01-hci-2.bdxworld.com" ], "interface": "public", "metadata": null, "name": "dcn01", "region_name": null, "state": "present", "timeout": 180, "validate_certs": null, "wait": true } }, "msg": "Cloud dcn01 was not found." 
} 2023-01-27 18:31:14.139916 | 48d539a1-1679-1cbd-45d5 Deploy step: (undercloud) [stack at hkg2director dcn01]$ cat deploy_dcn01.sh #!/bin/bash THT=/usr/share/openstack-tripleo-heat-templates/ CNF=/home/stack/ openstack overcloud deploy \ --stack dcn01 \ --templates $THT \ -r $CNF/dcn01/dcn01_roles.yaml \ -n $CNF/dcn01/custom_network_data.yaml \ -e $CNF/dcn01/node-info.yaml \ -e $CNF/dcn01/scheduler-hints.yaml \ -e $CNF/dcn01/overcloud-networks-deployed.yaml \ -e $CNF/dcn01/vip-deployed-environment.yaml \ -e ~/containers-prepare-parameter.yaml \ -e $THT/environments/services/neutron-ovn-dvr-ha.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/dcn-storage.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm-rbd-only.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/nova-az-config.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services/etcd.yaml \ -e $CNF/dcn01/dcn01_overcloud-baremetal-deployed.yaml \ -e $CNF/dcn01/glance_dcn01.yaml \ -e $CNF/dcn01/deployed_ceph.yaml \ -e $CNF/dcn01/dcn01_parameters.yaml \ -e $CNF/dcn01/overcloud-export.yaml \ -e $CNF/dcn01/clouddomain.yaml \ --ntp-server 172.25.201.68 -vv Can you please tell me what this issue could be? How to fix it? With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Fri Jan 27 15:24:20 2023 From: zigo at debian.org (Thomas Goirand) Date: Fri, 27 Jan 2023 16:24:20 +0100 Subject: How to create a dynamic pollster subsystem to create a pollster for senlin cluster In-Reply-To: References: Message-ID: On 1/25/23 10:04, Uday Dikshit wrote: > Hello Team > We are a public cloud provider based on Openstack. > We are working to create Autoscaling with aodh and senlin in > Kolla-ansible Openstack Wallaby release. We are facing an issue as > ceilometer does not support metrics for senlin cluster as a resource. > Our aim is to use > https://docs.openstack.org/ceilometer/wallaby/admin/telemetry-dynamic-pollster.html ?to generate a pollster to collect data for senlin. We were looking if anybody in the community has ever used this feature. Hi, Not only we use that feature in production, but I also used the dynamic pollster stuff on the compute pollster using the command-line thingy. The result is this project: https://salsa.debian.org/openstack-team/services/ceilometer-instance-poller/ You can also read bits of docs of OCI about it: https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer#configuring-a-custom-metric-and-billing I hope this helps. If you need more help, please do reply ... Cheers, Thomas Goirand (zigo) From elod.illes at est.tech Fri Jan 27 15:30:33 2023 From: elod.illes at est.tech (=?utf-8?B?RWzDtWQgSWxsw6lz?=) Date: Fri, 27 Jan 2023 15:30:33 +0000 Subject: [release] Release countdown for week R-7, Jan 30 - Feb 03 Message-ID: Development Focus ----------------- We are entering the last weeks of the 2023.1 Antelope development cycle. From now until the final release, we'll send a countdown email like this every week. It's probably a good time for teams to take stock of their library and client work that needs to be completed yet. The non-client library freeze is coming up, followed closely by the client lib freeze. Please plan accordingly to avoid any last minute rushes to get key functionality in. 
General Information ------------------- Next week is the Extra-ATC freeze, in preparation for elections. All contributions to OpenStack are valuable, but some are not expressed as Gerrit code changes. Please list active contributors to your project team who do not have a code contribution this cycle, and therefore won't automatically be considered an Active Technical Contributor and allowed to vote. This is done by adding extra-atcs to https://opendev.org/openstack/governance/src/branch/master/reference/projects.yaml before the Extra-ATC freeze on February 2nd, 2023. A quick reminder of the upcoming freeze dates. Those vary depending on deliverable type: * General libraries (except client libraries) need to have their last feature release before Non-client library freeze (February 9th, 2023). Their stable branches are cut early. * Client libraries (think python-*client libraries) need to have their last feature release before Client library freeze (February 16th, 2023) * Deliverables following a cycle-with-rc model (that would be most services) observe a Feature freeze on that same date, February 16th, 2023. Any feature addition beyond that date should be discussed on the mailing-list and get PTL approval. After feature freeze, cycle-with-rc deliverables need to produce a first release candidate (and a stable branch) before RC1 deadline (March 2nd, 2023) * Deliverables following cycle-with-intermediary model can release as necessary, but in all cases before Final RC deadline (March 16th, 2023) Finally, now is also a good time to start planning what highlights you want for your deliverables in the cycle highlights. The deadline to submit an initial version for those is set to Feature freeze (February 16th, 2023). Background on cycle-highlights: http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html Project Team Guide, Cycle-Highlights: https://docs.openstack.org/project-team-guide/release-management.html#cycle-highlights knelson [at] openstack.org/diablo_rojo on IRC is available if you need help selecting or writing your highlights Upcoming Deadlines & Dates -------------------------- Extra-ATC freeze: February 2nd, 2023 (R-7 week) Non-client library freeze: February 9th, 2023 (R-6 week) Client library freeze: February 16th, 2023 (R-5 week) Antelope-3 milestone: February 16th, 2023 (R-5 week) 2023.2 Bobcat Virtual PTG: March 27-31, 2023 El?d Ill?s irc: elodilles @ #openstack-release -------------- next part -------------- An HTML attachment was scrubbed... URL: From jobernar at redhat.com Fri Jan 27 15:50:15 2023 From: jobernar at redhat.com (Jon Bernard) Date: Fri, 27 Jan 2023 10:50:15 -0500 Subject: [cinder] cinder-backup volume stuck in creating In-Reply-To: References: Message-ID: Without the logs themselves it's really hard to say. One way to proceed would be to file a bug [1] and the team can work with you there. You could also enable debugging (debug = True), reproduce the failure, and upload the relevant logs there as well. [1]: https://bugs.launchpad.net/cinder/+filebug -- Jon On Thu, Jan 26, 2023 at 2:20 PM Satish Patel wrote: > Folks, > > I have configured nova and cinder with ceph storage. VMs running on ceph > storage but now when i am trying to create a backup of cinder volume its > getting stuck on creating and doing nothing. Logs also do not give any > indication of bad. 
> > My cinder.conf > > [DEFAULT] > > enabled_backends = rbd-1 > backup_driver = cinder.backup.drivers.ceph.CephBackupDriver > backup_ceph_conf = /etc/ceph/ceph.conf > backup_ceph_user = cinder-backup > backup_ceph_chunk_size = 134217728 > backup_ceph_pool = backups > backup_ceph_stripe_unit = 0 > backup_ceph_stripe_count = 0 > restore_discard_excess_bytes = true > osapi_volume_listen = 10.73.0.181 > osapi_volume_listen_port = 8776 > > > Output of "openstack volume service list" showing cinder-backup service is > up but when i create a backup it's getting stuck in this stage and no > activity. I am not seeing anything getting transferred to the ceph backups > pool also. Any clue? or method to debug? > > # openstack volume backup list --all > > +--------------------------------------+------+-------------+----------+------+ > | ID | Name | Description | Status | > Size | > > +--------------------------------------+------+-------------+----------+------+ > | bc844d55-8c5a-4bd3-b0e9-7c4c780c95ad | foo1 | | creating | > 20 | > > +--------------------------------------+------+-------------+----------+------+ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Fri Jan 27 17:22:57 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 27 Jan 2023 18:22:57 +0100 Subject: [openstack-ansible] Meeting on 31st of January 2023 is cancelled Message-ID: Hi everyone, Due to limited availability of core members for the upcoming meeting that should have taken place on 31.01.2023, I'm informing you that this meeting will be cancelled. Hoping to see everyone for the next meeting on 7th of February. From elod.illes at est.tech Fri Jan 27 18:02:13 2023 From: elod.illes at est.tech (=?utf-8?B?RWzDtWQgSWxsw6lz?=) Date: Fri, 27 Jan 2023 18:02:13 +0000 Subject: [all][stable][ptl] Propose to EOL Rocky series Message-ID: Hi, Similarly like the Queens branch EOL proposal [1] I would like to propose to transition every project's stable/rocky to End of Life: - gates are mostly broken - minimal number of activity can be seen on this branch - some core projects already transitioned their stable/rocky to EOL recently (like ironic, neutron, nova) - gate job definitions are still using the old, legacy zuul syntax - gate jobs are based on Ubuntu Xenial, which is also beyond its public maintenance window date and hard to maintain Based on the above, if there won't be any project who wants to keep open their stable/rocky, then I'll start the process of EOL'ing Rocky stable series as a whole. If anyone has any objection then please respond to this mail. Thanks, El?d Ill?s irc: elodilles @ #openstack-stable / #openstack-release [1] https://lists.openstack.org/pipermail/openstack-discuss/2022-October/031030.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Fri Jan 27 19:57:42 2023 From: jay at gr-oss.io (Jay Faulkner) Date: Fri, 27 Jan 2023 11:57:42 -0800 Subject: [all][stable][ptl] Propose to EOL Rocky series In-Reply-To: References: Message-ID: Thanks for doing all this cleanup work Elod. Ironic is OK with retirements of these shared resources up to Train. 
- Jay Faulkner On Fri, Jan 27, 2023 at 10:12 AM El?d Ill?s wrote: > Hi, > > Similarly like the Queens branch EOL proposal [1] I would like to propose > to transition every project's stable/rocky to End of Life: > > - gates are mostly broken > - minimal number of activity can be seen on this branch > - some core projects already transitioned their stable/rocky to EOL > recently (like ironic, neutron, nova) > - gate job definitions are still using the old, legacy zuul syntax > - gate jobs are based on Ubuntu Xenial, which is also beyond its public > maintenance window date and hard to maintain > > Based on the above, if there won't be any project who wants to keep open > their stable/rocky, then I'll start the process of EOL'ing Rocky stable > series as a whole. If anyone has any objection then please respond to > this mail. > > Thanks, > > El?d Ill?s > irc: elodilles @ #openstack-stable / #openstack-release > > [1] > https://lists.openstack.org/pipermail/openstack-discuss/2022-October/031030.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri Jan 27 20:12:47 2023 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 27 Jan 2023 21:12:47 +0100 Subject: [Openstack][cinder] multiple backend generic volume group issue Message-ID: Hello All, I created a cinder configuration using multiple backends based on netapp ontap nfs with the same backend name and the same volume type. So my nfsgold1, nfsgold2 and nfsgold3 are addressed by the nfsgold volume backend name and nfsgold volume type. Each one has its own svm on netapp. This help me to distribute nfsgold volumes using scheduler filters based on free capacity. So when I create a volume with volume type nfsgold the scheduler allocates it on nfsgold1 or nfsgold2 or nfsgold3 using the capacity filter. Since some virtual machines need to have volumes on the same backend (for example nfsgold1) because they belong to the same application, I use the cinder scheduler hint. Why I need to store those volumes on the same backend? It is because they must belong to the same generic volume group for snaphotting at the same time. For this reason I need to create a generic volume group. Generic volume group creation need the volume type, in my case nfsgold. But like the volume, when I create a generic volume group, it is scheduled on nfsgold1 or nfsgold2 or nfsgold3 and it is obvious looking in the cinder database. So if I want to groups volumes of an application I must: 1 check if they are on the same backend (nfsgold1/nfsgold2/nfsgold3) 2) check on which backend the volume group is allocated (nfsgold1,/nfsgold2/nfsgold3) and it can be done only looking in the cinder database). Volumes and volume goups must stay on the same real backend. If not, when I create a group snapshot, it gives some errors because it checks the host related to the real backend (nfsgold1/nfsgold2/nfsgold3) and returns errors failing the operation. When I create a volume group by api or by command line, I must specify the volume type but I cannot know which is the real backend associated to it without looking in the cinder database. I think this is a bug. In the above situation, how can obtain consistent volume group snapshot ? Sorry for my bad english. I hope who is reading can understand what I mean. Ignazio -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From techstep at gmail.com Fri Jan 27 20:36:44 2023 From: techstep at gmail.com (Rob Jefferson) Date: Fri, 27 Jan 2023 15:36:44 -0500 Subject: [kolla] [senlin] [xena] Self-signed cert errors during Senlin auth Message-ID: Folks, I am deploying OpenStack Xena via Kolla. As part of improving our orchestration offerings, I am investigating the use of Senlin in our deployments. Using `enable_senlin: "yes"`, the containers install as expected. When I attempt to create an initial profile, I get the following error: > HttpException: 500: Server Error for url: https://external:8778/v1/profiles, > Could not find versioned identity endpoints when attempting to authenticate. > Please check that your auth_url is correct. SSL exception connecting > to https://internal:35357: HTTPSConnectionPool(host='internal', > port=35357): Max retries exceeded with url: / (Caused by > SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] > certificate verify failed: self signed certificate in > certificate chain (_ssl.c:1131)'))) I have tried setting `verify_ssl = False` in senlin.conf, but no dice. I don't see this issue on the other services for which we're using the same certificates (e.g., Heat, Keystone, Barbican). Looking in the containers, I don't see -cert.pem or -key.pem files for Senlin as I did for other services. Moreover, the authentication configurations look the same in all relevant respects, between Senlin and the services that do work. I'm positively flummoxed about why the certs aren't getting distributed. When I take a look at the documentation for Kolla TLS [1], I saw the following: > Enabling TLS on the backend services secures communication between the > HAProxy listing on the internal/external VIP and the OpenStack > services. It also enables secure end-to-end communication between > OpenStack services that support TLS termination. The OpenStack services > that support backend TLS termination in Victoria are: Nova, Ironic, > Neutron, Keystone, Glance, Heat, Placement, Horizon, Barbican, and > Cinder. Missing from here is Senlin, and looking at the same document from subsequent OpenStack releases suggests this hasn't changed. I don't know if this is a relevant issue to the problem I've been having (to be fair, I don't see Octavia, which we've also been using, on the list, even though we also haven't been having issues with Octavia certs). Is this something that I can fix via configuration, or is this a thing wherein we need to change how Kolla deploys Senlin, or even adding in SSL termination to the Senlin service? Any help on this would be greatly appreciated. Thanks, Rob [1] https://github.com/openstack/kolla-ansible/blob/stable/xena/doc/source/admin/tls.rst#back-end-tls-configuration From satish.txt at gmail.com Fri Jan 27 20:46:59 2023 From: satish.txt at gmail.com (Satish Patel) Date: Fri, 27 Jan 2023 15:46:59 -0500 Subject: [cinder] cinder-backup volume stuck in creating In-Reply-To: References: Message-ID: Thank you Jon/Sofia, Biggest issue is even if I turn on debugging, it's not producing enough logs to see what is going on. See following output. https://paste.opendev.org/show/bh9OF9l2OrozrNMglv2Y/ On Fri, Jan 27, 2023 at 10:50 AM Jon Bernard wrote: > Without the logs themselves it's really hard to say. One way to proceed > would be to file a bug [1] and the team can work with you there. You > could also enable debugging (debug = True), reproduce the failure, and > upload the relevant logs there as well. 
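Since nothing ever shows up in the backups pool, it is also worth ruling out a Ceph-side permission problem before digging deeper into Cinder itself. Two read-only sanity checks; the first needs the Ceph admin keyring, the second assumes the client.cinder-backup keyring is available under /etc/ceph/ on the host running cinder-backup:

$ ceph auth get client.cinder-backup      # caps should allow rwx on the backups pool
$ rbd -p backups --id cinder-backup ls    # confirms the backup user can reach the pool at all

If either command errors out or hangs, the backup stuck in 'creating' is most likely the driver waiting on Ceph rather than a problem in the backup service itself.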
> > [1]: https://bugs.launchpad.net/cinder/+filebug > > -- > Jon > > On Thu, Jan 26, 2023 at 2:20 PM Satish Patel wrote: > >> Folks, >> >> I have configured nova and cinder with ceph storage. VMs running on ceph >> storage but now when i am trying to create a backup of cinder volume its >> getting stuck on creating and doing nothing. Logs also do not give any >> indication of bad. >> >> My cinder.conf >> >> [DEFAULT] >> >> enabled_backends = rbd-1 >> backup_driver = cinder.backup.drivers.ceph.CephBackupDriver >> backup_ceph_conf = /etc/ceph/ceph.conf >> backup_ceph_user = cinder-backup >> backup_ceph_chunk_size = 134217728 >> backup_ceph_pool = backups >> backup_ceph_stripe_unit = 0 >> backup_ceph_stripe_count = 0 >> restore_discard_excess_bytes = true >> osapi_volume_listen = 10.73.0.181 >> osapi_volume_listen_port = 8776 >> >> >> Output of "openstack volume service list" showing cinder-backup service >> is up but when i create a backup it's getting stuck in this stage and no >> activity. I am not seeing anything getting transferred to the ceph backups >> pool also. Any clue? or method to debug? >> >> # openstack volume backup list --all >> >> +--------------------------------------+------+-------------+----------+------+ >> | ID | Name | Description | Status | >> Size | >> >> +--------------------------------------+------+-------------+----------+------+ >> | bc844d55-8c5a-4bd3-b0e9-7c4c780c95ad | foo1 | | creating | >> 20 | >> >> +--------------------------------------+------+-------------+----------+------+ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat Jan 28 02:05:45 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 27 Jan 2023 18:05:45 -0800 Subject: [all][tc] What's happening in Technical Committee: summary 2023 Jan 27: Reading: 5 min Message-ID: <185f61f9e96.ea3c1a65316313.8690102718815187562@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on Jan 25. Most of the meeting discussions are summarized in this email. Meeting logs are available @ https://meetings.opendev.org/meetings/tc/2023/tc.2023-01-25-16.00.log.html * The next TC weekly meeting will be on Feb 1 Wed at 16:00 UTC, Feel free to add the topic to the agenda[1] by Jan 31. 2. What we completed this week: ========================= * Added Axel to Mistral project DPL list[2] * Remaining two projects (Mistral and Zaqar) which were less active (their release was in question) are green and active now. Their beta releases are also completed[3][4][5][6]. 3. Activities In progress: ================== TC Tracker for the 2023.1 cycle ------------------------------------- * Current cycle working items and their progress are present in the 2023.1 tracker etherpad[7]. Open Reviews ----------------- * Two open reviews for ongoing activities[8]. Cleanup of PyPI maintainer list for OpenStack Projects ---------------------------------------------------------------- The audit process is going on and a few projects have been completed [9]. There is a discussion also going on in ML[10]. The TC discussed it in this week's meeting also and no change in the initial plan for the cleanup of additional PyPi maintainers. 
The process to find the less active projects early in the cycle: ------------------------------------------------------------------------ I think in every cycle we get some less active projects and that is too little late in the cycle which adds extra work for the release team to release such projects. We discussed in PTG that TC should come up with some process to find such less active projects a little early in the cycle. JayF started the etherpad to collect the data and define the criteria to help in this process[11]. Project updates ------------------- * Add Cinder Huawei charm[12] * Add the woodpecker charm to Openstack charms[13] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[14]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15:00 UTC [15] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions [2] https://review.opendev.org/c/openstack/governance/+/871302 [3] https://review.opendev.org/c/openstack/releases/+/869470 [4] https://review.opendev.org/c/openstack/releases/+/869448 [5] https://review.opendev.org/c/openstack/zaqar/+/857924 [6] https://review.opendev.org/c/openstack/releases/+/871399 [7] https://etherpad.opendev.org/p/tc-2023.1-tracker [8] https://review.opendev.org/q/projects:openstack/governance+status:open [9] https://etherpad.opendev.org/p/openstack-pypi-maintainers-cleanup [10] https://lists.openstack.org/pipermail/openstack-discuss/2023-January/031848.html [11] https://etherpad.opendev.org/p/project-health-check [12] https://review.opendev.org/c/openstack/governance/+/867588 [13] https://review.opendev.org/c/openstack/governance/+/869752 [14] hhttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [15] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From vincentlee676 at gmail.com Sat Jan 28 03:50:59 2023 From: vincentlee676 at gmail.com (vincent lee) Date: Fri, 27 Jan 2023 21:50:59 -0600 Subject: Pulling plugins from my Github repository Message-ID: Hi everyone, I would like to know where exactly I can replace the path for the GitHub repository. For example, I want to pull some plugins, such as zun_ui, blazar_dashboard, etc., from my own GitHub repository instead of the default GitHub repository. I am currently using Kolla-ansible for deploying OpenStack in the yoga version. Best, Vincent -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sat Jan 28 04:46:00 2023 From: satish.txt at gmail.com (Satish Patel) Date: Fri, 27 Jan 2023 23:46:00 -0500 Subject: [kolla] local registry setup issue Message-ID: Folks, I have set up a local registry using the following method as per official doc. docker run -d \ --network host \ --name registry \ --restart=always \ -e REGISTRY_HTTP_ADDR=0.0.0.0:4000 \ -v registry:/var/lib/registry \ registry:2 After that I ran the following command to import all images to the local registry. 
docker images | grep kolla | grep -v local | awk '{print $1,$2}' | while read -r image tag; do new_image_name=${image#"quay.io/"} docker tag ${image}:${tag} "localhost:4000"/${new_image_name}:${tag} docker push localhost:4000/${new_image_name}:${tag} done Now i can see images in local registry using docker images command (venv-kolla) root at kolla-infra-1:/etc/kolla# docker images | grep localhost localhost:4000/openstack.kolla/ubuntu-source-nova-novncproxy yoga 01dc100dc65a 2 days ago 1.2GB localhost:4000/openstack.kolla/ubuntu-source-horizon yoga a016e1f5b9af 2 days ago 1.1GB localhost:4000/openstack.kolla/ubuntu-source-nova-conductor yoga 381a64a368bd 2 days ago 1.11GB localhost:4000/openstack.kolla/ubuntu-source-nova-api yoga f90a15a09142 2 days ago 1.11GB .... .... I have added it in global.yml docker_registry: 10.73.0.181:4000 docker_registry_insecure: yes Now when i am trying to add compute nodes then I get a registry error like the following. Not sure why its trying to use https://10.73.0.181:4000/v2 instead of http;// TASK [common : include_tasks] **************************************************************************************************************************************************** included: /root/venv-kolla/share/kolla-ansible/ansible/roles/common/tasks/pull.yml for kolla-comp-2 TASK [service-images-pull : common | Pull images] ******************************************************************************************************************************** FAILED - RETRYING: common | Pull images (3 retries left). FAILED - RETRYING: common | Pull images (2 retries left). FAILED - RETRYING: common | Pull images (1 retries left). failed: [kolla-comp-2] (item=fluentd) => {"ansible_loop_var": "item", "attempts": 3, "changed": true, "item": {"key": "fluentd", "value": {"container_name": "fluentd", "dimensions": {}, "enabled": true, "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"}, "group": "fluentd", "image": " 10.73.0.181:4000/openstack.kolla/ubuntu-source-fluentd:yoga", "volumes": ["/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro", "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", "kolla_logs:/var/log/kolla/", "fluentd_data:/var/lib/fluentd/data/"]}}, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 261, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 940, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.41/images/create?tag=yoga&fromImage=10.73.0.181%3A4000%2Fopenstack.kolla%2Fubuntu-source-fluentd\\n\\nDuring handling of the above exception, another exception occurred:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_docker_payload_6o9nw9xd/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", line 381, in main\\n File \"/tmp/ansible_kolla_docker_payload_6o9nw9xd/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 450, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 415, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 263, in _raise_for_status\\n raise create_api_error_from_http_exception(e)\\n File 
\"/usr/lib/python3/dist-packages/docker/errors.py\", line 31, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation)\\ndocker.errors.APIError: 500 Server Error: Internal Server Error (\"Get \"https://10.73.0.181:4000/v2/\": http: server gave HTTP response to HTTPS client\")\\n'"} FAILED - RETRYING: common | Pull images (3 retries left). -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias at caktusgroup.com Sat Jan 28 04:58:50 2023 From: tobias at caktusgroup.com (Tobias McNulty) Date: Fri, 27 Jan 2023 23:58:50 -0500 Subject: [kolla] local registry setup issue In-Reply-To: References: Message-ID: You might need to configure daemon.json as described here: https://docs.openstack.org/kolla-ansible/rocky/user/multinode.html#configure-docker-on-all-nodes On Fri, Jan 27, 2023, 11:46 PM Satish Patel wrote: > Folks, > > I have set up a local registry using the following method as per > official doc. > > docker run -d \ > --network host \ > --name registry \ > --restart=always \ > -e REGISTRY_HTTP_ADDR=0.0.0.0:4000 \ > -v registry:/var/lib/registry \ > registry:2 > > After that I ran the following command to import all images to the local > registry. > > docker images | grep kolla | grep -v local | awk '{print $1,$2}' | while > read -r image tag; do > new_image_name=${image#"quay.io/"} > docker tag ${image}:${tag} "localhost:4000"/${new_image_name}:${tag} > docker push localhost:4000/${new_image_name}:${tag} > done > > > Now i can see images in local registry using docker images command > > (venv-kolla) root at kolla-infra-1:/etc/kolla# docker images | grep localhost > localhost:4000/openstack.kolla/ubuntu-source-nova-novncproxy > yoga 01dc100dc65a 2 days ago 1.2GB > localhost:4000/openstack.kolla/ubuntu-source-horizon > yoga a016e1f5b9af 2 days ago 1.1GB > localhost:4000/openstack.kolla/ubuntu-source-nova-conductor > yoga 381a64a368bd 2 days ago 1.11GB > localhost:4000/openstack.kolla/ubuntu-source-nova-api > yoga f90a15a09142 2 days ago 1.11GB > .... > .... > > > I have added it in global.yml > > docker_registry: 10.73.0.181:4000 > docker_registry_insecure: yes > > > Now when i am trying to add compute nodes then I get a registry error like > the following. Not sure why its trying to use https://10.73.0.181:4000/v2 > instead of http;// > > TASK [common : include_tasks] > **************************************************************************************************************************************************** > included: > /root/venv-kolla/share/kolla-ansible/ansible/roles/common/tasks/pull.yml > for kolla-comp-2 > > TASK [service-images-pull : common | Pull images] > ******************************************************************************************************************************** > FAILED - RETRYING: common | Pull images (3 retries left). > FAILED - RETRYING: common | Pull images (2 retries left). > FAILED - RETRYING: common | Pull images (1 retries left). 
> failed: [kolla-comp-2] (item=fluentd) => {"ansible_loop_var": "item", > "attempts": 3, "changed": true, "item": {"key": "fluentd", "value": > {"container_name": "fluentd", "dimensions": {}, "enabled": true, > "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"}, "group": > "fluentd", "image": " > 10.73.0.181:4000/openstack.kolla/ubuntu-source-fluentd:yoga", "volumes": > ["/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro", > "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", > "kolla_logs:/var/log/kolla/", "fluentd_data:/var/lib/fluentd/data/"]}}, > "msg": "'Traceback (most recent call last):\\n File > \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 261, in > _raise_for_status\\n response.raise_for_status()\\n File > \"/usr/lib/python3/dist-packages/requests/models.py\", line 940, in > raise_for_status\\n raise HTTPError(http_error_msg, > response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal > Server Error for url: > http+docker://localhost/v1.41/images/create?tag=yoga&fromImage=10.73.0.181%3A4000%2Fopenstack.kolla%2Fubuntu-source-fluentd\\n\\nDuring > handling of the above exception, another exception occurred:\\n\\nTraceback > (most recent call last):\\n File > \"/tmp/ansible_kolla_docker_payload_6o9nw9xd/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", > line 381, in main\\n File > \"/tmp/ansible_kolla_docker_payload_6o9nw9xd/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_docker_worker.py\", > line 450, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) > for line in self.dc.pull(\\n File > \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 415, in > pull\\n self._raise_for_status(response)\\n File > \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 263, in > _raise_for_status\\n raise create_api_error_from_http_exception(e)\\n > File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 31, in > create_api_error_from_http_exception\\n raise cls(e, response=response, > explanation=explanation)\\ndocker.errors.APIError: 500 Server Error: > Internal Server Error (\"Get \"https://10.73.0.181:4000/v2/\": http: > server gave HTTP response to HTTPS client\")\\n'"} > FAILED - RETRYING: common | Pull images (3 retries left). > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Sat Jan 28 10:54:23 2023 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sat, 28 Jan 2023 11:54:23 +0100 Subject: [all][stable][ptl] Propose to EOL Rocky series In-Reply-To: References: Message-ID: Masakari is happy to EOL Rocky too. Radek -yoctozepto On Fri, 27 Jan 2023 at 20:59, Jay Faulkner wrote: > > Thanks for doing all this cleanup work Elod. Ironic is OK with retirements of these shared resources up to Train. 
> > - > Jay Faulkner > > On Fri, Jan 27, 2023 at 10:12 AM El?d Ill?s wrote: >> >> Hi, >> >> Similarly like the Queens branch EOL proposal [1] I would like to propose >> to transition every project's stable/rocky to End of Life: >> >> - gates are mostly broken >> - minimal number of activity can be seen on this branch >> - some core projects already transitioned their stable/rocky to EOL >> recently (like ironic, neutron, nova) >> - gate job definitions are still using the old, legacy zuul syntax >> - gate jobs are based on Ubuntu Xenial, which is also beyond its public >> maintenance window date and hard to maintain >> >> Based on the above, if there won't be any project who wants to keep open >> their stable/rocky, then I'll start the process of EOL'ing Rocky stable >> series as a whole. If anyone has any objection then please respond to >> this mail. >> >> Thanks, >> >> El?d Ill?s >> irc: elodilles @ #openstack-stable / #openstack-release >> >> [1] https://lists.openstack.org/pipermail/openstack-discuss/2022-October/031030.html >> From gajuambi at gmail.com Sat Jan 28 13:07:55 2023 From: gajuambi at gmail.com (Gajendra D Ambi) Date: Sat, 28 Jan 2023 18:37:55 +0530 Subject: Reddit query for openstack magnum for enterprise core component maker In-Reply-To: References: Message-ID: Hi, So we need to test our AI/ML frameworks using our chips on nodes. We have a custom django rest api with frontend where our devs can request for a k8s cluster so that they can run their workload inside a pod which accesses chips on the hardware to perform AI/ML activities. We already have ironic working for baremetal provisioning. More than GUI, we are interested in the API part of magnum, whether all that is there in GUI is exposed as api via openstack so that our restapi+frontend can offer it to our devs. We could not find any installation material apart from what we have on openstack guide itself (https://docs.openstack.org/magnum/latest/install/index.html), We are currently running ussuri (I know it is EOL) but we plan to have openstack-helm in few months from a vendor so that should solve the problem but for now, at least to do POC, we were hoping to get this installed+integrated with our regular openstack on baremetal. If any of you have a better installation+integration guide for magnum, then it will be greatly appreciated. devstack was a no go after many days, we could not even get it going. So we want to bite the bullet and install magnum on openstack itself. Thanks and Regards, https://ambig.one/2/ On Fri, Jan 27, 2023 at 4:05 PM Jake Yip wrote: > Hi, > > Read your post on Reddit. From an operator's perspective, Magnum allows > us to easily let users provision a Kubernetes cluster with a few clicks > and go straight into k8s. We are already operating an OpenStack cloud so > Magnum with the Openstack integration was a great choice. > > What is best for you will depend a lot on a few factors: > > - Your familiarity with OpenStack and size/features of your current cloud > - Size / Number of clusters and Day 1 operations > - What kind of service you want provide > > It is difficult to give a great answer without knowing more details > about your situation. Feel free to ping me on this email with more > information if you are not comfortable with sharing internal details on > the internet. 
> > Regards, > Jake > > On 22/1/2023 10:07 am, Gajendra D Ambi wrote: > > > https://www.reddit.com/r/openstack/comments/10hu68s/container_orchestrator_for_openstack/ > < > https://www.reddit.com/r/openstack/comments/10hu68s/container_orchestrator_for_openstack/ > >. > > Hi team, > > request anyone of you from this project to please help us out. We also > > mean to contribute to the project because we know that we will need to > > add a lot more features to it that what api endpoints are already > > providing to us. When we do, it will all be contributed to the project > > after it is being tested for months in production. I am leaning towards > > openstack magnum and I do not have a lot of time to convince others of > > the same. > > > > Thanks and Regards, > > https://ambig.one/2/ > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at goirand.fr Sat Jan 28 13:25:38 2023 From: thomas at goirand.fr (Thomas Goirand) Date: Sat, 28 Jan 2023 14:25:38 +0100 Subject: [all][stable][ptl] Propose to EOL Rocky series Message-ID: Hi, I understand the gate is broken, however, Rocky is in Debian LTS, and I would like to keep the possibility to merge patches, even with the gate tests disabled. CVE-2022-47951 is an example why this is important... Thomas Goirand (zigo) On Jan 28, 2023 11:54, Rados?aw Piliszek wrote: > > Masakari is happy to EOL Rocky too. > > Radek > -yoctozepto > > On Fri, 27 Jan 2023 at 20:59, Jay Faulkner wrote: > > > > Thanks for doing all this cleanup work Elod. Ironic is OK with retirements of these shared resources up to Train. > > > > - > > Jay Faulkner > > > > On Fri, Jan 27, 2023 at 10:12 AM El?d Ill?s wrote: > >> > >> Hi, > >> > >> Similarly like the Queens branch EOL proposal [1] I would like to propose > >> to transition every project's stable/rocky to End of Life: > >> > >> - gates are mostly broken > >> - minimal number of activity can be seen on this branch > >> - some core projects already transitioned their stable/rocky to EOL > >>?? recently (like ironic, neutron, nova) > >> - gate job definitions are still using the old, legacy zuul syntax > >> - gate jobs are based on Ubuntu Xenial, which is also beyond its public > >>?? maintenance window date and hard to maintain > >> > >> Based on the above, if there won't be any project who wants to keep open > >> their stable/rocky, then I'll start the process of EOL'ing Rocky stable > >> series as a whole. If anyone has any objection then please respond to > >> this mail. > >> > >> Thanks, > >> > >> El?d Ill?s > >> irc: elodilles @ #openstack-stable / #openstack-release > >> > >> [1] https://lists.openstack.org/pipermail/openstack-discuss/2022-October/031030.html > >> > From satish.txt at gmail.com Sat Jan 28 20:07:14 2023 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 28 Jan 2023 15:07:14 -0500 Subject: [kolla] local registry setup issue In-Reply-To: References: Message-ID: Great!! I don't know how I missed that part. It seems working but related to i got following error 10.73.0.181:4000/openstack.kolla/ubuntu-source-nova-ssh:yoga not found: manifest unknown I have checked and found I don't have that image in the registry. How do I download that image from quay.io and push it to the local registry? 
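One way to do it for a single image, mirroring the bulk-import loop from earlier in this thread (the registry address 10.73.0.181:4000 and the yoga tag are assumed from the messages above):

$ docker pull quay.io/openstack.kolla/ubuntu-source-nova-ssh:yoga
$ docker tag quay.io/openstack.kolla/ubuntu-source-nova-ssh:yoga \
    10.73.0.181:4000/openstack.kolla/ubuntu-source-nova-ssh:yoga
$ docker push 10.73.0.181:4000/openstack.kolla/ubuntu-source-nova-ssh:yoga

Keeping the :yoga tag on the tag and push steps avoids the copy being pushed as :latest, which the deploy would then still fail to find.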
On Fri, Jan 27, 2023 at 11:59 PM Tobias McNulty wrote: > You might need to configure daemon.json as described here: > https://docs.openstack.org/kolla-ansible/rocky/user/multinode.html#configure-docker-on-all-nodes > > On Fri, Jan 27, 2023, 11:46 PM Satish Patel wrote: > >> Folks, >> >> I have set up a local registry using the following method as per >> official doc. >> >> docker run -d \ >> --network host \ >> --name registry \ >> --restart=always \ >> -e REGISTRY_HTTP_ADDR=0.0.0.0:4000 \ >> -v registry:/var/lib/registry \ >> registry:2 >> >> After that I ran the following command to import all images to the local >> registry. >> >> docker images | grep kolla | grep -v local | awk '{print $1,$2}' | while >> read -r image tag; do >> new_image_name=${image#"quay.io/"} >> docker tag ${image}:${tag} "localhost:4000"/${new_image_name}:${tag} >> docker push localhost:4000/${new_image_name}:${tag} >> done >> >> >> Now i can see images in local registry using docker images command >> >> (venv-kolla) root at kolla-infra-1:/etc/kolla# docker images | grep >> localhost >> localhost:4000/openstack.kolla/ubuntu-source-nova-novncproxy >> yoga 01dc100dc65a 2 days ago 1.2GB >> localhost:4000/openstack.kolla/ubuntu-source-horizon >> yoga a016e1f5b9af 2 days ago 1.1GB >> localhost:4000/openstack.kolla/ubuntu-source-nova-conductor >> yoga 381a64a368bd 2 days ago 1.11GB >> localhost:4000/openstack.kolla/ubuntu-source-nova-api >> yoga f90a15a09142 2 days ago 1.11GB >> .... >> .... >> >> >> I have added it in global.yml >> >> docker_registry: 10.73.0.181:4000 >> docker_registry_insecure: yes >> >> >> Now when i am trying to add compute nodes then I get a registry error >> like the following. Not sure why its trying to use >> https://10.73.0.181:4000/v2 instead of http;// >> >> TASK [common : include_tasks] >> **************************************************************************************************************************************************** >> included: >> /root/venv-kolla/share/kolla-ansible/ansible/roles/common/tasks/pull.yml >> for kolla-comp-2 >> >> TASK [service-images-pull : common | Pull images] >> ******************************************************************************************************************************** >> FAILED - RETRYING: common | Pull images (3 retries left). >> FAILED - RETRYING: common | Pull images (2 retries left). >> FAILED - RETRYING: common | Pull images (1 retries left). 
>> failed: [kolla-comp-2] (item=fluentd) => {"ansible_loop_var": "item", >> "attempts": 3, "changed": true, "item": {"key": "fluentd", "value": >> {"container_name": "fluentd", "dimensions": {}, "enabled": true, >> "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"}, "group": >> "fluentd", "image": " >> 10.73.0.181:4000/openstack.kolla/ubuntu-source-fluentd:yoga", "volumes": >> ["/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro", >> "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", >> "kolla_logs:/var/log/kolla/", "fluentd_data:/var/lib/fluentd/data/"]}}, >> "msg": "'Traceback (most recent call last):\\n File >> \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 261, in >> _raise_for_status\\n response.raise_for_status()\\n File >> \"/usr/lib/python3/dist-packages/requests/models.py\", line 940, in >> raise_for_status\\n raise HTTPError(http_error_msg, >> response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal >> Server Error for url: >> http+docker://localhost/v1.41/images/create?tag=yoga&fromImage=10.73.0.181%3A4000%2Fopenstack.kolla%2Fubuntu-source-fluentd\\n\\nDuring >> handling of the above exception, another exception occurred:\\n\\nTraceback >> (most recent call last):\\n File >> \"/tmp/ansible_kolla_docker_payload_6o9nw9xd/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", >> line 381, in main\\n File >> \"/tmp/ansible_kolla_docker_payload_6o9nw9xd/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_docker_worker.py\", >> line 450, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) >> for line in self.dc.pull(\\n File >> \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 415, in >> pull\\n self._raise_for_status(response)\\n File >> \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 263, in >> _raise_for_status\\n raise create_api_error_from_http_exception(e)\\n >> File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 31, in >> create_api_error_from_http_exception\\n raise cls(e, response=response, >> explanation=explanation)\\ndocker.errors.APIError: 500 Server Error: >> Internal Server Error (\"Get \"https://10.73.0.181:4000/v2/\": http: >> server gave HTTP response to HTTPS client\")\\n'"} >> FAILED - RETRYING: common | Pull images (3 retries left). >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sat Jan 28 20:22:30 2023 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 28 Jan 2023 15:22:30 -0500 Subject: [kolla] local registry setup issue In-Reply-To: References: Message-ID: Nevermind, figured it out $ docker pull quay.io/openstack.kolla/ubuntu-source-nova-ssh:yoga $ docker tag quay.io/openstack.kolla/ubuntu-source-nova-ssh:yoga " 10.73.0.181:4000"/openstack.kolla/ubuntu-source-nova-ssh $ docker push "10.73.0.181:4000"/openstack.kolla/ubuntu-source-nova-ssh I have a question related to the registry so let's say if I don't have a local registry in that case, does kolla always pull new images when running $ kolla-ansible -i multinode deploy ? Are there any best practices for local registry in production? like download images and push them to local-registry with specific tags to keep track or patch etc? On Sat, Jan 28, 2023 at 3:07 PM Satish Patel wrote: > Great!! I don't know how I missed that part. 
It seems working but related > to i got following error > > 10.73.0.181:4000/openstack.kolla/ubuntu-source-nova-ssh:yoga not found: > manifest unknown > > I have checked and found I don't have that image in the registry. How do I > download that image from quay.io and push it to the local registry? > > > > On Fri, Jan 27, 2023 at 11:59 PM Tobias McNulty > wrote: > >> You might need to configure daemon.json as described here: >> https://docs.openstack.org/kolla-ansible/rocky/user/multinode.html#configure-docker-on-all-nodes >> >> On Fri, Jan 27, 2023, 11:46 PM Satish Patel wrote: >> >>> Folks, >>> >>> I have set up a local registry using the following method as per >>> official doc. >>> >>> docker run -d \ >>> --network host \ >>> --name registry \ >>> --restart=always \ >>> -e REGISTRY_HTTP_ADDR=0.0.0.0:4000 \ >>> -v registry:/var/lib/registry \ >>> registry:2 >>> >>> After that I ran the following command to import all images to the local >>> registry. >>> >>> docker images | grep kolla | grep -v local | awk '{print $1,$2}' | while >>> read -r image tag; do >>> new_image_name=${image#"quay.io/"} >>> docker tag ${image}:${tag} "localhost:4000"/${new_image_name}:${tag} >>> docker push localhost:4000/${new_image_name}:${tag} >>> done >>> >>> >>> Now i can see images in local registry using docker images command >>> >>> (venv-kolla) root at kolla-infra-1:/etc/kolla# docker images | grep >>> localhost >>> localhost:4000/openstack.kolla/ubuntu-source-nova-novncproxy >>> yoga 01dc100dc65a 2 days ago 1.2GB >>> localhost:4000/openstack.kolla/ubuntu-source-horizon >>> yoga a016e1f5b9af 2 days ago 1.1GB >>> localhost:4000/openstack.kolla/ubuntu-source-nova-conductor >>> yoga 381a64a368bd 2 days ago 1.11GB >>> localhost:4000/openstack.kolla/ubuntu-source-nova-api >>> yoga f90a15a09142 2 days ago 1.11GB >>> .... >>> .... >>> >>> >>> I have added it in global.yml >>> >>> docker_registry: 10.73.0.181:4000 >>> docker_registry_insecure: yes >>> >>> >>> Now when i am trying to add compute nodes then I get a registry error >>> like the following. Not sure why its trying to use >>> https://10.73.0.181:4000/v2 instead of http;// >>> >>> TASK [common : include_tasks] >>> **************************************************************************************************************************************************** >>> included: >>> /root/venv-kolla/share/kolla-ansible/ansible/roles/common/tasks/pull.yml >>> for kolla-comp-2 >>> >>> TASK [service-images-pull : common | Pull images] >>> ******************************************************************************************************************************** >>> FAILED - RETRYING: common | Pull images (3 retries left). >>> FAILED - RETRYING: common | Pull images (2 retries left). >>> FAILED - RETRYING: common | Pull images (1 retries left). 
>>> failed: [kolla-comp-2] (item=fluentd) => {"ansible_loop_var": "item", >>> "attempts": 3, "changed": true, "item": {"key": "fluentd", "value": >>> {"container_name": "fluentd", "dimensions": {}, "enabled": true, >>> "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"}, "group": >>> "fluentd", "image": " >>> 10.73.0.181:4000/openstack.kolla/ubuntu-source-fluentd:yoga", >>> "volumes": ["/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro", >>> "/etc/localtime:/etc/localtime:ro", "/etc/timezone:/etc/timezone:ro", >>> "kolla_logs:/var/log/kolla/", "fluentd_data:/var/lib/fluentd/data/"]}}, >>> "msg": "'Traceback (most recent call last):\\n File >>> \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 261, in >>> _raise_for_status\\n response.raise_for_status()\\n File >>> \"/usr/lib/python3/dist-packages/requests/models.py\", line 940, in >>> raise_for_status\\n raise HTTPError(http_error_msg, >>> response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal >>> Server Error for url: >>> http+docker://localhost/v1.41/images/create?tag=yoga&fromImage=10.73.0.181%3A4000%2Fopenstack.kolla%2Fubuntu-source-fluentd\\n\\nDuring >>> handling of the above exception, another exception occurred:\\n\\nTraceback >>> (most recent call last):\\n File >>> \"/tmp/ansible_kolla_docker_payload_6o9nw9xd/ansible_kolla_docker_payload.zip/ansible/modules/kolla_docker.py\", >>> line 381, in main\\n File >>> \"/tmp/ansible_kolla_docker_payload_6o9nw9xd/ansible_kolla_docker_payload.zip/ansible/module_utils/kolla_docker_worker.py\", >>> line 450, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) >>> for line in self.dc.pull(\\n File >>> \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 415, in >>> pull\\n self._raise_for_status(response)\\n File >>> \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 263, in >>> _raise_for_status\\n raise create_api_error_from_http_exception(e)\\n >>> File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 31, in >>> create_api_error_from_http_exception\\n raise cls(e, response=response, >>> explanation=explanation)\\ndocker.errors.APIError: 500 Server Error: >>> Internal Server Error (\"Get \"https://10.73.0.181:4000/v2/\": http: >>> server gave HTTP response to HTTPS client\")\\n'"} >>> FAILED - RETRYING: common | Pull images (3 retries left). >>> >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Sun Jan 29 22:38:37 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Mon, 30 Jan 2023 05:38:37 +0700 Subject: [Magnum]enable cluster user trust In-Reply-To: <6040b621-38c1-33d2-2f1a-2b44ca384c87@ardc.edu.au> References: <6040b621-38c1-33d2-2f1a-2b44ca384c87@ardc.edu.au> Message-ID: Thank you for your reply. I will test and let you know. Nguyen Huu Khoi On Fri, Jan 27, 2023 at 5:16 PM Jake Yip wrote: > Hi Nguyen, > > This is quite an old (2016) CVE, and I see that there have been a patch > for it already. > > On why Trust is needed - the Kubernetes cluster needs to have OpenStack > credentials to be able to spin up OpenStack resources like Cinder > Volumes and Octavia Loadbalancers. > > You should use [trust]/roles in magnum config to limit the amount of > roles that the trust is created with. Typically only Member is necessary > but this can vary from cloud to cloud, depending on whether your cloud > have custom policies. > > Regards, > Jake > > On 23/1/2023 1:59 am, Nguy?n H?u Kh?i wrote: > > Hello guys. 
> > I am going to use Magnum for production but I see that > > https://nvd.nist.gov/vuln/detail/CVE-2016-7404 > > if I want to use > cinder > > for k8s cluster. Is there any way to fix or minimize this problem? > > Thanks. > > Nguyen Huu Khoi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Sun Jan 29 22:40:03 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Mon, 30 Jan 2023 05:40:03 +0700 Subject: [magnum] Other Distro for k8s In-Reply-To: <499d7e4d-bb98-3769-afaf-46387bd26d9c@ardc.edu.au> References: <499d7e4d-bb98-3769-afaf-46387bd26d9c@ardc.edu.au> Message-ID: Thank you for your information. https://github.com/vexxhost/magnum-cluster-api/tree/main/magnum_cluster_api Do you mention it? Nguyen Huu Khoi On Fri, Jan 27, 2023 at 5:05 PM Jake Yip wrote: > Hi Nguyen, > > The Magnum team is looking to move to ClusterAPI [1]; one of the > advantages is that we can leverage off upstream effort and support > Ubuntu. This work is still in the very early stages, but yes, we do have > plans for it. > > [1] https://cluster-api.sigs.k8s.io/ > > Regards, > Jake > > On 23/1/2023 1:54 am, Nguy?n H?u Kh?i wrote: > > Hello guys. > > > > I know that Magnum is using Fedora Coreos for k8s. Why don't we use a > > long-term distro such as Ubuntu for this project? > > I will be more stable. and this project seems obsolete with the old > > version for k8s. > > > > Nguyen Huu Khoi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Sun Jan 29 22:52:31 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Mon, 30 Jan 2023 05:52:31 +0700 Subject: [ALL] Why we dont have an official forum? Message-ID: Hello guys. Openstack is a very interesting project, many questions from users will make it grow more and more but I see that people, including me, still ask the same question. It is hard to sort or find knowledge by this way. If we hope this project spreads for people, we need a new way to share knowledge and skills, we are in the modern world but the way to access and exchange information in this project is too obsolete. This is a wall to slow down this project. Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Sun Jan 29 23:08:14 2023 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sun, 29 Jan 2023 18:08:14 -0500 Subject: [magnum] Other Distro for k8s In-Reply-To: References: <499d7e4d-bb98-3769-afaf-46387bd26d9c@ardc.edu.au> Message-ID: On Sun, Jan 29, 2023 at 5:46 PM Nguy?n H?u Kh?i wrote: > Thank you for your information. > https://github.com/vexxhost/magnum-cluster-api/tree/main/magnum_cluster_api > This is something that we've worked here which includes native support for the Cluster API out of the box, which then allows you to deploy using Ubuntu. Happy to answer questions about it :) > > Do you mention it? > Nguyen Huu Khoi > > > On Fri, Jan 27, 2023 at 5:05 PM Jake Yip wrote: > >> Hi Nguyen, >> >> The Magnum team is looking to move to ClusterAPI [1]; one of the >> advantages is that we can leverage off upstream effort and support >> Ubuntu. This work is still in the very early stages, but yes, we do have >> plans for it. >> >> [1] https://cluster-api.sigs.k8s.io/ >> >> Regards, >> Jake >> >> On 23/1/2023 1:54 am, Nguy?n H?u Kh?i wrote: >> > Hello guys. >> > >> > I know that Magnum is using Fedora Coreos for k8s. 
Why don't we use a >> > long-term distro such as Ubuntu for this project? >> > I will be more stable. and this project seems obsolete with the old >> > version for k8s. >> > >> > Nguyen Huu Khoi >> > -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Sun Jan 29 23:12:17 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Mon, 30 Jan 2023 06:12:17 +0700 Subject: [magnum] Other Distro for k8s In-Reply-To: References: <499d7e4d-bb98-3769-afaf-46387bd26d9c@ardc.edu.au> Message-ID: Awesome, thanks! .I am following this project and hope this will be a very active project. Will we move it to opendevs? Nguyen Huu Khoi On Mon, Jan 30, 2023 at 6:08 AM Mohammed Naser wrote: > > > On Sun, Jan 29, 2023 at 5:46 PM Nguy?n H?u Kh?i > wrote: > >> Thank you for your information. >> >> https://github.com/vexxhost/magnum-cluster-api/tree/main/magnum_cluster_api >> > > This is something that we've worked here which includes native support for > the Cluster API > out of the box, which then allows you to deploy using Ubuntu. > > Happy to answer questions about it :) > > >> >> Do you mention it? >> Nguyen Huu Khoi >> >> >> On Fri, Jan 27, 2023 at 5:05 PM Jake Yip wrote: >> >>> Hi Nguyen, >>> >>> The Magnum team is looking to move to ClusterAPI [1]; one of the >>> advantages is that we can leverage off upstream effort and support >>> Ubuntu. This work is still in the very early stages, but yes, we do have >>> plans for it. >>> >>> [1] https://cluster-api.sigs.k8s.io/ >>> >>> Regards, >>> Jake >>> >>> On 23/1/2023 1:54 am, Nguy?n H?u Kh?i wrote: >>> > Hello guys. >>> > >>> > I know that Magnum is using Fedora Coreos for k8s. Why don't we use a >>> > long-term distro such as Ubuntu for this project? >>> > I will be more stable. and this project seems obsolete with the old >>> > version for k8s. >>> > >>> > Nguyen Huu Khoi >>> >> > > -- > Mohammed Naser > VEXXHOST, Inc. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sun Jan 29 23:16:20 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 29 Jan 2023 15:16:20 -0800 Subject: [qa][gate][stable] stable/wallaby gate is broken Message-ID: <185ffd139cb.f2315ede356579.5195015287935599210@ghanshyammann.com> Hello Everyone, You might know stable/wallaby which is in the EM phase is broken because of the Tempest master incompatibility. As this is in EM phase, Tempest master does support it and the fix is to use the old compatible Tempest. I have pushed the fix on devstack to pin Tempest 29.0.0 to test stable/wallaby, do not recheck until that is merged: - https://review.opendev.org/c/openstack/devstack/+/871782 This depends on a few other fixes which are in the gate. Like the Tempest pin, we need to pin tempest plugins also on stable/wallaby. I have pushed a few projects fix for that, please review those if the devstack patch alone does not fix the gate - https://review.opendev.org/q/topic:wallaby-pin-tempest+status:open - https://review.opendev.org/q/topic:bug%252F2003993 -gmann From berndbausch at gmail.com Mon Jan 30 03:59:46 2023 From: berndbausch at gmail.com (Bernd Bausch) Date: Mon, 30 Jan 2023 12:59:46 +0900 Subject: [ALL] Why we dont have an official forum? In-Reply-To: References: Message-ID: There used to be ask.openstack.org, but since nobody maintained the website, it became unreliable and was eventually disbanded. 
At the time, we were encouraged to ask questions at superuser.com and, in case it's related to programming, stackoverflow.com. There is also https://www.reddit.com/r/openstack, which is probably less "official" but seems more lively than the two Stackexchange sites. On Mon, Jan 30, 2023 at 7:56 AM Nguy?n H?u Kh?i wrote: > Hello guys. > > Openstack is a very interesting project, many questions from users will > make it grow more and more but I see that people, including me, still ask > the same question. It is hard to sort or find knowledge by this way. > > If we hope this project spreads for people, we need a new way to share > knowledge and skills, we are in the modern world but the way to access and > exchange information in this project is too obsolete. This is a wall to > slow down this project. > Nguyen Huu Khoi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Mon Jan 30 04:48:05 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Mon, 30 Jan 2023 10:18:05 +0530 Subject: [all][stable][ptl] Propose to EOL Rocky series In-Reply-To: References: Message-ID: Hi Elod, The last commits done in rocky[1] and stein[2] were on Sep 17, 2021. Since then we also discovered that one of the job definition, nova-multiattach[3] was removed in nova rocky release and since nova EOLed their rocky branch[4], that job is breaking (although I haven't confirmed with WIP patches but the last commit in September 2021 passed that job[5] and the gate breaking was noticed recently with change[6]). We will discuss this in the cinder upstream meeting this week and will update this thread but I'm currently in favor of moving cinder rocky and stein branches to EOL. [1] https://github.com/openstack/cinder/commit/cdcf7b5f8b3c850555942f422b8ad1f43e21fe7b [2] https://github.com/openstack/cinder/commit/667c6da08d423888f1df85d639fef058553f6169 [3] https://github.com/openstack/cinder/blob/stable/rocky/.zuul.yaml#L153 [4] https://review.opendev.org/c/openstack/releases/+/862520 [5] https://review.opendev.org/c/openstack/cinder/+/809657/1#message-50e6adf07ba3883a74f6e9939d34f0f0f0fe8d7a [6] https://review.opendev.org/c/openstack/cinder/+/871799/3#message-439428e2a146adc233e1a894a7a85004f3f920e4 Thanks Rajat Dhasmana On Fri, Jan 27, 2023 at 11:38 PM El?d Ill?s wrote: > Hi, > > Similarly like the Queens branch EOL proposal [1] I would like to propose > to transition every project's stable/rocky to End of Life: > > - gates are mostly broken > - minimal number of activity can be seen on this branch > - some core projects already transitioned their stable/rocky to EOL > recently (like ironic, neutron, nova) > - gate job definitions are still using the old, legacy zuul syntax > - gate jobs are based on Ubuntu Xenial, which is also beyond its public > maintenance window date and hard to maintain > > Based on the above, if there won't be any project who wants to keep open > their stable/rocky, then I'll start the process of EOL'ing Rocky stable > series as a whole. If anyone has any objection then please respond to > this mail. > > Thanks, > > El?d Ill?s > irc: elodilles @ #openstack-stable / #openstack-release > > [1] > https://lists.openstack.org/pipermail/openstack-discuss/2022-October/031030.html > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tevfikkoksal64 at gmail.com Mon Jan 30 05:34:21 2023 From: tevfikkoksal64 at gmail.com (T Koksal) Date: Mon, 30 Jan 2023 08:34:21 +0300 Subject: [ALL] Why we dont have an official forum? In-Reply-To: References: Message-ID: Hello I totally agree with Nguyen! I believe, as a new comer into Openstack I have concluded that there is the expectation from the user the to have pre-existing knowledge of the platforms. Additionally, the documentation is all-over and unstructured for someone wanting to learn. TK On Mon, Jan 30, 2023 at 7:05 AM Bernd Bausch wrote: > There used to be ask.openstack.org, but since nobody maintained the > website, it became unreliable and was eventually disbanded. At the time, we > were encouraged to ask questions at superuser.com and, in case it's > related to programming, stackoverflow.com. There is also > https://www.reddit.com/r/openstack, which is probably less "official" but > seems more lively than the two Stackexchange sites. > > On Mon, Jan 30, 2023 at 7:56 AM Nguy?n H?u Kh?i > wrote: > >> Hello guys. >> >> Openstack is a very interesting project, many questions from users will >> make it grow more and more but I see that people, including me, still ask >> the same question. It is hard to sort or find knowledge by this way. >> >> If we hope this project spreads for people, we need a new way to share >> knowledge and skills, we are in the modern world but the way to access and >> exchange information in this project is too obsolete. This is a wall to >> slow down this project. >> Nguyen Huu Khoi >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonykarera at gmail.com Mon Jan 30 06:22:19 2023 From: tonykarera at gmail.com (Karera Tony) Date: Mon, 30 Jan 2023 08:22:19 +0200 Subject: hacluster_corosync container stuck in restarting Message-ID: Dear Team, I installed Openstack Wallaby using kolla-ansible. Everything is fine except for hacluster_corosync container stuck in restarting. Below are the corosync logs. Has anyone faced this issue before ? Jan 30 08:19:42 [7] controller1 corosync info [TOTEM ] kronosnet crypto initialized: aes256/sha384 Jan 30 08:19:42 [7] controller1 corosync info [TOTEM ] totemknet initialized Jan 30 08:19:42 [7] controller1 corosync info [KNET ] common: crypto_nss.so has been loaded from /usr/lib/x86_64-linux-gnu/kronosnet/crypto_nss.so Jan 30 08:19:42 [7] controller1 corosync notice [SERV ] Service engine loaded: corosync configuration map access [0] Jan 30 08:19:42 [7] controller1 corosync info [QB ] server name: cmap Jan 30 08:19:42 [7] controller1 corosync error [QB ] Could not bind AF_UNIX (): Address already in use (98) Jan 30 08:19:42 [7] controller1 corosync info [QB ] withdrawing server sockets Jan 30 08:19:42 [7] controller1 corosync error [MAIN ] Can't initialize IPC Jan 30 08:19:42 [7] controller1 corosync error [SERV ] Service engine 'corosync_cmap' failed to load for reason 'qb_ipcs_run error' Jan 30 08:19:42 [7] controller1 corosync error [MAIN ] Corosync Cluster Engine exiting with status 20 at service.c:356. 
Jan 30 08:20:43 [7] controller1 corosync notice [MAIN ] Corosync Cluster Engine 3.0.3 starting up Jan 30 08:20:43 [7] controller1 corosync info [MAIN ] Corosync built-in features: dbus monitoring watchdog augeas systemd xmlconf vqsim nozzle snmp pie relro bindnow Jan 30 08:20:43 [7] controller1 corosync warning [MAIN ] Could not increase RLIMIT_MEMLOCK, not locking memory: Operation not permitted (1) Jan 30 08:20:43 [7] controller1 corosync notice [TOTEM ] Initializing transport (Kronosnet). Jan 30 08:20:43 [7] controller1 corosync info [TOTEM ] kronosnet crypto initialized: aes256/sha384 Jan 30 08:20:43 [7] controller1 corosync info [TOTEM ] totemknet initialized Jan 30 08:20:43 [7] controller1 corosync info [KNET ] common: crypto_nss.so has been loaded from /usr/lib/x86_64-linux-gnu/kronosnet/crypto_nss.so Jan 30 08:20:43 [7] controller1 corosync notice [SERV ] Service engine loaded: corosync configuration map access [0] Jan 30 08:20:43 [7] controller1 corosync info [QB ] server name: cmap Jan 30 08:20:43 [7] controller1 corosync error [QB ] Could not bind AF_UNIX (): Address already in use (98) Jan 30 08:20:43 [7] controller1 corosync info [QB ] withdrawing server sockets Jan 30 08:20:43 [7] controller1 corosync error [MAIN ] Can't initialize IPC Jan 30 08:20:43 [7] controller1 corosync error [SERV ] Service engine 'corosync_cmap' failed to load for reason 'qb_ipcs_run error' Jan 30 08:20:43 [7] controller1 corosync error [MAIN ] Corosync Cluster Engine exiting with status 20 at service.c:356. Jan 30 08:21:43 [7] controller1 corosync notice [MAIN ] Corosync Cluster Engine 3.0.3 starting up Jan 30 08:21:43 [7] controller1 corosync info [MAIN ] Corosync built-in features: dbus monitoring watchdog augeas systemd xmlconf vqsim nozzle snmp pie relro bindnow Jan 30 08:21:43 [7] controller1 corosync warning [MAIN ] Could not increase RLIMIT_MEMLOCK, not locking memory: Operation not permitted (1) Jan 30 08:21:43 [7] controller1 corosync notice [TOTEM ] Initializing transport (Kronosnet). Jan 30 08:21:44 [7] controller1 corosync info [TOTEM ] kronosnet crypto initialized: aes256/sha384 Jan 30 08:21:44 [7] controller1 corosync info [TOTEM ] totemknet initialized Jan 30 08:21:44 [7] controller1 corosync info [KNET ] common: crypto_nss.so has been loaded from /usr/lib/x86_64-linux-gnu/kronosnet/crypto_nss.so Jan 30 08:21:44 [7] controller1 corosync notice [SERV ] Service engine loaded: corosync configuration map access [0] Jan 30 08:21:44 [7] controller1 corosync info [QB ] server name: cmap Jan 30 08:21:44 [7] controller1 corosync error [QB ] Could not bind AF_UNIX (): Address already in use (98) Jan 30 08:21:44 [7] controller1 corosync info [QB ] withdrawing server sockets Jan 30 08:21:44 [7] controller1 corosync error [MAIN ] Can't initialize IPC Jan 30 08:21:44 [7] controller1 corosync error [SERV ] Service engine 'corosync_cmap' failed to load for reason 'qb_ipcs_run error' Jan 30 08:21:44 [7] controller1 corosync error [MAIN ] Corosync Cluster Engine exiting with status 20 at service.c:356. Regards Tony Karera -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Mon Jan 30 06:33:34 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Mon, 30 Jan 2023 13:33:34 +0700 Subject: [ALL] Why we dont have an official forum? In-Reply-To: References: Message-ID: In my view, Having an official forum will make our projects grow faster and users can access Openstack easier. Take a look at K8S or Icinga. 
They are very good at helping people to access their platform by having a nice forum. I can help set up and configure the forum. I hope Openstack will become more and more mature and grow. Nguyen Huu Khoi On Mon, Jan 30, 2023 at 12:34 PM T Koksal wrote: > Hello > > I totally agree with Nguyen! I believe, as a new comer into Openstack I > have concluded that there is the expectation from the user the to have > pre-existing knowledge of the platforms. Additionally, the documentation is > all-over and unstructured for someone wanting to learn. > > TK > > On Mon, Jan 30, 2023 at 7:05 AM Bernd Bausch > wrote: > >> There used to be ask.openstack.org, but since nobody maintained the >> website, it became unreliable and was eventually disbanded. At the time, we >> were encouraged to ask questions at superuser.com and, in case it's >> related to programming, stackoverflow.com. There is also >> https://www.reddit.com/r/openstack, which is probably less "official" >> but seems more lively than the two Stackexchange sites. >> >> On Mon, Jan 30, 2023 at 7:56 AM Nguy?n H?u Kh?i < >> nguyenhuukhoinw at gmail.com> wrote: >> >>> Hello guys. >>> >>> Openstack is a very interesting project, many questions from users will >>> make it grow more and more but I see that people, including me, still ask >>> the same question. It is hard to sort or find knowledge by this way. >>> >>> If we hope this project spreads for people, we need a new way to share >>> knowledge and skills, we are in the modern world but the way to access and >>> exchange information in this project is too obsolete. This is a wall to >>> slow down this project. >>> Nguyen Huu Khoi >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Mon Jan 30 08:44:19 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Mon, 30 Jan 2023 14:14:19 +0530 Subject: [cinder][requirements] python-cinderclient Yoga gate broken due to rtslib-fb Message-ID: Hello, Currently python-cinderclient yoga gate is broken because *python-cinderclient-functional-py39* job is failing. Upon looking into the logs, I found the *cinder-rtstool delete *command failing[1]. Jan 16 07:58:14.935519 np0032740756 cinder-volume[115744]: ERROR cinder.volume.targets.lio Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf cinder-rtstool delete iqn.2010-10.org.openstack:volume-095ce534-8ef7-45c8-867c-f6d4fbc2b02d Jan 16 07:58:14.935519 np0032740756 cinder-volume[115744]: ERROR cinder.volume.targets.lio Exit code: 1 Jan 16 07:58:14.935519 np0032740756 cinder-volume[115744]: ERROR cinder.volume.targets.lio Stdout: '' Looking further into the traceback, this seems to be the main issue. line 215, in _gen_attached_luns\n for tpgt_dir in listdir(tpgts_base):\nNotADirectoryError: [Errno 20] Not a directory: \'/sys/kernel/config/target/iscsi/cpus_allowed_list\'\n' I found a similar (not exact) error in thread[2] to which there was a reply that it was fixed[3]. The fix is included in version 2.1.75[4] but the version pinned in upper constraints for yoga is 2.1.74[5]. I've tested the version bump in these DNM patches[6][7] and it works. A quick code search reveals that cinder is the only project actively using this lib[8]. My question for the requirements team is, is it OK to bump the requirement to allow cinderclient yoga gate to pass since it's blocking backports? 
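For reference, a sketch of the change being asked about: a one-line bump of the pin in openstack/requirements stable/yoga upper-constraints.txt (the corresponding DNM patch is in the links below):

-rtslib-fb===2.1.74
+rtslib-fb===2.1.75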
[1] https://zuul.opendev.org/t/openstack/build/a282bfbb6b0a4148a69694db6ab7eb69/log/controller/logs/screen-c-vol.txt#1896 [2] https://www.spinics.net/lists/linux-scsi/msg172264.html [3] https://www.spinics.net/lists/linux-scsi/msg172265.html [4] https://github.com/open-iscsi/rtslib-fb/commit/8d2543c4da62e962661011fea5b19252b9660822 [5] https://github.com/openstack/requirements/blob/stable/yoga/upper-constraints.txt#L13 [6] https://review.opendev.org/c/openstack/python-cinderclient/+/870513 [7] https://review.opendev.org/c/openstack/requirements/+/870714 [8] https://codesearch.opendev.org/?q=rtslib-fb&i=nope&literal=nope&files=&excludeFiles=&repos= Thanks Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasufum.o at gmail.com Mon Jan 30 08:44:35 2023 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Mon, 30 Jan 2023 17:44:35 +0900 Subject: [puppet] Retiring untested and unmaintained modules (murano, rally and tacker) In-Reply-To: References: Message-ID: <294c3302-852d-475b-2fa8-5240eb1a8f97@gmail.com> Hi Takashi, I'd like you to wait for dropping puppet-tacker for a while. > - support for tacker-conductor was added 4 years after the service was added Unfortunately, we don't know anything about this support in current tacker team because no one didn't join the team yet. I'd like to confirm anyone in our users still want the module to be supported. Thanks, Yasufumi On 2023/01/25 17:34, Takashi Kajinami wrote: > Hello, > > > In Puppet OpenStack projects we have multiple modules to support multiple > OpenStack components. > However unfortunately some of these have not been attracting enough > interest from developers and > have been unmaintained. > > During the past few cycles we retired a few incomplete modules but I'm > wondering if we can retire > a few unmaintained modules now, to reduce our maintenance/release effort. > > I checked the modules we have currently, and I think the following three > can be first candidates. > - puppet-murano > - puppet-rally > - puppet-tacker > > We haven't seen any feedback from users about these modules for a long > time. Most of the changes > for the past 2~3 years are proposed by me but I am not really using these > components. > > These modules do not have proper test coverage and it's quite difficult for > us to catch any breakage and > honestly I'm not quite sure these modules can work properly with the latest > code. Actually we've often > caught up with the latest requirements several years after the change was > made in the software side, > and I'm afraid these are not well-maintained. > > eg. > - support for tacker-conductor was added 4 years after the service was > added > - we didn't noticed that the openstack plugin was split out from the core > rally package for several years > > If anybody has concerns with retiring these modules, then please let us > know. If we don't hear any objections > for a while, then I'll start proposing changes for project retirement. > > Thank you, > Takashi From andy at andybotting.com Mon Jan 30 09:12:15 2023 From: andy at andybotting.com (Andy Botting) Date: Mon, 30 Jan 2023 20:12:15 +1100 Subject: [puppet] Retiring untested and unmaintained modules (murano, rally and tacker) In-Reply-To: References: Message-ID: Hi Takashi, We're still using puppet-murano. If there's any specific issues, I'd be happy to look into them, but it's working OK for us currently. 
cheers, Andy On Wed, 25 Jan 2023 at 19:36, Takashi Kajinami wrote: > > Hello, > > > In Puppet OpenStack projects we have multiple modules to support multiple OpenStack components. > However unfortunately some of these have not been attracting enough interest from developers and > have been unmaintained. > > During the past few cycles we retired a few incomplete modules but I'm wondering if we can retire > a few unmaintained modules now, to reduce our maintenance/release effort. > > I checked the modules we have currently, and I think the following three can be first candidates. > - puppet-murano > - puppet-rally > - puppet-tacker > > We haven't seen any feedback from users about these modules for a long time. Most of the changes > for the past 2~3 years are proposed by me but I am not really using these components. > > These modules do not have proper test coverage and it's quite difficult for us to catch any breakage and > honestly I'm not quite sure these modules can work properly with the latest code. Actually we've often > caught up with the latest requirements several years after the change was made in the software side, > and I'm afraid these are not well-maintained. > > eg. > - support for tacker-conductor was added 4 years after the service was added > - we didn't noticed that the openstack plugin was split out from the core rally package for several years > > If anybody has concerns with retiring these modules, then please let us know. If we don't hear any objections > for a while, then I'll start proposing changes for project retirement. > > Thank you, > Takashi > -- > ---------- > Takashi Kajinami From tkajinam at redhat.com Mon Jan 30 09:14:29 2023 From: tkajinam at redhat.com (Takashi Kajinami) Date: Mon, 30 Jan 2023 18:14:29 +0900 Subject: [puppet] Retiring untested and unmaintained modules (murano, rally and tacker) In-Reply-To: <294c3302-852d-475b-2fa8-5240eb1a8f97@gmail.com> References: <294c3302-852d-475b-2fa8-5240eb1a8f97@gmail.com> Message-ID: Hi Yasufumi, We'll wait for any feedback regarding puppet-tacker. Please note, we need to add proper functional test coverage, fixing any critical feature gap and working on CI failures related to Tacker to maintain the module properly. We'd still need to discuss how we address these points and probably look for actual volunteers, in case we aim to keep the module. Thank you, Takashi On Mon, Jan 30, 2023 at 5:49 PM Yasufumi Ogawa wrote: > Hi Takashi, > > I'd like you to wait for dropping puppet-tacker for a while. > > > - support for tacker-conductor was added 4 years after the service was > added > Unfortunately, we don't know anything about this support in current > tacker team because no one didn't join the team yet. I'd like to confirm > anyone in our users still want the module to be supported. > > Thanks, > Yasufumi > > On 2023/01/25 17:34, Takashi Kajinami wrote: > > Hello, > > > > > > In Puppet OpenStack projects we have multiple modules to support multiple > > OpenStack components. > > However unfortunately some of these have not been attracting enough > > interest from developers and > > have been unmaintained. > > > > During the past few cycles we retired a few incomplete modules but I'm > > wondering if we can retire > > a few unmaintained modules now, to reduce our maintenance/release effort. > > > > I checked the modules we have currently, and I think the following three > > can be first candidates. 
> > - puppet-murano > > - puppet-rally > > - puppet-tacker > > > > We haven't seen any feedback from users about these modules for a long > > time. Most of the changes > > for the past 2~3 years are proposed by me but I am not really using these > > components. > > > > These modules do not have proper test coverage and it's quite difficult > for > > us to catch any breakage and > > honestly I'm not quite sure these modules can work properly with the > latest > > code. Actually we've often > > caught up with the latest requirements several years after the change was > > made in the software side, > > and I'm afraid these are not well-maintained. > > > > eg. > > - support for tacker-conductor was added 4 years after the service was > > added > > - we didn't noticed that the openstack plugin was split out from the > core > > rally package for several years > > > > If anybody has concerns with retiring these modules, then please let us > > know. If we don't hear any objections > > for a while, then I'll start proposing changes for project retirement. > > > > Thank you, > > Takashi > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonykarera at gmail.com Mon Jan 30 09:19:49 2023 From: tonykarera at gmail.com (Karera Tony) Date: Mon, 30 Jan 2023 11:19:49 +0200 Subject: hacluster_corosync container stuck in restarting Message-ID: Dear Team, I installed Openstack Wallaby using kolla-ansible. Everything is fine except for the hacluster_corosync container stuck in restarting. Below are the corosync logs. Has anyone faced this issue before ? I have two controllers but this issue is on happening on one Jan 30 08:19:42 [7] controller1 corosync info [TOTEM ] kronosnet crypto initialized: aes256/sha384 Jan 30 08:19:42 [7] controller1 corosync info [TOTEM ] totemknet initialized Jan 30 08:19:42 [7] controller1 corosync info [KNET ] common: crypto_nss.so has been loaded from /usr/lib/x86_64-linux-gnu/kronosnet/crypto_nss.so Jan 30 08:19:42 [7] controller1 corosync notice [SERV ] Service engine loaded: corosync configuration map access [0] Jan 30 08:19:42 [7] controller1 corosync info [QB ] server name: cmap Jan 30 08:19:42 [7] controller1 corosync error [QB ] Could not bind AF_UNIX (): Address already in use (98) Jan 30 08:19:42 [7] controller1 corosync info [QB ] withdrawing server sockets Jan 30 08:19:42 [7] controller1 corosync error [MAIN ] Can't initialize IPC Jan 30 08:19:42 [7] controller1 corosync error [SERV ] Service engine 'corosync_cmap' failed to load for reason 'qb_ipcs_run error' Jan 30 08:19:42 [7] controller1 corosync error [MAIN ] Corosync Cluster Engine exiting with status 20 at service.c:356. Jan 30 08:20:43 [7] controller1 corosync notice [MAIN ] Corosync Cluster Engine 3.0.3 starting up Jan 30 08:20:43 [7] controller1 corosync info [MAIN ] Corosync built-in features: dbus monitoring watchdog augeas systemd xmlconf vqsim nozzle snmp pie relro bindnow Jan 30 08:20:43 [7] controller1 corosync warning [MAIN ] Could not increase RLIMIT_MEMLOCK, not locking memory: Operation not permitted (1) Jan 30 08:20:43 [7] controller1 corosync notice [TOTEM ] Initializing transport (Kronosnet). 
Jan 30 08:20:43 [7] controller1 corosync info [TOTEM ] kronosnet crypto initialized: aes256/sha384 Jan 30 08:20:43 [7] controller1 corosync info [TOTEM ] totemknet initialized Jan 30 08:20:43 [7] controller1 corosync info [KNET ] common: crypto_nss.so has been loaded from /usr/lib/x86_64-linux-gnu/kronosnet/crypto_nss.so Jan 30 08:20:43 [7] controller1 corosync notice [SERV ] Service engine loaded: corosync configuration map access [0] Jan 30 08:20:43 [7] controller1 corosync info [QB ] server name: cmap Jan 30 08:20:43 [7] controller1 corosync error [QB ] Could not bind AF_UNIX (): Address already in use (98) Jan 30 08:20:43 [7] controller1 corosync info [QB ] withdrawing server sockets Jan 30 08:20:43 [7] controller1 corosync error [MAIN ] Can't initialize IPC Jan 30 08:20:43 [7] controller1 corosync error [SERV ] Service engine 'corosync_cmap' failed to load for reason 'qb_ipcs_run error' Jan 30 08:20:43 [7] controller1 corosync error [MAIN ] Corosync Cluster Engine exiting with status 20 at service.c:356. Jan 30 08:21:43 [7] controller1 corosync notice [MAIN ] Corosync Cluster Engine 3.0.3 starting up Jan 30 08:21:43 [7] controller1 corosync info [MAIN ] Corosync built-in features: dbus monitoring watchdog augeas systemd xmlconf vqsim nozzle snmp pie relro bindnow Jan 30 08:21:43 [7] controller1 corosync warning [MAIN ] Could not increase RLIMIT_MEMLOCK, not locking memory: Operation not permitted (1) Jan 30 08:21:43 [7] controller1 corosync notice [TOTEM ] Initializing transport (Kronosnet). Jan 30 08:21:44 [7] controller1 corosync info [TOTEM ] kronosnet crypto initialized: aes256/sha384 Jan 30 08:21:44 [7] controller1 corosync info [TOTEM ] totemknet initialized Jan 30 08:21:44 [7] controller1 corosync info [KNET ] common: crypto_nss.so has been loaded from /usr/lib/x86_64-linux-gnu/kronosnet/crypto_nss.so Jan 30 08:21:44 [7] controller1 corosync notice [SERV ] Service engine loaded: corosync configuration map access [0] Jan 30 08:21:44 [7] controller1 corosync info [QB ] server name: cmap Jan 30 08:21:44 [7] controller1 corosync error [QB ] Could not bind AF_UNIX (): Address already in use (98) Jan 30 08:21:44 [7] controller1 corosync info [QB ] withdrawing server sockets Jan 30 08:21:44 [7] controller1 corosync error [MAIN ] Can't initialize IPC Jan 30 08:21:44 [7] controller1 corosync error [SERV ] Service engine 'corosync_cmap' failed to load for reason 'qb_ipcs_run error' Jan 30 08:21:44 [7] controller1 corosync error [MAIN ] Corosync Cluster Engine exiting with status 20 at service.c:356. Regards Regards Tony Karera -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Mon Jan 30 09:31:56 2023 From: tkajinam at redhat.com (Takashi Kajinami) Date: Mon, 30 Jan 2023 18:31:56 +0900 Subject: [puppet] Retiring untested and unmaintained modules (murano, rally and tacker) In-Reply-To: References: Message-ID: Hi Andy, Thank you for the response. May I know which version you are currently using ? We can keep puppet-murano for now as there is a user(or some users, hopefully) using the module. However please note there is no functional test coverage(integration tests/acceptance tests) to test actual deployments in our CI. We probably better try enabling a few tests to test actual deployments. I'll look into it when I get time, and might ask for your help in case I face any problems. Thank you, Takashi On Mon, Jan 30, 2023 at 6:12 PM Andy Botting wrote: > Hi Takashi, > > We're still using puppet-murano. 
If there's any specific issues, I'd > be happy to look into them, but it's working OK for us currently. > > cheers, > Andy > > On Wed, 25 Jan 2023 at 19:36, Takashi Kajinami > wrote: > > > > Hello, > > > > > > In Puppet OpenStack projects we have multiple modules to support > multiple OpenStack components. > > However unfortunately some of these have not been attracting enough > interest from developers and > > have been unmaintained. > > > > During the past few cycles we retired a few incomplete modules but I'm > wondering if we can retire > > a few unmaintained modules now, to reduce our maintenance/release effort. > > > > I checked the modules we have currently, and I think the following three > can be first candidates. > > - puppet-murano > > - puppet-rally > > - puppet-tacker > > > > We haven't seen any feedback from users about these modules for a long > time. Most of the changes > > for the past 2~3 years are proposed by me but I am not really using > these components. > > > > These modules do not have proper test coverage and it's quite difficult > for us to catch any breakage and > > honestly I'm not quite sure these modules can work properly with the > latest code. Actually we've often > > caught up with the latest requirements several years after the change > was made in the software side, > > and I'm afraid these are not well-maintained. > > > > eg. > > - support for tacker-conductor was added 4 years after the service was > added > > - we didn't noticed that the openstack plugin was split out from the > core rally package for several years > > > > If anybody has concerns with retiring these modules, then please let us > know. If we don't hear any objections > > for a while, then I'll start proposing changes for project retirement. > > > > Thank you, > > Takashi > > -- > > ---------- > > Takashi Kajinami > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From obondarev at mirantis.com Mon Jan 30 10:28:50 2023 From: obondarev at mirantis.com (Oleg Bondarev) Date: Mon, 30 Jan 2023 14:28:50 +0400 Subject: [Neutron] Bug Deputy Report January 23 - 29 Message-ID: Hello Neutron Team, Bug report for the week of Jan 23 is below: *High:* - https://bugs.launchpad.net/neutron/+bug/2004012 - [Secure RBAC] Delete port on own network which is shared with other project is not possible Confirmed Unassigned - https://bugs.launchpad.net/neutron/+bug/2004015 - [Secure RBAC] Sharing Security groups don't works with new RBAC policies Edit Confirmed Unassigned - https://bugs.launchpad.net/neutron/+bug/2004014 - [Secure RBAC] Sharing QoS Policies don't works with new RBAC policies Confirmed Unassigned - https://bugs.launchpad.net/neutron/+bug/2004013 - [Secure RBAC] List QoS Policies filtered by tags is not possible with new RBAC policies Confirmed Unassigned - https://bugs.launchpad.net/neutron/+bug/2004017 - [Secure RBAC] List flavors don't work for regular user with new RBAC policies Confirmed Unassigned - https://bugs.launchpad.net/neutron/+bug/2004016 - [Secure RBAC] Cleaning shared networks fails with new RBAC policies Confirmed Unassigned *Medium:* - https://bugs.launchpad.net/neutron/+bug/2003706 - [OVN] Security group logging only logs half of the connection In progress: https://review.opendev.org/c/openstack/neutron/+/871096 Assigned to elvira - https://bugs.launchpad.net/neutron/+bug/2003842 - [OVN] A route inferred from a subnet's default gateway is not added to ovn-nb if segment_id is not None for a subnet Invalid Unassigned - https://bugs.launchpad.net/neutron/+bug/2004004 - keepalived virtual_routes wrong order Triaged Unassigned - https://bugs.launchpad.net/neutron/+bug/2003997 - [ovn-octavia-provider] ovn-lb with VIP on provider network not working In progress Assigned to ltomasbo *Low:* - https://bugs.launchpad.net/neutron/+bug/2003861 - Remove the CLI code from python-neutronclient Triaged Assigned to ralonsoh - https://bugs.launchpad.net/neutron/+bug/2003999 - Stateleful SG API extension should be disabled when old OVN is used In progres: https://review.opendev.org/c/openstack/neutron/+/871982, https://review.opendev.org/c/openstack/neutron/+/871983s Assigned to slaweq *Undecided:* - https://bugs.launchpad.net/neutron/+bug/2004041 - Missing flows with ovs dvr after openvswitch restart Incomplete: more info requested Unassigned Thanks, Oleg -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Jan 30 12:55:52 2023 From: smooney at redhat.com (Sean Mooney) Date: Mon, 30 Jan 2023 12:55:52 +0000 Subject: [ALL] Why we dont have an official forum? In-Reply-To: References: Message-ID: <6691fd3deb431bdf2ede7bc7dd0034e5d9f07efb.camel@redhat.com> On Mon, 2023-01-30 at 13:33 +0700, Nguy?n H?u Kh?i wrote: > In my view, > Having an official forum will make our projects grow faster and users can > access Openstack easier. > > Take a look at K8S or Icinga. They are very good at helping people to > access their platform by having a nice forum. > > I can help set up and configure the forum. > > I hope Openstack will become more and more mature and grow. openstack has had 26 releases over 10+ years and many we would see it as a very mature comunity. in fact it has past the hype/fast groth phases and is into the more stable grandule eveolving and sustaining phase. the main reason we do not have an offical fourm any more is that we do not have enough contibutors to maintain one. 
as was noted in the tread that is why ask.openstack.org was removed. the opendev infra team is small and manages alot of service on behalf of the comunity our prvious fourm attempt largely went unmainteined for years. if one was to be created again it would need to be automated, maintaiend and hosted with several people commiting to maintaining it. it would likely be better to collaberate with an exsitign froum or opensouce comunity then host our own at this point. e.g. stackoverflow or perhaps a matrix/mastadon space of some kind. the other problem is getting the people with the knowlage to partake. many wont have the time to be active in such a fourm. many of the active members of our comunity have been wearing 2 or 3 hats already and may not have the mental bandwith to also act as support in an offical fourm and answer questions. that would leave the questions eitehr unanswered or to experinced users/operators. some of the more exprience operators may have bandwith to step in, in fact having an operator lead fourm might be more interesting as if there is a common issue and/or a solution that they comeup with that could be feed back to the project teams to fix or implement for them. its equally likely they will be busy running there clouds and the questions will be unansered or poorly answered. its a gap i just dont know if its one that can be simply filled. > > Nguyen Huu Khoi > > > On Mon, Jan 30, 2023 at 12:34 PM T Koksal wrote: > > > Hello > > > > I totally agree with Nguyen! I believe, as a new comer into Openstack I > > have concluded that there is the expectation from the user the to have > > pre-existing knowledge of the platforms. Additionally, the documentation is > > all-over and unstructured for someone wanting to learn. > > > > TK > > > > On Mon, Jan 30, 2023 at 7:05 AM Bernd Bausch > > wrote: > > > > > There used to be ask.openstack.org, but since nobody maintained the > > > website, it became unreliable and was eventually disbanded. At the time, we > > > were encouraged to ask questions at superuser.com and, in case it's > > > related to programming, stackoverflow.com. There is also > > > https://www.reddit.com/r/openstack, which is probably less "official" > > > but seems more lively than the two Stackexchange sites. > > > > > > On Mon, Jan 30, 2023 at 7:56 AM Nguy?n H?u Kh?i < > > > nguyenhuukhoinw at gmail.com> wrote: > > > > > > > Hello guys. > > > > > > > > Openstack is a very interesting project, many questions from users will > > > > make it grow more and more but I see that people, including me, still ask > > > > the same question. It is hard to sort or find knowledge by this way. > > > > > > > > If we hope this project spreads for people, we need a new way to share > > > > knowledge and skills, we are in the modern world but the way to access and > > > > exchange information in this project is too obsolete. This is a wall to > > > > slow down this project. > > > > Nguyen Huu Khoi > > > > > > > From fungi at yuggoth.org Mon Jan 30 13:34:35 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 30 Jan 2023 13:34:35 +0000 Subject: [ALL] Why we dont have an official forum? In-Reply-To: <6691fd3deb431bdf2ede7bc7dd0034e5d9f07efb.camel@redhat.com> References: <6691fd3deb431bdf2ede7bc7dd0034e5d9f07efb.camel@redhat.com> Message-ID: <20230130133435.w3li5bvqlmiw2omg@yuggoth.org> On 2023-01-30 12:55:52 +0000 (+0000), Sean Mooney wrote: [...] > our prvious fourm attempt largely went unmainteined for years. 
if > one was to be created again it would need to be automated, > maintaiend and hosted with several people commiting to maintaining > it. [...] And for those who may have forgotten or haven't been around long enough to remember, that was not the first "official" OpenStack user forum site either. The same pattern gets repeated: someone's very excited about setting up a forum site, they have the energy to maintain it for a while, then they disappear or lose interest and nobody else volunteers to take over, site decays into a state of outdated misinformation and frustrated users who ask questions but get no useful answers, eventually we tear down the service because having it in that state is worse than nothing at all. One need only look at wiki.openstack.org for a remaining example of this sort of commons neglect. I personally manage to find only enough time to delete spam and block the accounts of would-be abusers, but most of the information in it is outdated and the server is so behind in terms of upgrades that it's going to need to be taken offline if something doesn't change. > the other problem is getting the people with the knowlage to partake. > many wont have the time to be active in such a fourm. [...] Yes, this is why the openstack-discuss mailing list combines user and developer discussions. Just like how getting users and developers together at conferences makes for more productive conversations, users learn faster by being exposed to the development discussions and developers are more likely to notice and help answer questions from users. One of our goals as a community is to turn our users into maintainers of the software over time, so forcing them to communicate in a different place and separating them from the current developers only makes that outcome less likely. We need fewer walls between these parts of our community, not more. > its a gap i just dont know if its one that can be simply filled. [...] The OpenDev Collaboratory is in the process of upgrading our mailing list software to a platform which has a forum-like searchable web archive and the ability to post to mailing lists from a browser without needing to use E-mail. The lists.opendev.org and lists.zuul-ci.org sites have already moved to it if you want to see how it works, though the collaboratory sysadmins are in the middle of ironing out some cosmetic issues before moving forward with remaining sites. Due to the volume of activity and size of its archives, the lists.openstack.org migration is likely to happen sometime in early Q2, around April if all goes according to plan. In my estimation, that should satisfy much of the desires of those who prefer a web forum, while not breaking the existing experience for people who would rather use mailing lists. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon Jan 30 13:38:20 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 30 Jan 2023 13:38:20 +0000 Subject: [cinder][requirements] python-cinderclient Yoga gate broken due to rtslib-fb In-Reply-To: References: Message-ID: <20230130133820.uoilxvak33xehx5j@yuggoth.org> On 2023-01-30 14:14:19 +0530 (+0530), Rajat Dhasmana wrote: > Currently python-cinderclient yoga gate is broken because > *python-cinderclient-functional-py39* job is failing. Upon looking > into the logs, I found the *cinder-rtstool delete *command > failing[1]. [...] 
Presumably it was working with the listed requirements at one time. Do you happen to know what changed to cause the bug you mentioned to suddenly appear? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From niklas.schwarz at inovex.de Mon Jan 30 14:21:06 2023 From: niklas.schwarz at inovex.de (Niklas Schwarz) Date: Mon, 30 Jan 2023 15:21:06 +0100 Subject: [Keystone | python-openstackclient] Fedartion with OAuth2.0/OIDC Message-ID: Hey there, I'm currently investigating the features of openstack federated identity and oauth2/oidc with keycloak as an identity provider. Following the documentation [1] I have successfully deployed a setup where it is possible to login via the horizion board using the login of keycloak. As defined in the documentation I'm using apache2 with the mod_auth_openidc module. So far so good... If I try to access the api via the openstack-cli using the following configuration ``` OS_AUTH_URL=https:///identity/v3 OS_AUTH_TYPE=v3oidcpassword OS_IDENTITY_PROVIDER=keycloak OS_PROTOCOL=openid OS_USERNAME= OS_PASSWORD= OS_PROJECT=test OS_OPENID_SCOPE='openid email profile' OS_DISCOVERY_ENDPOINT=https:// /realms//.well-known/openid-configuration OS_ACCESS_TOKEN_TYPE=access_token OS_CLIENT_ID= OS_CLIENT_SECRET= ``` the http-status-code of the server is 500. Inspecting the logs , I found the problem in the mod_auth_openidc modul which expects a content-type of application/x-www-form-urlencoded. Is there any way to change the content-type the openstack-cli from json to urlencoded or am I missing a step in the configuration or something else? Thanks in advanced Niklas [1] https://docs.openstack.org/keystone/zed/admin/federation/configure_federation.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Mon Jan 30 14:22:55 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Mon, 30 Jan 2023 19:52:55 +0530 Subject: [cinder][requirements] python-cinderclient Yoga gate broken due to rtslib-fb In-Reply-To: <20230130133820.uoilxvak33xehx5j@yuggoth.org> References: <20230130133820.uoilxvak33xehx5j@yuggoth.org> Message-ID: Hi Jeremy, On Mon, Jan 30, 2023 at 7:13 PM Jeremy Stanley wrote: > On 2023-01-30 14:14:19 +0530 (+0530), Rajat Dhasmana wrote: > > Currently python-cinderclient yoga gate is broken because > > *python-cinderclient-functional-py39* job is failing. Upon looking > > into the logs, I found the *cinder-rtstool delete *command > > failing[1]. > [...] > > Presumably it was working with the listed requirements at one time. > Do you happen to know what changed to cause the bug you mentioned to > suddenly appear? > I'm not 100% sure but I think something changed in the kernel (i.e. they added a directory path in the iSCSI target list *cpus_allowed_list* which rtslib-fb doesn't recognize) that started causing failure in the rtslib-fb library. Based on the thread[1], Commit: *d72d827f2f26 ("scsi: target: Add iscsi/cpus_allowed_list in configfs") *seems to be causing the issue. Also looking at the fix in rtslib-fb[2], they are excluding the directory *"cpus_allowed_list" *which is failing in our gate job as Not a directory error. [Errno 20] Not a directory: \'/sys/kernel/config/target/iscsi/ *cpus_allowed_list*\ Commit message of fix: target has been added cpus_allowed_list attribute in sysfs. Therefore, the rtslib should handle the new attribute: 1. 
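To illustrate the failure mode (a simplified sketch, not the actual rtslib-fb code): older releases treat every entry under the iscsi configfs directory as a per-target directory, so a plain file such as cpus_allowed_list showing up there on newer kernels breaks the directory walk:

    import os

    ISCSI_BASE = "/sys/kernel/config/target/iscsi"

    # Sketch of the pre-2.1.75 assumption: every entry under ISCSI_BASE is a
    # target WWN directory.  On kernels that expose the new "cpus_allowed_list"
    # file here, the inner listdir() hits a plain file and raises
    # NotADirectoryError (errno 20), matching the c-vol traceback above.
    if os.path.isdir(ISCSI_BASE):
        for entry in os.listdir(ISCSI_BASE):
            for sub in os.listdir(os.path.join(ISCSI_BASE, entry)):
                print(entry, sub)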
add cpus_allowed_list item in target_names_excludes 2. add cpus_allowed_list feature in ISCSIFabricModule This fix is released in rtslib-fb version 2.7.5 and yoga u-c is pinned to 2.7.4. [1] https://www.spinics.net/lists/linux-scsi/msg172264.html [2] https://github.com/open-iscsi/rtslib-fb/commit/8d2543c4da62e962661011fea5b19252b9660822 -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Jan 30 20:16:28 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 30 Jan 2023 12:16:28 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2023 Feb 1 at 1600 UTC Message-ID: <1860452ea70.bfd2c6a717775.3195958107654873232@ghanshyammann.com> Hello Everyone, The technical Committee's next weekly meeting is scheduled for 2023 Feb 1, at 1600 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Tuesday, Jan 31 at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From ianyrchoi at gmail.com Mon Jan 30 22:29:09 2023 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Tue, 31 Jan 2023 07:29:09 +0900 Subject: [all][elections][ptl][tc] Combined PTL/TC bugbear cycle Election Season Message-ID: Election details: https://governance.openstack.org/election/ The nomination period officially begins Feb 01, 2023 23:45 UTC. Please read the stipulations and timelines for candidates and electorate contained in this governance documentation. Due to circumstances of timing, PTL and TC elections for the coming cycle will run concurrently; deadlines for their nomination and voting activities are synchronized but will still use separate ballots. Please note, if only one candidate is nominated as PTL for a project team during the PTL nomination period, that candidate will win by acclaim, and there will be no poll. There will only be a poll if there is more than one candidate stepping forward for a project team's PTL position. There will be further announcements posted to the mailing list as action is required from the electorate or candidates. This email is for information purposes only. If you have any questions which you feel affect others please reply to this email thread. If you have any questions that you which to discuss in private please email any of the election officials[1] so that we may address your concerns. Thank you, [1] https://governance.openstack.org/election/#election-officials From ianyrchoi at gmail.com Mon Jan 30 23:24:25 2023 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Tue, 31 Jan 2023 08:24:25 +0900 Subject: [all][elections][ptl][tc] Combined PTL/TC 2023.2 Bobcat cycle Election Season In-Reply-To: References: Message-ID: (Correcting release name with version number - arbitrary release name was marked on the election template tool) On Tue, Jan 31, 2023 at 7:29 AM Ian Y. Choi wrote: > > Election details: https://governance.openstack.org/election/ > > The nomination period officially begins Feb 01, 2023 23:45 UTC. > > Please read the stipulations and timelines for candidates and > electorate contained in this governance documentation. > > Due to circumstances of timing, PTL and TC elections for the coming > cycle will run concurrently; deadlines for their nomination and > voting activities are synchronized but will still use separate > ballots. 
> > Please note, if only one candidate is nominated as PTL for a project > team during the PTL nomination period, that candidate will win by > acclaim, and there will be no poll. There will only be a poll if > there is more than one candidate stepping forward for a project > team's PTL position. > > There will be further announcements posted to the mailing list as > action is required from the electorate or candidates. This email > is for information purposes only. > > If you have any questions which you feel affect others please reply > to this email thread. > > If you have any questions that you which to discuss in private please > email any of the election officials[1] so that we may address your > concerns. > > Thank you, > > [1] https://governance.openstack.org/election/#election-officials From soufian.zaouam-ext at socgen.com Tue Jan 31 10:45:51 2023 From: soufian.zaouam-ext at socgen.com (ZAOUAM Soufian (EXT)) Date: Tue, 31 Jan 2023 10:45:51 +0000 Subject: [neutron][kolla] Do neutron agents in compute nodes connect to neutron database ? Message-ID: Hi all, Do neutron agents in compute nodes have to connect to neutron database ? We are maintaining an Openstack using kolla-ansible (ussuri version). We noticed that the 4 neutron agents on compute nodes: neutron-l3-agent, neutron-metadata-agent, neutron-openvswitch-agent, neutron-dhcp-agent are all using the same neutron.conf config file, but we are wondering if they actually use [database] or is it redundant ? If no, is it safe to remove the [database] section from the neutron.conf for these agents? Best regards, ========================================================= Ce message et toutes les pieces jointes (ci-apres le "message") sont confidentiels et susceptibles de contenir des informations couvertes par le secret professionnel. Ce message est etabli a l'intention exclusive de ses destinataires. Toute utilisation ou diffusion non autorisee interdite. Tout message electronique est susceptible d'alteration. La SOCIETE GENERALE et ses filiales declinent toute responsabilite au titre de ce message s'il a ete altere, deforme falsifie. ========================================================= This message and any attachments (the "message") are confidential, intended solely for the addresses, and may contain legally privileged information. Any unauthorized use or dissemination is prohibited. E-mails are susceptible to alteration. Neither SOCIETE GENERALE nor any of its subsidiaries or affiliates shall be liable for the message if altered, changed or falsified. ========================================================= -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Jan 31 11:12:04 2023 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 31 Jan 2023 12:12:04 +0100 Subject: [neutron][kolla] Do neutron agents in compute nodes connect to neutron database ? In-Reply-To: References: Message-ID: <6524789.CNkyblSvZa@p1> Hi, Dnia wtorek, 31 stycznia 2023 11:45:51 CET ZAOUAM Soufian (EXT) pisze: > Hi all, > > Do neutron agents in compute nodes have to connect to neutron database ? > > We are maintaining an Openstack using kolla-ansible (ussuri version). > > We noticed that the 4 neutron agents on compute nodes: neutron-l3-agent, neutron-metadata-agent, neutron-openvswitch-agent, neutron-dhcp-agent are all using the same neutron.conf config file, but we are wondering if they actually use [database] or is it redundant ? 
> > If no, is it safe to remove the [database] section from the neutron.conf for these agents? It's safe to remove that section. Only neutron-server is connecting to database. Agents are communicating with neutron-server through RPC. > > Best regards, > ========================================================= > > Ce message et toutes les pieces jointes (ci-apres le "message") > sont confidentiels et susceptibles de contenir des informations > couvertes par le secret professionnel. Ce message est etabli > a l'intention exclusive de ses destinataires. Toute utilisation > ou diffusion non autorisee interdite. > Tout message electronique est susceptible d'alteration. La SOCIETE GENERALE > et ses filiales declinent toute responsabilite au titre de ce message > s'il a ete altere, deforme falsifie. > > ========================================================= > > This message and any attachments (the "message") are confidential, > intended solely for the addresses, and may contain legally privileged > information. Any unauthorized use or dissemination is prohibited. > E-mails are susceptible to alteration. Neither SOCIETE GENERALE nor any > of its subsidiaries or affiliates shall be liable for the message > if altered, changed or falsified. > > ========================================================= > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From rishat.azizov at gmail.com Tue Jan 31 14:11:31 2023 From: rishat.azizov at gmail.com (Rishat Azizov) Date: Tue, 31 Jan 2023 20:11:31 +0600 Subject: [trove] Unable to manage databases with mariadb 10.7 Message-ID: Hello! I get this error when listing databases: "An error occurred communicating with the guest: 'utf8mb3_general_ci' not a valid collation.\nTraceback (most recent call last):\n\n File \"/opt/guest-agent-venv/lib/python3.8/site-packages/oslo_messaging/rpc/server.py\", line 165, in _process_incoming\n res = self.dispatcher.dispatch(message)\n\n File \"/opt/guest-agent-venv/lib/python3.8/site-packages/oslo_messaging/rpc/dispatcher.py\", line 309, in dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n\n File \"/opt/guest-agent-venv/lib/python3.8/site-packages/oslo_messaging/rpc/dispatcher.py\", line 229, in _do_dispatch\n result = func(ctxt, **new_args)\n\n File \"/opt/guest-agent-venv/lib/python3.8/site-packages/osprofiler/profiler.py\", line 160, in wrapper\n result = f(*args, **kwargs)\n\n File \"/opt/guest-agent-venv/lib/python3.8/site-packages/trove/guestagent/datastore/manager.py\", line 812, in list_databases\n return self.adm.list_databases(limit, marker, include_marker)\n\n File \"/opt/guest-agent-venv/lib/python3.8/site-packages/trove/guestagent/datastore/mysql_common/service.py\", line 341, in list_databases\n mysql_db = models.MySQLSchema(name=database[0],\n\n File \"/opt/guest-agent-venv/lib/python3.8/site-packages/trove/common/db/mysql/models.py\", line 46, in __init__\n self.collate = collate\n\n File \"/opt/guest-agent-venv/lib/python3.8/site-packages/trove/common/db/mysql/models.py\", line 85, in collate\n raise ValueError(_(\"'%s' not a valid collation.\") % value)\n\nValueError: 'utf8mb3_general_ci' not a valid collation.\n.? 
400 get /instances/78ffcc29-9a36-486d-aa4f-f4133511696d/databases trustId: a8daa2804e404a28b64df50fb824bb63" Listing failed becase in mariadb 10.7 utf8_general_ci aliased to utf8mb3_general_ci. Could you please help with this error? -------------- next part -------------- An HTML attachment was scrubbed... URL: From uday.dikshit at myrealdata.in Tue Jan 31 16:03:40 2023 From: uday.dikshit at myrealdata.in (Uday Dikshit) Date: Tue, 31 Jan 2023 16:03:40 +0000 Subject: How to create a dynamic pollster subsystem to create a pollster for senlin cluster In-Reply-To: References: Message-ID: Hey Thomas This was really helpful. I had another doubt, how can I add a complex calculation such as taking mean of CPU utilization of all nodes that are present in a senlin cluster with senlin cluster ID as its primary key. Thanks & Regards, [https://acefone.com/email-signature/logo-new.png] [https://acefone.com/email-signature/facebook.png] [https://acefone.com/email-signature/linkedin.png] [https://acefone.com/email-signature/twitter.png] [https://acefone.com/email-signature/youtube.png] [https://acefone.com/email-signature/glassdoor.png] Uday Dikshit Cloud DevOps Engineer, Product Development uday.dikshit at myrealdata.in www.myrealdata.in 809-A Udyog Vihar, Phase 5, Gurugram - 122015, Haryana ________________________________ From: Thomas Goirand Sent: Friday, January 27, 2023 8:54 PM To: Uday Dikshit ; openstack-discuss at lists.openstack.org Subject: Re: How to create a dynamic pollster subsystem to create a pollster for senlin cluster On 1/25/23 10:04, Uday Dikshit wrote: > Hello Team > We are a public cloud provider based on Openstack. > We are working to create Autoscaling with aodh and senlin in > Kolla-ansible Openstack Wallaby release. We are facing an issue as > ceilometer does not support metrics for senlin cluster as a resource. > Our aim is to use > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_ceilometer_wallaby_admin_telemetry-2Ddynamic-2Dpollster.html&d=DwIDaQ&c=euGZstcaTDllvimEN8b7jXrwqOf-v5A_CdpgnVfiiMM&r=58AkTXQK-t27foam0JxQj_p2S7oML-5RlT2bY2LISOk&m=KrUyOp7R8Eq0a_TOSOboQA2IOuHFlEtSswxM7uAB0f4&s=-LMnIXpO63G3i6FF9aX_zIGELd_4Z32B8jN24N8Yy2Y&e= to generate a pollster to collect data for senlin. We were looking if anybody in the community has ever used this feature. Hi, Not only we use that feature in production, but I also used the dynamic pollster stuff on the compute pollster using the command-line thingy. The result is this project: https://urldefense.proofpoint.com/v2/url?u=https-3A__salsa.debian.org_openstack-2Dteam_services_ceilometer-2Dinstance-2Dpoller_&d=DwIDaQ&c=euGZstcaTDllvimEN8b7jXrwqOf-v5A_CdpgnVfiiMM&r=58AkTXQK-t27foam0JxQj_p2S7oML-5RlT2bY2LISOk&m=KrUyOp7R8Eq0a_TOSOboQA2IOuHFlEtSswxM7uAB0f4&s=PZG_y8gQzPmKPmaCX1OWHNA5Zg9tFJ0O4zdZZkrUnIY&e= You can also read bits of docs of OCI about it: https://urldefense.proofpoint.com/v2/url?u=https-3A__salsa.debian.org_openstack-2Dteam_debian_openstack-2Dcluster-2Dinstaller-23configuring-2Da-2Dcustom-2Dmetric-2Dand-2Dbilling&d=DwIDaQ&c=euGZstcaTDllvimEN8b7jXrwqOf-v5A_CdpgnVfiiMM&r=58AkTXQK-t27foam0JxQj_p2S7oML-5RlT2bY2LISOk&m=KrUyOp7R8Eq0a_TOSOboQA2IOuHFlEtSswxM7uAB0f4&s=HB4uQxN4HzA-iHQ1Cvlt-UVhF6oH5jzY0xmJ8Afu5i8&e= I hope this helps. If you need more help, please do reply ... Cheers, Thomas Goirand (zigo) ---------- This email has been scanned for spam and viruses by Proofpoint Essentials. 
Visit the following link to report this email as spam: https://us1.proofpointessentials.com/index01.php?mod_id=11&mod_option=logitem&mail_id=1674833071-prElz2OKrtG9&r_address=uday.dikshit%40myrealdata.in&report=1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Tue Jan 31 16:41:50 2023 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 31 Jan 2023 13:41:50 -0300 Subject: How to create a dynamic pollster subsystem to create a pollster for senlin cluster In-Reply-To: References: Message-ID: Do you want this to be done in Ceilometer? I mean, complex calculations are normally executed in the metric backend, such as Gnocchi. You could collect data for all clusters, and persist in the backend. Then, you do the calculation mean/min/max/rate:xxx, and so on in Gnocchi for instance. The calculation can be done in the dynamic pollsters as well though. They accept python expressions, where you can do basically anything you want. Also, it is possible to use nested dynamic pollsters to collect and combine data from different sources together. On Tue, Jan 31, 2023 at 1:19 PM Uday Dikshit wrote: > Hey Thomas > This was really helpful. I had another doubt, how can I add a complex > calculation such as taking mean of CPU utilization of all nodes that are > present in a senlin cluster with senlin cluster ID as its primary key. > > *Thanks & Regards,* > > > > > > > > Uday Dikshit > Cloud DevOps Engineer, Product Development > uday.dikshit at myrealdata.in > www.myrealdata.in > 809-A Udyog Vihar, > Phase 5, Gurugram - 122015, Haryana > ------------------------------ > *From:* Thomas Goirand > *Sent:* Friday, January 27, 2023 8:54 PM > *To:* Uday Dikshit ; > openstack-discuss at lists.openstack.org < > openstack-discuss at lists.openstack.org> > *Subject:* Re: How to create a dynamic pollster subsystem to create a > pollster for senlin cluster > > On 1/25/23 10:04, Uday Dikshit wrote: > > Hello Team > > We are a public cloud provider based on Openstack. > > We are working to create Autoscaling with aodh and senlin in > > Kolla-ansible Openstack Wallaby release. We are facing an issue as > > ceilometer does not support metrics for senlin cluster as a resource. > > Our aim is to use > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_ceilometer_wallaby_admin_telemetry-2Ddynamic-2Dpollster.html&d=DwIDaQ&c=euGZstcaTDllvimEN8b7jXrwqOf-v5A_CdpgnVfiiMM&r=58AkTXQK-t27foam0JxQj_p2S7oML-5RlT2bY2LISOk&m=KrUyOp7R8Eq0a_TOSOboQA2IOuHFlEtSswxM7uAB0f4&s=-LMnIXpO63G3i6FF9aX_zIGELd_4Z32B8jN24N8Yy2Y&e= > < > https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_ceilometer_wallaby_admin_telemetry-2Ddynamic-2Dpollster.html&d=DwIDaQ&c=euGZstcaTDllvimEN8b7jXrwqOf-v5A_CdpgnVfiiMM&r=58AkTXQK-t27foam0JxQj_p2S7oML-5RlT2bY2LISOk&m=KrUyOp7R8Eq0a_TOSOboQA2IOuHFlEtSswxM7uAB0f4&s=-LMnIXpO63G3i6FF9aX_zIGELd_4Z32B8jN24N8Yy2Y&e=> > to generate a pollster to collect data for senlin. We were looking if > anybody in the community has ever used this feature. > > Hi, > > Not only we use that feature in production, but I also used the dynamic > pollster stuff on the compute pollster using the command-line thingy. 
> The result is this project: > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__salsa.debian.org_openstack-2Dteam_services_ceilometer-2Dinstance-2Dpoller_&d=DwIDaQ&c=euGZstcaTDllvimEN8b7jXrwqOf-v5A_CdpgnVfiiMM&r=58AkTXQK-t27foam0JxQj_p2S7oML-5RlT2bY2LISOk&m=KrUyOp7R8Eq0a_TOSOboQA2IOuHFlEtSswxM7uAB0f4&s=PZG_y8gQzPmKPmaCX1OWHNA5Zg9tFJ0O4zdZZkrUnIY&e= > > You can also read bits of docs of OCI about it: > > https://urldefense.proofpoint.com/v2/url?u=https-3A__salsa.debian.org_openstack-2Dteam_debian_openstack-2Dcluster-2Dinstaller-23configuring-2Da-2Dcustom-2Dmetric-2Dand-2Dbilling&d=DwIDaQ&c=euGZstcaTDllvimEN8b7jXrwqOf-v5A_CdpgnVfiiMM&r=58AkTXQK-t27foam0JxQj_p2S7oML-5RlT2bY2LISOk&m=KrUyOp7R8Eq0a_TOSOboQA2IOuHFlEtSswxM7uAB0f4&s=HB4uQxN4HzA-iHQ1Cvlt-UVhF6oH5jzY0xmJ8Afu5i8&e= > > I hope this helps. If you need more help, please do reply ... > > Cheers, > > Thomas Goirand (zigo) > > > > ---------- > > This email has been scanned for spam and viruses by Proofpoint Essentials. > Visit the following link to report this email as spam: > > https://us1.proofpointessentials.com/index01.php?mod_id=11&mod_option=logitem&mail_id=1674833071-prElz2OKrtG9&r_address=uday.dikshit%40myrealdata.in&report=1 > -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From egarciar at redhat.com Tue Jan 31 17:02:52 2023 From: egarciar at redhat.com (Elvira Garcia Ruiz) Date: Tue, 31 Jan 2023 18:02:52 +0100 Subject: [neutron] Bumping of requirements on RDO before bumping them on Neutron Message-ID: Hi Neutrinos! RDO folks proposed that, in order to be able to correctly build CI gates for testing, it would be nice if we tried to update the neutron-distgit requirement file when we want to update the minimal version of a dependency before merging it on our repository. This would allow them to realize whether they need or not to update any Fedora package. In order to do that, we just need to send a small commit to their repository [0]. You can use your GitHub account for the login. Here you can find an example for pyroute2 in RDO [1] and the respective pyroute2 bump in Neutron [2]. Regards! Elvira Garc?a (elvira) [0] https://review.rdoproject.org/r/openstack/neutron-distgit [1] https://review.rdoproject.org/r/c/openstack/neutron-distgit/+/46809 [2] https://review.opendev.org/c/openstack/neutron/+/870963 From fkr at hazardous.org Tue Jan 31 21:58:10 2023 From: fkr at hazardous.org (Felix Kronlage-Dammers) Date: Tue, 31 Jan 2023 22:58:10 +0100 Subject: [publiccloud-sig] Reminder - next meeting Feb 1st - 0800 UTC Message-ID: Hi everyone, a bit late but nevertheless a quick reminder: the next meeting of the Public Cloud SIG is on February 1st (tomorrow - depending on your timezone ;) at 0800 UTC. We meet on IRC in #openstack-operators. See also here for all other details: https://wiki.openstack.org/wiki/PublicCloudSIG regards felix From gmann at ghanshyammann.com Tue Jan 31 22:07:39 2023 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 31 Jan 2023 14:07:39 -0800 Subject: [qa][gate][stable] stable/wallaby gate is broken In-Reply-To: <185ffd139cb.f2315ede356579.5195015287935599210@ghanshyammann.com> References: <185ffd139cb.f2315ede356579.5195015287935599210@ghanshyammann.com> Message-ID: <18609df1222.f902b335124580.685487766025973081@ghanshyammann.com> ---- On Sun, 29 Jan 2023 15:16:20 -0800 Ghanshyam Mann wrote --- > Hello Everyone, > > You might know stable/wallaby which is in the EM phase is broken because of the Tempest master > incompatibility. 
As this is in EM phase, Tempest master does support it and the fix is to use the old > compatible Tempest. > > I have pushed the fix on devstack to pin Tempest 29.0.0 to test stable/wallaby, do not recheck until > that is merged: > > - https://review.opendev.org/c/openstack/devstack/+/871782 Devstack change ^^ is finally merged but now openstacksdk-functional-devstack job is broken on stable/xena and stable/wallaby. Fixes are in osc and need osc new release to make SDK job green. If your project gate is running SDK job then please hold the recheck until it is fixed: - https://review.opendev.org/q/I2a88e79d134ec1362e8361629cb2a8ae14dc7b67 -gmann > > This depends on a few other fixes which are in the gate. > > Like the Tempest pin, we need to pin tempest plugins also on stable/wallaby. I have pushed a few projects > fix for that, please review those if the devstack patch alone does not fix the gate > > - https://review.opendev.org/q/topic:wallaby-pin-tempest+status:open > - https://review.opendev.org/q/topic:bug%252F2003993 > > -gmann > > From haleyb.dev at gmail.com Tue Jan 31 22:17:10 2023 From: haleyb.dev at gmail.com (Brian Haley) Date: Tue, 31 Jan 2023 17:17:10 -0500 Subject: [neutron] Bumping of requirements on RDO before bumping them on Neutron In-Reply-To: References: Message-ID: <3d6c69e9-d91b-72dd-9cb3-a297696792d4@gmail.com> Hi Elvira, Thanks for starting a discussion on this. On 1/31/23 12:02 PM, Elvira Garcia Ruiz wrote: > Hi Neutrinos! > > RDO folks proposed that, in order to be able to correctly build CI > gates for testing, it would be nice if we tried to update the > neutron-distgit requirement file when we want to update the minimal > version of a dependency before merging it on our repository. > This would allow them to realize whether they need or not to update > any Fedora package. In order to do that, we just need to send a small > commit to their repository [0]. You can use your GitHub account for > the login. I do feel it's a good idea for downstream maintainers to be informed when we change library or binary dependencies on the master branch, we have actually been bit by this recently as well when ovsdb-client started being used in [0]. My only concern here is this is leaving out the other distros like Ubuntu, Debian, etc., so I'm wondering if there is a more generic way? We could do something like send an email to this list at cycle milestones that a maintainer might watch for to then trigger some downstream work, but that is after-the-fact and not before as you are suggesting. I just don't think having developers update all the distros is scalable when there are more than one. Of course this would affect more than Neutron as well. Thoughts? -Brian [0] https://review.opendev.org/c/openstack/neutron/+/860275 > Here you can find an example for pyroute2 in RDO [1] and the > respective pyroute2 bump in Neutron [2]. > > Regards! > Elvira Garc?a (elvira) > > [0] https://review.rdoproject.org/r/openstack/neutron-distgit 404 :( > [1] https://review.rdoproject.org/r/c/openstack/neutron-distgit/+/46809 > [2] https://review.opendev.org/c/openstack/neutron/+/870963 From rosmaita.fossdev at gmail.com Tue Jan 31 22:24:22 2023 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 31 Jan 2023 17:24:22 -0500 Subject: [ALL] Why we dont have an official forum? 
In-Reply-To: <20230130133435.w3li5bvqlmiw2omg@yuggoth.org> References: <6691fd3deb431bdf2ede7bc7dd0034e5d9f07efb.camel@redhat.com> <20230130133435.w3li5bvqlmiw2omg@yuggoth.org> Message-ID: <1f126f74-1f9a-24aa-5471-10e4a7d76fca@gmail.com> On 1/30/23 8:34 AM, Jeremy Stanley wrote: > [...] > > The OpenDev Collaboratory is in the process of upgrading our mailing > list software to a platform which has a forum-like searchable web > archive and the ability to post to mailing lists from a browser > without needing to use E-mail. Thank you OpenDev Collaboratory! That is excellent news. > [...] > > In my estimation, that should satisfy much of the desires of those > who prefer a web forum, while not breaking the existing experience > for people who would rather use mailing lists. I agree, and hopefully it will address at least some of Nguyen Huu Khoi and T Koksal's concerns. Maybe they can think of some ways to promote the new "forum" when it comes online later this year so people will know to start using it. cheers, brian From vmudemela at gmail.com Tue Jan 31 22:23:17 2023 From: vmudemela at gmail.com (Vish Mudemela) Date: Tue, 31 Jan 2023 14:23:17 -0800 Subject: [ZED][Keystone][Application_credentials] Conflict occurred attempting to store application_credential - Duplicate entry found Message-ID: Hello all, 1. I have created application credentials with the name "sa_capacity" as role - reader. 2. I have deleted application credentials, I wanted to add some access rules. 3. verified by command - *openstack application credential show sa_capacity* got output as *"**No application credential with a name or ID of 'sa_capacity' exists."* 4. verified using *openstack application credential list *, confirmed there is no application credential with name "sa_capacity" 5. no application creds on my username too *"openstack application credential list --user <> --user-domain <>"* 6. Trying to recreate application credentials with the *same name(sa_capacity) *with a role reader and added some access rules. I got an error* "Conflict occurred attempting to store application_credential - Duplicate entry found with name sa_capacity. (HTTP 409) (Request-ID: req-0e58b336-a59b-490d-b946-30c28ca44777)"* *7.* I even tried with a different name that's never been configured* "sa_capacity_1"* but it says *Conflict occurred attempting to store application_credential - Duplicate entry found with name sa_capacity_1. (HTTP 409)* Any idea what's wrong here? Below are the keystone logs. 
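One thing that may be worth checking first, judging from the IntegrityError further down in the traceback: the database rejects a duplicate primary key ('96-249') while Keystone is handling the credential's access rules, not the credential name itself, so access rules left behind by the deleted credential could be involved. A hedged sketch of how to look for them, assuming your client includes the identity access rule commands (adjust the ID as needed):

    # sketch: list access rules still registered for the user and clean up
    # any stale ones left over from the deleted application credential
    openstack access rule list
    openstack access rule delete <access-rule-id>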
*keystone logs:*

2023-01-31 21:51:07.193 670 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/keystone/common/manager.py", line 115, in wrapped
2023-01-31 21:51:07.193 670 ERROR keystone.server.flask.application     __ret_val = __f(*args, **kwargs)
2023-01-31 21:51:07.193 670 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/keystone/application_credential/core.py", line 137, in create_application_credential
2023-01-31 21:51:07.193 670 ERROR keystone.server.flask.application     ref = self.driver.create_application_credential(
2023-01-31 21:51:07.193 670 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/keystone/common/sql/core.py", line 563, in wrapper
2023-01-31 21:51:07.193 670 ERROR keystone.server.flask.application     raise exception.Conflict(type=conflict_type,
2023-01-31 21:51:07.193 670 ERROR keystone.server.flask.application keystone.exception.Conflict: Conflict occurred attempting to store application_credential - Duplicate entry found with name sa_capacity.
2023-01-31 21:51:07.193 670 ERROR keystone.server.flask.application
2023-01-31 21:53:10.461 669 INFO keystone.token.token_formatters [None req-75dc9499-18aa-4f55-8a63-633b9ef316c5 910a75448974f088b01f96e09db0667737b84a5663dbaa2a142c4648187f40de e68db2302b1d44beb5d62c6a6268ab26 - - e025affd9237457a9d7036fb10a9b626 e025affd9237457a9d7036fb10a9b626] Fernet token created with length of 268 characters, which exceeds 255 characters
2023-01-31 21:53:14.131 673 INFO keystone.token.token_formatters [None req-66aa818c-754e-46bf-a393-6115eb14b2d3 910a75448974f088b01f96e09db0667737b84a5663dbaa2a142c4648187f40de e68db2302b1d44beb5d62c6a6268ab26 - - e025affd9237457a9d7036fb10a9b626 e025affd9237457a9d7036fb10a9b626] Fernet token created with length of 268 characters, which exceeds 255 characters
2023-01-31 21:53:55.977 671 WARNING py.warnings [None req-0e58b336-a59b-490d-b946-30c28ca44777 fba6720038a31a4c1cf2e001022466a504d6d78ec9b10ac1a8adbfd7b8902fdc e68db2302b1d44beb5d62c6a6268ab26 - - e025affd9237457a9d7036fb10a9b626 e025affd9237457a9d7036fb10a9b626] /var/lib/kolla/venv/lib/python3.10/site-packages/keystone/application_credential/backends/sql.py:144: SAWarning: New instance with identity key (, (96, 249), None) conflicts with persistent instance
  external_id=access_rule['id']).first()
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application [None req-0e58b336-a59b-490d-b946-30c28ca44777 fba6720038a31a4c1cf2e001022466a504d6d78ec9b10ac1a8adbfd7b8902fdc e68db2302b1d44beb5d62c6a6268ab26 - - e025affd9237457a9d7036fb10a9b626 e025affd9237457a9d7036fb10a9b626] Conflict occurred attempting to store application_credential - Duplicate entry found with name sa_capacity.: keystone.exception.Conflict: Conflict occurred attempting to store application_credential - Duplicate entry found with name sa_capacity.
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application Traceback (most recent call last):
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     self.dialect.do_execute(
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     cursor.execute(statement, parameters)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/cursors.py", line 148, in execute
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     result = self._query(query)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/cursors.py", line 310, in _query
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     conn.query(q)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/connections.py", line 548, in query
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/connections.py", line 775, in _read_query_result
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     result.read()
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/connections.py", line 1156, in read
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     first_packet = self.connection._read_packet()
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/connections.py", line 725, in _read_packet
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     packet.raise_for_error()
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/protocol.py", line 221, in raise_for_error
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     err.raise_mysql_exception(self._data)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/err.py", line 143, in raise_mysql_exception
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     raise errorclass(errno, errval)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application pymysql.err.IntegrityError: (1062, "Duplicate entry '96-249' for key 'PRIMARY'")
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application The above exception was the direct cause of the following exception:
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application Traceback (most recent call last):
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/keystone/common/sql/core.py", line 528, in wrapper
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     return method(*args, **kwargs)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/keystone/application_credential/backends/sql.py", line 144, in create_application_credential
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     external_id=access_rule['id']).first()
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2823, in first
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     return self.limit(1)._iter().first()
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/orm/query.py", line 2907, in _iter
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     result = self.session.execute(
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 1660, in execute
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     ) = compile_state_cls.orm_pre_session_exec(
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/orm/context.py", line 316, in orm_pre_session_exec
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     session._autoflush()
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2246, in _autoflush
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     self.flush()
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3383, in flush
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     self._flush(objects)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3522, in _flush
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     with util.safe_reraise():
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     compat.raise_(
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     raise exception
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 3483, in _flush
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     flush_context.execute()
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     rec.execute(self)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     util.preloaded.orm_persistence.save_obj(
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 245, in save_obj
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     _emit_insert_statements(
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 1097, in _emit_insert_statements
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     c = connection._execute_20(
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1705, in _execute_20
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     return meth(self, args_10style, kwargs_10style, execution_options)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 333, in _execute_on_connection
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     return connection._execute_clauseelement(
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     ret = self._execute_context(
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     self._handle_dbapi_exception(
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     util.raise_(newraise, with_traceback=exc_info[2], from_=e)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 208, in raise_
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     raise exception
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     self.dialect.do_execute(
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     cursor.execute(statement, parameters)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/cursors.py", line 148, in execute
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     result = self._query(query)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/cursors.py", line 310, in _query
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     conn.query(q)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/connections.py", line 548, in query
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/connections.py", line 775, in _read_query_result
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     result.read()
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/connections.py", line 1156, in read
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     first_packet = self.connection._read_packet()
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/connections.py", line 725, in _read_packet
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     packet.raise_for_error()
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/protocol.py", line 221, in raise_for_error
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     err.raise_mysql_exception(self._data)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/pymysql/err.py", line 143, in raise_mysql_exception
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     raise errorclass(errno, errval)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application oslo_db.exception.DBDuplicateEntry: (pymysql.err.IntegrityError) (1062, "Duplicate entry '96-249' for key 'PRIMARY'")
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application [SQL: INSERT INTO application_credential_access_rule (application_credential_id, access_rule_id) VALUES (%(application_credential_id)s, %(access_rule_id)s)]
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application [parameters: {'application_credential_id': 96, 'access_rule_id': 249}]
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application (Background on this error at: https://sqlalche.me/e/14/gkpj)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application During handling of the above exception, another exception occurred:
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application Traceback (most recent call last):
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/flask/app.py", line 1820, in full_dispatch_request
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     rv = self.dispatch_request()
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/flask/app.py", line 1796, in dispatch_request
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/flask_restful/__init__.py", line 467, in wrapper
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     resp = resource(*args, **kwargs)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/flask/views.py", line 107, in view
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     return current_app.ensure_sync(self.dispatch_request)(**kwargs)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/flask_restful/__init__.py", line 582, in dispatch_request
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     resp = meth(*args, **kwargs)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/keystone/api/users.py", line 669, in post
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     ref = app_cred_api.create_application_credential(
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/keystone/common/manager.py", line 115, in wrapped
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     __ret_val = __f(*args, **kwargs)
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/keystone/application_credential/core.py", line 137, in create_application_credential
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     ref = self.driver.create_application_credential(
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application   File "/var/lib/kolla/venv/lib/python3.10/site-packages/keystone/common/sql/core.py", line 563, in wrapper
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application     raise exception.Conflict(type=conflict_type,
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application keystone.exception.Conflict: Conflict occurred attempting to store application_credential - Duplicate entry found with name sa_capacity.
2023-01-31 21:53:55.981 671 ERROR keystone.server.flask.application

Thanks
Vish

From john.vanommen at gmail.com Tue Jan 31 23:57:32 2023
From: john.vanommen at gmail.com (John van Ommen)
Date: Tue, 31 Jan 2023 15:57:57 -0800
Subject: [ALL] Why we dont have an official forum?
In-Reply-To: <6691fd3deb431bdf2ede7bc7dd0034e5d9f07efb.camel@redhat.com>
References: <6691fd3deb431bdf2ede7bc7dd0034e5d9f07efb.camel@redhat.com>
Message-ID:

> the main reason we do not have an official forum any more is that we do
> not have enough contributors to maintain one.

In my entire life, I've never seen a technology that's used so widely yet
has so few people involved in it. At this point, I can nearly name all the
active OpenStackers in the United States off the top of my head.

On Mon, Jan 30, 2023 at 4:57 AM Sean Mooney wrote:

> On Mon, 2023-01-30 at 13:33 +0700, Nguyen Huu Khoi wrote:
> > In my view,
> > Having an official forum will make our projects grow faster and users
> > can access Openstack easier.
> >
> > Take a look at K8S or Icinga. They are very good at helping people to
> > access their platform by having a nice forum.
> >
> > I can help set up and configure the forum.
> >
> > I hope Openstack will become more and more mature and grow.
> openstack has had 26 releases over 10+ years and many would see it as a
> very mature community.
> in fact it has passed the hype/fast growth phases and is into the more
> stable, gradual evolving and sustaining phase.
> the main reason we do not have an official forum any more is that we do
> not have enough contributors to maintain one. as was noted in the thread,
> that is why ask.openstack.org was removed.
>
> the opendev infra team is small and manages a lot of services on behalf
> of the community. our previous forum attempt largely went unmaintained
> for years. if one was to be created again, it would need to be automated,
> maintained and hosted with several people committing to maintaining it.
>
> it would likely be better to collaborate with an existing forum or
> open-source community than host our own at this point, e.g. stackoverflow
> or perhaps a matrix/mastodon space of some kind.
>
> the other problem is getting the people with the knowledge to partake.
> many won't have the time to be active in such a forum.
>
> many of the active members of our community have been wearing 2 or 3 hats
> already and may not have the mental bandwidth to also act as support in
> an official forum and answer questions. that would leave the questions
> either unanswered or left to experienced users/operators.
>
> some of the more experienced operators may have the bandwidth to step in;
> in fact, having an operator-led forum might be more interesting, as a
> common issue and/or a solution that they come up with could be fed back
> to the project teams to fix or implement for them. it's equally likely
> they will be busy running their clouds and the questions will go
> unanswered or be poorly answered.
>
> it's a gap; i just don't know if it's one that can be simply filled.
>
> > Nguyen Huu Khoi
> >
> > On Mon, Jan 30, 2023 at 12:34 PM T Koksal wrote:
> >
> > > Hello
> > >
> > > I totally agree with Nguyen! I believe, as a newcomer to Openstack, I
> > > have concluded that there is an expectation that the user have
> > > pre-existing knowledge of the platforms. Additionally, the
> > > documentation is all over the place and unstructured for someone
> > > wanting to learn.
> > >
> > > TK
> > >
> > > On Mon, Jan 30, 2023 at 7:05 AM Bernd Bausch wrote:
> > >
> > > > There used to be ask.openstack.org, but since nobody maintained the
> > > > website, it became unreliable and was eventually disbanded. At the
> > > > time, we were encouraged to ask questions at superuser.com and, in
> > > > case it's related to programming, stackoverflow.com. There is also
> > > > https://www.reddit.com/r/openstack, which is probably less
> > > > "official" but seems more lively than the two Stack Exchange sites.
> > > >
> > > > On Mon, Jan 30, 2023 at 7:56 AM Nguyen Huu Khoi <
> > > > nguyenhuukhoinw at gmail.com> wrote:
> > > >
> > > > > Hello guys.
> > > > >
> > > > > Openstack is a very interesting project; many questions from users
> > > > > will make it grow more and more, but I see that people, including
> > > > > me, still ask the same questions. It is hard to sort or find
> > > > > knowledge this way.
> > > > >
> > > > > If we hope this project spreads, we need a new way to share
> > > > > knowledge and skills. We are in the modern world, but the way to
> > > > > access and exchange information in this project is too obsolete.
> > > > > This is a wall that slows this project down.
> > > > > Nguyen Huu Khoi