From swogatpradhan22 at gmail.com Tue Nov 1 06:56:44 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 1 Nov 2022 12:26:44 +0530 Subject: No subject Message-ID: I have configured a 3 node pcs cluster for openstack. To test the HA, i issue the following commands: iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT && iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT && iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5016 -j ACCEPT && iptables -A INPUT -p udp -m state --state NEW -m udp --dport 5016 -j ACCEPT && iptables -A INPUT ! -i lo -j REJECT --reject-with icmp-host-prohibited && iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT && iptables -A OUTPUT -p tcp --sport 5016 -j ACCEPT && iptables -A OUTPUT -p udp --sport 5016 -j ACCEPT && iptables -A OUTPUT ! -o lo -j REJECT --reject-with icmp-host-prohibited When i issue iptables command on 1 node then it is fenced and forced to reboot and cluster works fine. But when i issue this on 2 of the controller nodes the resource bundles fail and doesn't come back up. [root at overcloud-controller-1 ~]# pcs status Cluster name: tripleo_cluster Cluster Summary: * Stack: corosync * Current DC: overcloud-controller-1 (version 2.1.2-4.el8-ada5c3b36e2) - partition WITHOUT quorum * Last updated: Sat Oct 29 03:15:29 2022 * Last change: Sat Oct 29 03:12:26 2022 by root via crm_resource on overcloud-controller-1 * 19 nodes configured * 68 resource instances configured Node List: * Node overcloud-controller-0: UNCLEAN (offline) * Node overcloud-controller-2: UNCLEAN (offline) * Online: [ overcloud-controller-1 ] Full List of Resources: * ip-172.25.201.91 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 (UNCLEAN) * ip-172.25.201.150 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2 (UNCLEAN) * ip-172.25.201.206 (ocf::heartbeat:IPaddr2): Stopped * ip-172.25.201.250 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 (UNCLEAN) * ip-172.25.202.50 (ocf::heartbeat:IPaddr2): Stopped * ip-172.25.202.90 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2 (UNCLEAN) * Container bundle set: haproxy-bundle [ 172.25.201.68:8787/tripleomaster/openstack-haproxy:pcmklatest]: * haproxy-bundle-podman-0 (ocf::heartbeat:podman): Started overcloud-controller-0 (UNCLEAN) * haproxy-bundle-podman-1 (ocf::heartbeat:podman): Stopped * haproxy-bundle-podman-2 (ocf::heartbeat:podman): Started overcloud-controller-2 (UNCLEAN) * haproxy-bundle-podman-3 (ocf::heartbeat:podman): Stopped * Container bundle set: galera-bundle [ 172.25.201.68:8787/tripleomaster/openstack-mariadb:pcmklatest]: * galera-bundle-0 (ocf::heartbeat:galera): Stopped overcloud-controller-0 (UNCLEAN) * galera-bundle-1 (ocf::heartbeat:galera): Stopped * galera-bundle-2 (ocf::heartbeat:galera): Stopped overcloud-controller-2 (UNCLEAN) * galera-bundle-3 (ocf::heartbeat:galera): Stopped * Container bundle set: redis-bundle [ 172.25.201.68:8787/tripleomaster/openstack-redis:pcmklatest]: * redis-bundle-0 (ocf::heartbeat:redis): Stopped * redis-bundle-1 (ocf::heartbeat:redis): Stopped overcloud-controller-2 (UNCLEAN) * redis-bundle-2 (ocf::heartbeat:redis): Stopped overcloud-controller-0 (UNCLEAN) * redis-bundle-3 (ocf::heartbeat:redis): Stopped * Container bundle set: ovn-dbs-bundle [ 172.25.201.68:8787/tripleomaster/openstack-ovn-northd:pcmklatest]: * ovn-dbs-bundle-0 (ocf::ovn:ovndb-servers): Stopped overcloud-controller-2 (UNCLEAN) * ovn-dbs-bundle-1 (ocf::ovn:ovndb-servers): Stopped overcloud-controller-0 (UNCLEAN) * ovn-dbs-bundle-2 
(ocf::ovn:ovndb-servers): Stopped * ovn-dbs-bundle-3 (ocf::ovn:ovndb-servers): Stopped * ip-172.25.201.208 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2 (UNCLEAN) * Container bundle: openstack-cinder-backup [ 172.25.201.68:8787/tripleomaster/openstack-cinder-backup:pcmklatest]: * openstack-cinder-backup-podman-0 (ocf::heartbeat:podman): Started overcloud-controller-0 (UNCLEAN) * Container bundle: openstack-cinder-volume [ 172.25.201.68:8787/tripleomaster/openstack-cinder-volume:pcmklatest]: * openstack-cinder-volume-podman-0 (ocf::heartbeat:podman): Stopped * Container bundle set: rabbitmq-bundle [ 172.25.201.68:8787/tripleomaster/openstack-rabbitmq:pcmklatest]: * rabbitmq-bundle-0 (ocf::heartbeat:rabbitmq-cluster): Stopped overcloud-controller-2 (UNCLEAN) * rabbitmq-bundle-1 (ocf::heartbeat:rabbitmq-cluster): Stopped overcloud-controller-0 (UNCLEAN) * rabbitmq-bundle-2 (ocf::heartbeat:rabbitmq-cluster): Stopped * rabbitmq-bundle-3 (ocf::heartbeat:rabbitmq-cluster): Stopped * ip-172.25.204.250 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 (UNCLEAN) * ceph-nfs (systemd:ceph-nfs at pacemaker): Started overcloud-controller-0 (UNCLEAN) * Container bundle: openstack-manila-share [ 172.25.201.68:8787/tripleomaster/openstack-manila-share:pcmklatest]: * openstack-manila-share-podman-0 (ocf::heartbeat:podman): Started overcloud-controller-0 (UNCLEAN) * stonith-fence_ipmilan-48d539a11820 (stonith:fence_ipmilan): Stopped * stonith-fence_ipmilan-48d539a1188c (stonith:fence_ipmilan): Started overcloud-controller-2 (UNCLEAN) * stonith-fence_ipmilan-246e96349068 (stonith:fence_ipmilan): Started overcloud-controller-2 (UNCLEAN) * stonith-fence_ipmilan-246e96348d30 (stonith:fence_ipmilan): Stopped Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled PCS requires more than half the nodes to be alive for the cluster to work. To fix this step I issued a command:*pcs no-quorum-policy=ignore.* And now the PCS cluster keeps on running even when there is no quorum. Now the issue i have is the mariadb-bundle becomes slave and dosen't get promoted to master. Can you please suggest a proper workaround when more than half nodes go down and my cloud will be still running. With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From lpetrut at cloudbasesolutions.com Tue Nov 1 08:22:08 2022 From: lpetrut at cloudbasesolutions.com (Lucian Petrut) Date: Tue, 1 Nov 2022 10:22:08 +0200 Subject: Discontinuing Winstackers Message-ID: <6a82bb90-a56b-72fd-eb70-04e91b01d299@cloudbasesolutions.com> Running Openstack on Windows has been driven by Cloudbase Solutions ever since Folsom (2012). Here are the most noteworthy contributions: * nova hyper-v driver - in-tree plus out-of-tree compute-hyperv driver * os-win - common Windows library for Openstack * neutron hyperv ml2 plugin and agent * ovs on Windows and neutron ovs agent support * cinder drivers - SMB and Windows iSCSI * os-brick Windows connectors - iSCSI, FC, SMB, RBD * ceilometer Windows poller * manila Windows driver * glance Windows support * freerdp gateway * last but not least, CI test systems for (most of) the above Due to a shift in business focus and the increased operational costs of maintaining the CI systems, we are no longer in a position to proactively support Openstack on Windows. However, we will continue to support Cloudbase-init (cloud-init equivalent for Windows) as well as the Openstack Windows imaging tools. 
If there are any interested parties that would like to step up as Winstacker maintainers, please let us know and we will provide any required assistance. Regards, Lucian Petrut Cloudbase Solutions Winstackers PTL From wangkuntian1994 at 163.com Tue Nov 1 09:06:26 2022 From: wangkuntian1994 at 163.com (=?UTF-8?B?546L5Z2k55Sw?=) Date: Tue, 1 Nov 2022 17:06:26 +0800 (GMT+08:00) Subject: [oslo] New driver for oslo.messaging Message-ID: <4eddcca5.3347.18432712271.Coremail.wangkuntian1994@163.com> Hello: I want to develop a new driver for oslo.messaging to use rocketmq in openstack environment. I wonder if the community need this new driver? Best Regards? -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Tue Nov 1 10:01:35 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 1 Nov 2022 15:31:35 +0530 Subject: Cluster fails when 2 controller nodes become down simultaneously | tripleo wallaby In-Reply-To: References: Message-ID: Hi, Updating the subject. On Tue, Nov 1, 2022 at 12:26 PM Swogat Pradhan wrote: > I have configured a 3 node pcs cluster for openstack. > To test the HA, i issue the following commands: > iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT && > iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT > && > iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5016 -j > ACCEPT && > iptables -A INPUT -p udp -m state --state NEW -m udp --dport 5016 -j > ACCEPT && > iptables -A INPUT ! -i lo -j REJECT --reject-with icmp-host-prohibited && > iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT && > iptables -A OUTPUT -p tcp --sport 5016 -j ACCEPT && > iptables -A OUTPUT -p udp --sport 5016 -j ACCEPT && > iptables -A OUTPUT ! -o lo -j REJECT --reject-with icmp-host-prohibited > > When i issue iptables command on 1 node then it is fenced and forced to > reboot and cluster works fine. > But when i issue this on 2 of the controller nodes the resource bundles > fail and doesn't come back up. 
> > [root at overcloud-controller-1 ~]# pcs status > Cluster name: tripleo_cluster > Cluster Summary: > * Stack: corosync > * Current DC: overcloud-controller-1 (version 2.1.2-4.el8-ada5c3b36e2) - > partition WITHOUT quorum > * Last updated: Sat Oct 29 03:15:29 2022 > * Last change: Sat Oct 29 03:12:26 2022 by root via crm_resource on > overcloud-controller-1 > * 19 nodes configured > * 68 resource instances configured > > Node List: > * Node overcloud-controller-0: UNCLEAN (offline) > * Node overcloud-controller-2: UNCLEAN (offline) > * Online: [ overcloud-controller-1 ] > > Full List of Resources: > * ip-172.25.201.91 (ocf::heartbeat:IPaddr2): Started > overcloud-controller-0 (UNCLEAN) > * ip-172.25.201.150 (ocf::heartbeat:IPaddr2): Started > overcloud-controller-2 (UNCLEAN) > * ip-172.25.201.206 (ocf::heartbeat:IPaddr2): Stopped > * ip-172.25.201.250 (ocf::heartbeat:IPaddr2): Started > overcloud-controller-0 (UNCLEAN) > * ip-172.25.202.50 (ocf::heartbeat:IPaddr2): Stopped > * ip-172.25.202.90 (ocf::heartbeat:IPaddr2): Started > overcloud-controller-2 (UNCLEAN) > * Container bundle set: haproxy-bundle [ > 172.25.201.68:8787/tripleomaster/openstack-haproxy:pcmklatest]: > * haproxy-bundle-podman-0 (ocf::heartbeat:podman): Started > overcloud-controller-0 (UNCLEAN) > * haproxy-bundle-podman-1 (ocf::heartbeat:podman): Stopped > * haproxy-bundle-podman-2 (ocf::heartbeat:podman): Started > overcloud-controller-2 (UNCLEAN) > * haproxy-bundle-podman-3 (ocf::heartbeat:podman): Stopped > * Container bundle set: galera-bundle [ > 172.25.201.68:8787/tripleomaster/openstack-mariadb:pcmklatest]: > * galera-bundle-0 (ocf::heartbeat:galera): Stopped > overcloud-controller-0 (UNCLEAN) > * galera-bundle-1 (ocf::heartbeat:galera): Stopped > * galera-bundle-2 (ocf::heartbeat:galera): Stopped > overcloud-controller-2 (UNCLEAN) > * galera-bundle-3 (ocf::heartbeat:galera): Stopped > * Container bundle set: redis-bundle [ > 172.25.201.68:8787/tripleomaster/openstack-redis:pcmklatest]: > * redis-bundle-0 (ocf::heartbeat:redis): Stopped > * redis-bundle-1 (ocf::heartbeat:redis): Stopped > overcloud-controller-2 (UNCLEAN) > * redis-bundle-2 (ocf::heartbeat:redis): Stopped > overcloud-controller-0 (UNCLEAN) > * redis-bundle-3 (ocf::heartbeat:redis): Stopped > * Container bundle set: ovn-dbs-bundle [ > 172.25.201.68:8787/tripleomaster/openstack-ovn-northd:pcmklatest]: > * ovn-dbs-bundle-0 (ocf::ovn:ovndb-servers): Stopped > overcloud-controller-2 (UNCLEAN) > * ovn-dbs-bundle-1 (ocf::ovn:ovndb-servers): Stopped > overcloud-controller-0 (UNCLEAN) > * ovn-dbs-bundle-2 (ocf::ovn:ovndb-servers): Stopped > * ovn-dbs-bundle-3 (ocf::ovn:ovndb-servers): Stopped > * ip-172.25.201.208 (ocf::heartbeat:IPaddr2): Started > overcloud-controller-2 (UNCLEAN) > * Container bundle: openstack-cinder-backup [ > 172.25.201.68:8787/tripleomaster/openstack-cinder-backup:pcmklatest]: > * openstack-cinder-backup-podman-0 (ocf::heartbeat:podman): Started > overcloud-controller-0 (UNCLEAN) > * Container bundle: openstack-cinder-volume [ > 172.25.201.68:8787/tripleomaster/openstack-cinder-volume:pcmklatest]: > * openstack-cinder-volume-podman-0 (ocf::heartbeat:podman): Stopped > * Container bundle set: rabbitmq-bundle [ > 172.25.201.68:8787/tripleomaster/openstack-rabbitmq:pcmklatest]: > * rabbitmq-bundle-0 (ocf::heartbeat:rabbitmq-cluster): Stopped > overcloud-controller-2 (UNCLEAN) > * rabbitmq-bundle-1 (ocf::heartbeat:rabbitmq-cluster): Stopped > overcloud-controller-0 (UNCLEAN) > * rabbitmq-bundle-2 (ocf::heartbeat:rabbitmq-cluster): 
Stopped > * rabbitmq-bundle-3 (ocf::heartbeat:rabbitmq-cluster): Stopped > * ip-172.25.204.250 (ocf::heartbeat:IPaddr2): Started > overcloud-controller-0 (UNCLEAN) > * ceph-nfs (systemd:ceph-nfs at pacemaker): Started overcloud-controller-0 > (UNCLEAN) > * Container bundle: openstack-manila-share [ > 172.25.201.68:8787/tripleomaster/openstack-manila-share:pcmklatest]: > * openstack-manila-share-podman-0 (ocf::heartbeat:podman): Started > overcloud-controller-0 (UNCLEAN) > * stonith-fence_ipmilan-48d539a11820 (stonith:fence_ipmilan): Stopped > * stonith-fence_ipmilan-48d539a1188c (stonith:fence_ipmilan): Started > overcloud-controller-2 (UNCLEAN) > * stonith-fence_ipmilan-246e96349068 (stonith:fence_ipmilan): Started > overcloud-controller-2 (UNCLEAN) > * stonith-fence_ipmilan-246e96348d30 (stonith:fence_ipmilan): Stopped > > Daemon Status: > corosync: active/enabled > pacemaker: active/enabled > pcsd: active/enabled > > PCS requires more than half the nodes to be alive for the cluster to work. > To fix this step I issued a command:*pcs no-quorum-policy=ignore.* > > And now the PCS cluster keeps on running even when there is no quorum. > > Now the issue i have is the mariadb-bundle becomes slave and dosen't get > promoted to master. > > Can you please suggest a proper workaround when more than half nodes go > down and my cloud will be still running. > > > With regards, > > Swogat Pradhan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Tue Nov 1 12:58:45 2022 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 1 Nov 2022 13:58:45 +0100 Subject: [all] [tripleo] opendev Gerrit is down Message-ID: <53def861-d6c4-29bd-58c2-d6da085774db@redhat.com> Hello there, For those who didn't get the IRC notification earlier today: gerrit is dead. The Infra team is working on that issue, but it will apparently take some time. Thank you for your patience! Cheers, C. -- C?dric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From cjeanner at redhat.com Tue Nov 1 14:25:24 2022 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 1 Nov 2022 15:25:24 +0100 Subject: [all] [tripleo] opendev Gerrit is down In-Reply-To: <53def861-d6c4-29bd-58c2-d6da085774db@redhat.com> References: <53def861-d6c4-29bd-58c2-d6da085774db@redhat.com> Message-ID: <227e9ae6-f5be-68d1-8873-09b38849efa8@redhat.com> Hello there, Gerrit seems to be back online! Cheers, C. On 11/1/22 13:58, C?dric Jeanneret wrote: > Hello there, > > For those who didn't get the IRC notification earlier today: gerrit is > dead. > > The Infra team is working on that issue, but it will apparently take > some time. > > Thank you for your patience! > > Cheers, > > C. > -- C?dric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From rdhasman at redhat.com Tue Nov 1 15:29:13 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Tue, 1 Nov 2022 20:59:13 +0530 Subject: [cinder][PTG] 2023.1 Antelope PTG Summary Message-ID: Hello everyone, Here[1] is the summary of 2023.1 Antelope Cinder PTG conducted from 18th October to 21st October 2022, 1300-1700 UTC. The link to the etherpad and recordings are included. The action items are added in *conclusion* section of each topic. Let me know if any information needs to be updated. [1] https://wiki.openstack.org/wiki/CinderAntelopePTGSummary - Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Tue Nov 1 16:49:38 2022 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 1 Nov 2022 13:49:38 -0300 Subject: [CloudKitty] use of Monasca Message-ID: Hello guys, As discussed in the PTG [1], in October, we wanted to check with the community if there are people using CloudKitty with Monasca. This discussion was brought up during the PTG that Kolla-ansible is deprecating support to Monasca, and we wanted to check if others are using CloudKitty with Monasca. This integration is not being actively tested and maintained; therefore, we are considering the issue of a deprecation notice and further removal of the integration. What do you guys think? Are there people using CloudKitty with Monasca? [1] https://etherpad.opendev.org/p/oct2022-ptg-cloudkitty -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Tue Nov 1 17:14:36 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 1 Nov 2022 18:14:36 +0100 Subject: [Kolla-ansible][Xena] How to update an Openstack deployment with new containers? In-Reply-To: References: Message-ID: Anyone? some help please. Le mer. 26 oct. 2022 ? 11:16, wodel youchi a ?crit : > Hi, > > The documentation of kolla-ansible Xena does not talk about updating an > existing Xena deployment with new containers. > > Could you please help with this? > > I found some lines about that in Yoga version, saying that, to update an > existing deployment you have to : > 1 - Update kolla-ansible it self : > $ source xenavenv > (xenavenv) $ pip install --upgrade git+ > https://opendev.org/openstack/kolla-ansible at stable/xena > > 2 - Update the container Images with Docker pull > > 3 - Update my local registry if I am using one, and in my case I am, so I > deleted the registry images then I recreated them. > > 4 - Then finally deploy again > > (xenavenv) $ kolla-ansible -i multinode deploy > > Is this the right procedure? because I followed the same procedure and I > think it didn't change anything. > > For example I am taking nova-libvirt container as and example, in my local > registry I have this : > [root at rcdndeployer2 ~]# docker images | grep nova-libvirt > 192.168.2.34:4000/openstack.kolla/centos-source-nova-libvirt > xena 5be83d680102 31 hours ago 2. > 34GB > quay.io/openstack.kolla/centos-source-nova-libvirt > xena *5be83d680102* *31 > hours ago * 2. 
> 34GB > > [root at rcdndeployer2 ~]# docker inspect -f '{{ .Created }}' *5be83d680102 * > *2022-10-25*T02:33:13.172550584Z > > > > But in my compute nodes I have this : > root at computehci24 ~]# docker ps | grep nova-lib > b56a12bfd482 192.168.2.34:4000/openstack.kolla/centos-source-nova-libvirt:xena > "dumb-init --single-?" *5 months ag**o Up Up 5 months (healthy) > nova_libvirt* > > > Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Tue Nov 1 17:16:55 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 1 Nov 2022 18:16:55 +0100 Subject: [Kolla-ansible][Yoga] which version of fedora-core to use with Magnum? Message-ID: Hi, Could you point out which version of fedora-core to use with magnum? Is it the same as Xena : fedora-core 33? Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Tue Nov 1 17:33:39 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 1 Nov 2022 18:33:39 +0100 Subject: [Kolla-ansible][Yoga][Trove] Creation of instance fails Message-ID: Hi, The creation of the Trove instance fails, from my debugging I found this : - The db instance is not responding on its trove interface. - The controller (trove-management) can't reach it. I have a vlan network dedicated to the Trove instances, I have created a simple VM on that network and used the same security group used by the db instance. The simple VM responds to ping and ssh connections, the Trove instance does not. I added another interface to the db instance in another network, and the instance pings and responds to ssh connections on it!!! still I don't know how to access the db instance via ssh. I don't understand. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Tue Nov 1 19:12:02 2022 From: jay at gr-oss.io (Jay Faulkner) Date: Tue, 1 Nov 2022 12:12:02 -0700 Subject: [ironic][release] Bugfix branch status and cleanup w/r/t zuul-config-errors Message-ID: Hey all, I've been looking into the various zuul config errors showing up for Ironic-program branches. Almost all of our old bugfix branches are in the list. Additionally, not properly retiring the bugfix branches leads to an ever-growing list of branches which makes it a lot more difficult, for contributors and operators alike, to tell which ones are currently supported. I've put together a document describing the situation as it is now, and my proposal: https://etherpad.opendev.org/p/IronicBugfixBranchCleanup Essentially, I think we need to: - identify bugfix branches to cleanup (I've done this in the above etherpad, but some of the ) - clean them up (the next step) - update Ironic policy to set a regular cadence for when to retire bugfix branches, and encode the process for doing so This means there are two overall questions to answer in this email: 1) Mechanically, what's the process for doing this? I don't believe the existing release tooling will be useful for this, but I'm not 100% sure. I've pulled (in the above etherpad and a local spreadsheet) the last SHA for each branch; so we should be able to EOL these branches similarly to how we EOL stable branches; except manually instead of with tooling. Who is going to do this work? (I'd prefer releases team continue to hold the keys to do this; but I understand if you don't want to take on this manual work). 2) What's the pattern for Ironic to adopt regarding these branches? 
We just need to write down the expected lifecycle and enforce it -- so we prevent being this deep into "branch debt" in the future. What do folks think? - Jay Faulkner -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbaker at redhat.com Tue Nov 1 21:56:35 2022 From: sbaker at redhat.com (Steve Baker) Date: Wed, 2 Nov 2022 10:56:35 +1300 Subject: [ironic][stable] Proposing EOL of ironic project branches older than Wallaby In-Reply-To: References: Message-ID: On 12/10/22 05:53, Jay Faulkner wrote: > We discussed stable branches in the most recent ironic meeting > (https://meetings.opendev.org/meetings/ironic/2022/ironic.2022-10-10-15.01.log.txt). > The decision was made to do the following: > > EOL these branches: > - stable/queens > - stable/rocky > - stable/stein > > Reduce testing considerably on these branches, and only backport > critical bugfixes or security bugfixes: > - stable/train > - stable/ussuri > - stable/victoria > Just coming back to this, keeping stable/train jobs green has become untenable so I think its time we consider EOLing it. It is the extended-maintenance branch of interest to me, so I'd be fine with stable/ussuri and stable/victoria being EOLed also. > Our remaining branches will continue to get most eligible patches > backported to them. > > This email, plus earlier communications including a tweet, will serve > as notice that these branches are being EOL'd. > > Thanks, > Jay Faulkner > > On Tue, Oct 4, 2022 at 11:18 AM Jay Faulkner wrote: > > Hi all, > > Ironic has a large amount of stable branches still in EM. We need > to take action to ensure those branches are either retired or have > CI repaired to the point of being usable. > > Specifically, I'm looking at these branches across all Ironic > projects: > - stable/queens > - stable/rocky > - stable/stein > - stable/train > - stable/ussuri > - stable/victoria > > In lieu of any volunteers to maintain the CI, my recommendation > for all the branches listed above is that they be marked EOL. If > someone wants to volunteer to maintain CI for those branches, they > can propose one of the below paths be taken instead: > > 1 - Someone volunteers to maintain these branches, and also report > the status of CI of these older branches periodically on the > Ironic whiteboard and in Ironic meetings. If you feel strongly > that one of these branches needs to continue to be in service; > volunteering in this way is how to save them. > > 2 - We seriously reduce CI. Basically removing all tempest tests > to ensure that CI remains reliable and able to merge emergency or > security fixes when needed. In some cases; this still requires CI > fixes as some older inspector branches are failing *installing > packages* in unit tests. I would still like, in this case, that > someone volunteers to ensure the minimalist CI remains happy. > > My intention is to let this message serve as notice and a waiting > period; and if I've not heard any response here or in Monday's > Ironic meeting (in 6 days), I will begin taking action on retiring > these branches. > > This is simply a start; other branches (including bugfix branches) > are also in bad shape in CI, but getting these retired will > significantly reduce the surface area of projects and branches to > evaluate. > > I know it's painful to drop support for these branches; but we've > provided good EM support for these branches for a long time and by > pruning them away, we'll be able to save time to dedicate to other > items. 
> > Thanks, > Jay Faulkner > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Wed Nov 2 03:27:04 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Wed, 2 Nov 2022 08:57:04 +0530 Subject: (openstack-ansible) Container installation in openstack In-Reply-To: References: Message-ID: Hello Dmitry, I was looking for Zun installation in OpenStack Xena version using OpenStack Ansible. Regards Adivya Singh On Tue, Nov 1, 2022 at 12:06 AM Dmitriy Rabotyagov wrote: > Hi Adivya, > > Can you please elaborate more about what container service you are > thinking about? Is it Magnum or Zun or your question is more about how > to install all openstack services in containers? > > ??, 31 ???. 2022 ?. ? 19:34, Adivya Singh : > > > > Hi Team, > > > > Any input on this, to install container service in openstack using > ansible. > > > > standard global parametre > > > > Regards > > Adivya Singh > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Wed Nov 2 03:30:35 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Wed, 2 Nov 2022 09:00:35 +0530 Subject: (Openstack-Keystone)Regarding Authentication issue of one user while login to Open Stack using AD password Message-ID: Hi Team, There is one issue , where a user is getting " Authenticated Failure" all of a sudden, and this user is the only user who is facing this problem. I tried to disable and enable the project if, Check the logs but do not found anything related to Keystone authentication Delete the Project id and Create it again , Results are same , Any insights what i can do more to fix this issue Regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed... URL: From wchy1001 at gmail.com Wed Nov 2 08:41:56 2022 From: wchy1001 at gmail.com (W Ch) Date: Wed, 2 Nov 2022 16:41:56 +0800 Subject: [Kolla-ansible][Yoga][Trove] Creation of instance fails In-Reply-To: References: Message-ID: Hi, you can ssh into the instance via the other interface and check whether the management port had got the ip address. if you can't ssh into the instances, you can try to login into the instance via web console. thanks. wodel youchi ?2022?11?2??? 01:37??? > Hi, > > The creation of the Trove instance fails, from my debugging I found this : > - The db instance is not responding on its trove interface. > - The controller (trove-management) can't reach it. > > I have a vlan network dedicated to the Trove instances, I have created a > simple VM on that network and used the same security group used by the db > instance. The simple VM responds to ping and ssh connections, the Trove > instance does not. > I added another interface to the db instance in another network, and the > instance pings and responds to ssh connections on it!!! still I don't know > how to access the db instance via ssh. > > I don't understand. > > Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Wed Nov 2 09:06:21 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Wed, 2 Nov 2022 10:06:21 +0100 Subject: [Kolla-ansible][Yoga][Trove] Creation of instance fails In-Reply-To: References: Message-ID: Hi. Thanks for your help, I tried the master image and it is pingable and accepts ssh connection over the Trove network I couldn't wait for build of the instance to complete it was late but for now at least I can say that the yoga image has a problem. 
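In case it helps anyone else hitting the same Trove guest image problem, a quick way to check the guest side (a rough sketch only, assuming you can reach the instance over the extra interface you attached) is to log in and look at the guest agent unit and its journal:

  sudo systemctl status guest-agent
  sudo journalctl -u guest-agent --no-pager | tail -n 50
  ip -br addr    # confirm the Trove management port actually got an address

The unit name and log locations can differ between guest image builds, so treat these as starting points rather than exact paths.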
Regards On Tue, Nov 1, 2022, 18:33 wodel youchi wrote: > Hi, > > The creation of the Trove instance fails, from my debugging I found this : > - The db instance is not responding on its trove interface. > - The controller (trove-management) can't reach it. > > I have a vlan network dedicated to the Trove instances, I have created a > simple VM on that network and used the same security group used by the db > instance. The simple VM responds to ping and ssh connections, the Trove > instance does not. > I added another interface to the db instance in another network, and the > instance pings and responds to ssh connections on it!!! still I don't know > how to access the db instance via ssh. > > I don't understand. > > Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Nov 2 09:54:45 2022 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 2 Nov 2022 10:54:45 +0100 Subject: [ironic][release] Bugfix branch status and cleanup w/r/t zuul-config-errors In-Reply-To: References: Message-ID: Hi Jay, On Tue, Nov 1, 2022 at 8:17 PM Jay Faulkner wrote: > Hey all, > > I've been looking into the various zuul config errors showing up for > Ironic-program branches. Almost all of our old bugfix branches are in the > list. Additionally, not properly retiring the bugfix branches leads to an > ever-growing list of branches which makes it a lot more difficult, for > contributors and operators alike, to tell which ones are currently > supported. > I'd like to see the errors. We update Zuul configuration manually for each bugfix branch, mapping appropriate branches for other projects (devstack, nova, etc). It's possible that we always overlook a few jobs, which causes Zuul to be upset (but quietly upset, so we don't notice). > > I've put together a document describing the situation as it is now, and my > proposal: > https://etherpad.opendev.org/p/IronicBugfixBranchCleanup > Going with the "I would like to retire" would cause us so much trouble that we'll have to urgently create a downstream mirror of them. Once we do this, using upstream bugfix branches at all will be questionable. Especially bugfix/19.0 (and corresponding IPA/inspector branches) is used in a very actively maintained release. > > Essentially, I think we need to: > - identify bugfix branches to cleanup (I've done this in the above > etherpad, but some of the ) > - clean them up (the next step) > - update Ironic policy to set a regular cadence for when to retire bugfix > branches, and encode the process for doing so > > This means there are two overall questions to answer in this email: > 1) Mechanically, what's the process for doing this? I don't believe the > existing release tooling will be useful for this, but I'm not 100% sure. > I've pulled (in the above etherpad and a local spreadsheet) the last SHA > for each branch; so we should be able to EOL these branches similarly to > how we EOL stable branches; except manually instead of with tooling. Who is > going to do this work? (I'd prefer releases team continue to hold the keys > to do this; but I understand if you don't want to take on this manual work). > EOL tags will be created by the release team, yes. I don't think we can get the keys without going "independent". > > 2) What's the pattern for Ironic to adopt regarding these branches? We > just need to write down the expected lifecycle and enforce it -- so we > prevent being this deep into "branch debt" in the future. 
> With my vendor's (red) hat on, I'd prefer to have a dual approach: the newest branches are supported by the community (i.e. us all), the oldest - by vendors who need them (EOLed if nobody volunteers). I think you already have a list of branches that OCP uses? Feel free to point Riccardo, Iury or myself at any issues with them. Dmitry > > > What do folks think? > > - > Jay Faulkner > -- Red Hat GmbH , Registered seat: Werner von Siemens Ring 12, D-85630 Grasbrunn, Germany Commercial register: Amtsgericht Muenchen/Munich, HRB 153243,Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Wed Nov 2 10:12:17 2022 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Wed, 2 Nov 2022 11:12:17 +0100 Subject: [oslo] New driver for oslo.messaging In-Reply-To: <4eddcca5.3347.18432712271.Coremail.wangkuntian1994@163.com> References: <4eddcca5.3347.18432712271.Coremail.wangkuntian1994@163.com> Message-ID: On 01/11/2022 10:06, ??? wrote: > I want to develop a new driver for oslo.messaging to use rocketmq in > openstack environment. I wonder if the community need this new driver? There is a larger discussion around adding a driver for NATS (https://lists.openstack.org/pipermail/openstack-discuss/2022-August/030179.html). Maybe the reasoning, discussion and also the PoC there is helpful to answer your question. I suppose you are also "not happy" with using RabbitMQ? Regards Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Wed Nov 2 12:41:02 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Wed, 2 Nov 2022 18:11:02 +0530 Subject: (Open Stack Designate) Query regarding bind containers in Openstack Message-ID: Hi Team, I have a query, where I have installed the Designate Container and configured the name server accordingly. My question is related to Designate Container, Do i need to install the bind services independently in each designate container of all the 3 controllers Regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Nov 2 13:50:32 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 2 Nov 2022 13:50:32 +0000 Subject: No bug meeting today Message-ID: -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Wed Nov 2 14:02:02 2022 From: jay at gr-oss.io (Jay Faulkner) Date: Wed, 2 Nov 2022 07:02:02 -0700 Subject: [ironic][release] Bugfix branch status and cleanup w/r/t zuul-config-errors In-Reply-To: References: Message-ID: On Wed, Nov 2, 2022 at 3:04 AM Dmitry Tantsur wrote: > Hi Jay, > > On Tue, Nov 1, 2022 at 8:17 PM Jay Faulkner wrote: > >> Hey all, >> >> I've been looking into the various zuul config errors showing up for >> Ironic-program branches. Almost all of our old bugfix branches are in the >> list. Additionally, not properly retiring the bugfix branches leads to an >> ever-growing list of branches which makes it a lot more difficult, for >> contributors and operators alike, to tell which ones are currently >> supported. >> > > I'd like to see the errors. We update Zuul configuration manually for each > bugfix branch, mapping appropriate branches for other projects (devstack, > nova, etc). 
It's possible that we always overlook a few jobs, which causes > Zuul to be upset (but quietly upset, so we don't notice). > > The errors show up in https://zuul.opendev.org/t/openstack/config-errors -- although they seem to be broken this morning. Most of them are older bugfix branches, ones that are out of support, that have the `Queue: Ironic` param that's no longer supported. I am not in favor of anyone going to dead bugfix branches and fixing CI; instead we should retire the ones out of use. > >> I've put together a document describing the situation as it is now, and >> my proposal: >> https://etherpad.opendev.org/p/IronicBugfixBranchCleanup >> > > Going with the "I would like to retire" would cause us so much trouble > that we'll have to urgently create a downstream mirror of them. Once we do > this, using upstream bugfix branches at all will be questionable. > Especially bugfix/19.0 (and corresponding IPA/inspector branches) is used > in a very actively maintained release. > > Then we won't; but we do need to think about what timeline we can talk about upstream for getting a cadence for getting these retired out, just like we have a cadence for getting them cut every two months. I'll revise the list and remove the "I would like to retire" section (move it to keep-em-up). > >> Essentially, I think we need to: >> - identify bugfix branches to cleanup (I've done this in the above >> etherpad, but some of the ) >> - clean them up (the next step) >> - update Ironic policy to set a regular cadence for when to retire bugfix >> branches, and encode the process for doing so >> >> This means there are two overall questions to answer in this email: >> 1) Mechanically, what's the process for doing this? I don't believe the >> existing release tooling will be useful for this, but I'm not 100% sure. >> I've pulled (in the above etherpad and a local spreadsheet) the last SHA >> for each branch; so we should be able to EOL these branches similarly to >> how we EOL stable branches; except manually instead of with tooling. Who is >> going to do this work? (I'd prefer releases team continue to hold the keys >> to do this; but I understand if you don't want to take on this manual work). >> > > EOL tags will be created by the release team, yes. I don't think we can > get the keys without going "independent". > > It's a gerrit ACL you can enable to give other people access to tags; but like I said, I don't want that access anyway :). > >> 2) What's the pattern for Ironic to adopt regarding these branches? We >> just need to write down the expected lifecycle and enforce it -- so we >> prevent being this deep into "branch debt" in the future. >> > > With my vendor's (red) hat on, I'd prefer to have a dual approach: the > newest branches are supported by the community (i.e. us all), the oldest - > by vendors who need them (EOLed if nobody volunteers). I think you already > have a list of branches that OCP uses? Feel free to point Riccardo, Iury or > myself at any issues with them. > > That's not really an option IMO. These branches exist in the upstream community, and are seen by upstream contributors and operators. If they're going to live here; they need to have some reasonable documentation about what folks should expect out of them and efforts being put towards them. Even if the documentation is "bugfix/1.2 is maintained as long as Product A 1.2 is maintained", that's better than leaving the community guessing about what these are used for, and why some are more-supported than others. 
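For anyone who wants to double-check the last-commit data in that etherpad, plain git is enough (a sketch only; bugfix/19.0 below is just an example branch name):

  git fetch origin
  git log -1 --format='%H %cI' origin/bugfix/19.0

The eventual retirement would presumably mirror the stable-branch flow, i.e. an *-eol tag created by the release team on that final SHA, followed by deletion of the branch.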
-Jay Dmitry > > >> >> >> What do folks think? >> >> - >> Jay Faulkner >> > > > -- > > Red Hat GmbH , Registered seat: Werner von Siemens Ring 12, D-85630 Grasbrunn, Germany > Commercial register: Amtsgericht Muenchen/Munich, HRB 153243,Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Wed Nov 2 14:27:02 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 2 Nov 2022 15:27:02 +0100 Subject: (Open Stack Designate) Query regarding bind containers in Openstack In-Reply-To: References: Message-ID: Hi, If you're talking about DNS servers that should be defined under pools.yaml (via variable designate_pools_yaml) then yes - they should be installed additionally. Eventually, Designate does support more backends then just Bind9 [1] and they're usually deployed outside of the designate containers or even OpenStack control nodes. So as OpenStack-Ansible we don't provide a roles/playbooks to install this kind of infrastructure. But there're quite a lot of good roles on galaxy.ansible.com that provide bind9/pdns4/etc installation. [1] https://docs.openstack.org/designate/latest/admin/backends/index.html ??, 2 ????. 2022 ?., 13:43 Adivya Singh : > Hi Team, > > I have a query, where I have installed the Designate Container and > configured the name server accordingly. > > My question is related to Designate Container, Do i need to install the > bind services independently in each designate container of all the 3 > controllers > > Regards > Adivya Singh > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Wed Nov 2 14:30:46 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 2 Nov 2022 15:30:46 +0100 Subject: (openstack-ansible) Container installation in openstack In-Reply-To: References: Message-ID: Ok, so for that in openstack_user_config.yml you will need to define following groups: * zun-infra_hosts - usually these are infra nodes as there only api/proxy services will be located * zun-compute_hosts - these hosts will be used by zun-compute and kuryr for spawning containers. So usually it's a standalone hardware, but maybe it can be co-located with nova-compute, I'm not absolutely sure about that tbh. ??, 2 ????. 2022 ?. ? 04:27, Adivya Singh : > > Hello Dmitry, > > I was looking for Zun installation in OpenStack Xena version using OpenStack Ansible. > > Regards > Adivya Singh > > On Tue, Nov 1, 2022 at 12:06 AM Dmitriy Rabotyagov wrote: >> >> Hi Adivya, >> >> Can you please elaborate more about what container service you are >> thinking about? Is it Magnum or Zun or your question is more about how >> to install all openstack services in containers? >> >> ??, 31 ???. 2022 ?. ? 19:34, Adivya Singh : >> > >> > Hi Team, >> > >> > Any input on this, to install container service in openstack using ansible. >> > >> > standard global parametre >> > >> > Regards >> > Adivya Singh From fungi at yuggoth.org Wed Nov 2 15:42:31 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 2 Nov 2022 15:42:31 +0000 Subject: [security-sig] Cancelling this month's meeting Message-ID: <20221102154231.b2jojvwamldxxzyq@yuggoth.org> I'm on vacation this week and don't expect to be around a computer tomorrow in order to chair the meeting, so am cancelling it for November. 
If anyone had topics they wanted to bring up, please just raise them here on the ML or in the #openstack-security channel on OFTC and I'll try to follow up there as appropriate. Additionally, we discussed at the PTG that the standing meeting time is inconvenient for multiple participants, so I'll be following up with a poll to work out a more consensual schedule. Thanks! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From josh at openinfra.dev Wed Nov 2 20:25:36 2022 From: josh at openinfra.dev (josh at openinfra.dev) Date: Wed, 2 Nov 2022 16:25:36 -0400 (EDT) Subject: [tc][ptls] OpenStack User Survey Update Message-ID: <1667420736.47123119@apps.rackspace.com> Hello everyone, Thank you all for submitting updates to the OpenStack User Survey. We have made adjustments to the survey, and published them as the OpenStack User Survey 2023, which you can find here: [ https://www.openstack.org/user-survey/survey-2023/ ]( https://www.openstack.org/user-survey/survey-2023/ ) If you have any additional adjustments or additions, please don't hesitate to let me know. Thank you! Josh Lohse josh at openinfra.dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Nov 2 22:30:41 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 02 Nov 2022 15:30:41 -0700 Subject: [all][tc] Technical Committee next weekly meeting on 2022 Nov 3 at 1500 UTC Message-ID: <1843a77d113.ea828066449630.1672703904068259218@ghanshyammann.com> Hello Everyone, Below is the agenda for Tomorrow's TC meeting scheduled at 1500 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting * Roll call * Follow up on past action items * Gate health check * 2023.1 TC tracker ** https://etherpad.opendev.org/p/tc-2023.1-tracker * TC questions for the 2023 user survey ** https://etherpad.opendev.org/p/tc-2023-user-survey-questions ** Deadline: Oct 30 ** TC chair election process * TC weekly meeting time ** https://framadate.org/xR6HoeDpdXXfiueb * Recurring tasks check ** Bare 'recheck' state ** https://etherpad.opendev.org/p/recheck-weekly-summary * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann From wodel.youchi at gmail.com Thu Nov 3 09:54:21 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Thu, 3 Nov 2022 10:54:21 +0100 Subject: [Kolla-ansible][Yoga][Trove] Creation of instance fails In-Reply-To: References: Message-ID: Hi, Using the master image did create a VM that responds on the Trove network and accepts ssh logins, but didn't complete the build of the database, I got timeout on pulling the docker image. I tested my connection with curl within the VM and it works fine. I will test with the master build image to see if this will work. Regards. Le mer. 2 nov. 2022 ? 10:06, wodel youchi a ?crit : > Hi. > > Thanks for your help, I tried the master image and it is pingable and > accepts ssh connection over the Trove network I couldn't wait for build of > the instance to complete it was late but for now at least I can say that > the yoga image has a problem. > > Regards > > On Tue, Nov 1, 2022, 18:33 wodel youchi wrote: > >> Hi, >> >> The creation of the Trove instance fails, from my debugging I found this : >> - The db instance is not responding on its trove interface. >> - The controller (trove-management) can't reach it. 
>> >> I have a vlan network dedicated to the Trove instances, I have created a >> simple VM on that network and used the same security group used by the db >> instance. The simple VM responds to ping and ssh connections, the Trove >> instance does not. >> I added another interface to the db instance in another network, and the >> instance pings and responds to ssh connections on it!!! still I don't know >> how to access the db instance via ssh. >> >> I don't understand. >> >> Regards. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Thu Nov 3 10:28:39 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Thu, 3 Nov 2022 11:28:39 +0100 Subject: [Kolla-ansible][Yoga][Trove] Creation of instance fails In-Reply-To: References: Message-ID: Hi again, I have this on trove_api : /var/lib/kolla/venv/lib/python3.6/site-packages/oslo_db/sqlalchemy/enginefacade.py:342: NotSupportedWarning: Configuration option(s) ['idle_timeout'] not supported warning.NotSupportedWarning and I have this on the database instance : -- The unit guest-agent.service has entered the 'failed' state with result 'resources'. Nov 03 10:24:42 mydb01dev systemd[1]: Failed to start OpenStack Trove Guest Agent Service for Development. -- Subject: A start job for unit guest-agent.service has failed -- Defined-By: systemd -- Support: http://www.ubuntu.com/support -- -- A start job for unit guest-agent.service has finished with a failure. -- -- The job identifier is 1053 and the job result is failed. No much to work with to find out the why the agent doesn't start. Regards. Le jeu. 3 nov. 2022 ? 10:54, wodel youchi a ?crit : > Hi, > > Using the master image did create a VM that responds on the Trove network > and accepts ssh logins, but didn't complete the build of the database, I > got timeout on pulling the docker image. I tested my connection with curl > within the VM and it works fine. > > I will test with the master build image to see if this will work. > > Regards. > > Le mer. 2 nov. 2022 ? 10:06, wodel youchi a > ?crit : > >> Hi. >> >> Thanks for your help, I tried the master image and it is pingable and >> accepts ssh connection over the Trove network I couldn't wait for build of >> the instance to complete it was late but for now at least I can say that >> the yoga image has a problem. >> >> Regards >> >> On Tue, Nov 1, 2022, 18:33 wodel youchi wrote: >> >>> Hi, >>> >>> The creation of the Trove instance fails, from my debugging I found this >>> : >>> - The db instance is not responding on its trove interface. >>> - The controller (trove-management) can't reach it. >>> >>> I have a vlan network dedicated to the Trove instances, I have created a >>> simple VM on that network and used the same security group used by the db >>> instance. The simple VM responds to ping and ssh connections, the Trove >>> instance does not. >>> I added another interface to the db instance in another network, and the >>> instance pings and responds to ssh connections on it!!! still I don't know >>> how to access the db instance via ssh. >>> >>> I don't understand. >>> >>> Regards. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From hjensas at redhat.com Thu Nov 3 15:00:08 2022
From: hjensas at redhat.com (Harald Jensas)
Date: Thu, 3 Nov 2022 16:00:08 +0100
Subject: Cluster fails when 2 controller nodes become down simultaneously | tripleo wallaby
In-Reply-To:
References:
Message-ID: <8287ab18-63cb-f730-054e-aeffaf12038b@redhat.com>

On 11/1/22 11:01, Swogat Pradhan wrote:
> Hi,
> Updating the subject.
>
> On Tue, Nov 1, 2022 at 12:26 PM Swogat Pradhan wrote:
>
>     I have configured a 3 node pcs cluster for openstack.
>     To test the HA, i issue the following commands:
>     iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT &&
>     iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT &&
>     iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 5016 -j ACCEPT &&
>     iptables -A INPUT -p udp -m state --state NEW -m udp --dport 5016 -j ACCEPT &&
>     iptables -A INPUT ! -i lo -j REJECT --reject-with icmp-host-prohibited &&
>     iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT &&
>     iptables -A OUTPUT -p tcp --sport 5016 -j ACCEPT &&
>     iptables -A OUTPUT -p udp --sport 5016 -j ACCEPT &&
>     iptables -A OUTPUT ! -o lo -j REJECT --reject-with icmp-host-prohibited
>
>     When i issue iptables command on 1 node then it is fenced and forced
>     to reboot and cluster works fine.
>     But when i issue this on 2 of the controller nodes the resource
>     bundles fail and doesn't come back up.

This is expected behavior. In a cluster you need a majority quorum to be
able to make the decision to fence a failing node, and to keep services
running on the nodes that hold the majority quorum.

When you disconnect two nodes from the cluster with firewall rules, none
of the 3 nodes can talk to any other node, i.e. they are all isolated,
with no knowledge of the status of the 2 peer cluster nodes. Each node
can only assume it is the only node that has been isolated and that the
two other nodes are operational. To ensure data integrity, any isolated
node should stop its services immediately.

Imagine if all three nodes, isolated from each other, were still
available to the load balancer: requests would come in, and each node
would continue to service requests and write data. With each node
servicing ~1/3 of the requests, the result would be inconsistent data
stores on all three nodes, a situation that would be practically
impossible to recover from.

--
Harald

From wodel.youchi at gmail.com Thu Nov 3 18:05:02 2022
From: wodel.youchi at gmail.com (wodel youchi)
Date: Thu, 3 Nov 2022 19:05:02 +0100
Subject: [Yoga][Cloudkitty] Some projects have their rate to 0 on some services
Message-ID:

Hi,

I deployed Cloudkitty with the following metrics.yml file :

metrics:
  cpu:
    unit: instance
    alt_name: instance
    groupby:
      - id
      - user_id
      - project_id
    metadata:
      - flavor_name
      - flavor_id
      - vcpus
    mutate: NUMBOOL
    extra_args:
      aggregation_method: mean
      resource_type: instance
      force_granularity: 300
  image.size:
    unit: MiB
    factor: 1/1048576
    groupby:
      - id
      - user_id
      - project_id
    metadata:
      - container_format
      - disk_format
    extra_args:
      aggregation_method: mean
      resource_type: image
      force_granularity: 300
  volume.size:
    unit: GiB
    groupby:
      - id
      - user_id
      - project_id
    metadata:
      - volume_type
    extra_args:
      aggregation_method: mean
      resource_type: volume
      force_granularity: 300

I created a service for volume.size following the example here :
https://docs.openstack.org/cloudkitty/yoga/user/rating/hashmap.html

I added the user cloudkitty to the admin project and to another project
named Project01. When showing the rates I have 0 rate on the Project01.
For example : executing this command : * openstack rating dataframes get | grep volume.size* This volume belongs to an instance in the Admin project, as you can see rating is 4.5: | 2022-11-02T18:25:00 | 2022-11-02T18:30:00 | 31bfb5bcf7b7413da269d7a35a2fe69a |* [{'rating': '4.5', 'service': 'volume.size*', 'desc': {'volume_type': '246853e3-1215-4147-aef2-54012221ecc9', 'id': ' *07811807-474a-4eb5-91b5-ce2dcdd7be26*', 'project_id': '31bfb5bcf7b7413da269d7a35a2fe69a', 'user_id': '2a3f2478e334473e85527102b76f7a2e'}, 'volume': '3.0', 'rate_value': '1.5000'}, {'rating': '4.5', 'service': 'volume.size', 'desc': {'volume_type': '246853e3-1215-4147-aef2-54012221ecc9', 'id': '8a345711-0486-4733-b8bc-fd1966678aec', 'project_id': '31bfb5bcf7b7413da269d7a35a2fe69a', 'user_id': '2a3f2478e334473e85527102b76f7a2e'}, 'volume': '3.0', 'rate_value': '1.5000'}, {'rating': '4.5', 'service': 'volume.size', 'desc': {'volume_type': '246853e3-1215-4147-aef2-54012221ecc9', 'id': 'afd22819-8faa-47ee-8c09-75290d2cf18e', 'project_id': '31bfb5bcf7b7413da269d7a35a2fe69a', 'user_id': '2a3f2478e334473e85527102b76f7a2e'}, 'volume': '3.0', 'rate_value': '1.5000'}] This volume belongs to an instance in the Project01 project, as you can see rating is 0.0 : | 2022-11-03T10:35:00 | 2022-11-03T10:40:00 | 2e80eb3b3d344ef9993065ce689395d9 | *[{'rating': '0.0'*, 'service': 'volume.size', 'desc': {'volume_type': '246853e3-1215-4147-aef2-54012221ecc9', 'id': ' *1c396d46-8954-4e8c-b3e8-8e5e4eb6aba4*', 'project_id': '2e80eb3b3d344ef9993065ce689395d9', 'user_id': 'd9e5696e99954ae1ac87db9cca82c839'}, 'volume': '20.0', 'rate_value': '0.0000'}] I don't understand why it works for one and not the other? More info : (yogavenv) [deployer at rscdeployer ~]$ openstack rating hashmap service list +------------------------+--------------------------------------+ | Name | Service ID | +------------------------+--------------------------------------+ | instance | 06e17b49-8cd4-4cb9-8965-cb929ee12909 | | network.incoming.bytes | 634069b2-ca42-4a28-8778-ac69144fcc23 | | network.outgoing.bytes | 6c1fdaa7-15cb-41b4-be0e-109d64810dde | | volume.size | b6934ab1-8326-4281-89b9-f80294430321 | | image.size | d3652e08-8645-45fd-b7db-b710ae716876 | +------------------------+--------------------------------------+ (yogavenv) [deployer at rscdeployer ~]$ openstack rating hashmap mapping list -s b6934ab1-8326-4281-89b9-f80294430321 +--------------------------------------+-------+--------------------------------+------+----------+--------------------------------------+----------+------------+ | Mapping ID | Value | Cost | Type | Field ID | Service ID |* Group ID | Project ID* | +--------------------------------------+-------+--------------------------------+------+----------+--------------------------------------+----------+------------+ | f81aea1e-0651-4c0a-b043-496fdd892635 | None | 1.5000000000000000000000000000 | flat | None | b6934ab1-8326-4281-89b9-f80294430321 | *None | None * | +--------------------------------------+-------+--------------------------------+------+----------+--------------------------------------+----------+------------+ Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From west.andrew.pro at gmail.com Thu Nov 3 18:11:38 2022 From: west.andrew.pro at gmail.com (Andrew West) Date: Thu, 3 Nov 2022 19:11:38 +0100 Subject: Neutron : Routed Provider Networks (no subnets declared in compute placement API: consequences?) 
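(A minimal way to do that, assuming the usual /etc/keystone/keystone.conf layout, is to set, under [DEFAULT]:

  debug = True
  insecure_debug = True

and restart keystone/Apache. insecure_debug makes keystone report the real reason for an authentication failure instead of a generic message, so only leave it enabled while troubleshooting.)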
Message-ID: Hi neutron experts Have a cloud built with multiple network segments (RPN) . (kolla-ansible, openstack ussuri), things are running OK (on the network level): networks have DHCP agents ok (*os network agent list --agent-type dhcp --network $networkID* ) all network segments are listed in Host Aggregates BUT if I run through all the existing segments , NONE have a (compute service) inventory declared for each segment IPv4 subnet i.e *os resource provider inventory list $segmentID * returns no output. (official doc on RPN says this should exist) What feature may not function if this inventory is missing ? I don't quite understand what role this IPv4 subnet compute service inventory plays here (during placement? port declaration ?) thanks Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From kkchn.in at gmail.com Fri Nov 4 04:18:19 2022 From: kkchn.in at gmail.com (KK CHN) Date: Fri, 4 Nov 2022 09:48:19 +0530 Subject: Infrastructure monitoring Message-ID: List,, Looking for an Infrastructure monitoring tool which is in the open source domain without any proprietary license involved for production use. Need to monitor Switches, UTMs/Firewalls, Physical servers, Virtual Machines/Virtual resources, Storage /Storage servers, Applications/daemons running status etc. Complete Infrastructure monitoring and alerts in advance for rectifying possible issues/outages. Regards, Krish -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Fri Nov 4 08:29:22 2022 From: eblock at nde.ag (Eugen Block) Date: Fri, 04 Nov 2022 08:29:22 +0000 Subject: (Openstack-Keystone)Regarding Authentication issue of one user while login to Open Stack using AD password In-Reply-To: Message-ID: <20221104082922.Horde.Za8eX6p6eb8iA9Cj-IDrSM7@webmail.nde.ag> I assume this isn't the only user trying to login from AD, correct? Then compare the properties/settings between a working and the non-working user, you should probably find something. Also enable debug logs in keystone to find more details. And by "all of a sudden" you mean that it worked before? So what changed between then and now? Zitat von Adivya Singh : > Hi Team, > > There is one issue , where a user is getting " Authenticated Failure" all > of a sudden, and this user is the only user who is facing this problem. > > I tried to disable and enable the project if, Check the logs but do not > found anything related to Keystone authentication > > Delete the Project id and Create it again , Results are same , Any insights > what i can do more to fix this issue > > Regards > Adivya Singh From adivya1.singh at gmail.com Fri Nov 4 12:43:00 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Fri, 4 Nov 2022 18:13:00 +0530 Subject: (Openstack-Keystone)Regarding Authentication issue of one user while login to Open Stack using AD password In-Reply-To: <20221104082922.Horde.Za8eX6p6eb8iA9Cj-IDrSM7@webmail.nde.ag> References: <20221104082922.Horde.Za8eX6p6eb8iA9Cj-IDrSM7@webmail.nde.ag> Message-ID: Hi Eugen, All the users are AD based authentication, but this user only facing a problem Trying to Find out the AD Team , what happened all of a sudden for this user Regards Adivya Singh R On Fri, Nov 4, 2022 at 2:06 PM Eugen Block wrote: > I assume this isn't the only user trying to login from AD, correct? > Then compare the properties/settings between a working and the > non-working user, you should probably find something. Also enable > debug logs in keystone to find more details. 
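For example, a minimal way to do that - assuming the stock oslo.log options in keystone.conf and that keystone runs under Apache mod_wsgi (adjust the service name and log path to your deployment) - would be:

    # /etc/keystone/keystone.conf
    [DEFAULT]
    debug = True

    # restart whatever serves keystone, then watch the log while the
    # affected user retries the login:
    systemctl restart apache2      # or httpd / the uwsgi unit, as applicable
    tail -f /var/log/keystone/keystone.log   # path varies by distro/deployment
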
And by "all of a sudden" > you mean that it worked before? So what changed between then and now? > > Zitat von Adivya Singh : > > > Hi Team, > > > > There is one issue , where a user is getting " Authenticated Failure" all > > of a sudden, and this user is the only user who is facing this problem. > > > > I tried to disable and enable the project if, Check the logs but do not > > found anything related to Keystone authentication > > > > Delete the Project id and Create it again , Results are same , Any > insights > > what i can do more to fix this issue > > > > Regards > > Adivya Singh > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Fri Nov 4 13:00:51 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Fri, 4 Nov 2022 18:30:51 +0530 Subject: (openstack-horizon) Message-ID: hi Team, I'm facing one issue after the actual outage happened in my openstack env using horizon, The horizon overview page does not happen. i can get in to Stack page directly, But not the overview it just get timed out i restarted the mem cache services to no avail Is there anything else i can try Regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Nov 4 13:02:34 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 4 Nov 2022 14:02:34 +0100 Subject: [neutron] Drivers meeting cancelled Message-ID: Hello Neutrinos: Today's meeting is cancelled due to the lack of agenda. See you next week. Have a nice weekend. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-francois.taltavull at elca.ch Fri Nov 4 13:47:10 2022 From: jean-francois.taltavull at elca.ch (=?iso-8859-1?Q?Taltavull_Jean-Fran=E7ois?=) Date: Fri, 4 Nov 2022 13:47:10 +0000 Subject: [openstack-ansible] Designate: rndc config file not generated Message-ID: <237889f3239b475da5700ab5d2e4ef73@elca.ch> Hello, I'm deploying Designate on OpenStack Wallaby/Ubuntu 20.04 with DNS servers located outside the OpenStack platform. After running 'os-designate-install.yml' playbook, 'bind9-utils' package is correctly installed but I can't find rndc config file anywhere inside the lxc container. This prevents rndc from running well and communicating with the DNS servers. Any idea ? Regards, Jean-Francois From adivya1.singh at gmail.com Fri Nov 4 17:35:43 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Fri, 4 Nov 2022 23:05:43 +0530 Subject: (Openstack-Keystone)Regarding Authentication issue of one user while login to Open Stack using AD password In-Reply-To: References: <20221104082922.Horde.Za8eX6p6eb8iA9Cj-IDrSM7@webmail.nde.ag> Message-ID: Hi Eugen, I see the below error while authenticating Conflict occurred attempting to store nonlocal_user - Duplicate entry found with name at domain ID How can we fix this? Regards Adivya Singh On Fri, Nov 4, 2022 at 6:13 PM Adivya Singh wrote: > Hi Eugen, > > All the users are AD based authentication, but this user only facing a > problem > Trying to Find out the AD Team , what happened all of a sudden for this > user > > Regards > Adivya Singh > > R > > > On Fri, Nov 4, 2022 at 2:06 PM Eugen Block wrote: > >> I assume this isn't the only user trying to login from AD, correct? >> Then compare the properties/settings between a working and the >> non-working user, you should probably find something. Also enable >> debug logs in keystone to find more details. 
And by "all of a sudden" >> you mean that it worked before? So what changed between then and now? >> >> Zitat von Adivya Singh : >> >> > Hi Team, >> > >> > There is one issue , where a user is getting " Authenticated Failure" >> all >> > of a sudden, and this user is the only user who is facing this problem. >> > >> > I tried to disable and enable the project if, Check the logs but do not >> > found anything related to Keystone authentication >> > >> > Delete the Project id and Create it again , Results are same , Any >> insights >> > what i can do more to fix this issue >> > >> > Regards >> > Adivya Singh >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Fri Nov 4 19:02:59 2022 From: eblock at nde.ag (Eugen Block) Date: Fri, 04 Nov 2022 19:02:59 +0000 Subject: (Openstack-Keystone)Regarding Authentication issue of one user while login to Open Stack using AD password In-Reply-To: References: <20221104082922.Horde.Za8eX6p6eb8iA9Cj-IDrSM7@webmail.nde.ag> Message-ID: <20221104190259.Horde.cMQ3VNeIOR8HwwUU0R9icD6@webmail.nde.ag> I know nothing about AD, I?m afraid. But where exactly do you see that message? Is it in keystone or AD? Anyway, you seem to have a duplicate entry (somewhere), so check the keystone database and the AD entries and compare (with working users). Zitat von Adivya Singh : > Hi Eugen, > > I see the below error while authenticating > Conflict occurred attempting to store nonlocal_user - Duplicate entry found > with name at domain ID > > How can we fix this? > > Regards > Adivya Singh > > On Fri, Nov 4, 2022 at 6:13 PM Adivya Singh wrote: > >> Hi Eugen, >> >> All the users are AD based authentication, but this user only facing a >> problem >> Trying to Find out the AD Team , what happened all of a sudden for this >> user >> >> Regards >> Adivya Singh >> >> R >> >> >> On Fri, Nov 4, 2022 at 2:06 PM Eugen Block wrote: >> >>> I assume this isn't the only user trying to login from AD, correct? >>> Then compare the properties/settings between a working and the >>> non-working user, you should probably find something. Also enable >>> debug logs in keystone to find more details. And by "all of a sudden" >>> you mean that it worked before? So what changed between then and now? >>> >>> Zitat von Adivya Singh : >>> >>> > Hi Team, >>> > >>> > There is one issue , where a user is getting " Authenticated Failure" >>> all >>> > of a sudden, and this user is the only user who is facing this problem. >>> > >>> > I tried to disable and enable the project if, Check the logs but do not >>> > found anything related to Keystone authentication >>> > >>> > Delete the Project id and Create it again , Results are same , Any >>> insights >>> > what i can do more to fix this issue >>> > >>> > Regards >>> > Adivya Singh >>> >>> >>> >>> >>> From james.denton at rackspace.com Fri Nov 4 19:21:49 2022 From: james.denton at rackspace.com (James Denton) Date: Fri, 4 Nov 2022 19:21:49 +0000 Subject: [openstack-ansible] Designate: rndc config file not generated In-Reply-To: <237889f3239b475da5700ab5d2e4ef73@elca.ch> References: <237889f3239b475da5700ab5d2e4ef73@elca.ch> Message-ID: Hello Jean-Francois, When I did this recently, I seem to recall generating the RNDC key and conf on the BIND server(s) and copying those over to the Designate hosts (controller nodes, in my case). But looking at the playbook variables, it looks like there is a ?designate_rndc_keys? var that you can define to have it create the keys in the specified location. 
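Something along these lines should work - the rndc-confgen invocation is standard BIND, but the exact field names under designate_rndc_keys below are only a sketch from memory, so double-check them against the os_designate role defaults (defaults/main.yml) before relying on them:

    # on the BIND server (or any host with bind9-utils), generate a key once:
    rndc-confgen -a -k designate -c /etc/designate/rndc.key

    # /etc/openstack_deploy/user_variables.yml
    # (field names below are an assumption, not verified against the role)
    designate_rndc_keys:
      - name: designate
        file: /etc/designate/rndc.key
        algorithm: hmac-sha256
        secret: "<secret taken from the key file generated above>"
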
Have you tried that? Regards, James Denton Rackspace Private Cloud From: Taltavull Jean-Fran?ois Date: Friday, November 4, 2022 at 8:55 AM To: openstack-discuss Subject: [openstack-ansible] Designate: rndc config file not generated CAUTION: This message originated externally, please use caution when clicking on links or opening attachments! Hello, I'm deploying Designate on OpenStack Wallaby/Ubuntu 20.04 with DNS servers located outside the OpenStack platform. After running 'os-designate-install.yml' playbook, 'bind9-utils' package is correctly installed but I can't find rndc config file anywhere inside the lxc container. This prevents rndc from running well and communicating with the DNS servers. Any idea ? Regards, Jean-Francois -------------- next part -------------- An HTML attachment was scrubbed... URL: From rishat.azizov at gmail.com Thu Nov 3 14:27:22 2022 From: rishat.azizov at gmail.com (=?UTF-8?B?0KDQuNGI0LDRgiDQkNC30LjQt9C+0LI=?=) Date: Thu, 3 Nov 2022 20:27:22 +0600 Subject: [swift] Periodically crashes of proxy-server Message-ID: Hello! Periodically we get "UNCAUGHT EXCEPTION#012Traceback" errors on swift-proxy, log attached to this email. After that the swift-proxy process crashes, clients get 502 errors. Could you please help with this? Swift version - yoga 2.29.1. Thank you. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- proxy-server[2656473]: STDERR: Traceback (most recent call last): proxy-server[2656473]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/hubs/poll.py", line 111, in wait#012 listener.cb(fileno) proxy-server[2656473]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/greenthread.py", line 221, in main#012 result = function(*args, **kwargs) proxy-server[2656473]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 825, in process_request#012 proto.__init__(conn_state, self) proxy-server[2656473]: STDERR: File "/usr/lib/python3/dist-packages/swift/common/wsgi.py", line 395, in __init__#012 wsgi.HttpProtocol.__init__(self, *args, **kwargs) proxy-server[2656473]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 357, in __init__#012 self.handle() proxy-server[2656473]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 390, in handle#012 self.handle_one_request() proxy-server[2656473]: STDERR: File "/usr/lib/python3/dist-packages/swift/common/wsgi.py", line 521, in handle_one_request#012 got = wsgi.HttpProtocol.handle_one_request(self) proxy-server[2656473]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 419, in handle_one_request#012 self.raw_requestline = self._read_request_line() proxy-server[2656473]: STDERR: File "/usr/lib/python3/dist-packages/swift/common/wsgi.py", line 513, in _read_request_line#012 got = wsgi.HttpProtocol._read_request_line(self) proxy-server[2656473]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 402, in _read_request_line#012 return self.rfile.readline(self.server.url_length_limit) proxy-server[2656473]: STDERR: File "/usr/lib/python3.8/socket.py", line 669, in readinto#012 return self._sock.recv_into(b) proxy-server[2656473]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/greenio/base.py", line 374, in recv_into#012 return self._recv_loop(self.fd.recv_into, 0, buffer, nbytes, flags) proxy-server[2656473]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/greenio/base.py", line 352, in _recv_loop#012 return recv_meth(*args) 
proxy-server[2656473]: STDERR: TimeoutError: [Errno 110] Connection timed out proxy-server[2656473]: STDERR: Removing descriptor: 66 proxy-server[2657864]: UNCAUGHT EXCEPTION#012Traceback (most recent call last):#012 File "/usr/lib/python3/dist-packages/eventlet/hubs/poll.py", line 111, in wait#012 listener.cb(fileno)#012 File "/usr/lib/python3/dist-packages/eventlet/greenthread.py", line 221, in main#012 result = function(*args, **kwargs)#012 File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 825, in process_request#012 proto.__init__(conn _state, self)#012 File "/usr/lib/python3/dist-packages/swift/common/wsgi.py", line 395, in __init__#012 wsgi.HttpProtocol.__init__(self, *args, **kwargs)#012 File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 357, in __i nit__#012 self.handle()#012 File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 390, in handle#012 self.handle_one_request()#012 File "/usr/lib/python3/dist-packages/swift/common/wsgi.py", line 521, in handle_one_reque st#012 got = wsgi.HttpProtocol.handle_one_request(self)#012 File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 419, in handle_one_request#012 self.raw_requestline = self._read_request_line()#012 File "/usr/lib/python3 /dist-packages/swift/common/wsgi.py", line 513, in _read_request_line#012 got = wsgi.HttpProtocol._read_request_line(self)#012 File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 402, in _read_request_line#012 return se lf.rfile.readline(self.server.url_length_limit)#012 File "/usr/lib/python3.8/socket.py", line 669, in readinto#012 return self._sock.recv_into(b)#012 File "/usr/lib/python3/dist-packages/eventlet/greenio/base.py", line 374, in rec v_into#012 return self._recv_loop(self.fd.recv_into, 0, buffer, nbytes, flags)#012 File "/usr/lib/python3/dist-packages/eventlet/greenio/base.py", line 352, in _recv_loop#012 return recv_meth(*args)#012TimeoutError: [Errno 110] Connection timed out#012#012During handling of the above exception, another exception occurred:#012#012Traceback (most recent call last):#012 File "/usr/lib/python3/dist-packages/swift/common/utils.py", line 6146, in acquire#012 os .read(self.rfd, 1)#012BlockingIOError: [Errno 11] Resource temporarily unavailable#012#012During handling of the above exception, another exception occurred:#012#012Traceback (most recent call last):#012 File "/usr/bin/swift-proxy-ser ver", line 23, in #012 sys.exit(run_wsgi(conf_file, 'proxy-server', **options))#012 File "/usr/lib/python3/dist-packages/swift/common/wsgi.py", line 1108, in run_wsgi#012 run_server(conf, logger, sock, ready_callback=not ify,#012 File "/usr/lib/python3/dist-packages/swift/common/wsgi.py", line 658, in run_server#012 wsgi.server(sock, app, wsgi_logger, **server_kwargs)#012 File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 990, in server# 012 client_socket, client_addr = sock.accept()#012 File "/usr/lib/python3/dist-packages/eventlet/greenio/base.py", line 233, in accept#012 self._trampoline(fd, read=True, timeout=self.gettimeout(), timeout_exc=_timeout_exc)#012 File "/usr/lib/python3/dist-packages/eventlet/greenio/base.py", line 211, in _trampoline#012 return trampoline(fd, read=read, write=write, timeout=timeout,#012 File "/usr/lib/python3/dist-packages/eventlet/hubs/__init__.py", line 159, in trampoline#012 return hub.switch()#012 File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 313, in switch#012 return self.greenlet.switch()#012 File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", li ne 365, in 
run#012 self.wait(sleep_time)#012 File "/usr/lib/python3/dist-packages/eventlet/hubs/poll.py", line 115, in wait#012 self.squelch_exception(fileno, sys.exc_info())#012 File "/usr/lib/python3/dist-packages/eventlet/hu bs/hub.py", line 316, in squelch_exception#012 traceback.print_exception(*exc_info)#012 File "/usr/lib/python3.8/traceback.py", line 105, in print_exception#012 print(line, file=file, end="")#012 File "/usr/lib/python3/dist-packages/swift/common/utils.py", line 1837, in write#012 self.logger.error(_('%(type)s: %(value)s'),#012 File "/usr/lib/python3.8/logging/__init__.py", line 1823, in error#012 self.log(ERROR, msg, *args, **kwargs)#012 File "/usr/lib/python3.8/logging/__init__.py", line 1844, in log#012 self.logger.log(level, msg, *args, **kwargs)#012 File "/usr/lib/python3.8/logging/__init__.py", line 1512, in log#012 self._log(level, msg, args, **kwargs)#012 File "/usr/lib/python3.8/logging/__init__.py", line 1589, in _log#012 self.handle(record)#012 File "/usr/lib/python3.8/logging/__init__.py", line 1599, in handle#012 self.callHandlers(record)#012 File "/usr/lib/python3.8/logging/__init__.py", line 1661, in callHandlers#012 hdlr.handle(record)#012 File "/usr/lib/python3.8/logging/__init__.py", line 952, in handle#012 self.acquire()#012 File "/usr/lib/python3.8/logging/__init__.py", line 903, in acquire#012 self.lock.acquire()#012 File "/usr/lib/python3/dist-packages/swift/common/utils.py", line 6159, in acquire#012 eventlet.hubs.trampoline(self.rfd, read=True)#012 File "/usr/lib/python3/dist-packages/eventlet/hubs/__init__.py", line 141, in trampoline#012 assert hub.greenlet is not current, 'do not call blocking functions from the mainloop'#012AssertionError: do not call blocking functions from the mainloop proxy-server[2603325]: Removing dead child 2657864 from parent 2603325 proxy-server[2657952]: STDERR: The option "bind_port" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "workers" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "user" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "log_level" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "log_facility" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "max_clients" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "client_timeout" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "recheck_account_existence" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "recheck_container_existence" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "log_name" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "auth_url" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "project_domain_id" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "user_domain_id" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "project_name" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "username" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "password" is not known to keystonemiddleware proxy-server[2657952]: STDERR: The option "__name__" is not known to keystonemiddleware From 2292613444 at qq.com Fri Nov 4 14:32:12 2022 From: 2292613444 at qq.com (=?gb18030?B?zt7K/bXE0MfH8g==?=) Date: Fri, 4 Nov 2022 22:32:12 +0800 Subject: this is a 
openstack swift error(s) Message-ID: After installing swift, I run "swift --debug stat" A serious error occurred, HTTP 401 error. The output information is shown below ? root at controller:~# swift --debug stat DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request to http://controller:5000/v3/auth/tokens DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): controller:5000 DEBUG:urllib3.connectionpool:http://controller:5000 "POST /v3/auth/tokens HTTP/1.1" 201 5316 DEBUG:keystoneclient.auth.identity.v3.base:{"token": {"methods": ["password"], "user": {"domain": {"id": "default", "name": "Default"}, "id": "ebe0c2002931431c9f6057f216c9aad1", "name": "swift", "password_expires_at": null}, "audit_ids": ["uxfmM8EtSDuf4TjLXur8LQ"], "expires_at": "2022-11-04T10:03:08.000000Z", "issued_at": "2022-11-04T09:03:08.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "6c50811c608f44ebb32132f039fd3ac0", "name": "service"}, "is_domain": false, "roles": [{"id": "8eb2665e258844bdb6daa7826ff7fd96", "name": "reader"}, {"id": "527a74431f6349738de18528c6d445e1", "name": "admin"}, {"id": "101015b8b34e41b4962b31440cf61520", "name": "member"}], "catalog": [{"endpoints": [{"id": "6e624d24e7e046e8bda14929bbb345c5", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:5000/v3/", "region": "RegionOne"}, {"id": "bcdb2ad98de94420926cae68b27e4ccc", "interface": "public", "region_id": "RegionOne", "url": "http://controller:5000/v3/", "region": "RegionOne"}, {"id": "cc52757d98b04bdd88026d1087ae9a5d", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:5000/v3/", "region": "RegionOne"}], "id": "03943f266612412ebd2ea111596cfb75", "type": "identity", "name": "keystone"}, {"endpoints": [{"id": "68950739e40340768da67915b2cf6aac", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:9696", "region": "RegionOne"}, {"id": "8a68f4ac2d4845959e9515304dd75eb3", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:9696", "region": "RegionOne"}, {"id": "da32e6e91bff4279bafa3329053e3704", "interface": "public", "region_id": "RegionOne", "url": "http://controller:9696", "region": "RegionOne"}], "id": "33f2243a764d4bdc9a157e6a9d5a9c6a", "type": "network", "name": ""}, {"endpoints": [{"id": "166e07fa638e49b987595323bd3bcf61", "interface": "public", "region_id": "RegionOne", "url": "http://controller:9292", "region": "RegionOne"}, {"id": "cbb01761367e4acf83c658333e2de386", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:9292", "region": "RegionOne"}, {"id": "db8dd3cb0dd04b29bfb96a608fb72d99", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:9292", "region": "RegionOne"}], "id": "4f86e32926a64fb99de6e6919c714926", "type": "image", "name": ""}, {"endpoints": [{"id": "93e135f5a0824caa8b8fc9fa0df74bd4", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:8776/v2/6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}, {"id": "f21e50d72b914333952b7d3d8fd2578d", "interface": "public", "region_id": "RegionOne", "url": "http://controller:8776/v2/6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}, {"id": "f669072bd0a84c5b85ca70714e6b212d", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:8776/v2/6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}], "id": "554089befdd94ee1a87a03de006a33c6", "type": "volumev2", "name": ""}, {"endpoints": [{"id": "7f0d59134e60478ab7fbb0b72772cbab", "interface": 
"admin", "region_id": "RegionOne", "url": "http://controller:8774/v2.1", "region": "RegionOne"}, {"id": "ac50c3fb0af947508497d8344b8e6d0a", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:8774/v2.1", "region": "RegionOne"}, {"id": "b41204a96b7a43f68a7f8f2613163df3", "interface": "public", "region_id": "RegionOne", "url": "http://controller:8774/v2.1", "region": "RegionOne"}], "id": "5e09e36a7744495389ea6159048eef5e", "type": "compute", "name": ""}, {"endpoints": [{"id": "1e56c094a9ea4e7a80132280df1925f4", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:8778", "region": "RegionOne"}, {"id": "896c40aa1ada41ed82bcb0c39d87d8f5", "interface": "public", "region_id": "RegionOne", "url": "http://controller:8778", "region": "RegionOne"}, {"id": "b4d50334c3174633aa1372f13b7c19ed", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:8778", "region": "RegionOne"}], "id": "90e53b94f1b84b99ae6fa03c793a30b1", "type": "placement", "name": ""}, {"endpoints": [{"id": "50653fa05fde458c9e04be6410d3407f", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:8776/v3/6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}, {"id": "794a0750211b4034bc40d4095fab09df", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:8776/v3/6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}, {"id": "951627fca8974f27a21fcb0a075cecc1", "interface": "public", "region_id": "RegionOne", "url": "http://controller:8776/v3/6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}], "id": "d8cca1398508471f937533ab72ffae96", "type": "volumev3", "name": ""}, {"endpoints": [{"id": "29c38e196a6743aa9ff92c7a26362bb2", "interface": "public", "region_id": "RegionOne", "url": "http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}, {"id": "adce2914f26f4d1ead7036255a690d42", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}, {"id": "bc4246d33f3146be87c49fd47af98172", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:8080/v1", "region": "RegionOne"}], "id": "f256c47fcfaa48f78baaf34066c54960", "type": "object-store", "name": "swift"}]}} DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): controller:8080 DEBUG:urllib3.connectionpool:http://controller:8080 "HEAD /v1/AUTH_6c50811c608f44ebb32132f039fd3ac0 HTTP/1.1" 401 0 INFO:swiftclient:REQ: curl -i http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0 -I -H "X-Auth-Token: gAAAAABjZNVMHpSugcariFAL92JRrJXtfLTk8sAtac7CZY9BEUoOxg3hnI2xKyv-DBkYuFQNVEAQiL68-5in7gfh3SrWTn_P5s3QB_GWOVGMOobkSYXiiKtqzL46nUySqLKpJiG0IMd06ufiC3ILCZguXu5yfoAA9U9Z--XhBmyhXmcWLaQ1Ybo" INFO:swiftclient:RESP STATUS: 401 Unauthorized INFO:swiftclient:RESP HEADERS: {'Content-Type': 'text/html; charset=UTF-8', 'Www-Authenticate': 'Swift realm="AUTH_6c50811c608f44ebb32132f039fd3ac0"', 'Content-Length': '0', 'X-Trans-Id': 'txe65f29e7acb04920bb9f6-006364d54c', 'X-Openstack-Request-Id': 'txe65f29e7acb04920bb9f6-006364d54c', 'Date': 'Fri, 04 Nov 2022 09:03:08 GMT'} DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request to http://controller:5000/v3/auth/tokens DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): controller:5000 DEBUG:urllib3.connectionpool:http://controller:5000 "POST /v3/auth/tokens HTTP/1.1" 201 5316 DEBUG:keystoneclient.auth.identity.v3.base:{"token": {"methods": ["password"], "user": 
{"domain": {"id": "default", "name": "Default"}, "id": "ebe0c2002931431c9f6057f216c9aad1", "name": "swift", "password_expires_at": null}, "audit_ids": ["_gjaOZg2TquDXq7PTISWNA"], "expires_at": "2022-11-04T10:03:09.000000Z", "issued_at": "2022-11-04T09:03:09.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "6c50811c608f44ebb32132f039fd3ac0", "name": "service"}, "is_domain": false, "roles": [{"id": "527a74431f6349738de18528c6d445e1", "name": "admin"}, {"id": "8eb2665e258844bdb6daa7826ff7fd96", "name": "reader"}, {"id": "101015b8b34e41b4962b31440cf61520", "name": "member"}], "catalog": [{"endpoints": [{"id": "6e624d24e7e046e8bda14929bbb345c5", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:5000/v3/", "region": "RegionOne"}, {"id": "bcdb2ad98de94420926cae68b27e4ccc", "interface": "public", "region_id": "RegionOne", "url": "http://controller:5000/v3/", "region": "RegionOne"}, {"id": "cc52757d98b04bdd88026d1087ae9a5d", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:5000/v3/", "region": "RegionOne"}], "id": "03943f266612412ebd2ea111596cfb75", "type": "identity", "name": "keystone"}, {"endpoints": [{"id": "68950739e40340768da67915b2cf6aac", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:9696", "region": "RegionOne"}, {"id": "8a68f4ac2d4845959e9515304dd75eb3", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:9696", "region": "RegionOne"}, {"id": "da32e6e91bff4279bafa3329053e3704", "interface": "public", "region_id": "RegionOne", "url": "http://controller:9696", "region": "RegionOne"}], "id": "33f2243a764d4bdc9a157e6a9d5a9c6a", "type": "network", "name": ""}, {"endpoints": [{"id": "166e07fa638e49b987595323bd3bcf61", "interface": "public", "region_id": "RegionOne", "url": "http://controller:9292", "region": "RegionOne"}, {"id": "cbb01761367e4acf83c658333e2de386", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:9292", "region": "RegionOne"}, {"id": "db8dd3cb0dd04b29bfb96a608fb72d99", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:9292", "region": "RegionOne"}], "id": "4f86e32926a64fb99de6e6919c714926", "type": "image", "name": ""}, {"endpoints": [{"id": "93e135f5a0824caa8b8fc9fa0df74bd4", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:8776/v2/6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}, {"id": "f21e50d72b914333952b7d3d8fd2578d", "interface": "public", "region_id": "RegionOne", "url": "http://controller:8776/v2/6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}, {"id": "f669072bd0a84c5b85ca70714e6b212d", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:8776/v2/6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}], "id": "554089befdd94ee1a87a03de006a33c6", "type": "volumev2", "name": ""}, {"endpoints": [{"id": "7f0d59134e60478ab7fbb0b72772cbab", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:8774/v2.1", "region": "RegionOne"}, {"id": "ac50c3fb0af947508497d8344b8e6d0a", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:8774/v2.1", "region": "RegionOne"}, {"id": "b41204a96b7a43f68a7f8f2613163df3", "interface": "public", "region_id": "RegionOne", "url": "http://controller:8774/v2.1", "region": "RegionOne"}], "id": "5e09e36a7744495389ea6159048eef5e", "type": "compute", "name": ""}, {"endpoints": [{"id": "1e56c094a9ea4e7a80132280df1925f4", "interface": "admin", "region_id": 
"RegionOne", "url": "http://controller:8778", "region": "RegionOne"}, {"id": "896c40aa1ada41ed82bcb0c39d87d8f5", "interface": "public", "region_id": "RegionOne", "url": "http://controller:8778", "region": "RegionOne"}, {"id": "b4d50334c3174633aa1372f13b7c19ed", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:8778", "region": "RegionOne"}], "id": "90e53b94f1b84b99ae6fa03c793a30b1", "type": "placement", "name": ""}, {"endpoints": [{"id": "50653fa05fde458c9e04be6410d3407f", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:8776/v3/6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}, {"id": "794a0750211b4034bc40d4095fab09df", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:8776/v3/6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}, {"id": "951627fca8974f27a21fcb0a075cecc1", "interface": "public", "region_id": "RegionOne", "url": "http://controller:8776/v3/6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}], "id": "d8cca1398508471f937533ab72ffae96", "type": "volumev3", "name": ""}, {"endpoints": [{"id": "29c38e196a6743aa9ff92c7a26362bb2", "interface": "public", "region_id": "RegionOne", "url": "http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}, {"id": "adce2914f26f4d1ead7036255a690d42", "interface": "internal", "region_id": "RegionOne", "url": "http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0", "region": "RegionOne"}, {"id": "bc4246d33f3146be87c49fd47af98172", "interface": "admin", "region_id": "RegionOne", "url": "http://controller:8080/v1", "region": "RegionOne"}], "id": "f256c47fcfaa48f78baaf34066c54960", "type": "object-store", "name": "swift"}]}} DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): controller:8080 DEBUG:urllib3.connectionpool:http://controller:8080 "HEAD /v1/AUTH_6c50811c608f44ebb32132f039fd3ac0 HTTP/1.1" 401 0 INFO:swiftclient:REQ: curl -i http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0 -I -H "X-Auth-Token: gAAAAABjZNVNWPHppqLAGxDFedVVT4FiMam_wPsK0HL2siFsQhBz2-LewBYcX5FCspKLclWzrGRj0ChEE2Rd-6auT_fpiI5uHFt11zOpDZRN83zxvGTp_FgtN5p7h5dG6I1yvPVi6ICurC8MhfK7SpfKupdK5TZVt7r2hZkdjd7TdHj3-m-Bgdc" INFO:swiftclient:RESP STATUS: 401 Unauthorized INFO:swiftclient:RESP HEADERS: {'Content-Type': 'text/html; charset=UTF-8', 'Www-Authenticate': 'Swift realm="AUTH_6c50811c608f44ebb32132f039fd3ac0"', 'Content-Length': '0', 'X-Trans-Id': 'txff99e52141174a038c7d1-006364d54d', 'X-Openstack-Request-Id': 'txff99e52141174a038c7d1-006364d54d', 'Date': 'Fri, 04 Nov 2022 09:03:09 GMT'} ERROR:swiftclient.service:Account HEAD failed: http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0 401 Unauthorized (txn: txff99e52141174a038c7d1-006364d54d) Traceback (most recent call last):   File "/usr/lib/python3/dist-packages/swiftclient/service.py", line 555, in stat     items, headers = get_future_result(stats_future)   File "/usr/lib/python3/dist-packages/swiftclient/service.py", line 251, in get_future_result     res = f.result(timeout=timeout)   File "/usr/lib/python3.8/concurrent/futures/_base.py", line 444, in result     return self.__get_result()   File "/usr/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result     raise self._exception   File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run     result = self.fn(*self.args, **self.kwargs)   File "/usr/lib/python3/dist-packages/swiftclient/multithreading.py", line 201, in conn_fn     return fn(*conn_args, **kwargs)   File 
"/usr/lib/python3/dist-packages/swiftclient/command_helpers.py", line 24, in stat_account     headers = conn.head_account(headers=req_headers)   File "/usr/lib/python3/dist-packages/swiftclient/client.py", line 1902, in head_account     return self._retry(None, head_account, headers=headers)   File "/usr/lib/python3/dist-packages/swiftclient/client.py", line 1856, in _retry     rv = func(self.url, self.token, *args,   File "/usr/lib/python3/dist-packages/swiftclient/client.py", line 924, in head_account     raise ClientException.from_response(resp, 'Account HEAD failed', body) swiftclient.exceptions.ClientException: Account HEAD failed: http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0 401 Unauthorized (txn: txff99e52141174a038c7d1-006364d54d) Account HEAD failed: http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0 401 Unauthorized Failed Transaction ID: txff99e52141174a038c7d1-006364d54d ? The output of "tail -f/var/log/*"  is as follows Nov 4 13:49:42 controller proxy-server: - - 04/Nov/2022/13/49/42 HEAD /v1/AUTH_6c50811c608f44ebb32132f039fd3ac0%3Fformat%3Djson HTTP/1.0 200 - Swift - - - - tx967ed9b9c29b4828b646d-0063651876 - 0.0039 RL - 1667569782.696980715 1667569782.700921297 - Nov 4 13:49:42 controller proxy-server: 192.168.100.10 192.168.100.10 04/Nov/2022/13/49/42 HEAD /v1/AUTH_6c50811c608f44ebb32132f039fd3ac0%3Fformat%3Djson HTTP/1.0 401 - python-swiftclient-3.11.0 gAAAAABjZRh2... - - - tx967ed9b9c29b4828b646d-0063651876 - 0.0055 - - 1667569782.696368933 1667569782.701885939 - Nov 4 13:49:44 controller proxy-server: 192.168.100.10 192.168.100.10 04/Nov/2022/13/49/44 HEAD /v1/AUTH_6c50811c608f44ebb32132f039fd3ac0%3Fformat%3Djson HTTP/1.0 401 - python-swiftclient-3.11.0 gAAAAABjZRh4... - - - tx4ac048f132544deaaac45-0063651878 - 0.0009 - - 1667569784.155652046 1667569784.156536341 - If you need/etc/swift/proxy-server.conf Please receive in the attachment thank you!!! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: proxy-server.conf Type: application/octet-stream Size: 2568 bytes Desc: not available URL: From Aidan.Collins at twosigma.com Fri Nov 4 16:55:51 2022 From: Aidan.Collins at twosigma.com (Aidan Collins) Date: Fri, 4 Nov 2022 16:55:51 +0000 Subject: [kolla] Kolla-ansible plays with --limit failing if a compute host is down. Message-ID: Hello, It seems that the kolla-ansible plays reconfigure, prechecks, bootstrap-servers and deploy all fail when using limit if any compute host is down, even if it is not the one being specified by limit. Is there any way to configure gather-facts in these plays to not fail if this is the case? Due to the size of our plant we sometimes need to take down a compute host for maintenance and still provisiion new ones. We are using Victoria. Thanks a lot -aidan -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Fri Nov 4 21:34:22 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Fri, 4 Nov 2022 22:34:22 +0100 Subject: [Kolla-ansible][Yoga] How to update an Openstack deployment with new containers? Message-ID: Hi, I followed this link : https://docs.openstack.org/kolla-ansible/yoga/user/operating-kolla.html It says that : This procedure is for upgrading from series to series, not for doing updates within a series. 
Inside a series, *it is usually sufficient to just update the kolla-ansible package, rebuild (if needed) and pull the images, and run kolla-ansible deploy again. Please follow release notes to check if there are any issues to be aware of.* In my deployment I am using a local registry, so I pulled the new images, tagged them then pushed them into my registry. I can see that the container's images are newer than the ones I had. I then updated my kolla-ansible package using : $ source yogavenv (yogavenv) $ pip install --upgrade git+ https://opendev.org/openstack/kolla-ansible at stable/yoga Finally I launched the deploy, but it seems that nothing has changed. The containers have not been restarted. Any ideas? Is there a way to verify that a container belongs to a particular image build? How can I be sure that my deployment has been updated with new containers? Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat Nov 5 00:37:26 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 04 Nov 2022 17:37:26 -0700 Subject: [all][tc] What's happening in Technical Committee: summary 2022 Nov 04: Reading: 5 min Message-ID: <18445389463.11b97c727613416.4380707062071478967@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's video meeting on Nov 03. Most of the meeting discussions are summarized in this email. Meeting recording is available @ https://www.youtube.com/watch?v=cjpxiT2Egok and summary logs are available @ https://meetings.opendev.org/meetings/tc/2022/tc.2022-11-03-15.05.log.html * Next TC weekly meeting will be on Nov 9 Wed at 16:00 UTC, please make note of new day/time for TC meetings. Feel free to add the topic to the agenda[1] by Nov 8. 2. What we completed this week: ========================= * Deprecated openstack-ansible rsyslog roles[2] * Completed the user survey TC question updates for 2023 user survey[3] 3. Activities In progress: ================== TC Tracker for 2023.1 cycle --------------------------------- * Current cycle working items and their progress are present in 2023.1 tracker etherpad[4]. Open Reviews ----------------- * Seven open reviews for ongoing activities[5]. Renovate translation SIG i18 ---------------------------------- * SIG i18 user Survey questions are finalized for 2023 user survey * rosmaita is coordinating with Weblate and as per them there shouldn't be any issue in hosting OpenStack translations. He will connect them with SIG i18n to work out details. TC Weekly meeting new day and time -------------------------------------------- We agreed to shift TC weekly meeting on evry wed on 16:00 UTC. This will be effective from next week (next meeting is on Nov 9, 16:00 UTC)[6]. TC Video meeting discussion ---------------------------------- Even though we discussed it many times in past and TC explained about having a monthly Video call and rest three week as IRC meeting, this came up again in yesterday meeting. We discussed both pros and cons of monthly video call but did not conclude anything. JayF is proposed the resolution to discuss it further if we want to continue on this monthly Video call or not[7]. TC chair nomination & election process ----------------------------------------------- We are formalizing the process of TC chair nomination process. Two options are up for the review[8][9]. 
Fixing Zuul config error ---------------------------- We are tracking this in one of the item in TC tracker, please refer there for the progress[10]. Project updates ------------------- * Add zookeeper role under OpenStack-Ansible governance[11] * Add Skyline repository for OpenStack-Ansible[12] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[13]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15:00 UTC [14] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. See you all next week in PTG! [1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://review.opendev.org/c/openstack/governance/+/863076 [3] https://etherpad.opendev.org/p/tc-2023-user-survey-questions [4] https://etherpad.opendev.org/p/tc-zed-tracker [5] https://review.opendev.org/q/projects:openstack/governance+status:open [6] https://meetings.opendev.org/meetings/tc/2022/tc.2022-11-03-15.05.log.html#l-28 [7] https://review.opendev.org/c/openstack/governance/+/863685 [8] https://review.opendev.org/c/openstack/governance/+/862772 [9] https://review.opendev.org/c/openstack/governance/+/862774 [10] https://etherpad.opendev.org/p/zuul-config-error-openstack [11] https://review.opendev.org/c/openstack/governance/+/863161 [12] https://review.opendev.org/c/openstack/governance/+/863166 [13] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [14] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From p.aminian.server at gmail.com Sun Nov 6 06:16:02 2022 From: p.aminian.server at gmail.com (Parsa Aminian) Date: Sun, 6 Nov 2022 09:46:02 +0330 Subject: openstack multi region with internal ip Message-ID: hello Im using kolla-ansible and for security reasons I use invalid ip for main compute ip . my compute have 2 network adapter and first adapter that is for management is not accessible from outside . the problem is I want to add new region in other datacenter and on openstack with multiregion keystone is deploy only on one region not all regions so my computes with invalid ip can not access keystone anymore . is there any solution for this case ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Nov 7 08:23:24 2022 From: smooney at redhat.com (Sean Mooney) Date: Mon, 07 Nov 2022 08:23:24 +0000 Subject: [Kolla-ansible][Yoga] How to update an Openstack deployment with new containers? In-Reply-To: References: Message-ID: <67017f7c97cb6d137336b9e8b2684a7c850678fa.camel@redhat.com> On Fri, 2022-11-04 at 22:34 +0100, wodel youchi wrote: > Hi, > I followed this link : > https://docs.openstack.org/kolla-ansible/yoga/user/operating-kolla.html > > It says that : This procedure is for upgrading from series to series, not > for doing updates within a series. Inside a series, *it is usually > sufficient to just update the kolla-ansible package, rebuild (if needed) > and pull the images, and run kolla-ansible deploy again. Please follow > release notes to check if there are any issues to be aware of.* > > > In my deployment I am using a local registry, so I pulled the new images, > tagged them then pushed them into my registry. > I can see that the container's images are newer than the ones I had. 
> > I then updated my kolla-ansible package using : > $ source yogavenv > (yogavenv) $ pip install --upgrade git+ > https://opendev.org/openstack/kolla-ansible at stable/yoga > > Finally I launched the deploy, but it seems that nothing has changed. > > The containers have not been restarted. Any ideas? if you tagged them with the same tag then you will need to manually run an image pull instead. you are generally better off using a new tag so that you can revert if needed but the basic minor update flow is really just? kolla-ansible pull -i kolla-ansoble deploy ... if you use diffent tag then after you push to the local registry with the new tag update your global.yaml with the new tag then pull and deploy updating the kolla-ansible package is generaly only needed if doing a major upgrade between openstack release but it can be useful if tehre have been bugfixes to kolla itself. > > Is there a way to verify that a container belongs to a particular image > build? > How can I be sure that my deployment has been updated with new containers? > > > Regards. From thierry at openstack.org Mon Nov 7 08:29:48 2022 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 7 Nov 2022 09:29:48 +0100 Subject: [largescale-sig] Next meeting: Nov 9, 15utc Message-ID: Hi everyone, The Large Scale SIG will be meeting this Wednesday in #openstack-operators on OFTC IRC, at 15UTC. Note that most countries are no longer in DST! You should doublecheck how that UTC time translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20221109T15 Feel free to add topics to the agenda: https://etherpad.opendev.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From egarciar at redhat.com Mon Nov 7 10:47:30 2022 From: egarciar at redhat.com (Elvira Garcia Ruiz) Date: Mon, 7 Nov 2022 11:47:30 +0100 Subject: [neutron] Bug Deputy Report October 31 - November 6 Message-ID: Hi everyone, I was the bug deputy last week. Find the summary below: High: ------- - neutron.privileged.agent.linux.ip_lib.InterfaceOperationNotSupported: Operation not supported on interface https://bugs.launchpad.net/neutron/+bug/1995735 Assigned to Felipe Reyes Fix proposed https://review.opendev.org/c/openstack/neutron/+/863779 - bulk port create: TypeError: Bad prefix type for generating IPv6 address by EUI-64 https://bugs.launchpad.net/neutron/+bug/1995732 Unassigned - OVN: HA chassis group priority is different than gateway chassis priority https://bugs.launchpad.net/neutron/+bug/1995078 I think this is a legit bug. However, a second look by someone else would be appreciated. Unassigned - ORM session: SQL execution without transaction in progress https://bugs.launchpad.net/neutron/+bug/1995738 Assigned to Felipe Reyes Fix proposed https://review.opendev.org/c/openstack/neutron/+/863780 Undecided: --------------- - [ML2/OVN] After upgrading from Xena to Yoga neutron-dhcp-agent is not working for Baremetals https://bugs.launchpad.net/neutron/+bug/1995287 Rodolfo is discussing with the reporter. It has not yet been confirmed to be a bug. - floating ip portforwarding from external not working https://bugs.launchpad.net/neutron/+bug/1995614 Unassigned >From the description this could be legit but I have no experience with the neutron- L3-agent-floatingip, so an experienced opinion would be appreciated on this one. I left a question for the reporter. Kind regards, Elvira -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vmarkov at mirantis.com Mon Nov 7 12:56:32 2022 From: vmarkov at mirantis.com (Vadym Markov) Date: Mon, 7 Nov 2022 14:56:32 +0200 Subject: [Heat] X-OpenStack-Request-ID changed during request processing Message-ID: Hi everyone Found that Heat doesn't keep the value of X-OpenStack-Request-ID header during request processing. Simple stack delete leads to 4 different request id, only the last of them corresponds to heat engine processing the request. Investigation shows that req-id "duplicates" twice - when client goes via redirect response returned by API and at version negotiation at every connect to API endpoint. I assume that this situation is not expected, and it really spoils work with heat api logs. Any ideas on how to make the situation more clear? Also looks like that header name is case-dependent in some places, and Oslo-middleware and Heat handles them differently. However, it needs additional investigation. I propose to turn off logging for successful negotiation and re-use req-id in keystoneuth during redirect. If it is ok, I plan to submit corresponding patches to upstream. Here is a piece of log related to stack delete. Only "req-843ac3a4..." exists in the heat-engine log. ??? 07 12:21:59 ubuntu devstack at h-api.service[172816]: DEBUG heat.api.middleware.version_negotiation [None req-5b613e29-8dd9-458d-b908-4802f90c0b2f admin admin] Processing request: DELETE /heat-api/v1/8a76e6c8cc714288823dfe3677174893/stacks/stack_1 Accept: application/json {{(pid=172816) process_request /opt/stack/heat/heat/api/middleware/version_negotiation.py:57}} ??? 07 12:21:59 ubuntu devstack at h-api.service[172816]: DEBUG heat.api.middleware.version_negotiation [None req-5b613e29-8dd9-458d-b908-4802f90c0b2f admin admin] Matched versioned URI. Version: 1.0 {{(pid=172816) process_request /opt/stack/heat/heat/api/middleware/version_negotiation.py:69}} ??? 07 12:21:59 ubuntu devstack at h-api.service[172816]: INFO heat.common.wsgi [None req-ec73b5d8-5025-40e0-90d7-4010d39cb450 admin admin] Processing request: DELETE /heat-api/v1/8a76e6c8cc714288823dfe3677174893/stacks/stack_1 ??? 07 12:21:59 ubuntu devstack at h-api.service[172816]: DEBUG heat.common.wsgi [None req-ec73b5d8-5025-40e0-90d7-4010d39cb450 admin admin] Calling StackController.lookup {{(pid=172816) __call__ /opt/stack/heat/heat/common/wsgi.py:823}} ??? 07 12:21:59 ubuntu devstack at h-api.service[172816]: [pid: 172816|app: 0|req: 12/24] 192.168.123.5 () {66 vars in 1391 bytes} [Mon Nov 7 12:21:59 2022] DELETE /heat-api/v1/8a76e6c8cc714288823dfe3677174893/stacks/stack_1 => generated 377 bytes in 167 msecs (HTTP/1.1 302) 5 headers in 289 bytes (1 switches on core 3) ??? 07 12:21:59 ubuntu devstack at h-api.service[172815]: DEBUG heat.api.middleware.version_negotiation [None req-f1eb3c1b-b76c-42fa-8998-31bb2cc43030 admin admin] Processing request: DELETE /heat-api/v1/8a76e6c8cc714288823dfe3677174893/stacks/stack_1/05e56d91-6b11-4e20-83f3-e2923667019b Accept: application/json {{(pid=172815) process_request /opt/stack/heat/heat/api/middleware/version_negotiation.py:57}} ??? 07 12:21:59 ubuntu devstack at h-api.service[172815]: DEBUG heat.api.middleware.version_negotiation [None req-f1eb3c1b-b76c-42fa-8998-31bb2cc43030 admin admin] Matched versioned URI. Version: 1.0 {{(pid=172815) process_request /opt/stack/heat/heat/api/middleware/version_negotiation.py:69}} ??? 
07 12:21:59 ubuntu devstack at h-api.service[172815]: INFO heat.common.wsgi [None req-843ac3a4-408d-4473-844f-f4b109ad1725 admin admin] Processing request: DELETE /heat-api/v1/8a76e6c8cc714288823dfe3677174893/stacks/stack_1/05e56d91-6b11-4e20-83f3-e2923667019b ??? 07 12:21:59 ubuntu devstack at h-api.service[172815]: DEBUG heat.common.wsgi [None req-843ac3a4-408d-4473-844f-f4b109ad1725 admin admin] Calling StackController.delete {{(pid=172815) __call__ /opt/stack/heat/heat/common/wsgi.py:823}} ??? 07 12:21:59 ubuntu devstack at h-api.service[172815]: [pid: 172815|app: 0|req: 13/25] 192.168.123.5 () {66 vars in 1502 bytes} [Mon Nov 7 12:21:59 2022] DELETE /heat-api/v1/8a76e6c8cc714288823dfe3677174893/stacks/stack_1/05e56d91-6b11-4e20-83f3-e2923667019b => generated 0 bytes in 118 msecs (HTTP/1.1 204) 2 headers in 112 bytes (1 switches on core 0) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ozzzo at yahoo.com Mon Nov 7 13:24:54 2022 From: ozzzo at yahoo.com (Albert Braden) Date: Mon, 7 Nov 2022 13:24:54 +0000 (UTC) Subject: [kolla] Kolla-ansible plays with --limit failing if a compute host is down. In-Reply-To: References: Message-ID: <126201570.921971.1667827494352@mail.yahoo.com> When I encounter that I edit the inventory and comment out the down host. On Friday, November 4, 2022, 05:16:32 PM EDT, Aidan Collins wrote: Hello, It seems that the kolla-ansible plays reconfigure, prechecks, bootstrap-servers and deploy all fail when using limit if any compute host is down, even if it is not the one being specified by limit. Is there any way to configure gather-facts in these plays to not fail if this is the case? Due to the size of our plant we sometimes need to take down a compute host for maintenance and still provisiion new ones. We are using Victoria. ? Thanks a lot -aidan ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Nov 7 13:38:49 2022 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 7 Nov 2022 14:38:49 +0100 Subject: [cinder][kolla][OpenstackAnsible] Zed Cycle-Trailing Release Deadline Message-ID: Hello teams with trailing projects, The Zed cycle-trailing release deadline is in ~1 months [1], and all projects following the cycle-trailing release model must release their Zed deliverables by 16 Dec, 2022. The following trailing projects haven't been released yet for Zed (aside the release candidates versions if exists). Cinder team's deliverables: - cinderlib OSA team's deliverables: - openstack-ansible-roles - openstack-ansible Kolla team's deliverables: - kayobe - kolla - kolla-ansible - ansible-collection-kolla This is just a friendly reminder to allow you to release these projects in time. Do not hesitate to ping us if you have any questions or concerns. Thanks for your time. -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From kdeeghayu at gmail.com Mon Nov 7 06:29:27 2022 From: kdeeghayu at gmail.com (Deeghayu Baddegama) Date: Mon, 7 Nov 2022 11:59:27 +0530 Subject: openstack mentee Message-ID: Hi, I hope to use openstack as the platform in one of my projects. I would like to be a mentee to study about openstack. But when I try to sign up as a mentee it shows an error message. The screenshot attached here. Could you please guide me through this? Thank you. 
Best regards, Deeghayu Baddegama, Sri Lanka -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: mentee.png Type: image/png Size: 40285 bytes Desc: not available URL: From sajeyksmwangi at gmail.com Mon Nov 7 12:01:20 2022 From: sajeyksmwangi at gmail.com (sajeyks mwangi) Date: Mon, 7 Nov 2022 15:01:20 +0300 Subject: Neutron Message-ID: Hello! I installed openstack on Ubuntu using devstack and after running "./stack.sh" I tried to log in the dashboard and I got the error "Connection to neutron failed" -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Mon Nov 7 14:01:59 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 7 Nov 2022 15:01:59 +0100 Subject: Neutron In-Reply-To: References: Message-ID: Hi Sajeyks: You need to provide more details to debug this issue. Can you open a bug in https://bugs.launchpad.net/neutron/ and upload the Neutron service logs (server, agents, etc). Did the stack process finish correctly? Do you have connectivity from the computer running the dashboard to the server? Regards. On Mon, Nov 7, 2022 at 2:41 PM sajeyks mwangi wrote: > Hello! > I installed openstack on Ubuntu using devstack and after running > "./stack.sh" I tried to log in the dashboard and I got the error > "Connection to neutron failed" > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Mon Nov 7 14:46:04 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Mon, 7 Nov 2022 20:16:04 +0530 Subject: (Openstack-Keystone)Regarding Authentication issue of one user while login to Open Stack using AD password In-Reply-To: <20221104190259.Horde.cMQ3VNeIOR8HwwUU0R9icD6@webmail.nde.ag> References: <20221104082922.Horde.Za8eX6p6eb8iA9Cj-IDrSM7@webmail.nde.ag> <20221104190259.Horde.cMQ3VNeIOR8HwwUU0R9icD6@webmail.nde.ag> Message-ID: Ok, I will check. On Sat, Nov 5, 2022 at 12:33 AM Eugen Block wrote: > I know nothing about AD, I?m afraid. But where exactly do you see that > message? Is it in keystone or AD? Anyway, you seem to have a duplicate > entry (somewhere), so check the keystone database and the AD entries > and compare (with working users). > > Zitat von Adivya Singh : > > > Hi Eugen, > > > > I see the below error while authenticating > > Conflict occurred attempting to store nonlocal_user - Duplicate entry > found > > with name at domain ID > > > > How can we fix this? > > > > Regards > > Adivya Singh > > > > On Fri, Nov 4, 2022 at 6:13 PM Adivya Singh > wrote: > > > >> Hi Eugen, > >> > >> All the users are AD based authentication, but this user only facing a > >> problem > >> Trying to Find out the AD Team , what happened all of a sudden for this > >> user > >> > >> Regards > >> Adivya Singh > >> > >> R > >> > >> > >> On Fri, Nov 4, 2022 at 2:06 PM Eugen Block wrote: > >> > >>> I assume this isn't the only user trying to login from AD, correct? > >>> Then compare the properties/settings between a working and the > >>> non-working user, you should probably find something. Also enable > >>> debug logs in keystone to find more details. And by "all of a sudden" > >>> you mean that it worked before? So what changed between then and now? 
> >>> > >>> Zitat von Adivya Singh : > >>> > >>> > Hi Team, > >>> > > >>> > There is one issue , where a user is getting " Authenticated Failure" > >>> all > >>> > of a sudden, and this user is the only user who is facing this > problem. > >>> > > >>> > I tried to disable and enable the project if, Check the logs but do > not > >>> > found anything related to Keystone authentication > >>> > > >>> > Delete the Project id and Create it again , Results are same , Any > >>> insights > >>> > what i can do more to fix this issue > >>> > > >>> > Regards > >>> > Adivya Singh > >>> > >>> > >>> > >>> > >>> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From clay.gerrard at gmail.com Mon Nov 7 15:30:17 2022 From: clay.gerrard at gmail.com (Clay Gerrard) Date: Mon, 7 Nov 2022 09:30:17 -0600 Subject: this is a openstack swift error(s) In-Reply-To: References: Message-ID: it looks like you only have tempauth in your pipeline, but you're trying to use keystone. Try to change the [pipeline:main] section to match the keystone auth pipeline example: pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk tempurl ratelimit authtoken keystoneauth copy container-quotas account-quotas slo dlo versioned_writes symlink proxy-logging proxy-server On Fri, Nov 4, 2022 at 4:14 PM ????? <2292613444 at qq.com> wrote: > After installing swift, I run "swift --debug stat" > > > A serious error occurred, HTTP 401 error. > > The output information is shown below > ? > root at controller:~# swift --debug stat > DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request > to http://controller:5000/v3/auth/tokens > DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): > controller:5000 > DEBUG:urllib3.connectionpool:http://controller:5000 "POST /v3/auth/tokens > HTTP/1.1" 201 5316 > DEBUG:keystoneclient.auth.identity.v3.base:{"token": {"methods": > ["password"], "user": {"domain": {"id": "default", "name": "Default"}, > "id": "ebe0c2002931431c9f6057f216c9aad1", "name": "swift", "password_ > expires_at": null}, "audit_ids": ["uxfmM8EtSDuf4TjLXur8LQ"], > "expires_at": "2022-11-04T10:03:08.000000Z", "issued_at": "2022-11- > 04T09:03:08.000000Z", "project": {"domain": {"id": "default", "name": > "Default"}, "id": "6c50811c608f44ebb32132f039fd3ac0", "name": "service"}, > "is_domain": false, "roles": [{"id": "8eb2665e258844bdb6daa7826ff7fd96", > "name": "reader"}, {"id": "527a74431f6349738de18528c6d445e1", "name": > "admin"}, {"id": "101015b8b34e41b4962b31440cf61520", "name": "member"}], > "catalog": [{"endpoints": [{"id": "6e624d24e7e046e8bda14929bbb345c5", > "interface": "internal", "region_id": "RegionOne", "url": " > http://controller:5000/v3/", "region": "RegionOne"}, {"id": > "bcdb2ad98de94420926cae68b27e4ccc", "interface": "public", "region_id": > "RegionOne", "url": "http://controller:5000/v3/", "region": "RegionOne"}, > {"id": "cc52757d98b04bdd88026d1087ae9a5d", "interface": "admin", > "region_id": "RegionOne", "url": "http://controller:5000/v3/", "region": > "RegionOne"}], "id": "03943f266612412ebd2ea111596cfb75", "type": > "identity", "name": "keystone"}, {"endpoints": [{"id": "68950739e40340 > 768da67915b2cf6aac", "interface": "admin", "region_id": "RegionOne", > "url": "http://controller:9696", "region": "RegionOne"}, {"id": > "8a68f4ac2d4845959e9515304dd75eb3", "interface": "internal", "region_id": > "RegionOne", "url": "http://controller:9696", "region": "RegionOne"}, > {"id": "da32e6e91bff4279bafa3329053e3704", "interface": 
"public", > "region_id": "RegionOne", "url": "http://controller:9696", "region": > "RegionOne"}], "id": "33f2243a764d4bdc9a157e6a9d5a9c6a", "type": > "network", "name": ""}, {"endpoints": [{"id": "166e07fa638e49 > b987595323bd3bcf61", "interface": "public", "region_id": "RegionOne", > "url": "http://controller:9292", "region": "RegionOne"}, {"id": > "cbb01761367e4acf83c658333e2de386", "interface": "admin", "region_id": > "RegionOne", "url": "http://controller:9292", "region": "RegionOne"}, > {"id": "db8dd3cb0dd04b29bfb96a608fb72d99", "interface": "internal", > "region_id": "RegionOne", "url": "http://controller:9292", "region": > "RegionOne"}], "id": "4f86e32926a64fb99de6e6919c714926", "type": "image", > "name": ""}, {"endpoints": [{"id": "93e135f5a0824caa8b8fc9fa0df74bd4", > "interface": "internal", "region_id": "RegionOne", "url": " > http://controller:8776/v2/6c50811c608f44ebb32132f039fd3ac0", "region": > "RegionOne"}, {"id": "f21e50d72b914333952b7d3d8fd2578d", "interface": > "public", "region_id": "RegionOne", "url": " > http://controller:8776/v2/6c50811c608f44ebb32132f039fd3ac0", "region": > "RegionOne"}, {"id": "f669072bd0a84c5b85ca70714e6b212d", "interface": > "admin", "region_id": "RegionOne", "url": " > http://controller:8776/v2/6c50811c608f44ebb32132f039fd3ac0", "region": > "RegionOne"}], "id": "554089befdd94ee1a87a03de006a33c6", "type": > "volumev2", "name": ""}, {"endpoints": [{"id": "7f0d59134e6047 > 8ab7fbb0b72772cbab", "interface": "admin", "region_id": "RegionOne", > "url": "http://controller:8774/v2.1", "region": "RegionOne"}, {"id": > "ac50c3fb0af947508497d8344b8e6d0a", "interface": "internal", "region_id": > "RegionOne", "url": "http://controller:8774/v2.1", "region": > "RegionOne"}, {"id": "b41204a96b7a43f68a7f8f2613163df3", "interface": > "public", "region_id": "RegionOne", "url": "http://controller:8774/v2.1", > "region": "RegionOne"}], "id": "5e09e36a7744495389ea6159048eef5e", > "type": "compute", "name": ""}, {"endpoints": [{"id": "1e56c094a9ea4e > 7a80132280df1925f4", "interface": "admin", "region_id": "RegionOne", > "url": "http://controller:8778", "region": "RegionOne"}, {"id": > "896c40aa1ada41ed82bcb0c39d87d8f5", "interface": "public", "region_id": > "RegionOne", "url": "http://controller:8778", "region": "RegionOne"}, > {"id": "b4d50334c3174633aa1372f13b7c19ed", "interface": "internal", > "region_id": "RegionOne", "url": "http://controller:8778", "region": > "RegionOne"}], "id": "90e53b94f1b84b99ae6fa03c793a30b1", "type": > "placement", "name": ""}, {"endpoints": [{"id": "50653fa05fde45 > 8c9e04be6410d3407f", "interface": "internal", "region_id": "RegionOne", > "url": "http://controller:8776/v3/6c50811c608f44ebb32132f039fd3ac0", > "region": "RegionOne"}, {"id": "794a0750211b4034bc40d4095fab09df", > "interface": "admin", "region_id": "RegionOne", "url": " > http://controller:8776/v3/6c50811c608f44ebb32132f039fd3ac0", "region": > "RegionOne"}, {"id": "951627fca8974f27a21fcb0a075cecc1", "interface": > "public", "region_id": "RegionOne", "url": " > http://controller:8776/v3/6c50811c608f44ebb32132f039fd3ac0", "region": > "RegionOne"}], "id": "d8cca1398508471f937533ab72ffae96", "type": > "volumev3", "name": ""}, {"endpoints": [{"id": "29c38e196a6743 > aa9ff92c7a26362bb2", "interface": "public", "region_id": "RegionOne", > "url": "http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0", > "region": "RegionOne"}, {"id": "adce2914f26f4d1ead7036255a690d42", > "interface": "internal", "region_id": "RegionOne", "url": " > 
http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0", > "region": "RegionOne"}, {"id": "bc4246d33f3146be87c49fd47af98172", > "interface": "admin", "region_id": "RegionOne", "url": " > http://controller:8080/v1", "region": "RegionOne"}], "id": "f256c47fcfaa48 > f78baaf34066c54960", "type": "object-store", "name": "swift"}]}} > DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): > controller:8080 > DEBUG:urllib3.connectionpool:http://controller:8080 "HEAD /v1/AUTH_ > 6c50811c608f44ebb32132f039fd3ac0 HTTP/1.1" 401 0 > INFO:swiftclient:REQ: curl -i > http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0 -I -H > "X-Auth-Token: gAAAAABjZNVMHpSugcariFAL92JRrJXtfLTk8sAtac7CZ > Y9BEUoOxg3hnI2xKyv-DBkYuFQNVEAQiL68-5in7gfh3SrWTn_P5s3QB_GWOVGMOobkSYXii > KtqzL46nUySqLKpJiG0IMd06ufiC3ILCZguXu5yfoAA9U9Z--XhBmyhXmcWLaQ1Ybo" > INFO:swiftclient:RESP STATUS: 401 Unauthorized > INFO:swiftclient:RESP HEADERS: {'Content-Type': 'text/html; > charset=UTF-8', 'Www-Authenticate': 'Swift realm="AUTH_6c50811c60 > 8f44ebb32132f039fd3ac0"', 'Content-Length': '0', 'X-Trans-Id': > 'txe65f29e7acb04920bb9f6-006364d54c', 'X-Openstack-Request-Id': > 'txe65f29e7acb04920bb9f6-006364d54c', 'Date': 'Fri, 04 Nov 2022 09:03:08 > GMT'} > DEBUG:keystoneclient.auth.identity.v3.base:Making authentication request > to http://controller:5000/v3/auth/tokens > DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): > controller:5000 > DEBUG:urllib3.connectionpool:http://controller:5000 "POST /v3/auth/tokens > HTTP/1.1" 201 5316 > DEBUG:keystoneclient.auth.identity.v3.base:{"token": {"methods": > ["password"], "user": {"domain": {"id": "default", "name": "Default"}, > "id": "ebe0c2002931431c9f6057f216c9aad1", "name": "swift", "password_ > expires_at": null}, "audit_ids": ["_gjaOZg2TquDXq7PTISWNA"], > "expires_at": "2022-11-04T10:03:09.000000Z", "issued_at": "2022-11- > 04T09:03:09.000000Z", "project": {"domain": {"id": "default", "name": > "Default"}, "id": "6c50811c608f44ebb32132f039fd3ac0", "name": "service"}, > "is_domain": false, "roles": [{"id": "527a74431f6349738de18528c6d445e1", > "name": "admin"}, {"id": "8eb2665e258844bdb6daa7826ff7fd96", "name": > "reader"}, {"id": "101015b8b34e41b4962b31440cf61520", "name": "member"}], > "catalog": [{"endpoints": [{"id": "6e624d24e7e046e8bda14929bbb345c5", > "interface": "internal", "region_id": "RegionOne", "url": " > http://controller:5000/v3/", "region": "RegionOne"}, {"id": > "bcdb2ad98de94420926cae68b27e4ccc", "interface": "public", "region_id": > "RegionOne", "url": "http://controller:5000/v3/", "region": "RegionOne"}, > {"id": "cc52757d98b04bdd88026d1087ae9a5d", "interface": "admin", > "region_id": "RegionOne", "url": "http://controller:5000/v3/", "region": > "RegionOne"}], "id": "03943f266612412ebd2ea111596cfb75", "type": > "identity", "name": "keystone"}, {"endpoints": [{"id": "68950739e40340 > 768da67915b2cf6aac", "interface": "admin", "region_id": "RegionOne", > "url": "http://controller:9696", "region": "RegionOne"}, {"id": > "8a68f4ac2d4845959e9515304dd75eb3", "interface": "internal", "region_id": > "RegionOne", "url": "http://controller:9696", "region": "RegionOne"}, > {"id": "da32e6e91bff4279bafa3329053e3704", "interface": "public", > "region_id": "RegionOne", "url": "http://controller:9696", "region": > "RegionOne"}], "id": "33f2243a764d4bdc9a157e6a9d5a9c6a", "type": > "network", "name": ""}, {"endpoints": [{"id": "166e07fa638e49 > b987595323bd3bcf61", "interface": "public", "region_id": "RegionOne", > "url": "http://controller:9292", 
"region": "RegionOne"}, {"id": > "cbb01761367e4acf83c658333e2de386", "interface": "admin", "region_id": > "RegionOne", "url": "http://controller:9292", "region": "RegionOne"}, > {"id": "db8dd3cb0dd04b29bfb96a608fb72d99", "interface": "internal", > "region_id": "RegionOne", "url": "http://controller:9292", "region": > "RegionOne"}], "id": "4f86e32926a64fb99de6e6919c714926", "type": "image", > "name": ""}, {"endpoints": [{"id": "93e135f5a0824caa8b8fc9fa0df74bd4", > "interface": "internal", "region_id": "RegionOne", "url": " > http://controller:8776/v2/6c50811c608f44ebb32132f039fd3ac0", "region": > "RegionOne"}, {"id": "f21e50d72b914333952b7d3d8fd2578d", "interface": > "public", "region_id": "RegionOne", "url": " > http://controller:8776/v2/6c50811c608f44ebb32132f039fd3ac0", "region": > "RegionOne"}, {"id": "f669072bd0a84c5b85ca70714e6b212d", "interface": > "admin", "region_id": "RegionOne", "url": " > http://controller:8776/v2/6c50811c608f44ebb32132f039fd3ac0", "region": > "RegionOne"}], "id": "554089befdd94ee1a87a03de006a33c6", "type": > "volumev2", "name": ""}, {"endpoints": [{"id": "7f0d59134e6047 > 8ab7fbb0b72772cbab", "interface": "admin", "region_id": "RegionOne", > "url": "http://controller:8774/v2.1", "region": "RegionOne"}, {"id": > "ac50c3fb0af947508497d8344b8e6d0a", "interface": "internal", "region_id": > "RegionOne", "url": "http://controller:8774/v2.1", "region": > "RegionOne"}, {"id": "b41204a96b7a43f68a7f8f2613163df3", "interface": > "public", "region_id": "RegionOne", "url": "http://controller:8774/v2.1", > "region": "RegionOne"}], "id": "5e09e36a7744495389ea6159048eef5e", > "type": "compute", "name": ""}, {"endpoints": [{"id": "1e56c094a9ea4e > 7a80132280df1925f4", "interface": "admin", "region_id": "RegionOne", > "url": "http://controller:8778", "region": "RegionOne"}, {"id": > "896c40aa1ada41ed82bcb0c39d87d8f5", "interface": "public", "region_id": > "RegionOne", "url": "http://controller:8778", "region": "RegionOne"}, > {"id": "b4d50334c3174633aa1372f13b7c19ed", "interface": "internal", > "region_id": "RegionOne", "url": "http://controller:8778", "region": > "RegionOne"}], "id": "90e53b94f1b84b99ae6fa03c793a30b1", "type": > "placement", "name": ""}, {"endpoints": [{"id": "50653fa05fde45 > 8c9e04be6410d3407f", "interface": "internal", "region_id": "RegionOne", > "url": "http://controller:8776/v3/6c50811c608f44ebb32132f039fd3ac0", > "region": "RegionOne"}, {"id": "794a0750211b4034bc40d4095fab09df", > "interface": "admin", "region_id": "RegionOne", "url": " > http://controller:8776/v3/6c50811c608f44ebb32132f039fd3ac0", "region": > "RegionOne"}, {"id": "951627fca8974f27a21fcb0a075cecc1", "interface": > "public", "region_id": "RegionOne", "url": " > http://controller:8776/v3/6c50811c608f44ebb32132f039fd3ac0", "region": > "RegionOne"}], "id": "d8cca1398508471f937533ab72ffae96", "type": > "volumev3", "name": ""}, {"endpoints": [{"id": "29c38e196a6743 > aa9ff92c7a26362bb2", "interface": "public", "region_id": "RegionOne", > "url": "http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0", > "region": "RegionOne"}, {"id": "adce2914f26f4d1ead7036255a690d42", > "interface": "internal", "region_id": "RegionOne", "url": " > http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0", > "region": "RegionOne"}, {"id": "bc4246d33f3146be87c49fd47af98172", > "interface": "admin", "region_id": "RegionOne", "url": " > http://controller:8080/v1", "region": "RegionOne"}], "id": "f256c47fcfaa48 > f78baaf34066c54960", "type": "object-store", "name": "swift"}]}} > 
DEBUG:urllib3.connectionpool:Starting new HTTP connection (1): > controller:8080 > DEBUG:urllib3.connectionpool:http://controller:8080 "HEAD /v1/AUTH_ > 6c50811c608f44ebb32132f039fd3ac0 HTTP/1.1" 401 0 > INFO:swiftclient:REQ: curl -i > http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0 -I -H > "X-Auth-Token: gAAAAABjZNVNWPHppqLAGxDFedVVT4FiMam_wPsK0HL2siFsQhBz2- > LewBYcX5FCspKLclWzrGRj0ChEE2Rd-6auT_fpiI5uHFt11zOpDZRN83zxvG > Tp_FgtN5p7h5dG6I1yvPVi6ICurC8MhfK7SpfKupdK5TZVt7r2hZkdjd7TdHj3-m-Bgdc" > INFO:swiftclient:RESP STATUS: 401 Unauthorized > INFO:swiftclient:RESP HEADERS: {'Content-Type': 'text/html; > charset=UTF-8', 'Www-Authenticate': 'Swift realm="AUTH_6c50811c60 > 8f44ebb32132f039fd3ac0"', 'Content-Length': '0', 'X-Trans-Id': > 'txff99e52141174a038c7d1-006364d54d', 'X-Openstack-Request-Id': > 'txff99e52141174a038c7d1-006364d54d', 'Date': 'Fri, 04 Nov 2022 09:03:09 > GMT'} > ERROR:swiftclient.service:Account HEAD failed: > http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0 401 > Unauthorized (txn: txff99e52141174a038c7d1-006364d54d) > Traceback (most recent call last): > File "/usr/lib/python3/dist-packages/swiftclient/service.py", line 555, > in stat > items, headers = get_future_result(stats_future) > File "/usr/lib/python3/dist-packages/swiftclient/service.py", line 251, > in get_future_result > res = f.result(timeout=timeout) > File "/usr/lib/python3.8/concurrent/futures/_base.py", line 444, in > result > return self.__get_result() > File "/usr/lib/python3.8/concurrent/futures/_base.py", line 389, in > __get_result > raise self._exception > File "/usr/lib/python3.8/concurrent/futures/thread.py", line 57, in run > result = self.fn(*self.args, **self.kwargs) > File "/usr/lib/python3/dist-packages/swiftclient/multithreading.py", > line 201, in conn_fn > return fn(*conn_args, **kwargs) > File "/usr/lib/python3/dist-packages/swiftclient/command_helpers.py", > line 24, in stat_account > headers = conn.head_account(headers=req_headers) > File "/usr/lib/python3/dist-packages/swiftclient/client.py", line 1902, > in head_account > return self._retry(None, head_account, headers=headers) > File "/usr/lib/python3/dist-packages/swiftclient/client.py", line 1856, > in _retry > rv = func(self.url, self.token, *args, > File "/usr/lib/python3/dist-packages/swiftclient/client.py", line 924, > in head_account > raise ClientException.from_response(resp, 'Account HEAD failed', body) > swiftclient.exceptions.ClientException: Account HEAD failed: > http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0 401 > Unauthorized (txn: txff99e52141174a038c7d1-006364d54d) > Account HEAD failed: > http://controller:8080/v1/AUTH_6c50811c608f44ebb32132f039fd3ac0 401 > Unauthorized > Failed Transaction ID: txff99e52141174a038c7d1-006364d54d > ? > > The output of "tail -f/var/log/*" is as follows > > Nov 4 13:49:42 controller proxy-server: - - 04/Nov/2022/13/49/42 HEAD > /v1/AUTH_6c50811c608f44ebb32132f039fd3ac0%3Fformat%3Djson HTTP/1.0 200 - > Swift - - - - tx967ed9b9c29b4828b646d-0063651876 - 0.0039 RL - 1667569782.696980715 > 1667569782.700921297 - > Nov 4 13:49:42 controller proxy-server: 192.168.100.10 192.168.100.10 > 04/Nov/2022/13/49/42 HEAD /v1/AUTH_6c50811c608f44ebb32132f039fd3a > c0%3Fformat%3Djson HTTP/1.0 401 - python-swiftclient-3.11.0 > gAAAAABjZRh2... 
- - - tx967ed9b9c29b4828b646d-0063651876 - 0.0055 - - > 1667569782.696368933 1667569782.701885939 - > Nov 4 13:49:44 controller proxy-server: 192.168.100.10 192.168.100.10 > 04/Nov/2022/13/49/44 HEAD /v1/AUTH_6c50811c608f44ebb32132f039fd3a > c0%3Fformat%3Djson HTTP/1.0 401 - python-swiftclient-3.11.0 > gAAAAABjZRh4... - - - tx4ac048f132544deaaac45-0063651878 - 0.0009 - - > 1667569784.155652046 1667569784.156536341 - > > *If you need/etc/swift/proxy-server.conf Please receive in the attachment* > > *thank you!!!* > > -- Clay Gerrard -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Nov 7 21:21:21 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 07 Nov 2022 13:21:21 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2022 Nov 9 at 1600 UTC (PLEASE NOTE: MEETING DAY/TIME CHANGED) Message-ID: <18453f8242c.b055cc70747277.7691206993487182794@ghanshyammann.com> Hello Everyone, The technical Committee's weekly meeting day and time have changed to every Wed, 16 UTC. The next weekly meeting is scheduled for 2022 Nov 9, at 1600 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, Nov 8 at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From calestyo at scientia.org Tue Nov 8 02:11:22 2022 From: calestyo at scientia.org (Christoph Anton Mitterer) Date: Tue, 08 Nov 2022 03:11:22 +0100 Subject: how to remove image with still used volumes Message-ID: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> Hey. I have instances with a volume that were once created from some image. Now the volume's OS was upgraded over time to the respective current releases, while the image is long obsolete and just uses up space. Is there a way to remove those images? It seems normal commands don't allow me, as long as there are volumes which were created from them. Thanks, Chris. From emccormick at cirrusseven.com Tue Nov 8 03:01:22 2022 From: emccormick at cirrusseven.com (Erik McCormick) Date: Mon, 7 Nov 2022 22:01:22 -0500 Subject: how to remove image with still used volumes In-Reply-To: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> References: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> Message-ID: On Mon, Nov 7, 2022 at 9:13 PM Christoph Anton Mitterer < calestyo at scientia.org> wrote: > Hey. > > I have instances with a volume that were once created from some image. > > Now the volume's OS was upgraded over time to the respective current > releases, while the image is long obsolete and just uses up space. > > Is there a way to remove those images? It seems normal commands don't > allow me, as long as there are volumes which were created from them. > > Instance disks are changes over time from a baseline. What this means is, you can't delete the origin without destroying all of its descendants. What you can do is set it to "hidden" so it won't show up in the default image list. You'll still be able to explicitly look for it though, and instances that depend on it can find it as well. Check the --hidden option here. https://docs.openstack.org/glance/train/admin/manage-images.html If you have older Openstack, you can make "visibility" private which should hide it from most people. I'm not sure how long --hidden has existed. > Thanks, > Chris. > > -Erik -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From calestyo at scientia.org Tue Nov 8 03:15:43 2022 From: calestyo at scientia.org (Christoph Anton Mitterer) Date: Tue, 08 Nov 2022 04:15:43 +0100 Subject: how to remove image with still used volumes In-Reply-To: References: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> Message-ID: <5d15cc523af296fa3936884981a59e4ef3ff3ada.camel@scientia.org> Hey Erik. On Mon, 2022-11-07 at 22:01 -0500, Erik McCormick wrote: > Instance disks are changes over?time from a baseline. What this means > is, you can't delete the origin without destroying all of its > descendants. But isn't that quite inefficient? If one never re-installs the images but only upgrades them over many years, any shared extents will be long gone and one just keeps the old copy of the original image around for no good. [The whole concept of images doesn't really fit my workflow, TBH. I simply have a number of existing systems I'd like to move into openstack... they already are installed and I'd just like to copy the raw image (of them) into a storage volume for instance - without any (OpenStack) images, especially as I'd have then one such (OpenStack) image for each server I want to move.] I even tried to circumvent this, attach a empty volume, copy the OS from the original volume to that and trying to remove the latter. But openstack won't let me for obscure reasons. Next I tried to simply use the copied-volume (which is then not based on an image) and create a new instance with that. While that works, the new instance then no longer boots via UEFI. Which is also a weird thing, I don't understand in OpenStack: Whether a VM boots from BIOS or UEFI, should be completely independent of any storage (volumes or images), however: Only(!) when I set --property hw_firmware_type=uefi while I create a image (and a volume/instance from that) the instance actually boots UEFI. When I set the same on either the server or the volume (when the image wasn't created so - or, as above, when no image was used at all)... it simply seems to ignore this and always uses SeaBIOS. I think I've experienced the same when I set the the hw_disk_bus to something else (like sata). Thanks, Chris. From adivya1.singh at gmail.com Tue Nov 8 04:34:14 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Tue, 8 Nov 2022 10:04:14 +0530 Subject: (openstack-ansible) Container installation in openstack In-Reply-To: References: Message-ID: hi Dmitry, Any input on this Adivya Singh Mon, Nov 7, 8:38 PM (13 hours ago) to Dmitriy hi Dmitry, I have added below stanza in openstack_user_config.yml magnum-infra_hosts: aio1: ip:172.29.236.100 magnum-compute_hosts: aio1: ip: 172.29.236.100 and also in conf.d magnum__hosts: aio1: ip: 172.29.236.100 but still when i am running the playbook for setup-hosts it is failing, Can you give me some hint Regards Adivya Singh On Wed, Nov 2, 2022 at 8:06 PM Dmitriy Rabotyagov wrote: > Ok, so for that in openstack_user_config.yml you will need to define > following groups: > * zun-infra_hosts - usually these are infra nodes as there only > api/proxy services will be located > * zun-compute_hosts - these hosts will be used by zun-compute and > kuryr for spawning containers. So usually it's a standalone hardware, > but maybe it can be co-located with nova-compute, I'm not absolutely > sure about that tbh. > > ??, 2 ????. 2022 ?. ? 04:27, Adivya Singh : > > > > Hello Dmitry, > > > > I was looking for Zun installation in OpenStack Xena version using > OpenStack Ansible. 
> > > > Regards > > Adivya Singh > > > > On Tue, Nov 1, 2022 at 12:06 AM Dmitriy Rabotyagov < > noonedeadpunk at gmail.com> wrote: > >> > >> Hi Adivya, > >> > >> Can you please elaborate more about what container service you are > >> thinking about? Is it Magnum or Zun or your question is more about how > >> to install all openstack services in containers? > >> > >> ??, 31 ???. 2022 ?. ? 19:34, Adivya Singh : > >> > > >> > Hi Team, > >> > > >> > Any input on this, to install container service in openstack using > ansible. > >> > > >> > standard global parametre > >> > > >> > Regards > >> > Adivya Singh > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Tue Nov 8 04:35:13 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Tue, 8 Nov 2022 10:05:13 +0530 Subject: how to remove image with still used volumes In-Reply-To: <5d15cc523af296fa3936884981a59e4ef3ff3ada.camel@scientia.org> References: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> <5d15cc523af296fa3936884981a59e4ef3ff3ada.camel@scientia.org> Message-ID: hi, Any input on this Regards Adivya Singh On Tue, Nov 8, 2022 at 8:50 AM Christoph Anton Mitterer < calestyo at scientia.org> wrote: > Hey Erik. > > On Mon, 2022-11-07 at 22:01 -0500, Erik McCormick wrote: > > Instance disks are changes over time from a baseline. What this means > > is, you can't delete the origin without destroying all of its > > descendants. > > But isn't that quite inefficient? If one never re-installs the images > but only upgrades them over many years, any shared extents will be long > gone and one just keeps the old copy of the original image around for > no good. > > [The whole concept of images doesn't really fit my workflow, TBH. I > simply have a number of existing systems I'd like to move into > openstack... they already are installed and I'd just like to copy the > raw image (of them) into a storage volume for instance - without any > (OpenStack) images, especially as I'd have then one such (OpenStack) > image for each server I want to move.] > > > I even tried to circumvent this, attach a empty volume, copy the OS > from the original volume to that and trying to remove the latter. > But openstack won't let me for obscure reasons. > > > > Next I tried to simply use the copied-volume (which is then not based > on an image) and create a new instance with that. > While that works, the new instance then no longer boots via UEFI. > > > Which is also a weird thing, I don't understand in OpenStack: > Whether a VM boots from BIOS or UEFI, should be completely independent > of any storage (volumes or images), however: > > Only(!) when I set --property hw_firmware_type=uefi while I create a > image (and a volume/instance from that) the instance actually boots > UEFI. > When I set the same on either the server or the volume (when the image > wasn't created so - or, as above, when no image was used at all)... it > simply seems to ignore this and always uses SeaBIOS. > > I think I've experienced the same when I set the the hw_disk_bus to > something else (like sata). > > > Thanks, > Chris. > > -------------- next part -------------- An HTML attachment was scrubbed... 
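For the UEFI point quoted just above: as far as I know, Nova takes the firmware hint from image metadata when it builds the guest, which matches the behaviour Christoph describes (for boot-from-volume instances the hint is copied into the volume's volume_image_metadata at volume-creation time, so changing it later on the image does not affect existing volumes). A minimal sketch of setting the properties on an existing image; the image ID is a placeholder:

```shell
# The hint has to live on the image the volume/instance is created from,
# not on the server or volume object.
openstack image set --property hw_firmware_type=uefi <IMAGE_ID>

# The disk bus hint mentioned in the thread works the same way:
openstack image set --property hw_disk_bus=sata <IMAGE_ID>
```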
URL: From adivya1.singh at gmail.com Tue Nov 8 04:36:21 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Tue, 8 Nov 2022 10:06:21 +0530 Subject: how to remove image with still used volumes In-Reply-To: References: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> <5d15cc523af296fa3936884981a59e4ef3ff3ada.camel@scientia.org> Message-ID: hi, you can directly delete from Database if they are obsolete Regards Adivya Singh On Tue, Nov 8, 2022 at 10:05 AM Adivya Singh wrote: > hi, > > Any input on this > > Regards > Adivya Singh > > On Tue, Nov 8, 2022 at 8:50 AM Christoph Anton Mitterer < > calestyo at scientia.org> wrote: > >> Hey Erik. >> >> On Mon, 2022-11-07 at 22:01 -0500, Erik McCormick wrote: >> > Instance disks are changes over time from a baseline. What this means >> > is, you can't delete the origin without destroying all of its >> > descendants. >> >> But isn't that quite inefficient? If one never re-installs the images >> but only upgrades them over many years, any shared extents will be long >> gone and one just keeps the old copy of the original image around for >> no good. >> >> [The whole concept of images doesn't really fit my workflow, TBH. I >> simply have a number of existing systems I'd like to move into >> openstack... they already are installed and I'd just like to copy the >> raw image (of them) into a storage volume for instance - without any >> (OpenStack) images, especially as I'd have then one such (OpenStack) >> image for each server I want to move.] >> >> >> I even tried to circumvent this, attach a empty volume, copy the OS >> from the original volume to that and trying to remove the latter. >> But openstack won't let me for obscure reasons. >> >> >> >> Next I tried to simply use the copied-volume (which is then not based >> on an image) and create a new instance with that. >> While that works, the new instance then no longer boots via UEFI. >> >> >> Which is also a weird thing, I don't understand in OpenStack: >> Whether a VM boots from BIOS or UEFI, should be completely independent >> of any storage (volumes or images), however: >> >> Only(!) when I set --property hw_firmware_type=uefi while I create a >> image (and a volume/instance from that) the instance actually boots >> UEFI. >> When I set the same on either the server or the volume (when the image >> wasn't created so - or, as above, when no image was used at all)... it >> simply seems to ignore this and always uses SeaBIOS. >> >> I think I've experienced the same when I set the the hw_disk_bus to >> something else (like sata). >> >> >> Thanks, >> Chris. >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From calestyo at scientia.org Tue Nov 8 04:38:44 2022 From: calestyo at scientia.org (Christoph Anton Mitterer) Date: Tue, 08 Nov 2022 05:38:44 +0100 Subject: how to remove image with still used volumes In-Reply-To: References: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> <5d15cc523af296fa3936884981a59e4ef3ff3ada.camel@scientia.org> Message-ID: <8250b3185bbbf0c2776215ec8aafc795db5ed707.camel@scientia.org> On Tue, 2022-11-08 at 10:06 +0530, Adivya Singh wrote: > you can directly delete from Database if they are obsolete Uhm... I guess that would require some form of admin rights (which I don't have on that cluster)? Thanks, Chris. 
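For completeness on the non-database route suggested earlier in this thread: hiding or restricting the obsolete image only needs rights on the image itself, not database access. A minimal sketch; the exact flag for hiding differs between clients and releases, so treat these spellings as examples to verify against --help:

```shell
# Newer releases expose the os_hidden field described in the linked Glance docs:
glance image-update <IMAGE_ID> --hidden True

# Without --hidden support, restricting visibility at least keeps the image
# out of shared listings:
openstack image set --private <IMAGE_ID>
```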
From noonedeadpunk at gmail.com Tue Nov 8 06:09:43 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Tue, 8 Nov 2022 07:09:43 +0100 Subject: (openstack-ansible) Container installation in openstack In-Reply-To: References: Message-ID: Hey, First, there's no such thing as magnum-compute_hosts, also I'm not sure what for magnum__hosts is. I'd say you can safely drop both of these and leave just magnum-infra_hosts. But that should not cause playbook failure. But without error setup-hosts.yml fails with it is not possible to say what is the reason for that. So would be great to get more information on the issue. ??, 8 ????. 2022 ?., 05:34 Adivya Singh : > hi Dmitry, > > Any input on this > > Adivya Singh > Mon, Nov 7, 8:38 PM (13 hours ago) > to Dmitriy > hi Dmitry, > > I have added below stanza in openstack_user_config.yml > > magnum-infra_hosts: > aio1: > ip:172.29.236.100 > magnum-compute_hosts: > aio1: > ip: 172.29.236.100 > > and also in conf.d > > magnum__hosts: > aio1: > ip: 172.29.236.100 > > but still when i am running the playbook for setup-hosts it is failing, > Can you give me some hint > > Regards > Adivya Singh > > > On Wed, Nov 2, 2022 at 8:06 PM Dmitriy Rabotyagov > wrote: > >> Ok, so for that in openstack_user_config.yml you will need to define >> following groups: >> * zun-infra_hosts - usually these are infra nodes as there only >> api/proxy services will be located >> * zun-compute_hosts - these hosts will be used by zun-compute and >> kuryr for spawning containers. So usually it's a standalone hardware, >> but maybe it can be co-located with nova-compute, I'm not absolutely >> sure about that tbh. >> >> ??, 2 ????. 2022 ?. ? 04:27, Adivya Singh : >> > >> > Hello Dmitry, >> > >> > I was looking for Zun installation in OpenStack Xena version using >> OpenStack Ansible. >> > >> > Regards >> > Adivya Singh >> > >> > On Tue, Nov 1, 2022 at 12:06 AM Dmitriy Rabotyagov < >> noonedeadpunk at gmail.com> wrote: >> >> >> >> Hi Adivya, >> >> >> >> Can you please elaborate more about what container service you are >> >> thinking about? Is it Magnum or Zun or your question is more about how >> >> to install all openstack services in containers? >> >> >> >> ??, 31 ???. 2022 ?. ? 19:34, Adivya Singh : >> >> > >> >> > Hi Team, >> >> > >> >> > Any input on this, to install container service in openstack using >> ansible. >> >> > >> >> > standard global parametre >> >> > >> >> > Regards >> >> > Adivya Singh >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From eki+openstack at uperate.fi Tue Nov 8 06:28:49 2022 From: eki+openstack at uperate.fi (Erkki Peura) Date: Tue, 8 Nov 2022 08:28:49 +0200 Subject: how to remove image with still used volumes In-Reply-To: <8250b3185bbbf0c2776215ec8aafc795db5ed707.camel@scientia.org> References: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> <5d15cc523af296fa3936884981a59e4ef3ff3ada.camel@scientia.org> <8250b3185bbbf0c2776215ec8aafc795db5ed707.camel@scientia.org> Message-ID: Hi, yes, accessing database requires some form of admin rights. IMO it's bad idea delete images from database unless you clean up related storage (Glance store) also, just deleting related db records leaves actual image file untouched Br, - Eki - On Tue, 8 Nov 2022 at 06:46, Christoph Anton Mitterer wrote: > > On Tue, 2022-11-08 at 10:06 +0530, Adivya Singh wrote: > > you can directly delete from Database if they are obsolete > > Uhm... 
I guess that would require some form of admin rights (which I > don't have on that cluster)? > > > Thanks, > Chris. > From skaplons at redhat.com Tue Nov 8 08:19:40 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 08 Nov 2022 09:19:40 +0100 Subject: [neutron] CI meeting 8th Nov cancelled Message-ID: <12254517.3RSvA1TmNm@p1> Hi, I can't chair CI meeting this week too. As there is nothing urgent really to speak about I talked with Rodolfo and we agreed to cancel this week's meeting. See You on the meeting next week (Nov 15th) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From hanguangyu2 at gmail.com Tue Nov 8 08:48:47 2022 From: hanguangyu2 at gmail.com (=?UTF-8?B?6Z+p5YWJ5a6H?=) Date: Tue, 8 Nov 2022 16:48:47 +0800 Subject: [Keystone] Confusion about the admin role Message-ID: Hi, I'd like to ask some questions about the admin role. When I grant the admin role to a user in a project, that user can also get the admin role for other projects in the same domain. If I do the following? ```shell openstack project create --domain default --description "Demo Project" myproject openstack user create --domain default --password-prompt myuser openstack role add --project myproject --user myuser admin ``` Then, the myuser user has the permission to grant himself the admin role of another project in the same domain. I used to understand that 'openstack role add --project myproject --user myuser admin' was simply granted to myuser as admin within the myproject project, but now I find that This is equivalent to having the admin role for the entire domain. Can I ask the design idea here, or what I think is wrong? Thanks, Han Guangyu From smooney at redhat.com Tue Nov 8 09:07:40 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 08 Nov 2022 09:07:40 +0000 Subject: [Keystone] Confusion about the admin role In-Reply-To: References: Message-ID: <1d21319bba85c16e2e911dfb5106cce202f5b7d0.camel@redhat.com> On Tue, 2022-11-08 at 16:48 +0800, ??? wrote: > Hi, > > I'd like to ask some questions about the admin role. > > When I grant the admin role to a user in a project, that user can also > get the admin role for other projects in the same domain. > If I do the following? > ```shell > openstack project create --domain default --description "Demo Project" myproject > openstack user create --domain default --password-prompt myuser > openstack role add --project myproject --user myuser admin > ``` > Then, the myuser user has the permission to grant himself the admin > role of another project in the same domain. today openstack only has gloabl admin. we do not have project or domain scoped admin currently. so this is the expected behaivor. > > I used to understand that 'openstack role add --project myproject > --user myuser admin' was simply granted to myuser as admin within the > myproject project, but now I find that This is equivalent to having > the admin role for the entire domain. yes it is > > Can I ask the design idea here, or what I think is wrong? no so the admin role is cloud wide. 
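Since the admin role is effectively cloud-wide as explained above, the usual way to give a user rights inside just one project is to assign a non-admin role there instead. A minimal sketch, assuming the default member role exists in the deployment (older setups may name it _member_):

```shell
# Project-scoped, non-admin access for the user from the example above
openstack role add --project myproject --user myuser member

# Verify what the user ended up with
openstack role assignment list --user myuser --project myproject --names
```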
> > Thanks, > Han Guangyu > From hanguangyu2 at gmail.com Tue Nov 8 09:20:51 2022 From: hanguangyu2 at gmail.com (=?UTF-8?B?6Z+p5YWJ5a6H?=) Date: Tue, 8 Nov 2022 17:20:51 +0800 Subject: [Keystone] Confusion about the admin role In-Reply-To: <1d21319bba85c16e2e911dfb5106cce202f5b7d0.camel@redhat.com> References: <1d21319bba85c16e2e911dfb5106cce202f5b7d0.camel@redhat.com> Message-ID: Hi Sean, Thank you so much, I get it. Han Sean Mooney ?2022?11?8??? 17:08??? > > On Tue, 2022-11-08 at 16:48 +0800, ??? wrote: > > Hi, > > > > I'd like to ask some questions about the admin role. > > > > When I grant the admin role to a user in a project, that user can also > > get the admin role for other projects in the same domain. > > If I do the following? > > ```shell > > openstack project create --domain default --description "Demo Project" myproject > > openstack user create --domain default --password-prompt myuser > > openstack role add --project myproject --user myuser admin > > ``` > > Then, the myuser user has the permission to grant himself the admin > > role of another project in the same domain. > today openstack only has gloabl admin. > > we do not have project or domain scoped admin currently. > so this is the expected behaivor. > > > > I used to understand that 'openstack role add --project myproject > > --user myuser admin' was simply granted to myuser as admin within the > > myproject project, but now I find that This is equivalent to having > > the admin role for the entire domain. > yes it is > > > > Can I ask the design idea here, or what I think is wrong? > no so the admin role is cloud wide. > > > > Thanks, > > Han Guangyu > > > From hanguangyu2 at gmail.com Tue Nov 8 09:38:36 2022 From: hanguangyu2 at gmail.com (=?UTF-8?B?6Z+p5YWJ5a6H?=) Date: Tue, 8 Nov 2022 17:38:36 +0800 Subject: [oslo] New driver for oslo.messaging In-Reply-To: References: <4eddcca5.3347.18432712271.Coremail.wangkuntian1994@163.com> Message-ID: Hi, Wang and me are colleagues in a team. I would like to ask, put aside the working process of NATS and look at Rocketmq independently, if we want to do the work of adding Rocketmq[1] drivers, is the community welcome? We have already seen the oslo driver policy[2] in the documentation. I also want to ask, if the community is willing to accept the Rocketmq driver, whether we need to do other efforts besides the development task itself. For example, I see that "Must have at least two individuals from the community committed to triaging and fixing bugs, and responding to test failures in a timely manner". I want to ask: ?1?Is the current policy still ?2?Are there community members willing to take responsibility for this, or is it okay if we commit to triaging and fixing bugs, and responding to test failures in a timely manner by ourselves Cheers, Han [1] https://github.com/apache/rocketmq [2] https://docs.openstack.org/oslo.messaging/latest/contributor/supported-messaging-drivers.html Christian Rohmann ?2022?11?2??? 18:20??? > > On 01/11/2022 10:06, ??? wrote: > > I want to develop a new driver for oslo.messaging to use rocketmq in openstack environment. I wonder if the community need this new driver? > > > There is a larger discussion around adding a driver for NATS (https://lists.openstack.org/pipermail/openstack-discuss/2022-August/030179.html). > Maybe the reasoning, discussion and also the PoC there is helpful to answer your question. I suppose you are also "not happy" with using RabbitMQ? 
> > > > Regards > > > Christian From ygk.kmr at gmail.com Tue Nov 8 11:03:18 2022 From: ygk.kmr at gmail.com (Gk Gk) Date: Tue, 8 Nov 2022 16:33:18 +0530 Subject: Need assistance Message-ID: Hi All, I have a OSA setup. I am trying to trace the control flow of nova-api using pdb in the file "/openstack/venvs/nova-20.2.1/lib/python3.6/site-packages/nova/objects/instance.py". My goal is to trace the flow for "nova list --all" command. I am launching the nova-api service manually from the command line as follows: #/openstack/venvs/uwsgi-20.2.1-python3/bin/uwsgi --ini /etc/uwsgi/nova-api-os-compute.ini --workers 1 I am executing "nova list --all" command in another terminal. I have inserted pdb in instance.py as follows: ---- @base.remotable_classmethod def get_all(cls, context, expected_attrs=None): import pdb; pdb.set_trace() """Returns all instances on all nodes.""" db_instances = db.instance_get_all( context, columns_to_join=_expected_cols(expected_attrs)) return _make_instance_list(context, cls(), db_instances, expected_attrs) --- But when I fire the nova list --all command, I see no pdb prompt being shown in the nova-api window. Can anyone help me how to use the pdb to trace the flow of control for "nova list --all" command ? Thanks Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From calestyo at scientia.org Tue Nov 8 12:44:46 2022 From: calestyo at scientia.org (Christoph Anton Mitterer) Date: Tue, 08 Nov 2022 13:44:46 +0100 Subject: how to remove image with still used volumes In-Reply-To: References: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> <5d15cc523af296fa3936884981a59e4ef3ff3ada.camel@scientia.org> <8250b3185bbbf0c2776215ec8aafc795db5ed707.camel@scientia.org> Message-ID: Hey. So is there no way to get some raw data as a volume (without any images) into openstack and boot via UEFI from it? Thanks, Chris. From wodel.youchi at gmail.com Tue Nov 8 13:44:15 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 8 Nov 2022 14:44:15 +0100 Subject: [kolla-ansible][Yoga] Install with self-signed certificate Message-ID: Hi, To deploy Openstack with a self-signed certificate, the documentation says to generate the certificates using kolla-ansible certificates, to configure the support of TLS in globals.yml and to deploy. I am facing a problem, my old certificate has expired, I want to use a self-signed certificate. I backported my servers to an older date, then generated a self-signed certificate using kolla, but the deploy/reconfigure won't work, they say : self._sslobj.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", line 648, in do_handshakeself._sslobj.do_handshake()\nssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED certificate verify failed PS : in my globals.yml i have : *kolla_verify_tls_backend: "yes"* Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Tue Nov 8 13:50:26 2022 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 8 Nov 2022 10:50:26 -0300 Subject: [Yoga][Cloudkitty] Some projects have their rate to 0 on some services In-Reply-To: References: Message-ID: What is the CloudKitty fetcher that you are using? 
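For reference, the scope fetcher being asked about is selected in cloudkitty.conf; a minimal sketch assuming a Gnocchi-backed deployment like the one described in the quoted message below (check the option names against the CloudKitty release in use):

```ini
# /etc/cloudkitty/cloudkitty.conf (illustrative)
[fetcher]
# fetcher used by the processor to discover the scopes (projects) to rate;
# other backends include keystone, prometheus and source
backend = gnocchi
```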
On Thu, Nov 3, 2022 at 3:11 PM wodel youchi wrote: > Hi, > > I deployed Cloudkitty with the following metrics.yml file : > metrics: > cpu: > unit: instance > alt_name: instance > groupby: > - id > - user_id > - project_id > metadata: > - flavor_name > - flavor_id > - vcpus > mutate: NUMBOOL > extra_args: > aggregation_method: mean > resource_type: instance > force_granularity: 300 > > image.size: > unit: MiB > factor: 1/1048576 > groupby: > - id > - user_id > - project_id > metadata: > - container_format > - disk_format > extra_args: > aggregation_method: mean > resource_type: image > force_granularity: 300 > > > > > > > > > > > > > * volume.size: unit: GiB groupby: - id - user_id - > project_id metadata: - volume_type extra_args: > aggregation_method: mean resource_type: volume force_granularity: > 300* > > > > I created a service for volume.size following the example here : > https://docs.openstack.org/cloudkitty/yoga/user/rating/hashmap.html > > I added the user cloudkitty to the admin project and to another project > named Project01. > > When showing the rates I have 0 rate on the Project01. > > > For example : > executing this command : * openstack rating dataframes get | grep > volume.size* > > This volume belongs to an instance in the Admin project, as you can see > rating is 4.5: > | 2022-11-02T18:25:00 | 2022-11-02T18:30:00 | > 31bfb5bcf7b7413da269d7a35a2fe69a |* [{'rating': '4.5', 'service': > 'volume.size*', 'desc': {'volume_type': > '246853e3-1215-4147-aef2-54012221ecc9', 'id': ' > *07811807-474a-4eb5-91b5-ce2dcdd7be26*', 'project_id': > '31bfb5bcf7b7413da269d7a35a2fe69a', 'user_id': > '2a3f2478e334473e85527102b76f7a2e'}, 'volume': '3.0', 'rate_value': > '1.5000'}, {'rating': '4.5', 'service': 'volume.size', 'desc': > {'volume_type': '246853e3-1215-4147-aef2-54012221ecc9', 'id': > '8a345711-0486-4733-b8bc-fd1966678aec', 'project_id': > '31bfb5bcf7b7413da269d7a35a2fe69a', 'user_id': > '2a3f2478e334473e85527102b76f7a2e'}, 'volume': '3.0', 'rate_value': > '1.5000'}, {'rating': '4.5', 'service': 'volume.size', 'desc': > {'volume_type': '246853e3-1215-4147-aef2-54012221ecc9', 'id': > 'afd22819-8faa-47ee-8c09-75290d2cf18e', 'project_id': > '31bfb5bcf7b7413da269d7a35a2fe69a', 'user_id': > '2a3f2478e334473e85527102b76f7a2e'}, 'volume': '3.0', 'rate_value': > '1.5000'}] > > This volume belongs to an instance in the Project01 project, as you can > see rating is 0.0 : > | 2022-11-03T10:35:00 | 2022-11-03T10:40:00 | > 2e80eb3b3d344ef9993065ce689395d9 | *[{'rating': '0.0'*, 'service': > 'volume.size', 'desc': {'volume_type': > '246853e3-1215-4147-aef2-54012221ecc9', 'id': ' > *1c396d46-8954-4e8c-b3e8-8e5e4eb6aba4*', 'project_id': > '2e80eb3b3d344ef9993065ce689395d9', 'user_id': > 'd9e5696e99954ae1ac87db9cca82c839'}, 'volume': '20.0', 'rate_value': > '0.0000'}] > > I don't understand why it works for one and not the other? 
> > More info : > (yogavenv) [deployer at rscdeployer ~]$ openstack rating hashmap service list > +------------------------+--------------------------------------+ > | Name | Service ID | > +------------------------+--------------------------------------+ > | instance | 06e17b49-8cd4-4cb9-8965-cb929ee12909 | > | network.incoming.bytes | 634069b2-ca42-4a28-8778-ac69144fcc23 | > | network.outgoing.bytes | 6c1fdaa7-15cb-41b4-be0e-109d64810dde | > | volume.size | b6934ab1-8326-4281-89b9-f80294430321 | > | image.size | d3652e08-8645-45fd-b7db-b710ae716876 | > +------------------------+--------------------------------------+ > > > (yogavenv) [deployer at rscdeployer ~]$ openstack rating hashmap mapping > list -s b6934ab1-8326-4281-89b9-f80294430321 > > +--------------------------------------+-------+--------------------------------+------+----------+--------------------------------------+----------+------------+ > | Mapping ID | Value | Cost > | Type | Field ID | Service ID |* Group > ID | Project ID* | > > +--------------------------------------+-------+--------------------------------+------+----------+--------------------------------------+----------+------------+ > | f81aea1e-0651-4c0a-b043-496fdd892635 | None | > 1.5000000000000000000000000000 | flat | None | > b6934ab1-8326-4281-89b9-f80294430321 | *None | None * | > > +--------------------------------------+-------+--------------------------------+------+----------+--------------------------------------+----------+------------+ > > Regards. > -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Tue Nov 8 14:17:44 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Tue, 8 Nov 2022 19:47:44 +0530 Subject: (Openstack-Keystone)Regarding Authentication issue of one user while login to Open Stack using AD password In-Reply-To: References: <20221104082922.Horde.Za8eX6p6eb8iA9Cj-IDrSM7@webmail.nde.ag> <20221104190259.Horde.cMQ3VNeIOR8HwwUU0R9icD6@webmail.nde.ag> Message-ID: hi Eugen, I checked and I did not find a duplicate entry from the AD side. What i tried, was delete all the resources which are with the user, and delete the project id and re register the user again, but it does not work Also i tried to delete the project for the user id, and try to login for that user name but same error What i found is still there is a entry in Keystone Database , in a non_user local table for the user. Can i manually delete from the Database, or is there any way from Open stack to delete a non_local user Regards Adivya Singh On Mon, Nov 7, 2022 at 8:16 PM Adivya Singh wrote: > Ok, I will check. > > On Sat, Nov 5, 2022 at 12:33 AM Eugen Block wrote: > >> I know nothing about AD, I?m afraid. But where exactly do you see that >> message? Is it in keystone or AD? Anyway, you seem to have a duplicate >> entry (somewhere), so check the keystone database and the AD entries >> and compare (with working users). >> >> Zitat von Adivya Singh : >> >> > Hi Eugen, >> > >> > I see the below error while authenticating >> > Conflict occurred attempting to store nonlocal_user - Duplicate entry >> found >> > with name at domain ID >> > >> > How can we fix this? 
>> > >> > Regards >> > Adivya Singh >> > >> > On Fri, Nov 4, 2022 at 6:13 PM Adivya Singh >> wrote: >> > >> >> Hi Eugen, >> >> >> >> All the users are AD based authentication, but this user only facing a >> >> problem >> >> Trying to Find out the AD Team , what happened all of a sudden for this >> >> user >> >> >> >> Regards >> >> Adivya Singh >> >> >> >> R >> >> >> >> >> >> On Fri, Nov 4, 2022 at 2:06 PM Eugen Block wrote: >> >> >> >>> I assume this isn't the only user trying to login from AD, correct? >> >>> Then compare the properties/settings between a working and the >> >>> non-working user, you should probably find something. Also enable >> >>> debug logs in keystone to find more details. And by "all of a sudden" >> >>> you mean that it worked before? So what changed between then and now? >> >>> >> >>> Zitat von Adivya Singh : >> >>> >> >>> > Hi Team, >> >>> > >> >>> > There is one issue , where a user is getting " Authenticated >> Failure" >> >>> all >> >>> > of a sudden, and this user is the only user who is facing this >> problem. >> >>> > >> >>> > I tried to disable and enable the project if, Check the logs but do >> not >> >>> > found anything related to Keystone authentication >> >>> > >> >>> > Delete the Project id and Create it again , Results are same , Any >> >>> insights >> >>> > what i can do more to fix this issue >> >>> > >> >>> > Regards >> >>> > Adivya Singh >> >>> >> >>> >> >>> >> >>> >> >>> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Nov 8 14:37:20 2022 From: eblock at nde.ag (Eugen Block) Date: Tue, 08 Nov 2022 14:37:20 +0000 Subject: (Openstack-Keystone)Regarding Authentication issue of one user while login to Open Stack using AD password In-Reply-To: References: <20221104082922.Horde.Za8eX6p6eb8iA9Cj-IDrSM7@webmail.nde.ag> <20221104190259.Horde.cMQ3VNeIOR8HwwUU0R9icD6@webmail.nde.ag> Message-ID: <20221108143720.Horde.LrpYhZhxhp6TvAAFqXANRoX@webmail.nde.ag> That table is populated by keystone, I'm not sure if modifying the database is the right approach here. If you check control01:~ # openstack user list --long --domain you should see the affected (non-local) user there, correct? I'm not sure if it's a good idea and what the consequences would be if you tried to delete the user with the openstack cli. Depending on the configs keystone could actually delete it from the AD backend, but as I said, I'm not sure what will happen, so be careful. Just to compare, what do your keystone configs look like, especially these two sections: [assignment] [identity] We use LDAP as backend for non-local users (but I'm not an admin) so it should be a similar setup. Our identity section looks like this: [identity] domain_specific_drivers_enabled = true domain_configurations_from_database = true driver = sql and this is the assignment section: [assignment] driver = sql Do you know any history with this specific user why it stopped working? Zitat von Adivya Singh : > hi Eugen, > > I checked and I did not find a duplicate entry from the AD side. > > What i tried, was delete all the resources which are with the user, and > delete the project id and re register the user again, but it does not work > > Also i tried to delete the project for the user id, and try to login for > that user name but same error > > What i found is still there is a entry in Keystone Database , in a non_user > local table for the user. 
> > Can i manually delete from the Database, or is there any way from Open > stack to delete a non_local user > > Regards > Adivya Singh > > On Mon, Nov 7, 2022 at 8:16 PM Adivya Singh wrote: > >> Ok, I will check. >> >> On Sat, Nov 5, 2022 at 12:33 AM Eugen Block wrote: >> >>> I know nothing about AD, I?m afraid. But where exactly do you see that >>> message? Is it in keystone or AD? Anyway, you seem to have a duplicate >>> entry (somewhere), so check the keystone database and the AD entries >>> and compare (with working users). >>> >>> Zitat von Adivya Singh : >>> >>> > Hi Eugen, >>> > >>> > I see the below error while authenticating >>> > Conflict occurred attempting to store nonlocal_user - Duplicate entry >>> found >>> > with name at domain ID >>> > >>> > How can we fix this? >>> > >>> > Regards >>> > Adivya Singh >>> > >>> > On Fri, Nov 4, 2022 at 6:13 PM Adivya Singh >>> wrote: >>> > >>> >> Hi Eugen, >>> >> >>> >> All the users are AD based authentication, but this user only facing a >>> >> problem >>> >> Trying to Find out the AD Team , what happened all of a sudden for this >>> >> user >>> >> >>> >> Regards >>> >> Adivya Singh >>> >> >>> >> R >>> >> >>> >> >>> >> On Fri, Nov 4, 2022 at 2:06 PM Eugen Block wrote: >>> >> >>> >>> I assume this isn't the only user trying to login from AD, correct? >>> >>> Then compare the properties/settings between a working and the >>> >>> non-working user, you should probably find something. Also enable >>> >>> debug logs in keystone to find more details. And by "all of a sudden" >>> >>> you mean that it worked before? So what changed between then and now? >>> >>> >>> >>> Zitat von Adivya Singh : >>> >>> >>> >>> > Hi Team, >>> >>> > >>> >>> > There is one issue , where a user is getting " Authenticated >>> Failure" >>> >>> all >>> >>> > of a sudden, and this user is the only user who is facing this >>> problem. >>> >>> > >>> >>> > I tried to disable and enable the project if, Check the logs but do >>> not >>> >>> > found anything related to Keystone authentication >>> >>> > >>> >>> > Delete the Project id and Create it again , Results are same , Any >>> >>> insights >>> >>> > what i can do more to fix this issue >>> >>> > >>> >>> > Regards >>> >>> > Adivya Singh >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> From eblock at nde.ag Tue Nov 8 15:09:34 2022 From: eblock at nde.ag (Eugen Block) Date: Tue, 08 Nov 2022 15:09:34 +0000 Subject: [keystone][cache] How to tune role cache In-Reply-To: <20221026074417.Horde.2xZbFrscSR34uUqXAQy5PBQ@webmail.nde.ag> Message-ID: <20221108150934.Horde.FBhcfFPmn5ZeQuGkQ8s2DPg@webmail.nde.ag> Does anyone have a comment on this? I can't imagine that creating the same project within a short period of time is a corner case. How do others deal with this? Zitat von Eugen Block : > Hi *, > > one of our customers has two almost identical clouds (Victoria), the > only difference is that one of them has three control nodes (HA via > pacemaker) and the other one only one control node. They use > terraform to deploy lots of different k8s clusters and other stuff. > In the HA cloud they noticed keystone errors when they purged a > project (cleanly) and started the redeployment immediately after > that. We did some tests to find out which exact keystone cache it is > and it seems to be the role cache (default 600 seconds) which leads > to an error in terraform, it reports that the project was not found > and refers to the previous ID of the project. 
> The same deployment seems to work in the single-control environment > without these errors, it just works although the cache is enabled as > well. > I already tried to reduce the cache_time to 30 seconds but that > doesn't help (although it takes more than 30 seconds until terraform > is ready after the prechecks). But the downside of disabling the > role cache entirely leads to significantly longer response times > when using the dashboard or querying the APIs. > Is there any way to tune the role cache in a way so we could have > both a reasonable performance as well as being able to redeploy > projects without a "sleep 600"? > Any comments or recommendations are appreciated! > > Regards, > Eugen From grasza at redhat.com Tue Nov 8 08:34:16 2022 From: grasza at redhat.com (Grzegorz Grasza) Date: Tue, 8 Nov 2022 09:34:16 +0100 Subject: [barbican] Meeting canceled today (2022-11-08) Message-ID: Hi Team, I'm canceling the meeting today, since I'm attending an internal conference/meetup. Looking at the agenda, I see there is the storyboard issue we need to discuss, but this can surely wait another week. / Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Tue Nov 8 17:09:01 2022 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 8 Nov 2022 12:09:01 -0500 Subject: how to remove image with still used volumes In-Reply-To: <5d15cc523af296fa3936884981a59e4ef3ff3ada.camel@scientia.org> References: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> <5d15cc523af296fa3936884981a59e4ef3ff3ada.camel@scientia.org> Message-ID: On Mon, Nov 7, 2022 at 10:15 PM Christoph Anton Mitterer < calestyo at scientia.org> wrote: > Hey Erik. > > On Mon, 2022-11-07 at 22:01 -0500, Erik McCormick wrote: > > Instance disks are changes over time from a baseline. What this means > > is, you can't delete the origin without destroying all of its > > descendants. > > But isn't that quite inefficient? If one never re-installs the images > but only upgrades them over many years, any shared extents will be long > gone and one just keeps the old copy of the original image around for > no good. > I suppose you could consider it inefficient over a very long term in that you have a source image taking up storage that has very little resemblance to the instances that were spawned from it. However, what you're running in to here is the "pets vs. cattle" argument. Openstack is a *cloud* platform, not a virtualization platform. It is built for cattle. Long-lived instances are not what it's targeted to. That being said, it deals with them just fine. You simply have to accept you're going to end up with these relics. If you're able to nuke and recreate instances frequently and not upgrade them over years, you end up using far less storage and have instances that can quickly migrate around if you're using local storage. > > [The whole concept of images doesn't really fit my workflow, TBH. I > simply have a number of existing systems I'd like to move into > openstack... they already are installed and I'd just like to copy the > raw image (of them) into a storage volume for instance - without any > (OpenStack) images, especially as I'd have then one such (OpenStack) > image for each server I want to move.] > You can import an existing disk (after some conversion depending on your source hypervisor) into a Ceph-backed Cinder volume and boot from it just fine. 
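As a rough sketch of that import path (image, volume and server names below are placeholders, and option spellings can vary a little between releases), the CLI sequence might look something like:

    # convert the source disk to raw first if it came from another hypervisor
    qemu-img convert -p -O raw old-server.vmdk old-server.raw

    # upload it to Glance, then build a Cinder volume from that image
    openstack image create --disk-format raw --container-format bare \
        --file old-server.raw old-server-import
    openstack volume create --image old-server-import --size 20 old-server-root

    # flag the volume bootable and boot from it
    openstack volume set --bootable old-server-root
    openstack server create --flavor m1.small --network mynet \
        --volume old-server-root old-server

A volume created from an image is normally flagged bootable already; the explicit "volume set --bootable" step is just the CLI equivalent of the checkbox mentioned next.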
You have to make sure to tick the box that tells it it's bootable, but otherwise should be fine. I even tried to circumvent this, attach a empty volume, copy the OS > from the original volume to that and trying to remove the latter. > But openstack won't let me for obscure reasons. > > > > Next I tried to simply use the copied-volume (which is then not based > on an image) and create a new instance with that. > While that works, the new instance then no longer boots via UEFI. > > > Which is also a weird thing, I don't understand in OpenStack: > Whether a VM boots from BIOS or UEFI, should be completely independent > of any storage (volumes or images), however: > > Only(!) when I set --property hw_firmware_type=uefi while I create a > image (and a volume/instance from that) the instance actually boots > UEFI. > When I set the same on either the server or the volume (when the image > wasn't created so - or, as above, when no image was used at all)... it > simply seems to ignore this and always uses SeaBIOS. > > I think I've experienced the same when I set the the hw_disk_bus to > something else (like sata). > > Those properties you're setting on images are simply being passed to nova when it boots the instance. You should be able to specify them on a command-line boot from a volume. For your conversion purposes, you could check out virt-v2v. I used that to convert a bunch of old vmware instances to KVM and import them into Openstack. It was slow but worked pretty well. > > Thanks, > Chris. > -Erik -------------- next part -------------- An HTML attachment was scrubbed... URL: From jay at gr-oss.io Tue Nov 8 22:33:03 2022 From: jay at gr-oss.io (Jay Faulkner) Date: Tue, 8 Nov 2022 14:33:03 -0800 Subject: [ironic][release] Bugfix branch status and cleanup w/r/t zuul-config-errors In-Reply-To: References: Message-ID: Does anyone from the Releases team want to chime in on the best way to execute this kind of change? -JayF On Wed, Nov 2, 2022 at 7:02 AM Jay Faulkner wrote: > > > On Wed, Nov 2, 2022 at 3:04 AM Dmitry Tantsur wrote: > >> Hi Jay, >> >> On Tue, Nov 1, 2022 at 8:17 PM Jay Faulkner wrote: >> >>> Hey all, >>> >>> I've been looking into the various zuul config errors showing up for >>> Ironic-program branches. Almost all of our old bugfix branches are in the >>> list. Additionally, not properly retiring the bugfix branches leads to an >>> ever-growing list of branches which makes it a lot more difficult, for >>> contributors and operators alike, to tell which ones are currently >>> supported. >>> >> >> I'd like to see the errors. We update Zuul configuration manually for >> each bugfix branch, mapping appropriate branches for other projects >> (devstack, nova, etc). It's possible that we always overlook a few jobs, >> which causes Zuul to be upset (but quietly upset, so we don't notice). >> >> > > The errors show up in https://zuul.opendev.org/t/openstack/config-errors > -- although they seem to be broken this morning. Most of them are older > bugfix branches, ones that are out of support, that have the `Queue: > Ironic` param that's no longer supported. I am not in favor of anyone going > to dead bugfix branches and fixing CI; instead we should retire the ones > out of use. > > >> >>> I've put together a document describing the situation as it is now, and >>> my proposal: >>> https://etherpad.opendev.org/p/IronicBugfixBranchCleanup >>> >> >> Going with the "I would like to retire" would cause us so much trouble >> that we'll have to urgently create a downstream mirror of them. 
Once we do >> this, using upstream bugfix branches at all will be questionable. >> Especially bugfix/19.0 (and corresponding IPA/inspector branches) is used >> in a very actively maintained release. >> >> > > Then we won't; but we do need to think about what timeline we can talk > about upstream for getting a cadence for getting these retired out, just > like we have a cadence for getting them cut every two months. I'll revise > the list and remove the "I would like to retire" section (move it to > keep-em-up). > > >> >>> Essentially, I think we need to: >>> - identify bugfix branches to cleanup (I've done this in the above >>> etherpad, but some of the ) >>> - clean them up (the next step) >>> - update Ironic policy to set a regular cadence for when to retire >>> bugfix branches, and encode the process for doing so >>> >>> This means there are two overall questions to answer in this email: >>> 1) Mechanically, what's the process for doing this? I don't believe the >>> existing release tooling will be useful for this, but I'm not 100% sure. >>> I've pulled (in the above etherpad and a local spreadsheet) the last SHA >>> for each branch; so we should be able to EOL these branches similarly to >>> how we EOL stable branches; except manually instead of with tooling. Who is >>> going to do this work? (I'd prefer releases team continue to hold the keys >>> to do this; but I understand if you don't want to take on this manual work). >>> >> >> EOL tags will be created by the release team, yes. I don't think we can >> get the keys without going "independent". >> >> > > It's a gerrit ACL you can enable to give other people access to tags; but > like I said, I don't want that access anyway :). > > >> >>> 2) What's the pattern for Ironic to adopt regarding these branches? We >>> just need to write down the expected lifecycle and enforce it -- so we >>> prevent being this deep into "branch debt" in the future. >>> >> >> With my vendor's (red) hat on, I'd prefer to have a dual approach: the >> newest branches are supported by the community (i.e. us all), the oldest - >> by vendors who need them (EOLed if nobody volunteers). I think you already >> have a list of branches that OCP uses? Feel free to point Riccardo, Iury or >> myself at any issues with them. >> >> > That's not really an option IMO. These branches exist in the upstream > community, and are seen by upstream contributors and operators. If they're > going to live here; they need to have some reasonable documentation about > what folks should expect out of them and efforts being put towards them. > Even if the documentation is "bugfix/1.2 is maintained as long as Product A > 1.2 is maintained", that's better than leaving the community guessing about > what these are used for, and why some are more-supported than others. > > -Jay > > > Dmitry >> >> >>> >>> >>> What do folks think? >>> >>> - >>> Jay Faulkner >>> >> >> >> -- >> >> Red Hat GmbH , Registered seat: Werner von Siemens Ring 12, D-85630 Grasbrunn, Germany >> Commercial register: Amtsgericht Muenchen/Munich, HRB 153243,Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Wed Nov 9 00:09:06 2022 From: melwittt at gmail.com (melanie witt) Date: Tue, 8 Nov 2022 16:09:06 -0800 Subject: Need assistance In-Reply-To: References: Message-ID: On Tue Nov 08 2022 03:03:18 GMT-0800 (Pacific Standard Time), Gk Gk wrote: > Hi All, > > I have a OSA setup. 
I am trying to trace the control flow of nova-api > using pdb in the file > "/openstack/venvs/nova-20.2.1/lib/python3.6/site-packages/nova/objects/instance.py". > > My goal is to trace the flow for "nova list --all" command. I am > launching the nova-api service? manually from the command line as follows: > > #/openstack/venvs/uwsgi-20.2.1-python3/bin/uwsgi --ini > /etc/uwsgi/nova-api-os-compute.ini ? --workers 1 > > I am executing "nova list --all" command in another terminal.? I have > inserted pdb in instance.py as follows: > > ---- > ? ? @base.remotable_classmethod > ? ? def get_all(cls, context, expected_attrs=None): > ? ? ? ? import pdb; pdb.set_trace() > ? ? ? ? """Returns all instances on all nodes.""" > ? ? ? ? db_instances = db.instance_get_all( > ? ? ? ? ? ? ? ? context, columns_to_join=_expected_cols(expected_attrs)) > ? ? ? ? return _make_instance_list(context, cls(), db_instances, > ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?expected_attrs) > --- > > But when I fire the nova list --all command, I see no pdb prompt being > shown in the nova-api window. Can anyone help me how to use the pdb to > trace the flow of control? for "nova list --all" command ? It looks like running nova-api that way is still running as a background process: https://stackoverflow.com/questions/34914704/bdbquit-raised-when-debugging-python-with-pdb I got that result ^ when I tried it locally. I was however able to get success with remote pdb: https://docs.openstack.org/devstack/latest/systemd.html#using-remote-pdb so maybe give that a try. Note that the code where you set the trace in nova/objects/instance.py is not actually hit when doing a server list. You may have instead meant: https://github.com/openstack/nova/blob/c97507dfcd57cce9d76670d3b0d48538900c00e9/nova/compute/api.py#L2991 Also note that as a community we're trying to get away from using the legacy 'nova' command and recommend using the openstackclient instead: https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html#server-list The 'nova' CLI is no longer being maintained and we're adding to the novaclient python bindings only when necessary. HTH, -melwitt From gmann at ghanshyammann.com Wed Nov 9 02:44:31 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 08 Nov 2022 18:44:31 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2022 Nov 9 at 1600 UTC (PLEASE NOTE: MEETING DAY/TIME CHANGED) In-Reply-To: <18453f8242c.b055cc70747277.7691206993487182794@ghanshyammann.com> References: <18453f8242c.b055cc70747277.7691206993487182794@ghanshyammann.com> Message-ID: <1845a465d43.c643615a846303.1880800900415890589@ghanshyammann.com> Hello Everyone, Below is the agenda for the TC meeting scheduled on Nov 9 at 1600 UTC. Location:' IRC #openstack-tc Details: https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting * Roll call * Follow up on past action items * Gate health check * TC chair election process ** option 1: https://review.opendev.org/c/openstack/governance/+/862772 ** option 2: https://review.opendev.org/c/openstack/governance/+/862774 * TC stop using storyboard? 
** https://storyboard.openstack.org/#!/project/923 * Recurring tasks check ** Bare 'recheck' state *** https://etherpad.opendev.org/p/recheck-weekly-summary * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 07 Nov 2022 13:21:21 -0800 Ghanshyam Mann wrote --- > Hello Everyone, > > The technical Committee's weekly meeting day and time have changed to every Wed, 16 UTC. The next > weekly meeting is scheduled for 2022 Nov 9, at 1600 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, Nov 8 at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > > From adivya1.singh at gmail.com Wed Nov 9 03:28:01 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Wed, 9 Nov 2022 08:58:01 +0530 Subject: (Openstack-Keystone)Regarding Authentication issue of one user while login to Open Stack using AD password In-Reply-To: <20221108143720.Horde.LrpYhZhxhp6TvAAFqXANRoX@webmail.nde.ag> References: <20221104082922.Horde.Za8eX6p6eb8iA9Cj-IDrSM7@webmail.nde.ag> <20221104190259.Horde.cMQ3VNeIOR8HwwUU0R9icD6@webmail.nde.ag> <20221108143720.Horde.LrpYhZhxhp6TvAAFqXANRoX@webmail.nde.ag> Message-ID: Hi Eugen, There is a user whose attributes got changed in AD because of Last Name change, but the AD domain name remains the name, that triggered the issue i think deleting from the Database will have no issue, but i will check it down Regards Adivya Singh On Tue, Nov 8, 2022 at 8:07 PM Eugen Block wrote: > That table is populated by keystone, I'm not sure if modifying the > database is the right approach here. If you check > > control01:~ # openstack user list --long --domain > > you should see the affected (non-local) user there, correct? I'm not > sure if it's a good idea and what the consequences would be if you > tried to delete the user with the openstack cli. Depending on the > configs keystone could actually delete it from the AD backend, but as > I said, I'm not sure what will happen, so be careful. Just to compare, > what do your keystone configs look like, especially these two sections: > > [assignment] > [identity] > > We use LDAP as backend for non-local users (but I'm not an admin) so > it should be a similar setup. Our identity section looks like this: > > [identity] > domain_specific_drivers_enabled = true > domain_configurations_from_database = true > driver = sql > > and this is the assignment section: > > [assignment] > driver = sql > > Do you know any history with this specific user why it stopped working? > > > Zitat von Adivya Singh : > > > hi Eugen, > > > > I checked and I did not find a duplicate entry from the AD side. > > > > What i tried, was delete all the resources which are with the user, and > > delete the project id and re register the user again, but it does not > work > > > > Also i tried to delete the project for the user id, and try to login for > > that user name but same error > > > > What i found is still there is a entry in Keystone Database , in a > non_user > > local table for the user. > > > > Can i manually delete from the Database, or is there any way from Open > > stack to delete a non_local user > > > > Regards > > Adivya Singh > > > > On Mon, Nov 7, 2022 at 8:16 PM Adivya Singh > wrote: > > > >> Ok, I will check. > >> > >> On Sat, Nov 5, 2022 at 12:33 AM Eugen Block wrote: > >> > >>> I know nothing about AD, I?m afraid. But where exactly do you see that > >>> message? Is it in keystone or AD? 
Anyway, you seem to have a duplicate > >>> entry (somewhere), so check the keystone database and the AD entries > >>> and compare (with working users). > >>> > >>> Zitat von Adivya Singh : > >>> > >>> > Hi Eugen, > >>> > > >>> > I see the below error while authenticating > >>> > Conflict occurred attempting to store nonlocal_user - Duplicate entry > >>> found > >>> > with name at domain ID > >>> > > >>> > How can we fix this? > >>> > > >>> > Regards > >>> > Adivya Singh > >>> > > >>> > On Fri, Nov 4, 2022 at 6:13 PM Adivya Singh > > >>> wrote: > >>> > > >>> >> Hi Eugen, > >>> >> > >>> >> All the users are AD based authentication, but this user only > facing a > >>> >> problem > >>> >> Trying to Find out the AD Team , what happened all of a sudden for > this > >>> >> user > >>> >> > >>> >> Regards > >>> >> Adivya Singh > >>> >> > >>> >> R > >>> >> > >>> >> > >>> >> On Fri, Nov 4, 2022 at 2:06 PM Eugen Block wrote: > >>> >> > >>> >>> I assume this isn't the only user trying to login from AD, correct? > >>> >>> Then compare the properties/settings between a working and the > >>> >>> non-working user, you should probably find something. Also enable > >>> >>> debug logs in keystone to find more details. And by "all of a > sudden" > >>> >>> you mean that it worked before? So what changed between then and > now? > >>> >>> > >>> >>> Zitat von Adivya Singh : > >>> >>> > >>> >>> > Hi Team, > >>> >>> > > >>> >>> > There is one issue , where a user is getting " Authenticated > >>> Failure" > >>> >>> all > >>> >>> > of a sudden, and this user is the only user who is facing this > >>> problem. > >>> >>> > > >>> >>> > I tried to disable and enable the project if, Check the logs but > do > >>> not > >>> >>> > found anything related to Keystone authentication > >>> >>> > > >>> >>> > Delete the Project id and Create it again , Results are same , > Any > >>> >>> insights > >>> >>> > what i can do more to fix this issue > >>> >>> > > >>> >>> > Regards > >>> >>> > Adivya Singh > >>> >>> > >>> >>> > >>> >>> > >>> >>> > >>> >>> > >>> > >>> > >>> > >>> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnasiadka at gmail.com Wed Nov 9 06:59:00 2022 From: mnasiadka at gmail.com (=?utf-8?Q?Micha=C5=82_Nasiadka?=) Date: Wed, 9 Nov 2022 06:59:00 +0000 Subject: [kolla] Weekly meeting 9th Nov 22 cancelled Message-ID: <27C3B414-B1CD-4B83-9653-E1C0834BD574@gmail.com> Hola Koalas, The weekly meeting today is cancelled (I?m on full day workshops) - let?s talk next week. Michal From hberaud at redhat.com Wed Nov 9 08:59:21 2022 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 9 Nov 2022 09:59:21 +0100 Subject: [oslo] Moving back oslo deliverables to the cycle with intermediary release model Message-ID: Hey Osloers, Months ago we moved a couple of oslo deliverables to the independent release model [1][2], however it led us to issues with backports [3]. Backports are not an option for those deliverables. During the previous release management team' PTG we discussed this topic and we decided [4] to move back the oslo deliverables to the cycle-with-intermediary model. You can follow the transition to the CWI model through this patch https://review.opendev.org/c/openstack/releases/+/864095 Do not hesitate to react directly to the patch. Thanks for your time. 
[1] https://lists.openstack.org/pipermail/openstack-discuss/2020-November/018527.html [2] https://opendev.org/openstack/releases/commit/5ecb80c82ed3ab0144c8e5860ee62df458dfc2b5 [3] https://lists.openstack.org/pipermail/openstack-discuss/2022-September/030612.html [4] https://etherpad.opendev.org/p/oct2022-ptg-rel-mgt -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From ygk.kmr at gmail.com Wed Nov 9 10:44:59 2022 From: ygk.kmr at gmail.com (Gk Gk) Date: Wed, 9 Nov 2022 16:14:59 +0530 Subject: Need assistance In-Reply-To: References: Message-ID: Thanks Melanie for the reply. I am able to use pdb successfully for the trace. But I am observing a strange behaviour with the python source files. Whenever I make any changes to the source files , for example, insert a pdb statement in servers.py, it is taking a minute or more for the changes to take effect. For example, after the change, if I run the uwsgi command at the terminal manually with --honour-stdin option, then immediately if I fire the nova list command, it is not taking effect. Only after a minute or so of making the change, it is taking effect. Somewhat strange. My next question is, inside the nova-api container, I am trying to trace how nova-api service starts. The systemd file has this content: --- ExecStart = /openstack/venvs/uwsgi-20.2.1-python3/bin/uwsgi --autoload --ini /etc/uwsgi/nova-api-os-compute.ini ---- So I have checked the file /etc/uwsgi/nova-api-os-compute.ini , which has the below content: --- wsgi-file = /openstack/venvs/nova-20.2.1/bin/nova-api-wsgi -- Is the above file '/openstack/venvs/nova-20.2.1/bin/nova-api-wsgi' the one from which the nova-api service starts at all ? Thanks Kumar On Wed, Nov 9, 2022 at 5:39 AM melanie witt wrote: > On Tue Nov 08 2022 03:03:18 GMT-0800 (Pacific Standard Time), Gk Gk > wrote: > > Hi All, > > > > I have a OSA setup. I am trying to trace the control flow of nova-api > > using pdb in the file > > > "/openstack/venvs/nova-20.2.1/lib/python3.6/site-packages/nova/objects/instance.py". > > > > My goal is to trace the flow for "nova list --all" command. I am > > launching the nova-api service manually from the command line as > follows: > > > > #/openstack/venvs/uwsgi-20.2.1-python3/bin/uwsgi --ini > > /etc/uwsgi/nova-api-os-compute.ini --workers 1 > > > > I am executing "nova list --all" command in another terminal. I have > > inserted pdb in instance.py as follows: > > > > ---- > > @base.remotable_classmethod > > def get_all(cls, context, expected_attrs=None): > > import pdb; pdb.set_trace() > > """Returns all instances on all nodes.""" > > db_instances = db.instance_get_all( > > context, columns_to_join=_expected_cols(expected_attrs)) > > return _make_instance_list(context, cls(), db_instances, > > expected_attrs) > > --- > > > > But when I fire the nova list --all command, I see no pdb prompt being > > shown in the nova-api window. Can anyone help me how to use the pdb to > > trace the flow of control for "nova list --all" command ? > > It looks like running nova-api that way is still running as a background > process: > > > https://stackoverflow.com/questions/34914704/bdbquit-raised-when-debugging-python-with-pdb > > I got that result ^ when I tried it locally. > > I was however able to get success with remote pdb: > > https://docs.openstack.org/devstack/latest/systemd.html#using-remote-pdb > > so maybe give that a try. 
Note that the code where you set the trace in > nova/objects/instance.py is not actually hit when doing a server list. > You may have instead meant: > > > https://github.com/openstack/nova/blob/c97507dfcd57cce9d76670d3b0d48538900c00e9/nova/compute/api.py#L2991 > > Also note that as a community we're trying to get away from using the > legacy 'nova' command and recommend using the openstackclient instead: > > > https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html#server-list > > The 'nova' CLI is no longer being maintained and we're adding to the > novaclient python bindings only when necessary. > > HTH, > -melwitt > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Wed Nov 9 11:51:34 2022 From: zigo at debian.org (Thomas Goirand) Date: Wed, 9 Nov 2022 12:51:34 +0100 Subject: [oslo] Moving back oslo deliverables to the cycle with intermediary release model In-Reply-To: References: Message-ID: On 11/9/22 09:59, Herve Beraud wrote: > Hey Osloers, > > Months ago we moved a couple of oslo deliverables to the independent > release model [1][2], however it led us to issues with backports [3]. > > Backports are not an option for those deliverables. > > During the previous release management team' PTG we discussed this topic > and we decided [4] to move back the oslo deliverables to the > cycle-with-intermediary model. > > You can follow the transition to the CWI model through this patch > https://review.opendev.org/c/openstack/releases/+/864095 > > > Do not hesitate to react directly to the patch. > > Thanks for your time. Hi, FYI, I very much welcome this move. I'd prefer if most OpenStack deliverable was going back to this kind of release management. As a downstream package maintainer, it was very hard to know what I should be doing in terms of backporting too. With these going back to "normal", it's a lot more clear. Thanks a lot, Cheers, Thomas Goirand (zigo) From senrique at redhat.com Wed Nov 9 11:57:38 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 9 Nov 2022 11:57:38 +0000 Subject: [cinder] Bug report from 10-26-2022 to 11-08-2022 Message-ID: This is a bug report from 10-26-2022 to 11-08-2022. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Low - https://bugs.launchpad.net/cinder/+bug/1995204 "Fail to unify allocation_capacity_gb values among multiple Active-Active Cinder-Volume services ." Unassigned. - https://bugs.launchpad.net/python-cinderclient/+bug/1995883 "cinderclient against wallaby fails to create a snapshot." Fix proposed to master. - https://bugs.launchpad.net/cinder/+bug/1995863 "Failed to create multiple instances with boot volumes at the same time in version 20.0.2.dev11." Unassigned. - https://bugs.launchpad.net/cinder/+bug/1996049 " Backup fails with VolumeNotFound but not set to error." Fix proposed to master. Incomplete - https://bugs.launchpad.net/cinder/+bug/1995838 "Reimage results are stuck in downloading state." Unassigned. - https://bugs.launchpad.net/cinder/+bug/1996039 "Volume State Update Failed After Backup Completed." Assigned to wanghelin. Cheers, Sofia -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From neil at shrug.pw Wed Nov 9 11:58:24 2022 From: neil at shrug.pw (Neil Hanlon) Date: Wed, 9 Nov 2022 06:58:24 -0500 Subject: Need assistance In-Reply-To: References: Message-ID: On Wed, Nov 9, 2022, 05:51 Gk Gk wrote: > Thanks Melanie for the reply. I am able to use pdb successfully for the > trace. But I am observing a strange behaviour with the python source files. > Whenever I make any changes to the source files > , for example, insert a pdb statement in servers.py, it is taking a minute > or more for the changes to take effect. For example, after the change, if I > run the uwsgi command at the terminal manually with --honour-stdin option, > then immediately if I fire the nova list command, it is not taking effect. > Only after a minute or so of making the change, it is taking effect. > Somewhat strange. > > My next question is, inside the nova-api container, I am trying to trace > how nova-api service starts. The systemd file has this content: > --- > ExecStart = /openstack/venvs/uwsgi-20.2.1-python3/bin/uwsgi --autoload > --ini /etc/uwsgi/nova-api-os-compute.ini > ---- > So I have checked the file /etc/uwsgi/nova-api-os-compute.ini , which has > the below content: > --- > wsgi-file = /openstack/venvs/nova-20.2.1/bin/nova-api-wsgi > -- > > Is the above file '/openstack/venvs/nova-20.2.1/bin/nova-api-wsgi' the one > from which the nova-api service starts at all ? > That is correct. The nova-api-wsgi and nova-metadata-wsgi entry points read nova.conf and api-paste.ini to generate the required WSGI application. Those scripts are just python entry points so you should be able to follow along there, barring some setuptools magic invoked. > > > Thanks > Kumar > > On Wed, Nov 9, 2022 at 5:39 AM melanie witt wrote: > >> On Tue Nov 08 2022 03:03:18 GMT-0800 (Pacific Standard Time), Gk Gk >> wrote: >> > Hi All, >> > >> > I have a OSA setup. I am trying to trace the control flow of nova-api >> > using pdb in the file >> > >> "/openstack/venvs/nova-20.2.1/lib/python3.6/site-packages/nova/objects/instance.py". >> > >> > My goal is to trace the flow for "nova list --all" command. I am >> > launching the nova-api service manually from the command line as >> follows: >> > >> > #/openstack/venvs/uwsgi-20.2.1-python3/bin/uwsgi --ini >> > /etc/uwsgi/nova-api-os-compute.ini --workers 1 >> > >> > I am executing "nova list --all" command in another terminal. I have >> > inserted pdb in instance.py as follows: >> > >> > ---- >> > @base.remotable_classmethod >> > def get_all(cls, context, expected_attrs=None): >> > import pdb; pdb.set_trace() >> > """Returns all instances on all nodes.""" >> > db_instances = db.instance_get_all( >> > context, >> columns_to_join=_expected_cols(expected_attrs)) >> > return _make_instance_list(context, cls(), db_instances, >> > expected_attrs) >> > --- >> > >> > But when I fire the nova list --all command, I see no pdb prompt being >> > shown in the nova-api window. Can anyone help me how to use the pdb to >> > trace the flow of control for "nova list --all" command ? >> >> It looks like running nova-api that way is still running as a background >> process: >> >> >> https://stackoverflow.com/questions/34914704/bdbquit-raised-when-debugging-python-with-pdb >> >> I got that result ^ when I tried it locally. >> >> I was however able to get success with remote pdb: >> >> https://docs.openstack.org/devstack/latest/systemd.html#using-remote-pdb >> >> so maybe give that a try. 
Note that the code where you set the trace in >> nova/objects/instance.py is not actually hit when doing a server list. >> You may have instead meant: >> >> >> https://github.com/openstack/nova/blob/c97507dfcd57cce9d76670d3b0d48538900c00e9/nova/compute/api.py#L2991 >> >> Also note that as a community we're trying to get away from using the >> legacy 'nova' command and recommend using the openstackclient instead: >> >> >> https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html#server-list >> >> The 'nova' CLI is no longer being maintained and we're adding to the >> novaclient python bindings only when necessary. >> >> HTH, >> -melwitt >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Nov 9 12:00:29 2022 From: smooney at redhat.com (Sean Mooney) Date: Wed, 09 Nov 2022 12:00:29 +0000 Subject: [oslo] Moving back oslo deliverables to the cycle with intermediary release model In-Reply-To: References: Message-ID: <4c330429a009b665e8360a19d1fb4a12157a7208.camel@redhat.com> On Wed, 2022-11-09 at 09:59 +0100, Herve Beraud wrote: > Hey Osloers, > > Months ago we moved a couple of oslo deliverables to the independent > release model [1][2], however it led us to issues with backports [3]. > > Backports are not an option for those deliverables. that is only true because we do not create brances for each y stream in the x.y.z naming if we did that then you could delvier bugfix z relases with select backports. moving back to release with intermediary is proably fine but ohter then process restrictions i dont think there is anythign that woudl prevent an independed released compentent form doing backports if they really wanted too. really what you woudl want to do is select a subset of LTS release that you create the branch for and backport too that effectivly is what cycle-with-intermediary will give you. you will have one "lts" release per upstrema cycle as cycle-with-intermediary requires at least one release a cycle but like indepenent allows arbviarty adtional release at any time in the cycle outside the freeze periods. > > During the previous release management team' PTG we discussed this topic > and we decided [4] to move back the oslo deliverables to the > cycle-with-intermediary model. > > You can follow the transition to the CWI model through this patch > https://review.opendev.org/c/openstack/releases/+/864095 > > Do not hesitate to react directly to the patch. > > Thanks for your time. > > [1] > https://lists.openstack.org/pipermail/openstack-discuss/2020-November/018527.html > [2] > https://opendev.org/openstack/releases/commit/5ecb80c82ed3ab0144c8e5860ee62df458dfc2b5 > [3] > https://lists.openstack.org/pipermail/openstack-discuss/2022-September/030612.html > [4] https://etherpad.opendev.org/p/oct2022-ptg-rel-mgt > From fungi at yuggoth.org Wed Nov 9 13:21:56 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 9 Nov 2022 13:21:56 +0000 Subject: [oslo] Moving back oslo deliverables to the cycle with intermediary release model In-Reply-To: <4c330429a009b665e8360a19d1fb4a12157a7208.camel@redhat.com> References: <4c330429a009b665e8360a19d1fb4a12157a7208.camel@redhat.com> Message-ID: <20221109132155.wimr2urq7jdemlmn@yuggoth.org> On 2022-11-09 12:00:29 +0000 (+0000), Sean Mooney wrote: [...] > if we did that then you could delvier bugfix z relases with select > backports. 
> > moving back to release with intermediary is proably fine but ohter > then process restrictions i dont think there is anythign that > woudl prevent an independed released compentent form doing > backports if they really wanted too. [...] It's not a problem of logistics, but rather policy. Independently released projects are intended to be treated like other external dependencies which are completely disconnected from our coordinated cycle. Yes they could still add "stable" branches and backport fixes to those and tag point releases from them, but as with external dependencies we intentionally keep their versions frozen in our central constraints list, so those backports don't actually get the same degree of integration testing as cycle-with-intermediary projects do. And if you keep unwinding the policy restrictions in order to make independent projects behave more like cycle-based projects, addressing the issues each of those policies is meant to mitigate, you eventually end up at something very much like cycle-with-intermediary anyway. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From smooney at redhat.com Wed Nov 9 13:42:55 2022 From: smooney at redhat.com (Sean Mooney) Date: Wed, 09 Nov 2022 13:42:55 +0000 Subject: [oslo] Moving back oslo deliverables to the cycle with intermediary release model In-Reply-To: <20221109132155.wimr2urq7jdemlmn@yuggoth.org> References: <4c330429a009b665e8360a19d1fb4a12157a7208.camel@redhat.com> <20221109132155.wimr2urq7jdemlmn@yuggoth.org> Message-ID: <96cbcf31012b855d4db5a02d2e4c8ecda48681f6.camel@redhat.com> On Wed, 2022-11-09 at 13:21 +0000, Jeremy Stanley wrote: > On 2022-11-09 12:00:29 +0000 (+0000), Sean Mooney wrote: > [...] > > if we did that then you could delvier bugfix z relases with select > > backports. > > > > moving back to release with intermediary is proably fine but ohter > > then process restrictions i dont think there is anythign that > > woudl prevent an independed released compentent form doing > > backports if they really wanted too. > [...] > > It's not a problem of logistics, but rather policy. Independently > released projects are intended to be treated like other external > dependencies which are completely disconnected from our coordinated > cycle. Yes they could still add "stable" branches and backport fixes > to those and tag point releases from them, but as with external > dependencies we intentionally keep their versions frozen in our > central constraints list, so those backports don't actually get the > same degree of integration testing as cycle-with-intermediary > projects do. And if you keep unwinding the policy restrictions in > order to make independent projects behave more like cycle-based > projects, addressing the issues each of those policies is meant to > mitigate, you eventually end up at something very much like > cycle-with-intermediary anyway. yep i was not objecting to the mvoe but to me the reall delta between cycle-with-intermediary and independent is just that 1 there will be at least one release per upstream cycle and 2 there will be a stable barnch created from the final release of the cycle that may recive backport in the future if as a team we decied the backport is warented. 
cycle-with-intermediary shoudl be the default choice i think for most projects/repos From fungi at yuggoth.org Wed Nov 9 13:52:15 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 9 Nov 2022 13:52:15 +0000 Subject: [oslo] Moving back oslo deliverables to the cycle with intermediary release model In-Reply-To: <96cbcf31012b855d4db5a02d2e4c8ecda48681f6.camel@redhat.com> References: <4c330429a009b665e8360a19d1fb4a12157a7208.camel@redhat.com> <20221109132155.wimr2urq7jdemlmn@yuggoth.org> <96cbcf31012b855d4db5a02d2e4c8ecda48681f6.camel@redhat.com> Message-ID: <20221109135214.5cooxndoguv57nqs@yuggoth.org> On 2022-11-09 13:42:55 +0000 (+0000), Sean Mooney wrote: [...] > to me the reall delta between cycle-with-intermediary and > independent is just that 1 there will be at least one release per > upstream cycle and 2 there will be a stable barnch created from > the final release of the cycle that may recive backport in the > future if as a team we decied the backport is warented. [...] Another difference implied there is that we will increase the versions of cycle-with-intermediary dependencies in the otherwise frozen upper-constraints.txt lists in stable branches of the openstack/requirements project. A point release from a "stable" branch of an independent release project isn't afforded this exception, and for good reason (because it's not required to follow the more stringent releasing and branching processes we expect of cycle-based deliverables). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Aidan.Collins at twosigma.com Tue Nov 8 17:43:52 2022 From: Aidan.Collins at twosigma.com (Aidan Collins) Date: Tue, 8 Nov 2022 17:43:52 +0000 Subject: [kolla] Kolla-ansible plays with --limit failing if a compute host is down. In-Reply-To: <126201570.921971.1667827494352@mail.yahoo.com> References: <126201570.921971.1667827494352@mail.yahoo.com> Message-ID: Thanks for the reply but we have an automated system that runs the ansible and store the inventory files in source control so it?s a bit complicated to edit the inventory. Is there no other way other than to change the inventory file every time? Thanks From: Albert Braden Sent: Monday, November 7, 2022 8:25 AM To: openstack-discuss at lists.openstack.org; Aidan Collins Subject: Re: [kolla] Kolla-ansible plays with --limit failing if a compute host is down. When I encounter that I edit the inventory and comment out the down host. On Friday, November 4, 2022, 05:16:32 PM EDT, Aidan Collins > wrote: Hello, It seems that the kolla-ansible plays reconfigure, prechecks, bootstrap-servers and deploy all fail when using limit if any compute host is down, even if it is not the one being specified by limit. Is there any way to configure gather-facts in these plays to not fail if this is the case? Due to the size of our plant we sometimes need to take down a compute host for maintenance and still provisiion new ones. We are using Victoria. Thanks a lot -aidan -------------- next part -------------- An HTML attachment was scrubbed... URL: From franck.vedel at univ-grenoble-alpes.fr Wed Nov 9 15:14:42 2022 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Wed, 9 Nov 2022 16:14:42 +0100 Subject: [kolla-ansible]Network Problem after server reboot Message-ID: Hello, after a restart of my cluster (and some problems...), I have one last problem with the VMs already present (before the restart). 
They all work fine?.They all work, console access OK, network topology ok? But they can no longer communicate on the network, they do not obtain IP addresses by dhcp. Yet everything seems to be working. If I detach the interface, I create a new interface, it doesn't work. I cannot reach the routers. I cannot communicate with an instance on the same network. On the other hand, if I create a new instance, no problem, it works and can join the other instances and its router. Is there a way to fix this? The problem is where? in the database? Thank you in advance for your help. Franck VEDEL -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Nov 9 15:25:14 2022 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 9 Nov 2022 16:25:14 +0100 Subject: [largescale-sig] Next meeting: Nov 9, 15utc In-Reply-To: References: Message-ID: Here is the summary of our SIG meeting today. We discussed potential guests for our Dec 8 OpenInfra Live episode, and reviewed recent changes to the Large Scale documentation. You can read the detailed meeting logs at: https://meetings.opendev.org/meetings/large_scale_sig/2022/large_scale_sig.2022-11-09-15.00.html Our next IRC meeting will be November 23, at 1500utc on #openstack-operators on OFTC. Regards, -- Thierry Carrez (ttx) From katonalala at gmail.com Wed Nov 9 17:29:07 2022 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 9 Nov 2022 18:29:07 +0100 Subject: Neutron : Routed Provider Networks (no subnets declared in compute placement API: consequences?) In-Reply-To: References: Message-ID: Hi, Not sure I fully understand your issue, and to tell the truth my memory is quite bad so not sure if I can recall how things worked at Ussuri time. - Do you have the resource providers for your segments in placement? - if not, please check if Neutron can acces placement API (check the neutron.conf's placement section, check the log if there are Placement Client Errors or similar suspicious things) As I see from the code if the aggregates are created the resource providers and inventories should also be created. Lajos Andrew West ezt ?rta (id?pont: 2022. nov. 3., Cs, 19:19): > Hi neutron experts > > Have a cloud built with multiple network segments (RPN) . (kolla-ansible, > openstack ussuri), > things are running OK (on the network level): > > networks have DHCP agents ok (*os network agent list --agent-type dhcp > --network $networkID* ) > > all network segments are listed in Host Aggregates > > BUT if I run through all the existing segments , > NONE have a (compute service) inventory declared for each segment IPv4 > subnet > i.e > *os resource provider inventory list $segmentID * > returns no output. (official doc on RPN says this should exist) > > What feature may not function if this inventory is missing ? > > I don't quite understand what role this IPv4 subnet compute service > inventory plays here (during placement? port declaration ?) > > thanks > Andrew > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Wed Nov 9 18:16:30 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 9 Nov 2022 19:16:30 +0100 Subject: [nova][placement] Happy Specs review day next Tuesday (Nov 15) ! Message-ID: Hey, as said above, like every 6-month cycle, we'll have a specs review day to make sure we can jab the open specs. You are a reviewer (even not a core) ? Make sure you can review the specs on Nov 15th. 
You are a contributor and you have an open spec ? Make sure you can look at Gerrit during this day because you could see new comments and you could then rebase your spec for providing a new revision :) -Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From calestyo at scientia.org Wed Nov 9 19:37:02 2022 From: calestyo at scientia.org (Christoph Anton Mitterer) Date: Wed, 09 Nov 2022 20:37:02 +0100 Subject: how to remove image with still used volumes In-Reply-To: References: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> <5d15cc523af296fa3936884981a59e4ef3ff3ada.camel@scientia.org> Message-ID: <8bf58204f8c79c59560679de7c9dc4a0c240b164.camel@scientia.org> On Tue, 2022-11-08 at 12:09 -0500, Erik McCormick wrote: > I suppose you could consider it inefficient over a very long term in > that you have a source image taking up storage that has very little > resemblance to the instances that were spawned from it. Well it ultimately costs me a ~ factor 2 of storage per instance. > However, what you're running in to here is the "pets vs. > cattle"?argument. Openstack is a *cloud* platform, not a > virtualization platform. It is built for cattle. Long-lived instances > are not what it's targeted to. It's clear that my use case is a bit non-standard. :-) > That being said, it deals with them just fine. You simply have to > accept you're going to end up with these relics. If you're able to > nuke and recreate instances frequently and not upgrade them over > years, you end up using far less storage and have instances that can > quickly migrate around if you're using local storage.? In my case that periodic re-creating is rather not easily possible, as the VMs are rather complex in their setup. It's clear that otherwise, it makes sense to have the CoW to share blocks,... but still: Shouldn't it be possible to simply break up the connection? Like telling the backend, when someone wants to "detach" the volume from the image it's based (and CoW-copied) upon, that it should make a full copy of the still shared blocks? Also, what its the reason, that one cannot remove the volume, an instance was originally created with, from it? > You can import an existing disk (after some conversion depending on > your source hypervisor) into a Ceph-backed Cinder volume and boot > from it just fine. You have to make sure to tick the box that tells > it it's bootable, but otherwise should be fine.? That's what I tried to describe before: 1st: I imported it as image 2nd: Made an instance from it - so far things work fine - 3rd: attached a empty (non image based) volume of the same size) that one also had --bootable 4th: copied everything over and made it bootable At this point however, removing the original volume (based on the image) seems to be forbidden (some error that the root volume cannot be removed. So I tried to trick the system and used the 2nd (non-image based volume) for instance creation (i.e. server create --volume, not -- image). While that did work, it then falls back to booting from (SeaBIOS) and not UEFI as the previous instance, based on the image, still did. And I seem to cannot get that working to UEFI-boot. > Those properties you're setting on images are simply being passed to > nova when it boots the instance. You should be able to specify them > on a command-line boot from a volume. Well I'm afraid, that doesn't work. Not sure, maybe it's a bug (the OpenStack instance in question is probably a somewhat older version). 
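(As a side note on where nova looks for this: for a volume-backed guest, image properties such as hw_firmware_type appear to be read from the volume's embedded image metadata -- the volume_image_metadata block shown below -- rather than from plain volume or server properties. If that is right, setting it as image metadata on the volume would be the relevant knob, e.g.:

    # option names are a best guess; check them against the installed clients
    cinder image-metadata <volume-id> set hw_firmware_type=uefi
    # or, with a reasonably recent python-openstackclient:
    openstack volume set --image-property hw_firmware_type=uefi <volume-id>

Both forms only matter for boot-from-volume; for image-backed boots the property on the Glance image is what counts, as described earlier in this thread.)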
When I upload my image, and use --property hw_firmware_type=uefi, the image get's properties | direct_url='rbd://fd2b36a3-4f06-5212-a74b-1f9ea2b3ee83/images/c45be8c7-8ff7-4553-a145-c83ba75fb951/snap', hw_firmware_type='uefi', locations='[{'url': 'rbd://fd2b36a3-4f06-5212-a74b-1f9ea2b3ee83/images/c45be8c7-8ff7-4553-a145-c83ba75fb951/snap', 'metadata': {}}]', owner_specified.openstack.md5='', owner_specified.openstack.object='images/mytestimage', owner_specified.openstack.sha256='' The volume created from that has: properties | attached_mode='rw' volume_image_metadata | {'container_format': 'bare', 'min_ram': '0', 'owner_specified.openstack.sha256': '', 'disk_format': 'raw', 'image_name': 'mytestimage', 'hw_firmware_type': 'uefi', 'image_id': 'c45be8c7-8ff7-4553-a145-c83ba75fb951', 'owner_specified.openstack.object': 'images/mytestimage', 'owner_specified.openstack.md5': '', 'min_disk': '0', 'checksum': '9ad12344a29cbbf7dbddc1ff4c48ea69', 'size': '21474836480'} So there, the setting seems to be not in properties but volume_image_metadata. The instance created from that volume has: an empty properties field. My 2nd (image independent volume) has: properties | hw_firmware_type='uefi' and no volume_image_metadata field. When I create an instance from that (2nd image), via e.g.: $ openstack server create --flavor pn72te.small --network=internet --volume mytestimage-indep-from-image --property 'hw_firmware_type=uefi' test Then it gets: properties | hw_firmware_type='uefi' So seems whether it boots BIOS or UEFI is not determined from the instance's properties field (which IMO would be the natural place)... but not from the volume (or image)... but there it also doesn't seem to work. Is there any documentation on where/what exactly causes the instance to boot UEFI? > For your conversion purposes, you could check out virt-v2v. I used > that to convert a bunch of old vmware instances to KVM and import > them into Openstack. It was slow but worked pretty well. I'll have a look, but I guess it won't help me with the no-UEFI-boot issue. Thanks, Chris. From eblock at nde.ag Thu Nov 10 08:47:49 2022 From: eblock at nde.ag (Eugen Block) Date: Thu, 10 Nov 2022 08:47:49 +0000 Subject: how to remove image with still used volumes In-Reply-To: <8bf58204f8c79c59560679de7c9dc4a0c240b164.camel@scientia.org> References: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> <5d15cc523af296fa3936884981a59e4ef3ff3ada.camel@scientia.org> <8bf58204f8c79c59560679de7c9dc4a0c240b164.camel@scientia.org> Message-ID: <20221110084749.Horde.nW_fpLY3HCzv70M_P4UTa0q@webmail.nde.ag> What is your storage back end? With ceph there is a way which I wouldn't really recommend but in our cloud it accidentally happens from time to time. Basically, it's about flattening images. For example, there are multiple VMs based on the same image which are copy-on-write clones. We back up the most important VMs with 'rbd export' so they become "flat" in the backup store. After disaster recovery we had to restore some of the VMs ('rbd import'), but that means they lose their "parent" (the base image in glance). After some time we cleaned up the glance store and deleted images without clones, accidentally resulting in VMs with no base image ('openstack server show') since they were flat and had no more parent information. One disadvantage is that you have to search the database which image it could have been, another one is flat images allocate the whole disk space in ceph (but there's a sparsify command to deal with that). 
So one could "flatten" all instances that don't need their "parent clone" and delete them from glance. But I doubt that it's a reasonable approach, just one possible way. Zitat von Christoph Anton Mitterer : > On Tue, 2022-11-08 at 12:09 -0500, Erik McCormick wrote: >> I suppose you could consider it inefficient over a very long term in >> that you have a source image taking up storage that has very little >> resemblance to the instances that were spawned from it. > > Well it ultimately costs me a ~ factor 2 of storage per instance. > > >> However, what you're running in to here is the "pets vs. >> cattle"?argument. Openstack is a *cloud* platform, not a >> virtualization platform. It is built for cattle. Long-lived instances >> are not what it's targeted to. > > It's clear that my use case is a bit non-standard. :-) > > >> That being said, it deals with them just fine. You simply have to >> accept you're going to end up with these relics. If you're able to >> nuke and recreate instances frequently and not upgrade them over >> years, you end up using far less storage and have instances that can >> quickly migrate around if you're using local storage.? > > In my case that periodic re-creating is rather not easily possible, as > the VMs are rather complex in their setup. > > It's clear that otherwise, it makes sense to have the CoW to share > blocks,... but still: > > Shouldn't it be possible to simply break up the connection? Like > telling the backend, when someone wants to "detach" the volume from the > image it's based (and CoW-copied) upon, that it should make a full copy > of the still shared blocks? > > Also, what its the reason, that one cannot remove the volume, an > instance was originally created with, from it? > > >> You can import an existing disk (after some conversion depending on >> your source hypervisor) into a Ceph-backed Cinder volume and boot >> from it just fine. You have to make sure to tick the box that tells >> it it's bootable, but otherwise should be fine.? > > That's what I tried to describe before: > 1st: I imported it as image > 2nd: Made an instance from it > - so far things work fine - > 3rd: attached a empty (non image based) volume of the same size) > that one also had --bootable > 4th: copied everything over and made it bootable > > At this point however, removing the original volume (based on the > image) seems to be forbidden (some error that the root volume cannot be > removed. > > So I tried to trick the system and used the 2nd (non-image based > volume) for instance creation (i.e. server create --volume, not -- > image). > While that did work, it then falls back to booting from (SeaBIOS) and > not UEFI as the previous instance, based on the image, still did. > > And I seem to cannot get that working to UEFI-boot. > > >> Those properties you're setting on images are simply being passed to >> nova when it boots the instance. You should be able to specify them >> on a command-line boot from a volume. > > Well I'm afraid, that doesn't work. Not sure, maybe it's a bug (the > OpenStack instance in question is probably a somewhat older version). 
> > When I upload my image, and use --property hw_firmware_type=uefi, the > image get's > properties | > direct_url='rbd://fd2b36a3-4f06-5212-a74b-1f9ea2b3ee83/images/c45be8c7-8ff7-4553-a145-c83ba75fb951/snap', hw_firmware_type='uefi', locations='[{'url': 'rbd://fd2b36a3-4f06-5212-a74b-1f9ea2b3ee83/images/c45be8c7-8ff7-4553-a145-c83ba75fb951/snap', 'metadata': {}}]', owner_specified.openstack.md5='', owner_specified.openstack.object='images/mytestimage', > owner_specified.openstack.sha256='' > > The volume created from that has: > properties | attached_mode='rw' > volume_image_metadata | {'container_format': 'bare', > 'min_ram': '0', 'owner_specified.openstack.sha256': '', > 'disk_format': 'raw', 'image_name': 'mytestimage', > 'hw_firmware_type': 'uefi', 'image_id': > 'c45be8c7-8ff7-4553-a145-c83ba75fb951', > 'owner_specified.openstack.object': 'images/mytestimage', > 'owner_specified.openstack.md5': '', 'min_disk': '0', 'checksum': > '9ad12344a29cbbf7dbddc1ff4c48ea69', 'size': '21474836480'} > > So there, the setting seems to be not in properties but > volume_image_metadata. > > > The instance created from that volume has: > an empty properties field. > > > My 2nd (image independent volume) has: > properties | hw_firmware_type='uefi' > and no volume_image_metadata field. > > > When I create an instance from that (2nd image), via e.g.: > $ openstack server create --flavor pn72te.small --network=internet > --volume mytestimage-indep-from-image --property > 'hw_firmware_type=uefi' test > > Then it gets: > properties | hw_firmware_type='uefi' > > > So seems whether it boots BIOS or UEFI is not determined from the > instance's properties field (which IMO would be the natural place)... > but not from the volume (or image)... but there it also doesn't seem to > work. > > > > Is there any documentation on where/what exactly causes the instance to > boot UEFI? > > >> For your conversion purposes, you could check out virt-v2v. I used >> that to convert a bunch of old vmware instances to KVM and import >> them into Openstack. It was slow but worked pretty well. > > I'll have a look, but I guess it won't help me with the no-UEFI-boot > issue. > > > Thanks, > Chris. From derekokeeffe85 at yahoo.ie Thu Nov 10 10:02:39 2022 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Thu, 10 Nov 2022 10:02:39 +0000 (UTC) Subject: Unable to snapshot instances on backend storage References: <1865485308.3717441.1668074559691.ref@mail.yahoo.com> Message-ID: <1865485308.3717441.1668074559691@mail.yahoo.com> Hi all, When we create an instance and leave the "Create new volume" option as no then we can manage the instance with no issues (migrate, snapshot, etc..) These instances are saved locally on the compute nodes. When we create an instance and select "Create new volume" yes the instance is spun up fine on our backend storage with no obvious issues (reachable with ping & ssh. shutdown, restart, networking, etc.. all fine) however, when we try to snapshot it or migrate it it fails. We can however take volume snapshots of volumes that we have created and are stored on the same shared backend. Has anyone came across this or maybe a pointer as to what they may think is causing it? It sounds to us as if nova try's to create a snapshot of the VM but thinks it's a volume maybe? Any help greatly appreciated. Regards,Derek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From m.karpiarz at eschercloud.ai Wed Nov 9 17:03:03 2022 From: m.karpiarz at eschercloud.ai (Mariusz Karpiarz) Date: Wed, 9 Nov 2022 17:03:03 +0000 Subject: [CloudKitty][os-api-ref][openstack-dev] v1 API docs Message-ID: All, CloudKitty docs for v1 APIs (https://docs.openstack.org/cloudkitty/latest/api-reference/v1/v1.html) appear to be generated from the source code instead of using `os-api-ref` (https://opendev.org/openstack/os-api-ref), like in case of v2 docs. I want to move both v1 and v2 API docs to a separate `api-ref/source` directory in the root of the repository, where we will only be using "os_api_ref" and "openstackdocstheme" Sphinx extensions, so we need to decide what to do with v1 docs. I started rewriting v1 docs to the format supported by `os-api-ref` but they don't quite translate well to the new format, mainly because of different ways results are presented (Python objects vs JSONs). How much do we care about proper v1 API docs and would it be worth for someone (likely me) to write them again from scratch? There is also the option for carrying over the old extensions (they are all listed here: https://opendev.org/openstack/cloudkitty/src/branch/stable/zed/doc/source/conf.py#L42-L58) but I'm not sure all of them are still supported by the system building https://docs.openstack.org/api/ and this is a good opportunity to clean this list up. :) Please let me know if you have any ideas. Mariusz From rcastill at redhat.com Thu Nov 10 13:26:10 2022 From: rcastill at redhat.com (Rafael Castillo) Date: Thu, 10 Nov 2022 06:26:10 -0700 Subject: [ansible-openstack-modules] 2023.1 Antelope PTG summary Message-ID: We held our ansible-collections-openstack PTG a few weeks ago. Thanks to everyone who participated! Here's a belated summary of the discussions that took place. Etherpad: https://etherpad.opendev.org/p/oct2022-ptg-os-ansible-modules The plan for the Antelope cycle is to continue our current course of porting all remaining modules to support the new SDK. Our tentative release date for 2.0.0 is January 1st 2023. While porting modules will be the priority first and foremost, we discussed some proposals for enhancement during our session. These will remain on the backburner and will be addressed on a best effort basis. # Proposals for improvement ## Consistent module naming In order to make the modules more self-describing, avoid prefixes like federation_idp and keep names obvious. Change module names to be consistent with openstack-client. Change return values to match module names where possible. ## Documentation updates Highlight in docs that module results can differ depending on the cloud they use. Our module docs are currently based on devstack. Keep different documentation pages on galaxy between stable and master, as the two differ quite a bit. Update README and docs folder with more current information. ## Migrate current tests to use ansible-test The ansible-test integration test runner has some nice features. There's examples of the OTC modules using this we can take a look at. ## Write sdk logs to module results instead of writing to file Can be more convenient to have logs available right in ansible output. Counter-point is that these logs grow to be huge sometimes. ## Drop pbr based installation? We should focus on supporting ansible-galaxy as the preferred method of installing ansible collections. Again, thanks for participating in the collections process. Have a good weekend. 
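As a quick illustration of the ansible-galaxy install path mentioned above (collection and package names as currently published; worth double-checking against the collection docs):

  # install the collection from Ansible Galaxy
  ansible-galaxy collection install openstack.cloud
  # the modules also need the OpenStack SDK available to Ansible's Python
  python3 -m pip install openstacksdk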
From ygk.kmr at gmail.com Thu Nov 10 13:37:15 2022 From: ygk.kmr at gmail.com (Gk Gk) Date: Thu, 10 Nov 2022 19:07:15 +0530 Subject: Fwd: Need assistance In-Reply-To: References: Message-ID: ---------- Forwarded message --------- From: Gk Gk Date: Thu, Nov 10, 2022 at 7:01 PM Subject: Re: Need assistance To: The file which is being picked by uwsgi is '../../nova-20.2.1/lib/python3.6/site-packages/nova/api/openstack/compute/wsgi.py' . But I dont see how this file is being called. Which program loads this file ? Can someone help me here ? Disregard the abive message. This is the file I believe is being called "/openstack/venvs//nova-20.2.1/lib/python3.6/site-packages/nova/api/openstack/wsgi.py" . So how is it being called or which program is calling it ? I want to know the first file which uwsgi loads after being launched. On Wed, Nov 9, 2022 at 5:28 PM Neil Hanlon wrote: > > > On Wed, Nov 9, 2022, 05:51 Gk Gk wrote: > >> Thanks Melanie for the reply. I am able to use pdb successfully for the >> trace. But I am observing a strange behaviour with the python source files. >> Whenever I make any changes to the source files >> , for example, insert a pdb statement in servers.py, it is taking a >> minute or more for the changes to take effect. For example, after the >> change, if I run the uwsgi command at the terminal manually with >> --honour-stdin option, then immediately if I fire the nova list command, >> it is not taking effect. Only after a minute or so of making the change, it >> is taking effect. Somewhat strange. >> >> My next question is, inside the nova-api container, I am trying to trace >> how nova-api service starts. The systemd file has this content: >> --- >> ExecStart = /openstack/venvs/uwsgi-20.2.1-python3/bin/uwsgi --autoload >> --ini /etc/uwsgi/nova-api-os-compute.ini >> ---- >> So I have checked the file /etc/uwsgi/nova-api-os-compute.ini , which has >> the below content: >> --- >> wsgi-file = /openstack/venvs/nova-20.2.1/bin/nova-api-wsgi >> -- >> >> Is the above file '/openstack/venvs/nova-20.2.1/bin/nova-api-wsgi' the >> one from which the nova-api service starts at all ? >> > > That is correct. The nova-api-wsgi and nova-metadata-wsgi entry points > read nova.conf and api-paste.ini to generate the required WSGI application. > > Those scripts are just python entry points so you should be able to follow > along there, barring some setuptools magic invoked. > >> >> >> Thanks >> Kumar >> >> On Wed, Nov 9, 2022 at 5:39 AM melanie witt wrote: >> >>> On Tue Nov 08 2022 03:03:18 GMT-0800 (Pacific Standard Time), Gk Gk >>> wrote: >>> > Hi All, >>> > >>> > I have a OSA setup. I am trying to trace the control flow of nova-api >>> > using pdb in the file >>> > >>> "/openstack/venvs/nova-20.2.1/lib/python3.6/site-packages/nova/objects/instance.py". >>> > >>> > My goal is to trace the flow for "nova list --all" command. I am >>> > launching the nova-api service manually from the command line as >>> follows: >>> > >>> > #/openstack/venvs/uwsgi-20.2.1-python3/bin/uwsgi --ini >>> > /etc/uwsgi/nova-api-os-compute.ini --workers 1 >>> > >>> > I am executing "nova list --all" command in another terminal. 
I have >>> > inserted pdb in instance.py as follows: >>> > >>> > ---- >>> > @base.remotable_classmethod >>> > def get_all(cls, context, expected_attrs=None): >>> > import pdb; pdb.set_trace() >>> > """Returns all instances on all nodes.""" >>> > db_instances = db.instance_get_all( >>> > context, >>> columns_to_join=_expected_cols(expected_attrs)) >>> > return _make_instance_list(context, cls(), db_instances, >>> > expected_attrs) >>> > --- >>> > >>> > But when I fire the nova list --all command, I see no pdb prompt being >>> > shown in the nova-api window. Can anyone help me how to use the pdb to >>> > trace the flow of control for "nova list --all" command ? >>> >>> It looks like running nova-api that way is still running as a background >>> process: >>> >>> >>> https://stackoverflow.com/questions/34914704/bdbquit-raised-when-debugging-python-with-pdb >>> >>> I got that result ^ when I tried it locally. >>> >>> I was however able to get success with remote pdb: >>> >>> https://docs.openstack.org/devstack/latest/systemd.html#using-remote-pdb >>> >>> so maybe give that a try. Note that the code where you set the trace in >>> nova/objects/instance.py is not actually hit when doing a server list. >>> You may have instead meant: >>> >>> >>> https://github.com/openstack/nova/blob/c97507dfcd57cce9d76670d3b0d48538900c00e9/nova/compute/api.py#L2991 >>> >>> Also note that as a community we're trying to get away from using the >>> legacy 'nova' command and recommend using the openstackclient instead: >>> >>> >>> https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html#server-list >>> >>> The 'nova' CLI is no longer being maintained and we're adding to the >>> novaclient python bindings only when necessary. >>> >>> HTH, >>> -melwitt >>> >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From franck.vedel at univ-grenoble-alpes.fr Thu Nov 10 16:46:51 2022 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Thu, 10 Nov 2022 17:46:51 +0100 Subject: [kolla-ansible]Reset Configuration Message-ID: Hello, yesterday I sent a desperate message about the functioning of my Openstack (kolla-ansible, yoga, centos8Stream). The situation is incomprehensible. For some accounts in some projects, everything works (creation of networks, subnets, dhcp configuration, instances, everything works). For other accounts, everything works except having an IP address per dhcp on a subnet. The servers have restarted following a power outage. I managed to put everything back together except those DHCP issues. As I can start from an empty configuration, how to reset everything ... for example with a command: kolla-ansible -i multinode ____________ Or how to fix my configuration. It seems very complicated to me. I would like some opinions if possible. Thanks in advance Franck -------------- next part -------------- An HTML attachment was scrubbed... URL: From sylvain.bauza at gmail.com Thu Nov 10 17:30:26 2022 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Thu, 10 Nov 2022 18:30:26 +0100 Subject: Need assistance In-Reply-To: References: Message-ID: Le jeu. 10 nov. 2022 ? 14:47, Gk Gk a ?crit : > > > ---------- Forwarded message --------- > From: Gk Gk > Date: Thu, Nov 10, 2022 at 7:01 PM > Subject: Re: Need assistance > To: > > > The file which is being picked by uwsgi is > '../../nova-20.2.1/lib/python3.6/site-packages/nova/api/openstack/compute/wsgi.py' > . But I dont see how this file is being called. Which program loads this > file ? 
> Can someone help me here ? > > Disregard the abive message. This is the file I believe is being called > "/openstack/venvs//nova-20.2.1/lib/python3.6/site-packages/nova/api/openstack/wsgi.py" > . So how is it being called or which program is calling it ? I want to > know the first file which uwsgi loads after being launched. > You're basically asking how we run our WSGI application in Nova. As explained below by Neil, we have an entrypoint defined by [1] that helps uwsgi (the WSGI server) to know the WSGI application to run above it. The source of the entrypoint can be found in [2]. As you can read, it calls the init_application function of the nova.api.openstack.wsgi_app module which itself calls Paste [3] (a library for URL dispatching and WSGI pipelining with middlewares/filters [4]) for deploying the WSGI middlewares and application based on paste.ini config file [5] As you see, we eventually create the routes using the osapi_compute_app_v21 factory which is defined by nova.api.openstack.compute:APIRouterV21.factory That's then how the plumbing is made so that when you call the Nova API for an openstack server list, it calls the API with the URL /nova/servers which is routed by the factory, using the routes module [6] (to expose the routes) to the index method in the ServerController [7]. [1] https://github.com/openstack/nova/blob/c97507dfcd57cce9d76670d3b0d48538900c00e9/setup.cfg#L92 [2] https://github.com/openstack/nova/blob/c97507dfcd57cce9d76670d3b0d48538900c00e9/nova/api/openstack/compute/wsgi.py [3] https://github.com/openstack/nova/blob/c97507dfcd57cce9d76670d3b0d48538900c00e9/nova/api/openstack/wsgi_app.py#L138 [4] https://pythonpaste.readthedocs.io/en/latest/ [5] https://github.com/openstack/nova/blob/c97507dfcd57cce9d76670d3b0d48538900c00e9/etc/nova/api-paste.ini#L33 [6] https://github.com/openstack/nova/blob/c97507dfcd57cce9d76670d3b0d48538900c00e9/nova/api/openstack/compute/routes.py#L743 [7] https://github.com/openstack/nova/blob/c97507dfcd57cce9d76670d3b0d48538900c00e9/nova/api/openstack/compute/servers.py#L122 HTH -S > > > On Wed, Nov 9, 2022 at 5:28 PM Neil Hanlon wrote: > >> >> >> On Wed, Nov 9, 2022, 05:51 Gk Gk wrote: >> >>> Thanks Melanie for the reply. I am able to use pdb successfully for the >>> trace. But I am observing a strange behaviour with the python source files. >>> Whenever I make any changes to the source files >>> , for example, insert a pdb statement in servers.py, it is taking a >>> minute or more for the changes to take effect. For example, after the >>> change, if I run the uwsgi command at the terminal manually with >>> --honour-stdin option, then immediately if I fire the nova list command, >>> it is not taking effect. Only after a minute or so of making the change, it >>> is taking effect. Somewhat strange. >>> >>> My next question is, inside the nova-api container, I am trying to trace >>> how nova-api service starts. The systemd file has this content: >>> --- >>> ExecStart = /openstack/venvs/uwsgi-20.2.1-python3/bin/uwsgi --autoload >>> --ini /etc/uwsgi/nova-api-os-compute.ini >>> ---- >>> So I have checked the file /etc/uwsgi/nova-api-os-compute.ini , which >>> has the below content: >>> --- >>> wsgi-file = /openstack/venvs/nova-20.2.1/bin/nova-api-wsgi >>> -- >>> >>> Is the above file '/openstack/venvs/nova-20.2.1/bin/nova-api-wsgi' the >>> one from which the nova-api service starts at all ? >>> >> >> That is correct. 
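For anyone following along, the entry point itself is easy to inspect; a small sketch using the paths from this thread (the summary comment paraphrases what the generated script does, per the explanation above):

  # the entry-point script is a few lines of generated Python
  cat /openstack/venvs/nova-20.2.1/bin/nova-api-wsgi
  # it imports nova.api.openstack.wsgi_app and calls init_application(),
  # which is what uwsgi loads through the wsgi-file option shown earlier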
The nova-api-wsgi and nova-metadata-wsgi entry points >> read nova.conf and api-paste.ini to generate the required WSGI application. >> >> Those scripts are just python entry points so you should be able to >> follow along there, barring some setuptools magic invoked. >> >>> >>> >>> Thanks >>> Kumar >>> >>> On Wed, Nov 9, 2022 at 5:39 AM melanie witt wrote: >>> >>>> On Tue Nov 08 2022 03:03:18 GMT-0800 (Pacific Standard Time), Gk Gk >>>> wrote: >>>> > Hi All, >>>> > >>>> > I have a OSA setup. I am trying to trace the control flow of nova-api >>>> > using pdb in the file >>>> > >>>> "/openstack/venvs/nova-20.2.1/lib/python3.6/site-packages/nova/objects/instance.py". >>>> > >>>> > My goal is to trace the flow for "nova list --all" command. I am >>>> > launching the nova-api service manually from the command line as >>>> follows: >>>> > >>>> > #/openstack/venvs/uwsgi-20.2.1-python3/bin/uwsgi --ini >>>> > /etc/uwsgi/nova-api-os-compute.ini --workers 1 >>>> > >>>> > I am executing "nova list --all" command in another terminal. I have >>>> > inserted pdb in instance.py as follows: >>>> > >>>> > ---- >>>> > @base.remotable_classmethod >>>> > def get_all(cls, context, expected_attrs=None): >>>> > import pdb; pdb.set_trace() >>>> > """Returns all instances on all nodes.""" >>>> > db_instances = db.instance_get_all( >>>> > context, >>>> columns_to_join=_expected_cols(expected_attrs)) >>>> > return _make_instance_list(context, cls(), db_instances, >>>> > expected_attrs) >>>> > --- >>>> > >>>> > But when I fire the nova list --all command, I see no pdb prompt >>>> being >>>> > shown in the nova-api window. Can anyone help me how to use the pdb >>>> to >>>> > trace the flow of control for "nova list --all" command ? >>>> >>>> It looks like running nova-api that way is still running as a >>>> background >>>> process: >>>> >>>> >>>> https://stackoverflow.com/questions/34914704/bdbquit-raised-when-debugging-python-with-pdb >>>> >>>> I got that result ^ when I tried it locally. >>>> >>>> I was however able to get success with remote pdb: >>>> >>>> https://docs.openstack.org/devstack/latest/systemd.html#using-remote-pdb >>>> >>>> so maybe give that a try. Note that the code where you set the trace in >>>> nova/objects/instance.py is not actually hit when doing a server list. >>>> You may have instead meant: >>>> >>>> >>>> https://github.com/openstack/nova/blob/c97507dfcd57cce9d76670d3b0d48538900c00e9/nova/compute/api.py#L2991 >>>> >>>> Also note that as a community we're trying to get away from using the >>>> legacy 'nova' command and recommend using the openstackclient instead: >>>> >>>> >>>> https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html#server-list >>>> >>>> The 'nova' CLI is no longer being maintained and we're adding to the >>>> novaclient python bindings only when necessary. >>>> >>>> HTH, >>>> -melwitt >>>> >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Thu Nov 10 18:09:43 2022 From: eblock at nde.ag (Eugen Block) Date: Thu, 10 Nov 2022 18:09:43 +0000 Subject: [kolla-ansible]Network Problem after server reboot In-Reply-To: Message-ID: <20221110180943.Horde.O7n9mEOowE9aGPTTJuKyaM6@webmail.nde.ag> Hi, this sounds very similar to something I experienced a couple of times this year. In a HA cloud with two control nodes (the third joined just recently) when one node was shut down (accidentally) I saw basically the same effects you're describing. 
I could create new networks and instances were started successfully and also got their IPs via DHCP while existing VMs didn't properly work (at least the dhcp part for self-service networks). I'm still not sure what exactly the root cause is as I can't reproduce it in my test lab, and retrying it in a production cluster is not a good idea. ;-) I got things to work, but it's still unclear what exactly it was. It's possible that you could see hints in the neutron logs that something's not right, I don't recall the exact message but it was something like "dhcp agent doesn't work because the server is overloaded". By the way, what is the number of dhcp agents per network you have in neutron.conf? Briefly, here's what I did (at that time with 2 control nodes): - put the pacemaker cluster into maintenance mode so I could stop and start services manually - stopped all services except rabbitmq and galera - made sure all services (like neutron) were actually "dead", so no left over processes - started apache and haproxy on one node only so all requests would land there - started one service after another manually and watched the logs - now the dhcp agent started successfully and logged - started the services on the remaining control node and everything was stable - the cluster then recovered I don't know if that helps in any way, but I thought I'd share. By the way, we don't use kolla so I can't really comment that part. Regards, Eugen Zitat von Franck VEDEL : > Hello, > after a restart of my cluster (and some problems...), I have one > last problem with the VMs already present (before the restart). > They all work fine?.They all work, console access OK, network topology ok? > > But they can no longer communicate on the network, they do not > obtain IP addresses by dhcp. Yet everything seems to be working. > If I detach the interface, I create a new interface, it doesn't > work. I cannot reach the routers. I cannot communicate with an > instance on the same network. > On the other hand, if I create a new instance, no problem, it works > and can join the other instances and its router. > Is there a way to fix this? The problem is where? in the database? > Thank you in advance for your help. > > Franck VEDEL From emccormick at cirrusseven.com Thu Nov 10 21:59:05 2022 From: emccormick at cirrusseven.com (Erik McCormick) Date: Thu, 10 Nov 2022 16:59:05 -0500 Subject: [kolla-ansible]Reset Configuration In-Reply-To: References: Message-ID: On Thu, Nov 10, 2022 at 11:52 AM Franck VEDEL < franck.vedel at univ-grenoble-alpes.fr> wrote: > Hello, > > yesterday I sent a desperate message about the functioning of my Openstack > (kolla-ansible, yoga, centos8Stream). > The situation is incomprehensible. For some accounts in some projects, > everything works (creation of networks, subnets, dhcp configuration, > instances, everything works). > For other accounts, everything works except having an IP address per dhcp > on a subnet. > The servers have restarted following a power outage. > > I managed to put everything back together except those DHCP issues. > > As I can start from an empty configuration, how to reset everything ... > for example with a command: > kolla-ansible -i multinode ____________ > > Or how to fix my configuration. It seems very complicated to me. > I would like some opinions if possible. > Thanks in advance > > Franck > Are you asking how to completely zero out your entire cluster and rebuild it? That seems a bit drastic. kolla-ansible destroy will nuke everything. 
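For reference, a sketch of what that looks like; the inventory name is a placeholder and the confirmation flag is required so it cannot be run by accident:

  # removes all kolla-managed containers and volumes on the hosts in the inventory
  kolla-ansible -i multinode destroy --yes-i-really-really-mean-it
  # add --include-images if you also want the docker images gone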
Take a backup of /etc/kolla (or wherever your inventory / globals.yml / passwords/yml is) first. Older versions removed some things there when running destroy and I can't recall when / if that changed. How many controllers do you have? Are you using OVS, OVN, or something else? Are you using L3-HA? DVR? Did all nodes have to be rebooted? If not, then which ones? Have you confirmed there are no dead containers on any controllers? ( docker ps -a ) Have you looked in logs for ERROR messages? In particular: neutron-server.log, neutron-dhcp-agent.log, nova-api.log, and nova-compute.log ? Strange things happen when time is out of sync. Verify all the nodes synced properly to an NTP server. Big symptom of this is 'openstack hypervisor list' will show hosts going up and down every few seconds. -------------- next part -------------- An HTML attachment was scrubbed... URL: From franck.vedel at univ-grenoble-alpes.fr Fri Nov 11 07:32:15 2022 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Fri, 11 Nov 2022 08:32:15 +0100 Subject: [kolla-ansible]Network Problem after server reboot In-Reply-To: <20221110180943.Horde.O7n9mEOowE9aGPTTJuKyaM6@webmail.nde.ag> References: <20221110180943.Horde.O7n9mEOowE9aGPTTJuKyaM6@webmail.nde.ag> Message-ID: <538F7430-BCCB-4DA3-88A1-F47CA0D42AB2@univ-grenoble-alpes.fr> Thanks for this help. I also have 2 control nodes. But since my first message, I have other problems. Some new instances (for some accounts, on some l3 networks) do not receive their addresses through DHCP. For example, even for the admin at default account, I have these problems. With my test account too, whereas if I ask a user to try, on some networks it works and some doesn't. On the other hand, for all, everything works on external networks (even dhcp). Well, this all seems impossible to fix. I'm not sure I can do what you say because I don't know the order of services. And there are so many logs. I look carefully at these logs, but I can't find the exact problem. I plan to reinstall because it's an Openstack used by students every year, and there I have a month to get it working. It's just demoralizing to have to redo everything when the system was working so well. On the other hand, it would be an opportunity to correct the mistakes I made (in particular, switching from CentosStream to Ubuntu for servers and containers). thanks again Franck > Le 10 nov. 2022 ? 19:09, Eugen Block a ?crit : > > Hi, > > this sounds very similar to something I experienced a couple of times this year. In a HA cloud with two control nodes (the third joined just recently) when one node was shut down (accidentally) I saw basically the same effects you're describing. I could create new networks and instances were started successfully and also got their IPs via DHCP while existing VMs didn't properly work (at least the dhcp part for self-service networks). I'm still not sure what exactly the root cause is as I can't reproduce it in my test lab, and retrying it in a production cluster is not a good idea. ;-) > I got things to work, but it's still unclear what exactly it was. It's possible that you could see hints in the neutron logs that something's not right, I don't recall the exact message but it was something like "dhcp agent doesn't work because the server is overloaded". By the way, what is the number of dhcp agents per network you have in neutron.conf? 
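A quick way to check both the setting Eugen asks about and which agents actually serve an affected network (config path and network ID are placeholders; in kolla deployments the file usually sits under /etc/kolla/neutron-server/ on the controllers):

  grep dhcp_agents_per_network /etc/kolla/neutron-server/neutron.conf
  # list the DHCP agents hosting one of the broken networks
  openstack network agent list --agent-type dhcp --network <network-id>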
> Briefly, here's what I did (at that time with 2 control nodes): > - put the pacemaker cluster into maintenance mode so I could stop and start services manually > - stopped all services except rabbitmq and galera > - made sure all services (like neutron) were actually "dead", so no left over processes > - started apache and haproxy on one node only so all requests would land there > - started one service after another manually and watched the logs > - now the dhcp agent started successfully and logged > - started the services on the remaining control node and everything was stable > - the cluster then recovered > > I don't know if that helps in any way, but I thought I'd share. By the way, we don't use kolla so I can't really comment that part. > > Regards, > Eugen > > Zitat von Franck VEDEL : > >> Hello, >> after a restart of my cluster (and some problems...), I have one last problem with the VMs already present (before the restart). >> They all work fine... They all work, console access OK, network topology ok... >> >> But they can no longer communicate on the network, they do not obtain IP addresses by dhcp. Yet everything seems to be working. >> If I detach the interface, I create a new interface, it doesn't work. I cannot reach the routers. I cannot communicate with an instance on the same network. >> On the other hand, if I create a new instance, no problem, it works and can join the other instances and its router. >> Is there a way to fix this? The problem is where? In the database? >> Thank you in advance for your help. >> >> Franck VEDEL > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From franck.vedel at univ-grenoble-alpes.fr Fri Nov 11 08:05:34 2022 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Fri, 11 Nov 2022 09:05:34 +0100 Subject: [kolla-ansible]Reset Configuration In-Reply-To: References: Message-ID: <887D56B6-6190-463D-AED9-A4C4D09C7EFF@univ-grenoble-alpes.fr> Thanks for your help, really. My cluster: 2 controller nodes, OVS, L3-HA. All nodes had to be rebooted. All is working, for example, with external networks (so dhcp on external networks). There are no dead containers, all seems ok. I try to create a new instance on an L3 network. No ERROR in neutron*.log. The only error is in nova-api.log: Example: 2022-11-11 08:45:54.452 42 ERROR oslo.messaging._drivers.impl_rabbit [-] [8b6fd776-f096-4c8a-927e-88225a3adb43] AMQP server on 10.0.5.109:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: But on the first node (10.0.5.109 on the internal network), "netstat -atnp | wc -l" shows 505 connections. So... if I back up /etc/kolla, my glance images, my configuration files... if I do "kolla-ansible destroy", is the next step "kolla-ansible bootstrap-servers", then prechecks, then deploy, or directly deploy? What's the difference with cleanup-containers? I use this openstack cluster for my students, I have a month to get it working again. I could reinstall everything (and change the operating system) but I don't have time for that. So I can lose all the users' data; if I have my glance images, my flavors, the configuration to connect the LDAP, the certificates, I think it will be ok. Franck VEDEL > Are you asking how to completely zero out your entire cluster and rebuild it? That seems a bit drastic. > > kolla-ansible destroy will nuke everything. Take a backup of /etc/kolla (or wherever your inventory / globals.yml / passwords/yml is) first.
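A minimal sketch of such a backup before running destroy, assuming the default /etc/kolla layout; file names and image IDs are placeholders:

  tar czf /root/kolla-config-$(date +%F).tar.gz /etc/kolla
  # export any Glance image worth keeping
  openstack image save --file /backup/myimage.raw <image-id>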
Older versions removed some things there when running destroy and I can't recall when / if that changed. > > How many controllers do you have? > > Are you using OVS, OVN, or something else? > > Are you using L3-HA? DVR? > > Did all nodes have to be rebooted? If not, then which ones? > > Have you confirmed there are no dead containers on any controllers? ( docker ps -a ) > > Have you looked in logs for ERROR messages? In particular: neutron-server.log, neutron-dhcp-agent.log, nova-api.log, and nova-compute.log ? > > Strange things happen when time is out of sync. Verify all the nodes synced properly to an NTP server. Big symptom of this is 'openstack hypervisor list' will show hosts going up and down every few seconds. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Nov 11 08:49:18 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 11 Nov 2022 09:49:18 +0100 Subject: [neutron] Neutron drivers meeting cancelled Message-ID: Hello Neutrinos: Due to the lack of agenda, today's meeting is cancelled. Have a nice weekend! -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Nov 11 11:04:27 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 11 Nov 2022 12:04:27 +0100 Subject: [neutron][release] Proposing to EOL Queens, Rocky and Stein (all Neutron related projects) In-Reply-To: References: Message-ID: Hello: Please send your feedback for [1]. If you have questions or concerns, please reply to this email or leave a comment in the patch. Next week I'll unblock this patch to continue the process to merge it. Regards. [1]https://review.opendev.org/c/openstack/releases/+/862937 On Fri, Oct 28, 2022 at 12:04 PM Rodolfo Alonso Hernandez < ralonsoh at redhat.com> wrote: > Hello: > > In the last PTG, the Neutron team has decided [1] to move the stable > branches Queens, Rocky and Stein to EOL (end-of-life) status. According > to the steps to achieve this [2], we need first to announce it. That will > affect all Neutron related projects. > > The patch to mark these branches as EOL will be pushed in one week. If you > have any inconvenience, please let me know in this mail chain or in IRC > (ralonsoh, #openstack-neutron channel). You can also contact any Neutron > core reviewer in the IRC channel. > > Regards. > > [1]https://etherpad.opendev.org/p/neutron-antelope-ptg#L131 > [2] > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.karpiarz at eschercloud.ai Thu Nov 10 14:01:11 2022 From: m.karpiarz at eschercloud.ai (Mariusz Karpiarz) Date: Thu, 10 Nov 2022 14:01:11 +0000 Subject: [Kolla][kolla-ansible][HAProxy] Splitting the load balancer into internal and external? Message-ID: All, Was the idea of moving internal components deployed by kolla-ansible (like the MySQL database) to a load balancer separate to the one used by user-facing APIs discussed anywhere? This feels like a good option to have for security but, as far as I'm aware, it's not supported by kolla-ansible. It should be possible to use existing Kolla HAProxy images, but mount config files from subdirectories of `/etc/kolla/haproxy/` for each container. I suspect the main hurtle here would be rewriting the user interface of kolla-ansible to account for the split whilst maintaining backward compatibility... 
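For context, kolla-ansible already distinguishes internal and external VIPs/endpoints, but both are still terminated by the same HAProxy containers, which is exactly the part such a split would change. A quick way to see the current values, assuming the default /etc/kolla layout:

  grep -E 'kolla_(internal|external)_vip_address|enable_haproxy' /etc/kolla/globals.yml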
Mariusz Karpiarz From pkliczew at redhat.com Thu Nov 10 16:33:52 2022 From: pkliczew at redhat.com (Piotr Kliczewski) Date: Thu, 10 Nov 2022 17:33:52 +0100 Subject: [Openstack][FOSDEM][CFP] Virtualization & IaaS Devroom Message-ID: We are excited to announce that the call for proposals is now open for the Virtualization & IaaS devroom at the upcoming FOSDEM 2023, to be hosted on February 4th 2023. This devroom is a collaborative effort, and is organized by dedicated folks from projects such as OpenStack, Xen Project, KubeVirt, QEMU, KVM, and Foreman. We would like to invite all those who are involved in these fields to submit your proposals by December 10th, 2022. About the Devroom The Virtualization & IaaS devroom will feature session topics such as open source hypervisors or virtual machine managers such as Xen Project, KVM, bhyve and VirtualBox as well as Infrastructure-as-a-Service projects such as KubeVirt, Apache CloudStack, OpenStack, QEMU and OpenNebula. This devroom will host presentations that focus on topics of shared interest, such as KVM; libvirt; shared storage; virtualized networking; cloud security; clustering and high availability; interfacing with multiple hypervisors; hyperconverged deployments; and scaling across hundreds or thousands of servers. Presentations in this devroom will be aimed at developers working on these platforms who are looking to collaborate and improve shared infrastructure or solve common problems. We seek topics that encourage dialog between projects and continued work post-FOSDEM. Important Dates Submission deadline: 10th December 2022 Acceptance notifications: 15th December 2022 Final schedule announcement: 20th December 2022 Devroom: First half of 4th February 2023 Submit Your Proposal All submissions must be made via the Pentabarf event planning site[1]. If you have not used Pentabarf before, you will need to create an account. If you submitted proposals for FOSDEM in previous years, you can use your existing account. After creating the account, select Create Event to start the submission process. Make sure to select Virtualization and IaaS devroom from the Track list. Please fill out all the required fields, and provide a meaningful abstract and description of your proposed session. Submission Guidelines We expect more proposals than we can possibly accept, so it is vitally important that you submit your proposal on or before the deadline. Late submissions are unlikely to be considered. All presentation slots are 30 minutes, with 20 minutes planned for presentations, and 10 minutes for Q&A. All presentations will be recorded and made available under Creative Commons licenses. In the Submission notes field, please indicate that you agree that your presentation will be licensed under the CC-By-SA-4.0 or CC-By-4.0 license and that you agree to have your presentation recorded. For example: "If my presentation is accepted for FOSDEM, I hereby agree to license all recordings, slides, and other associated materials under the Creative Commons Attribution Share-Alike 4.0 International License. Sincerely, ." In the Submission notes field, please also confirm that if your talk is accepted, you will be able to attend FOSDEM and deliver your presentation. We will not consider proposals from prospective speakers who are unsure whether they will be able to secure funds for travel and lodging to attend FOSDEM. (Sadly, we are not able to offer travel funding for prospective speakers.) 
Submission Guidelines Mentored presentations will have 25-minute slots, where 20 minutes will include the presentation and 5 minutes will be reserved for questions. The number of newcomer session slots is limited, so we will probably not be able to accept all applications. You must submit your talk and abstract to apply for the mentoring program, our mentors are volunteering their time and will happily provide feedback but won't write your presentation for you! If you are experiencing problems with Pentabarf, the proposal submission interface, or have other questions, you can email our devroom mailing list[2] and we will try to help you. How to Apply In addition to agreeing to video recording and confirming that you can attend FOSDEM in case your session is accepted, please write "speaker mentoring program application" in the "Submission notes" field, and list any prior speaking experience or other relevant information for your application. Code of Conduct Following the release of the updated code of conduct for FOSDEM, we'd like to remind all speakers and attendees that all of the presentations and discussions in our devroom are held under the guidelines set in the CoC and we expect attendees, speakers, and volunteers to follow the CoC at all times. If you submit a proposal and it is accepted, you will be required to confirm that you accept the FOSDEM CoC. If you have any questions about the CoC or wish to have one of the devroom organizers review your presentation slides or any other content for CoC compliance, please email us and we will do our best to assist you. Call for Volunteers We are also looking for volunteers to help run the devroom. We need assistance watching time for the speakers, and helping with video for the devroom. Please contact devroom mailing list [2] for more information. Questions? If you have any questions about this devroom, please send your questions to our devroom mailing list. You can also subscribe to the list to receive updates about important dates, session announcements, and to connect with other attendees. See you all at FOSDEM! [1] https://penta.fosdem.org/submission/FOSDEM23 [2] iaas-virt-devroom at lists.fosdem.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Fri Nov 11 14:36:53 2022 From: elod.illes at est.tech (=?utf-8?B?RWzDtWQgSWxsw6lz?=) Date: Fri, 11 Nov 2022 14:36:53 +0000 Subject: [release] Release countdown for week R-18, Nov 14 - 18 Message-ID: Development Focus ----------------- The Antelope-1 milestone is next week, on November 17th, 2022! Project team plans for the 2023.1 Antelope cycle should now be solidified. General Information ------------------- Libraries need to be released at least once per milestone period. Next week, the release team will propose releases for any library which had changes but has not been otherwise released since the Zed release. PTL's or release liaisons, please watch for these and give a +1 to acknowledge them. If there is some reason to hold off on a release, let us know that as well, by posting a -1. If we do not hear anything at all by the end of the week, we will assume things are OK to proceed. NB: If one of your libraries is still releasing 0.x versions, start thinking about when it will be appropriate to do a 1.0 version. The version number does signal the state, real or perceived, of the library, so we strongly encourage going to a full major version once things are in a good and usable state. 
Upcoming Deadlines & Dates -------------------------- Antelope-1 milestone: November 17th, 2022 Final 2023.1 Antelope release: March 22nd, 2023 El?d Ill?s irc: elodilles -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Fri Nov 11 14:55:15 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Fri, 11 Nov 2022 15:55:15 +0100 Subject: [kolla-ansible][Yoga] Install with self-signed certificate In-Reply-To: References: Message-ID: Some help please. On Tue, Nov 8, 2022, 14:44 wodel youchi wrote: > Hi, > > To deploy Openstack with a self-signed certificate, the documentation says > to generate the certificates using kolla-ansible certificates, to configure > the support of TLS in globals.yml and to deploy. > > I am facing a problem, my old certificate has expired, I want to use a > self-signed certificate. > I backported my servers to an older date, then generated a self-signed > certificate using kolla, but the deploy/reconfigure won't work, they say : > > self._sslobj.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", line > 648, in do_handshakeself._sslobj.do_handshake()\nssl.SSLError: [SSL: > CERTIFICATE_VERIFY_FAILED certificate verify failed > > PS : in my globals.yml i have : *kolla_verify_tls_backend: "yes"* > > Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Fri Nov 11 15:59:09 2022 From: emccormick at cirrusseven.com (Erik McCormick) Date: Fri, 11 Nov 2022 10:59:09 -0500 Subject: [kolla-ansible]Reset Configuration In-Reply-To: <887D56B6-6190-463D-AED9-A4C4D09C7EFF@univ-grenoble-alpes.fr> References: <887D56B6-6190-463D-AED9-A4C4D09C7EFF@univ-grenoble-alpes.fr> Message-ID: On Fri, Nov 11, 2022 at 3:05 AM Franck VEDEL < franck.vedel at univ-grenoble-alpes.fr> wrote: > > Thanks for your help, really. > My cluster: 2 controllers nodes, OVS, L3-HA. > All nodes had to be rebooted > All is working for example with external networks (so dhcp on external > networks). > There are no dead containers, all seems ok. > > I try to create a new instance on a L3 network. No ERROR in neutron*.log. > The only error is nova-api.log: > > Example: > 2022-11-11 08:45:54.452 42 ERROR oslo.messaging._drivers.impl_rabbit [-] > [8b6fd776-f096-4c8a-927e-88225a3adb43] AMQP server on 10.0.5.109:5672 is > unreachable: . Trying again in 1 > seconds.: amqp.exceptions.RecoverableConnectionError: > > > But on the first node (10.0.5.109 on the internal network) ? netstat -atnp > |wc-l ? ???>>> 505 connections > > Sounds to me like Rabbit is broken. This could also be an issue with NTP which I asked about earlier. Did you confirm your systems are all correctly synced to the same time source? You can check the status of rabbit on each control node with: docker exec -it rabbitmq rabbitmqctl cluster_status Output should show the same on both of your controllers. If not, restart your rabbit containers. If they won't come back properly, you could destroy and redeploy just those two containers l On both controllers do: docker rm rabbitmq docker volume rm rabbitmq Then kolla-ansible --tags rabbitmq deploy So?. if I backup /etc/kolla, my glance images, my configuration files? > if a do ? koll-ansible destroy ?, is next step ? kolla-ansible > bootstraps?. ? and preaches, and deploy, > or directly deploy ? > > What?s the difference with cleanup-containers ? > > You don't need to bootstrap again. That just installs prerequisites which won't get removed from the destroy. 
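For completeness, the usual sequence for a deploy from scratch looks roughly like the following (inventory name is a placeholder); as noted above, bootstrap-servers can be skipped when the hosts are already prepared:

  kolla-ansible -i multinode bootstrap-servers
  kolla-ansible -i multinode prechecks
  kolla-ansible -i multinode deploy
  kolla-ansible -i multinode post-deploy   # regenerates /etc/kolla/admin-openrc.sh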
Just go right to doing kolla-ansible deploy again. Do remember this will give you a brand new Openstack with nothing preserved from before. cleanup-containers alone may leave behind some docker tweaks that Neutron needs. It probably doesn't matter if you're going to just redeploy the same configuration though so go ahead and use that instead. -Erik > > I use this openstack cluster for my students, I have a month to get it > working again. I could reinstall everything (and change the operating > system) but I don't have time for that. > So I can lose all the users data, if I have my glance images, my flavors, > the configuration to hang the ldap, the certificates, I think it will be ok. > > > > Franck VEDEL > > > Are you asking how to completely zero out your entire cluster and rebuild > it? That seems a bit drastic. > > kolla-ansible destroy will nuke everything. Take a backup of /etc/kolla > (or wherever your inventory / globals.yml / passwords/yml is) first. Older > versions removed some things there when running destroy and I can't recall > when / if that changed. > > How many controllers do you have? > > Are you using OVS, OVN, or something else? > > Are you using L3-HA? DVR? > > Did all nodes have to be rebooted? If not, then which ones? > > Have you confirmed there are no dead containers on any controllers? ( > docker ps -a ) > > Have you looked in logs for ERROR messages? In particular: > neutron-server.log, neutron-dhcp-agent.log, nova-api.log, and > nova-compute.log ? > > Strange things happen when time is out of sync. Verify all the nodes > synced properly to an NTP server. Big symptom of this is 'openstack > hypervisor list' will show hosts going up and down every few seconds. > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From franck.vedel at univ-grenoble-alpes.fr Fri Nov 11 19:40:49 2022 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Fri, 11 Nov 2022 20:40:49 +0100 Subject: [kolla-ansible]Reset Configuration In-Reply-To: References: <887D56B6-6190-463D-AED9-A4C4D09C7EFF@univ-grenoble-alpes.fr> Message-ID: <90DE6D6B-024E-44F4-93C3-478AE4E184A9@univ-grenoble-alpes.fr> Thanks for your help Erik. All is fine with NTP. Exactly the same result with " docker exec -it rabbitmq rabbitmqctl cluster_status" on the 2 nodes. I Will try this: > On both controllers do: > docker rm rabbitmq > docker volume rm rabbitmq > > Then kolla-ansible --tags rabbitmq deploy Franck VEDEL D?p. R?seaux Informatiques & T?l?coms IUT1 - Univ GRENOBLE Alpes 0476824462 Stages, Alternance, Emploi. > Le 11 nov. 2022 ? 16:59, Erik McCormick a ?crit : > > > > On Fri, Nov 11, 2022 at 3:05 AM Franck VEDEL > wrote: > > Thanks for your help, really. > My cluster: 2 controllers nodes, OVS, L3-HA. > All nodes had to be rebooted > All is working for example with external networks (so dhcp on external networks). > There are no dead containers, all seems ok. > > I try to create a new instance on a L3 network. No ERROR in neutron*.log. > The only error is nova-api.log: > > Example: > 2022-11-11 08:45:54.452 42 ERROR oslo.messaging._drivers.impl_rabbit [-] [8b6fd776-f096-4c8a-927e-88225a3adb43] AMQP server on 10.0.5.109:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: > > But on the first node (10.0.5.109 on the internal network) ? netstat -atnp |wc-l ? ???>>> 505 connections > > Sounds to me like Rabbit is broken. This could also be an issue with NTP which I asked about earlier. 
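A quick way to confirm the time-sync part on every node, assuming chrony as shipped with CentOS Stream:

  timedatectl                # look for "System clock synchronized: yes"
  chronyc tracking           # offset from the reference source
  chronyc sources -v         # are the configured NTP servers reachable?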
Did you confirm your systems are all correctly synced to the same time source? > > You can check the status of rabbit on each control node with: > > docker exec -it rabbitmq rabbitmqctl cluster_status > > Output should show the same on both of your controllers. If not, restart your rabbit containers. If they won't come back properly, you could destroy and redeploy just those two containers l > > On both controllers do: > docker rm rabbitmq > docker volume rm rabbitmq > > Then kolla-ansible --tags rabbitmq deploy > > > So?. if I backup /etc/kolla, my glance images, my configuration files? > if a do ? koll-ansible destroy ?, is next step ? kolla-ansible bootstraps?. ? and preaches, and deploy, > or directly deploy ? > > What?s the difference with cleanup-containers ? > > You don't need to bootstrap again. That just installs prerequisites which won't get removed from the destroy. Just go right to doing kolla-ansible deploy again. Do remember this will give you a brand new Openstack with nothing preserved from before. > > cleanup-containers alone may leave behind some docker tweaks that Neutron needs. It probably doesn't matter if you're going to just redeploy the same configuration though so go ahead and use that instead. > > -Erik > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.aminian.server at gmail.com Fri Nov 11 19:52:40 2022 From: p.aminian.server at gmail.com (Parsa Aminian) Date: Fri, 11 Nov 2022 23:22:40 +0330 Subject: assign network subnet to computes Message-ID: hello is there any way to assign a specific subnet in openstack to a specific compute ? For example subnets with ip 192.168.11.0/24 only can assign to instances on compute6 . -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Fri Nov 11 20:13:29 2022 From: eblock at nde.ag (Eugen Block) Date: Fri, 11 Nov 2022 20:13:29 +0000 Subject: [kolla-ansible][Yoga] Install with self-signed certificate In-Reply-To: References: Message-ID: <20221111201329.Horde.5Jstm8Mvo6YfTcBDJsTx7T3@webmail.nde.ag> Hi, I'm not familiar with kolla, but the docs also mention this option: kolla_copy_ca_into_containers: "yes" As I understand it the CA cert is required within the containers so they can trust the self-signed certs. At least that's how I configure it in a manually deployed openstack cloud. Do you have that option enabled? If it is enabled, did you verify it with openssl tools? Regards, Eugen Zitat von wodel youchi : > Some help please. > > On Tue, Nov 8, 2022, 14:44 wodel youchi wrote: > >> Hi, >> >> To deploy Openstack with a self-signed certificate, the documentation says >> to generate the certificates using kolla-ansible certificates, to configure >> the support of TLS in globals.yml and to deploy. >> >> I am facing a problem, my old certificate has expired, I want to use a >> self-signed certificate. >> I backported my servers to an older date, then generated a self-signed >> certificate using kolla, but the deploy/reconfigure won't work, they say : >> >> self._sslobj.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", line >> 648, in do_handshakeself._sslobj.do_handshake()\nssl.SSLError: [SSL: >> CERTIFICATE_VERIFY_FAILED certificate verify failed >> >> PS : in my globals.yml i have : *kolla_verify_tls_backend: "yes"* >> >> Regards. 
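Since the exact openssl command comes up again further down the thread, a rough sketch; the VIP, port and certificate paths are assumptions to adapt to your deployment (kolla-generated certificates usually live under /etc/kolla/certificates/):

  # inspect the certificate chain the internal endpoint actually presents
  openssl s_client -connect <kolla_internal_vip>:5000 -showcerts </dev/null
  # verify the HAProxy cert against the self-signed root CA
  openssl verify -CAfile /etc/kolla/certificates/ca/root.crt /etc/kolla/certificates/haproxy.pem
  # for CLI tests, either trust that CA explicitly or skip verification
  openstack --os-cacert /etc/kolla/certificates/ca/root.crt token issue
  openstack --insecure token issue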
>> From gmann at ghanshyammann.com Fri Nov 11 22:08:39 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 11 Nov 2022 14:08:39 -0800 Subject: [all][tc] What's happening in Technical Committee: summary 2022 Nov 11: Reading: 5 min Message-ID: <18468bcdee7.1148f7f4b153555.1190917032431871802@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * * We had this week's meeting on Nov 09. Most of the meeting discussions are summarized in this email. Meeting logs are available @ https://meetings.opendev.org/meetings/tc/2022/tc.2022-11-09-16.00.log.html * Next TC weekly meeting will be on Nov 16 Wed at 16:00 UTC, please make note of the new day/time for TC meetings. Feel free to add the topic to the agenda[1] by Nov 15. 2. What we completed this week: ========================= * None for this week. 3. Activities In progress: ================== TC Tracker for 2023.1 cycle --------------------------------- * Current cycle working items and their progress are present in 2023.1 tracker etherpad[2]. Open Reviews ----------------- * Ten open reviews for ongoing activities[3]. 2023.1 cycle testing runtime changes (additional testing) ------------------------------------------------------------------- We discussed it in PTG and mainly for smooth upgrade what best we can do/test in testing runtime when we bump the distro version. You might have read my PTG summary email about the agreement on this. Accordingly, I have updated the patch[4] which includes the change in the 2023.1 cycle testing runtime: 1. Keep supporting Ubuntu 20.04: at least one integrated job running at the project gate. 2. Minimum python version will be py3.8 (default in Ubuntu 20.04) This is early heads-up on this and you can review it if any feedback otherwise once it is merged, I will send these updates in a separate email also. TC to stop using storyboard? ---------------------------------- TC has a governance project in the storyboard and it was confusing for many of us. It was used to track the tasks mainly the community-wide goals. But we have not used it for a couple of years and etherpad for TC tracker works very well since Xena cycle[5]. Most TC members are ok to remove the TC storyboard project (after cleaning all story/tasks which I already started) but as we did not have all the TC members in this week's meeting, we will do a formal vote on this in the next TC meeting. Renovate translation SIG i18 ---------------------------------- * rosmaita is coordinating with Weblate and updated that hosting OpenStack translation on Weblate is not free (they can give a 50% discount though). This is a difficult situation now and rosmaita will continue his research and user survey if we can get help from users using the translation. TC Weekly meeting new day and time -------------------------------------------- I am mentioning this again in case anyone missed it. The TC weekly meeting new time is every Wed at 16:00 UTC. TC Video meeting discussion ---------------------------------- JayF resolution patch to move TC all weekly meetings format to IRC is under review[6]. TC chair nomination & election process ----------------------------------------------- I have updated the patch to have the dir structure in the chair nomination. The current two options are up for the review[7][8]. 
Project updates ------------------- * Add zookeeper role under OpenStack-Ansible governance[9] * Add Skyline repository for OpenStack-Ansible[10] * Add the cinder-infinidat charm to Openstack charms[11] * Add the infinidat-tools subordinate charm to OpenStack charms[12] * Add the manila-infinidat charm to Openstack charms[13] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[14]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15:00 UTC [15] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. See you all next week in PTG! [1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://etherpad.opendev.org/p/tc-zed-tracker [3] https://review.opendev.org/q/projects:openstack/governance+status:open [4] https://review.opendev.org/c/openstack/governance/+/860599 [5] https://wiki.openstack.org/wiki/Technical_Committee_Tracker [6] https://review.opendev.org/c/openstack/governance/+/863685 [7] https://review.opendev.org/c/openstack/governance/+/862772 [8] https://review.opendev.org/c/openstack/governance/+/862774 [9] https://review.opendev.org/c/openstack/governance/+/863161 [10] https://review.opendev.org/c/openstack/governance/+/863166 [11] https://review.opendev.org/c/openstack/governance/+/863958 [12] https://review.opendev.org/c/openstack/governance/+/864067 [13] https://review.opendev.org/c/openstack/governance/+/864068 [14] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [15] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From laurentfdumont at gmail.com Fri Nov 11 22:33:08 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Fri, 11 Nov 2022 17:33:08 -0500 Subject: [kolla-ansible]Reset Configuration In-Reply-To: <90DE6D6B-024E-44F4-93C3-478AE4E184A9@univ-grenoble-alpes.fr> References: <887D56B6-6190-463D-AED9-A4C4D09C7EFF@univ-grenoble-alpes.fr> <90DE6D6B-024E-44F4-93C3-478AE4E184A9@univ-grenoble-alpes.fr> Message-ID: Salut Franck! Can you share the output of docker exec -it rabbitmq rabbitmqctl cluster_status? Can you "nc -v" from one of the compute nodes towards the controller nodes? Laurent On Fri, Nov 11, 2022 at 2:47 PM Franck VEDEL < franck.vedel at univ-grenoble-alpes.fr> wrote: > Thanks for your help Erik. > All is fine with NTP. > Exactly the same result with " docker exec -it rabbitmq rabbitmqctl > cluster_status" on the 2 nodes. > > I Will try this: > > On both controllers do: > docker rm rabbitmq > docker volume rm rabbitmq > > Then kolla-ansible --tags rabbitmq deploy > > > > > Franck VEDEL > *D?p. R?seaux Informatiques & T?l?coms* > *IUT1 - Univ GRENOBLE Alpes* > *0476824462* > Stages, Alternance, Emploi. > > > > Le 11 nov. 2022 ? 16:59, Erik McCormick a > ?crit : > > > > On Fri, Nov 11, 2022 at 3:05 AM Franck VEDEL < > franck.vedel at univ-grenoble-alpes.fr> wrote: > >> >> Thanks for your help, really. >> My cluster: 2 controllers nodes, OVS, L3-HA. >> All nodes had to be rebooted >> All is working for example with external networks (so dhcp on external >> networks). >> There are no dead containers, all seems ok. >> >> I try to create a new instance on a L3 network. No ERROR in neutron*.log. 
>> The only error is nova-api.log: >> >> Example: >> 2022-11-11 08:45:54.452 42 ERROR oslo.messaging._drivers.impl_rabbit [-] >> [8b6fd776-f096-4c8a-927e-88225a3adb43] AMQP server on 10.0.5.109:5672 is >> unreachable: . Trying again in 1 >> seconds.: amqp.exceptions.RecoverableConnectionError: >> >> >> But on the first node (10.0.5.109 on the internal network) ? netstat >> -atnp |wc-l ? ???>>> 505 connections >> >> Sounds to me like Rabbit is broken. This could also be an issue with NTP > which I asked about earlier. Did you confirm your systems are all correctly > synced to the same time source? > > You can check the status of rabbit on each control node with: > > docker exec -it rabbitmq rabbitmqctl cluster_status > > Output should show the same on both of your controllers. If not, restart > your rabbit containers. If they won't come back properly, you could destroy > and redeploy just those two containers l > > On both controllers do: > docker rm rabbitmq > docker volume rm rabbitmq > > Then kolla-ansible --tags rabbitmq deploy > > > So?. if I backup /etc/kolla, my glance images, my configuration files? >> if a do ? koll-ansible destroy ?, is next step ? kolla-ansible >> bootstraps?. ? and preaches, and deploy, >> or directly deploy ? >> >> What?s the difference with cleanup-containers ? >> >> You don't need to bootstrap again. That just installs prerequisites which > won't get removed from the destroy. Just go right to doing kolla-ansible > deploy again. Do remember this will give you a brand new Openstack with > nothing preserved from before. > > cleanup-containers alone may leave behind some docker tweaks that Neutron > needs. It probably doesn't matter if you're going to just redeploy the same > configuration though so go ahead and use that instead. > > -Erik > > >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Sat Nov 12 08:02:16 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Sat, 12 Nov 2022 09:02:16 +0100 Subject: [kolla-ansible][Yoga] Install with self-signed certificate In-Reply-To: <20221111201329.Horde.5Jstm8Mvo6YfTcBDJsTx7T3@webmail.nde.ag> References: <20221111201329.Horde.5Jstm8Mvo6YfTcBDJsTx7T3@webmail.nde.ag> Message-ID: Hi Thanks for your help. First I want to correct something, the *kolla_verify_tls_backend* was positioned to *false* from the beginning, while doing the first deployment with the commercial certificate. And yes I have *kolla_copy_ca_into_containers* positioned to *yes* from the beginning. And I can see in the nodes that there is a directory named certificates in every module's directory in /etc/kolla What do you mean by using openssl? Do you mean to execute the command inside a container and try to connect to keystone? If yes what is the correct command? It seems like something is missing to tell the client side to ignore the certificate validity, something like the --insecure parameter in the openstack cli. Regards. On Fri, Nov 11, 2022, 21:21 Eugen Block wrote: > Hi, > > I'm not familiar with kolla, but the docs also mention this option: > > kolla_copy_ca_into_containers: "yes" > > As I understand it the CA cert is required within the containers so > they can trust the self-signed certs. At least that's how I configure > it in a manually deployed openstack cloud. Do you have that option > enabled? If it is enabled, did you verify it with openssl tools? > > Regards, > Eugen > > Zitat von wodel youchi : > > > Some help please. 
> > > > On Tue, Nov 8, 2022, 14:44 wodel youchi wrote: > > > >> Hi, > >> > >> To deploy Openstack with a self-signed certificate, the documentation > says > >> to generate the certificates using kolla-ansible certificates, to > configure > >> the support of TLS in globals.yml and to deploy. > >> > >> I am facing a problem, my old certificate has expired, I want to use a > >> self-signed certificate. > >> I backported my servers to an older date, then generated a self-signed > >> certificate using kolla, but the deploy/reconfigure won't work, they > say : > >> > >> self._sslobj.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", > line > >> 648, in do_handshakeself._sslobj.do_handshake()\nssl.SSLError: [SSL: > >> CERTIFICATE_VERIFY_FAILED certificate verify failed > >> > >> PS : in my globals.yml i have : *kolla_verify_tls_backend: "yes"* > >> > >> Regards. > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From franck.vedel at univ-grenoble-alpes.fr Sat Nov 12 08:08:44 2022 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Sat, 12 Nov 2022 09:08:44 +0100 Subject: [kolla-ansible]Reset Configuration In-Reply-To: References: <887D56B6-6190-463D-AED9-A4C4D09C7EFF@univ-grenoble-alpes.fr> <90DE6D6B-024E-44F4-93C3-478AE4E184A9@univ-grenoble-alpes.fr> Message-ID: Bonjour ! Output of the command Cluster status of node rabbit at iut1r-srv-ops01-i01 ... Basics Cluster name: rabbit at iut1r-srv-ops01-i01.u-ga.fr Disk Nodes rabbit at iut1r-srv-ops01-i01 rabbit at iut1r-srv-ops02-i01 Running Nodes rabbit at iut1r-srv-ops01-i01 rabbit at iut1r-srv-ops02-i01 Versions rabbit at iut1r-srv-ops01-i01: RabbitMQ 3.9.20 on Erlang 24.3.4.2 rabbit at iut1r-srv-ops02-i01: RabbitMQ 3.9.20 on Erlang 24.3.4.2 Maintenance status Node: rabbit at iut1r-srv-ops01-i01, status: not under maintenance Node: rabbit at iut1r-srv-ops02-i01, status: not under maintenance Alarms (none) Network Partitions (none) Listeners Node: rabbit at iut1r-srv-ops01-i01, interface: [::], port: 15672, protocol: http, purpose: HTTP API Node: rabbit at iut1r-srv-ops01-i01, interface: [::], port: 15692, protocol: http/prometheus, purpose: Prometheus exporter API over HTTP Node: rabbit at iut1r-srv-ops01-i01, interface: 10.0.5.109, port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication Node: rabbit at iut1r-srv-ops01-i01, interface: 10.0.5.109, port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0 Node: rabbit at iut1r-srv-ops02-i01, interface: [::], port: 15672, protocol: http, purpose: HTTP API Node: rabbit at iut1r-srv-ops02-i01, interface: [::], port: 15692, protocol: http/prometheus, purpose: Prometheus exporter API over HTTP Node: rabbit at iut1r-srv-ops02-i01, interface: 10.0.5.110, port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication Node: rabbit at iut1r-srv-ops02-i01, interface: 10.0.5.110, port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0 Feature flags Flag: drop_unroutable_metric, state: enabled Flag: empty_basic_get_metric, state: enabled Flag: implicit_default_bindings, state: enabled Flag: maintenance_mode_status, state: enabled Flag: quorum_queue, state: enabled Flag: stream_queue, state: enabled Flag: user_limits, state: enabled Flag: virtual_host_metadata, state: enabled So? nothing strange for me. All containers are healthy nom (after delete rabbitmq and rebuild rabbitmq). in addition to dhcp, communications on the network do not work. If I create an instance, it has no ip address by dhcp. 
If I give her a static ip, she can't reach the router. If I create another instance, with another static ip, they don't communicate with each other. And they can't ping the router (or routers, I put 2, 1 on each of my 2 external networks) There are some errors in rabbitmq?..log: 2022-11-12 08:53:37.155542+01:00 [error] <0.16179.2> missed heartbeats from client, timeout: 60s 2022-11-12 08:54:54.026480+01:00 [error] <0.17357.2> closing AMQP connection <0.17357.2> (10.0.5.109:37532 -> 10.0.5.109:5672 - mod_wsgi:43:e50d8e69-7c76-4198-877c-c807e0a180d8): 2022-11-12 08:54:54.026480+01:00 [error] <0.17357.2> missed heartbeats from client, timeout: 60s There are some errors also in neutron-l3-agent.log 2022-11-11 22:04:42.512 37 ERROR oslo_service.periodic_task message = self.waiters.get(msg_id, timeout=timeout) 2022-11-11 22:04:42.512 37 ERROR oslo_service.periodic_task File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 445, in get 2022-11-11 22:04:42.512 37 ERROR oslo_service.periodic_task 'to message ID %s' % msg_id) 2022-11-11 22:04:42.512 37 ERROR oslo_service.periodic_task oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID 297cacfadd764562bf09a1c5daf61958 Also in neutron-dhcp-agent.log 2022-11-11 22:04:44.854 7 ERROR neutron.agent.dhcp.agent message = self.waiters.get(msg_id, timeout=timeout) 2022-11-11 22:04:44.854 7 ERROR neutron.agent.dhcp.agent File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 445, in get 2022-11-11 22:04:44.854 7 ERROR neutron.agent.dhcp.agent 'to message ID %s' % msg_id) 2022-11-11 22:04:44.854 7 ERROR neutron.agent.dhcp.agent oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID 6f1d9d0c51ac4d89b9c889ca273f40a0 A lot of errors in neutron-metadata.log 2022-11-11 22:01:44.152 43 ERROR oslo.messaging._drivers.impl_rabbit [-] [d7902e2c-eba9-40e4-b872-40e7ba7a39ec] AMQP server on 10.0.5.109:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: 2022-11-11 22:01:44.226 7 ERROR oslo.messaging._drivers.impl_rabbit [-] [028872e4-fcd1-4de5-b20c-8c5541e3c77f] AMQP server on 10.0.5.109:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: timeout ?. waiting?. unreachable?. connectionerror?. Something is wrong, but I think it?s very difficult to find the problem. To difficult for me. ? nc -v ? works. I do not know what to do. I can lose all data (networks, instances, volumes, etc). I can start again on a new config Do I do it with kolla-ansible -i multinode destroy? Before switching to Yoga, I had a cluster under Xena. I kept my configuration and a venv (python) with koll-ansible for Xena. Am I going back to this version? How without doing stupid things? Thanks a lot. Franck VEDEL > Le 11 nov. 2022 ? 23:33, Laurent Dumont a ?crit : > > docker exec -it rabbitmq rabbitmqctl cluster_status -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Sat Nov 12 14:10:13 2022 From: emccormick at cirrusseven.com (Erik McCormick) Date: Sat, 12 Nov 2022 09:10:13 -0500 Subject: [kolla-ansible]Reset Configuration In-Reply-To: References: <887D56B6-6190-463D-AED9-A4C4D09C7EFF@univ-grenoble-alpes.fr> <90DE6D6B-024E-44F4-93C3-478AE4E184A9@univ-grenoble-alpes.fr> Message-ID: On Sat, Nov 12, 2022 at 3:08 AM Franck VEDEL < franck.vedel at univ-grenoble-alpes.fr> wrote: > Bonjour ! 
> > Output of the command > > Cluster status of node rabbit at iut1r-srv-ops01-i01 ... > Basics > Cluster name: rabbit at iut1r-srv-ops01-i01.u-ga.fr > > Disk Nodes > rabbit at iut1r-srv-ops01-i01 > rabbit at iut1r-srv-ops02-i01 > > Running Nodes > rabbit at iut1r-srv-ops01-i01 > rabbit at iut1r-srv-ops02-i01 > > Versions > rabbit at iut1r-srv-ops01-i01: RabbitMQ 3.9.20 on Erlang 24.3.4.2 > rabbit at iut1r-srv-ops02-i01: RabbitMQ 3.9.20 on Erlang 24.3.4.2 > > Maintenance status > Node: rabbit at iut1r-srv-ops01-i01, status: not under maintenance > Node: rabbit at iut1r-srv-ops02-i01, status: not under maintenance > > Alarms > (none) > > Network Partitions > (none) > > Listeners > Node: rabbit at iut1r-srv-ops01-i01, interface: [::], port: 15672, protocol: > http, purpose: HTTP API > Node: rabbit at iut1r-srv-ops01-i01, interface: [::], port: 15692, protocol: > http/prometheus, purpose: Prometheus exporter API over HTTP > Node: rabbit at iut1r-srv-ops01-i01, interface: 10.0.5.109, port: 25672, > protocol: clustering, purpose: inter-node and CLI tool communication > Node: rabbit at iut1r-srv-ops01-i01, interface: 10.0.5.109, port: 5672, > protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0 > Node: rabbit at iut1r-srv-ops02-i01, interface: [::], port: 15672, protocol: > http, purpose: HTTP API > Node: rabbit at iut1r-srv-ops02-i01, interface: [::], port: 15692, protocol: > http/prometheus, purpose: Prometheus exporter API over HTTP > Node: rabbit at iut1r-srv-ops02-i01, interface: 10.0.5.110, port: 25672, > protocol: clustering, purpose: inter-node and CLI tool communication > Node: rabbit at iut1r-srv-ops02-i01, interface: 10.0.5.110, port: 5672, > protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0 > > Feature flags > Flag: drop_unroutable_metric, state: enabled > Flag: empty_basic_get_metric, state: enabled > Flag: implicit_default_bindings, state: enabled > Flag: maintenance_mode_status, state: enabled > Flag: quorum_queue, state: enabled > Flag: stream_queue, state: enabled > Flag: user_limits, state: enabled > Flag: virtual_host_metadata, state: enabled > > So? nothing strange for me. > > All containers are healthy nom (after delete rabbitmq and rebuild > rabbitmq). > > > in addition to dhcp, communications on the network do not work. > If I create an instance, it has no ip address by dhcp. > If I give her a static ip, she can't reach the router. > If I create another instance, with another static ip, they don't > communicate with each other. 
> And they can't ping the router (or routers, I put 2, 1 on each of my 2 > external networks) > > There are some errors in rabbitmq?..log: > 2022-11-12 08:53:37.155542+01:00 [error] <0.16179.2> missed heartbeats > from client, timeout: 60s > 2022-11-12 08:54:54.026480+01:00 [error] <0.17357.2> closing AMQP > connection <0.17357.2> (10.0.5.109:37532 -> 10.0.5.109:5672 - > mod_wsgi:43:e50d8e69-7c76-4198-877c-c807e0a180d8): > 2022-11-12 08:54:54.026480+01:00 [error] <0.17357.2> missed heartbeats > from client, timeout: 60s > > There are some errors also in neutron-l3-agent.log > 2022-11-11 22:04:42.512 37 ERROR oslo_service.periodic_task message = > self.waiters.get(msg_id, timeout=timeout) > 2022-11-11 22:04:42.512 37 ERROR oslo_service.periodic_task File > "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", > line 445, in get > 2022-11-11 22:04:42.512 37 ERROR oslo_service.periodic_task 'to > message ID %s' % msg_id) > 2022-11-11 22:04:42.512 37 ERROR oslo_service.periodic_task > oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply > to message ID 297cacfadd764562bf09a1c5daf61958 > > Also in neutron-dhcp-agent.log > 2022-11-11 22:04:44.854 7 ERROR neutron.agent.dhcp.agent message = > self.waiters.get(msg_id, timeout=timeout) > 2022-11-11 22:04:44.854 7 ERROR neutron.agent.dhcp.agent File > "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", > line 445, in get > 2022-11-11 22:04:44.854 7 ERROR neutron.agent.dhcp.agent 'to message > ID %s' % msg_id) > 2022-11-11 22:04:44.854 7 ERROR neutron.agent.dhcp.agent > oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply > to message ID 6f1d9d0c51ac4d89b9c889ca273f40a0 > > A lot of errors in neutron-metadata.log > 2022-11-11 22:01:44.152 43 ERROR oslo.messaging._drivers.impl_rabbit [-] > [d7902e2c-eba9-40e4-b872-40e7ba7a39ec] AMQP server on 10.0.5.109:5672 is > unreachable: . Trying again in 1 > seconds.: amqp.exceptions.RecoverableConnectionError: > > 2022-11-11 22:01:44.226 7 ERROR oslo.messaging._drivers.impl_rabbit [-] > [028872e4-fcd1-4de5-b20c-8c5541e3c77f] AMQP server on 10.0.5.109:5672 is > unreachable: . Trying again in 1 > seconds.: amqp.exceptions.RecoverableConnectionError: > > > > timeout ?. waiting?. unreachable?. connectionerror?. > > Something is wrong, but I think it?s very difficult to find the problem. > To difficult for me. > ? nc -v ? works. > There are several things that can cause issues with Rabbit, or with services sending messages. Rabbit itself is not always to blame. Things I've seen cause issues before include: 1) Time not being in sync on all systems (covered that earlier) 2) DNS (it's always DNS, right?) 3) Networking issues like mismatched MTU 4) Nova being configured for a Ceph backend, but timing out trying to talk to the cluster (messages would expire while Nova waited on it) > I do not know what to do. > I can lose all data (networks, instances, volumes, etc). I can start again > on a new config > Do I do it with kolla-ansible -i multinode destroy? > > Yeah, just do kolla-ansible -i multinode destroy after backing up your kolla configs. Before switching to Yoga, I had a cluster under Xena. I kept my > configuration and a venv (python) with koll-ansible for Xena. > Am I going back to this version? How without doing stupid things? > > I can't see any good reason to roll back to Xena. Yoga should be fine. 
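For reference, a minimal sketch of that destroy-and-redeploy sequence, assuming
the default /etc/kolla configuration directory and the multinode inventory used
in this thread (paths are illustrative, adjust to your environment):

# keep a copy of the deployment configuration and passwords first
cp -a /etc/kolla /etc/kolla.backup-$(date +%F)
cp -a multinode multinode.backup-$(date +%F)

# tear down all kolla-managed containers and volumes on the inventory hosts
kolla-ansible -i multinode destroy --yes-i-really-really-mean-it

# redeploy from the saved configuration; bootstrap-servers is not needed again
kolla-ansible -i multinode deploy

The backup only covers the deployment configuration; instances, volumes and
images are not preserved across the destroy.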
Changing should be as simple as swapping your VENV, and using your Xena globals.yml, passwords.yml, inventory, and any other custom configs you had for that version. > Thanks a lot. > > Franck VEDEL > > > > Le 11 nov. 2022 ? 23:33, Laurent Dumont a > ?crit : > > docker exec -it rabbitmq rabbitmqctl cluster_status > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias at caktusgroup.com Sat Nov 12 17:12:39 2022 From: tobias at caktusgroup.com (Tobias McNulty) Date: Sat, 12 Nov 2022 12:12:39 -0500 Subject: Kolla Ansible on Ubuntu 20.04 - cloud-init & other network issues Message-ID: Hi, I'm attempting to use Kolla Ansible 14.6.0 to deploy OpenStack Yoga on a small 3-node Ubuntu 20.04 cluster. The nodes have 128 GB RAM each, dual Xeon processors, and dual 10G Intel NICs. The NICs are connected to access ports on a 10G switch with separate VLANs for the local and external networks. All the playbooks run cleanly, but cloud-init is failing in the Ubuntu 20.04 and 22.04 VMs I attempt to boot. The VM images are unmodified from https://cloud-images.ubuntu.com/, and cloud-init works fine if I mount a second volume with user-data. The error is a timeout attempting to reach 169.254.169.254. This occurs both when booting a VM in an internal routed network and directly in an external network. I tried various neutron plugin agents (ovn, linuxbridge, and openvswitch both with and without firewall_driver = openvswitch ) first with a clean install of the entire OS each time, all with the same result. Running tcpdump looking for 169.254.169.254 shows nothing. As a possible clue, the virtual NICs are unable to pass any traffic (e.g., to reach an external DHCP server) unless I completely disable port security on the interface (even if the associated security group is wide open). But disabling port security does not fix cloud-init (not to mention I don't really want to disable port security). Are there any additional requirements related to deploying OpenStack with Kolla on Ubuntu 20.04? This is a fairly vanilla configuration using the multinode inventory as a starting point. I tried to follow the Quick Start as closely as possible; the only material difference I see is that I'm using the same 3 nodes for control + compute. I am using MAAS so it's easy to get a clean OS install on all three nodes ahead of each attempt. I plan to try again with the standard (non-HWE) kernel just in case, but otherwise I am running out of ideas. In case of any additional clues, here are my globals.yml and inventory file, along with the playbook I'm using to configure the network, images, VMs, etc., after bootstrapping the cluster: https://gist.github.com/tobiasmcnulty/7dbbdbc67abc08cbb013bf5983852ed6 Thank you in advance for any advice! Cheers, Tobias -------------- next part -------------- An HTML attachment was scrubbed... URL: From franck.vedel at univ-grenoble-alpes.fr Sat Nov 12 20:00:20 2022 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Sat, 12 Nov 2022 21:00:20 +0100 Subject: [kolla-ansible]Reset Configuration In-Reply-To: References: <887D56B6-6190-463D-AED9-A4C4D09C7EFF@univ-grenoble-alpes.fr> <90DE6D6B-024E-44F4-93C3-478AE4E184A9@univ-grenoble-alpes.fr> Message-ID: > 3) Networking issues like mismatched MTU My MTU (between nodes ) is 9000?. I believe my problem is the MTU. 
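A quick way to confirm an MTU mismatch on a supposedly 9000-byte path is a
don't-fragment ping at full payload size; a sketch, assuming the internal
addresses used earlier in this thread (8972 = 9000 minus the 28 bytes of
IP/ICMP headers):

# should succeed if the whole path really carries jumbo frames
ping -M do -s 8972 -c 3 10.0.5.110

# if this small ping works while the large one fails, the path MTU is lower than 9000
ping -M do -s 1472 -c 3 10.0.5.110

The overrides referred to below are typically global_physnet_mtu in the
[DEFAULT] section of neutron.conf and path_mtu in the [ml2] section of
ml2_conf.ini, as described in the MTU guide linked below.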
I modified /etc/kolla/config/neutron.conf and /etc/kolla/config/neutron/ml2_conf.ini.conf then kolla-ansible -i multinode reconfigures (case 1 here: https://docs.openstack.org/newton/networking-guide/config-mtu.html ) I test again everything and functions that did not work work again but not all.... For example, instances get an ip through dhcp but can't ping the router, but on some networks it works. However, before the reboot of the servers, I had not had a problem with the MTU of 9000. I'm going back to a 1500 MTU on Monday on site. Thank you Eric!!! Franck VEDEL > Le 12 nov. 2022 ? 15:10, Erik McCormick a ?crit : > > > > On Sat, Nov 12, 2022 at 3:08 AM Franck VEDEL > wrote: > Bonjour ! > > Output of the command > > Cluster status of node rabbit at iut1r-srv-ops01-i01 ... > Basics > Cluster name: rabbit at iut1r-srv-ops01-i01.u-ga.fr > > Disk Nodes > rabbit at iut1r-srv-ops01-i01 > rabbit at iut1r-srv-ops02-i01 > > Running Nodes > rabbit at iut1r-srv-ops01-i01 > rabbit at iut1r-srv-ops02-i01 > > Versions > rabbit at iut1r-srv-ops01-i01: RabbitMQ 3.9.20 on Erlang 24.3.4.2 > rabbit at iut1r-srv-ops02-i01: RabbitMQ 3.9.20 on Erlang 24.3.4.2 > > Maintenance status > Node: rabbit at iut1r-srv-ops01-i01, status: not under maintenance > Node: rabbit at iut1r-srv-ops02-i01, status: not under maintenance > > Alarms > (none) > > Network Partitions > (none) > > Listeners > Node: rabbit at iut1r-srv-ops01-i01, interface: [::], port: 15672, protocol: http, purpose: HTTP API > Node: rabbit at iut1r-srv-ops01-i01, interface: [::], port: 15692, protocol: http/prometheus, purpose: Prometheus exporter API over HTTP > Node: rabbit at iut1r-srv-ops01-i01, interface: 10.0.5.109, port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication > Node: rabbit at iut1r-srv-ops01-i01, interface: 10.0.5.109, port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0 > Node: rabbit at iut1r-srv-ops02-i01, interface: [::], port: 15672, protocol: http, purpose: HTTP API > Node: rabbit at iut1r-srv-ops02-i01, interface: [::], port: 15692, protocol: http/prometheus, purpose: Prometheus exporter API over HTTP > Node: rabbit at iut1r-srv-ops02-i01, interface: 10.0.5.110, port: 25672, protocol: clustering, purpose: inter-node and CLI tool communication > Node: rabbit at iut1r-srv-ops02-i01, interface: 10.0.5.110, port: 5672, protocol: amqp, purpose: AMQP 0-9-1 and AMQP 1.0 > > Feature flags > Flag: drop_unroutable_metric, state: enabled > Flag: empty_basic_get_metric, state: enabled > Flag: implicit_default_bindings, state: enabled > Flag: maintenance_mode_status, state: enabled > Flag: quorum_queue, state: enabled > Flag: stream_queue, state: enabled > Flag: user_limits, state: enabled > Flag: virtual_host_metadata, state: enabled > > So? nothing strange for me. > > All containers are healthy nom (after delete rabbitmq and rebuild rabbitmq). > > > in addition to dhcp, communications on the network do not work. > If I create an instance, it has no ip address by dhcp. > If I give her a static ip, she can't reach the router. > If I create another instance, with another static ip, they don't communicate with each other. 
> And they can't ping the router (or routers, I put 2, 1 on each of my 2 external networks) > > There are some errors in rabbitmq?..log: > 2022-11-12 08:53:37.155542+01:00 [error] <0.16179.2> missed heartbeats from client, timeout: 60s > 2022-11-12 08:54:54.026480+01:00 [error] <0.17357.2> closing AMQP connection <0.17357.2> (10.0.5.109:37532 -> 10.0.5.109:5672 - mod_wsgi:43:e50d8e69-7c76-4198-877c-c807e0a180d8): > 2022-11-12 08:54:54.026480+01:00 [error] <0.17357.2> missed heartbeats from client, timeout: 60s > > There are some errors also in neutron-l3-agent.log > 2022-11-11 22:04:42.512 37 ERROR oslo_service.periodic_task message = self.waiters.get(msg_id, timeout=timeout) > 2022-11-11 22:04:42.512 37 ERROR oslo_service.periodic_task File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 445, in get > 2022-11-11 22:04:42.512 37 ERROR oslo_service.periodic_task 'to message ID %s' % msg_id) > 2022-11-11 22:04:42.512 37 ERROR oslo_service.periodic_task oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID 297cacfadd764562bf09a1c5daf61958 > > Also in neutron-dhcp-agent.log > 2022-11-11 22:04:44.854 7 ERROR neutron.agent.dhcp.agent message = self.waiters.get(msg_id, timeout=timeout) > 2022-11-11 22:04:44.854 7 ERROR neutron.agent.dhcp.agent File "/var/lib/kolla/venv/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 445, in get > 2022-11-11 22:04:44.854 7 ERROR neutron.agent.dhcp.agent 'to message ID %s' % msg_id) > 2022-11-11 22:04:44.854 7 ERROR neutron.agent.dhcp.agent oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to message ID 6f1d9d0c51ac4d89b9c889ca273f40a0 > > A lot of errors in neutron-metadata.log > 2022-11-11 22:01:44.152 43 ERROR oslo.messaging._drivers.impl_rabbit [-] [d7902e2c-eba9-40e4-b872-40e7ba7a39ec] AMQP server on 10.0.5.109:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: > 2022-11-11 22:01:44.226 7 ERROR oslo.messaging._drivers.impl_rabbit [-] [028872e4-fcd1-4de5-b20c-8c5541e3c77f] AMQP server on 10.0.5.109:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: > > > timeout ?. waiting?. unreachable?. connectionerror?. > > Something is wrong, but I think it?s very difficult to find the problem. To difficult for me. > ? nc -v ? works. > > There are several things that can cause issues with Rabbit, or with services sending messages. Rabbit itself is not always to blame. Things I've seen cause issues before include: > > 1) Time not being in sync on all systems (covered that earlier) > 2) DNS (it's always DNS, right?) > 3) Networking issues like mismatched MTU > 4) Nova being configured for a Ceph backend, but timing out trying to talk to the cluster (messages would expire while Nova waited on it) > > > I do not know what to do. > I can lose all data (networks, instances, volumes, etc). I can start again on a new config > Do I do it with kolla-ansible -i multinode destroy? > > Yeah, just do kolla-ansible -i multinode destroy after backing up your kolla configs. > > Before switching to Yoga, I had a cluster under Xena. I kept my configuration and a venv (python) with koll-ansible for Xena. > Am I going back to this version? How without doing stupid things? > > I can't see any good reason to roll back to Xena. Yoga should be fine. 
> > Changing should be as simple as swapping your VENV, and using your Xena globals.yml, passwords.yml, inventory, and any other custom configs you had for that version. > > > Thanks a lot. > > Franck VEDEL > > > >> Le 11 nov. 2022 ? 23:33, Laurent Dumont > a ?crit : >> >> docker exec -it rabbitmq rabbitmqctl cluster_status -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmilan2006 at gmail.com Sun Nov 13 15:41:01 2022 From: mmilan2006 at gmail.com (Vaibhav) Date: Sun, 13 Nov 2022 21:11:01 +0530 Subject: Zun connector for persistent shared files system Manila In-Reply-To: References: Message-ID: Hi Hongbin, I have developed a small code that allows to mount the manila share to the container. It exploits some keywords in the label field and use it mounting the manila share to the container. It is not a very clean solution but it serves my purpose. If you want to extend or check it, I will love to share it with you. Regards, Vaibhav On Wed, Jul 6, 2022 at 10:00 PM Vaibhav wrote: > Hi Hongbin, > > Thanks a lot. > I saw earlier fuxi driver was there. but it is discontinued now. it seems > to be good to refix it for Manila. > > Also, there is docker support for NFS volumes. > https://docs.docker.com/storage/volumes/ > Can something be done to have it. > > I am ready to test if somebody is ready for development. and help in > development if your team guides me some hook points. > > Regards, > Vaibhav > > > On Tue, Jul 5, 2022 at 12:54 PM Hongbin Lu wrote: > >> Hi Vaibhav, >> >> In current state, only Cinder is supported. In theory, Manila can be >> added as another storage backend. I will check if anyone interests to >> contribute this feature. >> >> Best regards, >> Hongbin >> >> On Fri, Jul 1, 2022 at 9:40 PM Vaibhav wrote: >> >>> Hi, >>> >>> I am using zun for running containers and managing them. >>> I deployed cinder also persistent storage. and it is working fine. >>> >>> I want to mount my Manila shares to be mounted on containers managed by >>> Zun. >>> >>> I can see a Fuxi project and driver for this but it is discontinued now. >>> >>> With Cinder only one container can use the storage volume at a time. If >>> I want to have a shared file system to be mounted on multiple containers >>> simultaneously, it is not possible with cinder. >>> >>> Is there any alternative to Fuxi. is there any other mechanism to use >>> docker Volume support for NFS as shown in the link below? >>> https://docs.docker.com/storage/volumes/ >>> >>> Please advise and give a suggestion. >>> >>> Regards, >>> Vaibhav >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Sun Nov 13 15:58:11 2022 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sun, 13 Nov 2022 23:58:11 +0800 Subject: Zun connector for persistent shared files system Manila In-Reply-To: References: Message-ID: Hi Vaibhav, Thanks. I would love to explore your solution. Please do share it with me. Best regards, Hongbin On Sun, Nov 13, 2022 at 11:41 PM Vaibhav wrote: > Hi Hongbin, > > I have developed a small code that allows to mount the manila share to the > container. > > It exploits some keywords in the label field and use it mounting the > manila share to the container. > > It is not a very clean solution but it serves my purpose. If you want to > extend or check it, I will love to share it with you. > > Regards, > Vaibhav > > On Wed, Jul 6, 2022 at 10:00 PM Vaibhav wrote: > >> Hi Hongbin, >> >> Thanks a lot. >> I saw earlier fuxi driver was there. 
but it is discontinued now. it seems >> to be good to refix it for Manila. >> >> Also, there is docker support for NFS volumes. >> https://docs.docker.com/storage/volumes/ >> Can something be done to have it. >> >> I am ready to test if somebody is ready for development. and help in >> development if your team guides me some hook points. >> >> Regards, >> Vaibhav >> >> >> On Tue, Jul 5, 2022 at 12:54 PM Hongbin Lu wrote: >> >>> Hi Vaibhav, >>> >>> In current state, only Cinder is supported. In theory, Manila can be >>> added as another storage backend. I will check if anyone interests to >>> contribute this feature. >>> >>> Best regards, >>> Hongbin >>> >>> On Fri, Jul 1, 2022 at 9:40 PM Vaibhav wrote: >>> >>>> Hi, >>>> >>>> I am using zun for running containers and managing them. >>>> I deployed cinder also persistent storage. and it is working fine. >>>> >>>> I want to mount my Manila shares to be mounted on containers managed by >>>> Zun. >>>> >>>> I can see a Fuxi project and driver for this but it is discontinued >>>> now. >>>> >>>> With Cinder only one container can use the storage volume at a time. If >>>> I want to have a shared file system to be mounted on multiple containers >>>> simultaneously, it is not possible with cinder. >>>> >>>> Is there any alternative to Fuxi. is there any other mechanism to use >>>> docker Volume support for NFS as shown in the link below? >>>> https://docs.docker.com/storage/volumes/ >>>> >>>> Please advise and give a suggestion. >>>> >>>> Regards, >>>> Vaibhav >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.arbet at ultimum.io Mon Nov 14 08:31:36 2022 From: michal.arbet at ultimum.io (Michal Arbet) Date: Mon, 14 Nov 2022 09:31:36 +0100 Subject: [Kolla][kolla-ansible][HAProxy] Splitting the load balancer into internal and external? In-Reply-To: References: Message-ID: Hi, For mysql you can use proxysql as a separate loadbalancer. But I don't understand your other questions... Does it mean that you want to run haproxy for some service (for example mariadb ..if proxysql is not used) in a mariadb container ? Or have a separate haproxy_mariadb container to do this ? If yes , both are bad ideas. First option contradicts the idea of "one process per container". The Second option will just run multiple instances of haproxy containers. Could you please explain in detail ? Thanks, kevko Michal Arbet Openstack Engineer Ultimum Technologies a.s. Na Po???? 1047/26, 11000 Praha 1 Czech Republic +420 604 228 897 michal.arbet at ultimum.io *https://ultimum.io * LinkedIn | Twitter | Facebook p? 11. 11. 2022 v 15:41 odes?latel Mariusz Karpiarz < m.karpiarz at eschercloud.ai> napsal: > All, > > Was the idea of moving internal components deployed by kolla-ansible (like > the MySQL database) to a load balancer separate to the one used by > user-facing APIs discussed anywhere? This feels like a good option to have > for security but, as far as I'm aware, it's not supported by kolla-ansible. > > It should be possible to use existing Kolla HAProxy images, but mount > config files from subdirectories of `/etc/kolla/haproxy/` for each > container. I suspect the main hurtle here would be rewriting the user > interface of kolla-ansible to account for the split whilst maintaining > backward compatibility... > > Mariusz Karpiarz > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralonsoh at redhat.com Mon Nov 14 09:03:57 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 14 Nov 2022 10:03:57 +0100 Subject: assign network subnet to computes In-Reply-To: References: Message-ID: Hello Parsa: Please check [1]. This is how routed provider networks work in OpenStack neutron. In this topology you can have isolated L2 segments and a defined set of compute nodes per segment.However this architecture requires the L3 layer to be handled outside Neutron. This is something you should take into consideration. Regards. [1] https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html On Fri, Nov 11, 2022 at 8:53 PM Parsa Aminian wrote: > hello > is there any way to assign a specific subnet in openstack to a > specific compute ? > For example subnets with ip 192.168.11.0/24 only can assign to instances > on compute6 . > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon Nov 14 09:14:05 2022 From: zigo at debian.org (Thomas Goirand) Date: Mon, 14 Nov 2022 10:14:05 +0100 Subject: [all][python3.11][debian][horizon] Debian has python 3.11 as available version Message-ID: Hi, Debian unstable now has Python 3.11 as available interpreter. Some packages got rebuilt, and we're starting to see failures. Here's the first one that I'm trying to fix: pyscss. https://github.com/Kronuz/pyScss/issues/428 pykafka also fails to build. There's likely a lot more to come. I'd appreciate help... :) Cheers, Thomas Goirand (zigo) From wodel.youchi at gmail.com Mon Nov 14 10:25:17 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Mon, 14 Nov 2022 11:25:17 +0100 Subject: [kolla-ansible][Yoga] Install with self-signed certificate In-Reply-To: References: <20221111201329.Horde.5Jstm8Mvo6YfTcBDJsTx7T3@webmail.nde.ag> Message-ID: Hi, Any ideas? Regards. Le sam. 12 nov. 2022 ? 09:02, wodel youchi a ?crit : > Hi > > Thanks for your help. > > First I want to correct something, the *kolla_verify_tls_backend* was > positioned to *false* from the beginning, while doing the first > deployment with the commercial certificate. > > And yes I have *kolla_copy_ca_into_containers* positioned to *yes* from > the beginning. And I can see in the nodes that there is a directory named > certificates in every module's directory in /etc/kolla > > What do you mean by using openssl? Do you mean to execute the command > inside a container and try to connect to keystone? If yes what is the > correct command? > > It seems like something is missing to tell the client side to ignore the > certificate validity, something like the --insecure parameter in the > openstack cli. > > Regards. > > On Fri, Nov 11, 2022, 21:21 Eugen Block wrote: > >> Hi, >> >> I'm not familiar with kolla, but the docs also mention this option: >> >> kolla_copy_ca_into_containers: "yes" >> >> As I understand it the CA cert is required within the containers so >> they can trust the self-signed certs. At least that's how I configure >> it in a manually deployed openstack cloud. Do you have that option >> enabled? If it is enabled, did you verify it with openssl tools? >> >> Regards, >> Eugen >> >> Zitat von wodel youchi : >> >> > Some help please. >> > >> > On Tue, Nov 8, 2022, 14:44 wodel youchi wrote: >> > >> >> Hi, >> >> >> >> To deploy Openstack with a self-signed certificate, the documentation >> says >> >> to generate the certificates using kolla-ansible certificates, to >> configure >> >> the support of TLS in globals.yml and to deploy. 
>> >> >> >> I am facing a problem, my old certificate has expired, I want to use a >> >> self-signed certificate. >> >> I backported my servers to an older date, then generated a self-signed >> >> certificate using kolla, but the deploy/reconfigure won't work, they >> say : >> >> >> >> self._sslobj.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", >> line >> >> 648, in do_handshakeself._sslobj.do_handshake()\nssl.SSLError: [SSL: >> >> CERTIFICATE_VERIFY_FAILED certificate verify failed >> >> >> >> PS : in my globals.yml i have : *kolla_verify_tls_backend: "yes"* >> >> >> >> Regards. >> >> >> >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Mon Nov 14 11:09:59 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 14 Nov 2022 12:09:59 +0100 Subject: [neutron] Bug deputy Nov 7 to Nov 13 Message-ID: Hello Neutrinos: This is the bug list from the past week: High: * https://bugs.launchpad.net/neutron/+bug/1996150: Neutron fails to create network with enforced scopes and new RBAC policies. Assigned: Slawek. * https://bugs.launchpad.net/neutron/+bug/1996129: With new RBAC enabled: Tempest test failing on NetworkNotFound. Duplicate of 1996150. Medium: * https://bugs.launchpad.net/neutron/+bug/1995972: L3 router is doing schedule_routers when adding/removing external gateway. Unassigned. * https://bugs.launchpad.net/neutron/+bug/1995974: [OVN] Router "router_extra_attributes" register is not created. Patch: https://review.opendev.org/c/openstack/neutron/+/864051 * https://bugs.launchpad.net/neutron/+bug/1996180: [OVN] "standard_attr" register missing during inconsistency check. Assigned: Rodolfo. Low: * https://bugs.launchpad.net/neutron/+bug/1996241: Manual install & Configuration in Neutron. Low hanging fruit. * https://bugs.launchpad.net/neutron/+bug/1996421: 'openstack port list' should display ports only from current project. Not sure this is a legit bug, commented in the bug and the patch. Invalid/opinion/incomplete/duplicated: * https://bugs.launchpad.net/neutron/+bug/1995872: A stuck INACTIVE port binding causes wrong l2pop fdb entries to be sent. Duplicated of https://bugs.launchpad.net/neutron/+bug/1979072. * https://bugs.launchpad.net/neutron/+bug/1996199: router external gateway assigning unsupported IPv6. Justification: https://bugs.launchpad.net/neutron/+bug/1996199/comments/1. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ygk.kmr at gmail.com Mon Nov 14 13:03:19 2022 From: ygk.kmr at gmail.com (Gk Gk) Date: Mon, 14 Nov 2022 18:33:19 +0530 Subject: Need information Message-ID: Hi All, At the core of nova-api, I am trying to trace the function which executes the SQL query for listing all the instances for a user. I want to know where in the code this query is executed. I have traced it till this point in the code: ---- if self.cells: results = context.scatter_gather_cells(ctx, self.cells, context.CELL_TIMEOUT, query_wrapper, do_query) --- in the file " https://github.com/openstack/nova/blob/c97507dfcd57cce9d76670d3b0d48538900c00e9/nova/compute/multi_cell_list.py#L218 " But where in the above file, is the SQL query executed ? Please help me Thanks Kumar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amy at demarco.com Mon Nov 14 13:16:24 2022 From: amy at demarco.com (Amy Marrich) Date: Mon, 14 Nov 2022 07:16:24 -0600 Subject: RDO Zed Released Message-ID: The RDO community is pleased to announce the general availability of the RDO build for OpenStack Zed for RPM-based distributions, CentOS Stream and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Zed is the 26th release from the OpenStack project, which is the work of more than 1,000 contributors from around the world. As with the Upstream release, this release of RDO is dedicated to Ilya Etingof who was an upstream and RDO contributor. The release is already available for CentOS Stream 9 on the CentOS mirror network in: http://mirror.stream.centos.org/SIGs/9-stream/cloud/x86_64/openstack-zed/ The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Stream and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS users looking to build and maintain their own on-premise, public or hybrid clouds. All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first. The highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/zed/highlights.html *TripleO in the RDO Zed release:* Since the Xena development cycle, TripleO follows the Independent release model ( https://specs.openstack.org/openstack/tripleo-specs/specs/xena/tripleo-independent-release.html ). For the Zed cycle, TripleO project will maintain and validate stable Zed branches. As for the rest of packages, RDO will update and publish the releases created during the maintenance cycle. *Contributors* During the Zed cycle, we saw the following new RDO contributors: - Miguel Garcia Cruces - Michael Johnson - Ren? Ribaud - Paras Babbar - Maur?cio Harley - Jesse Pretorius - Francesco Pantano - Carlos Eduardo - Arun KV Welcome to all of you and Thank You So Much for participating! But we wouldn?t want to overlook anyone. A super massive Thank You to all *57* contributors who participated in producing this release. This list includes commits to rdo-packages, rdo-infra, and redhat-website repositories: - Adriano Vieira Petrich - Alan Bishop - Alan Pevec - Alfredo Moralejo Alonso - Amol Kahat - Amy Marrich - Ananya Banerjee - Arun KV - Arx Cruz - Bhagyashri Shewale - Carlos Eduardo - Chandan Kumar - C?dric Jeanneret - Daniel Pawlik - Dariusz Smigiel - Douglas Viroel - Emma Foley - Eric Harney - Fabien Boucher - Francesco Pantano - Gregory Thiemonge - Jakob Meng - Jesse Pretorius - Ji?? Podiv?n - Joel Capitao - Jon Schlueter - Julia Kreger - Karolina Kula - Leif Madsen - Lon Hohberger - Luigi Toscano - Marios Andreou - Martin Kopec - Mathieu Bultel - Matthias Runge - Maur?cio Harley - Michael Johnson - Miguel Garcia Cruces - Nate Johnston - Nicolas Hicher - Paras Babbar - Pooja Jadhav - Rabi Mishra - Rafael Castillo - Ren? Ribaud/780 - Riccardo Pittau - Ronelle Landy - Sagi Shnaidman - Sandeep Yadav - Sean Mooney - Shreshtha Joshi - Slawomir Kaplonski - Steve Baker - Takashi Kajinami - Tobias Urdin - Tristan De Cacqueray - Yatin Karel *The Next Release Cycle* At the end of one release, focus shifts immediately to the next release i.e Antelope. *Get Started* To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. 
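A minimal sketch of such an all-in-one install on a fresh CentOS Stream 9
host (the release package name is assumed from the usual RDO naming; see the
RDO quickstart for the exact, current steps and host preparation):

sudo dnf install -y centos-release-openstack-zed   # assumed RDO Zed repo package name
sudo dnf update -y
sudo dnf install -y openstack-packstack
sudo packstack --allinone                          # single-node proof of concept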
You can run RDO on a single node to get a feel for how it works. Finally, for those that don?t have any hardware or physical resources, there?s the OpenStack Global Passport Program. This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world. *Get Help* The RDO Project has our users at lists.rdoproject.org for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev at lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing lists archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org. The #rdo channel on OFTC IRC is also an excellent place to find and give help. We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel in Libera.Chat network, and #tripleo on OFTC), however we have a more focused audience within the RDO venues. *Get Involved* To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation. Join us in #rdo and #tripleo on the OFTC IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Mon Nov 14 13:21:43 2022 From: eblock at nde.ag (Eugen Block) Date: Mon, 14 Nov 2022 13:21:43 +0000 Subject: [kolla-ansible][Yoga] Install with self-signed certificate In-Reply-To: References: <20221111201329.Horde.5Jstm8Mvo6YfTcBDJsTx7T3@webmail.nde.ag> Message-ID: <20221114132143.Horde.9V0vWb4JClSAJIGN1QXAfBX@webmail.nde.ag> Hi, > First I want to correct something, the *kolla_verify_tls_backend* was > positioned to *false* from the beginning, while doing the first deployment > with the commercial certificate. so with the previous cert it worked but only because you had the verification set to false, correct? > What do you mean by using openssl? Do you mean to execute the command > inside a container and try to connect to keystone? If yes what is the > correct command? That's one example, yes. Is apache configured correctly to use the provided certs? In my manual deployment it looks like this (only the relevant part): control01:~ # cat /etc/apache2/vhosts.d/keystone-public.conf [...] SSLEngine On SSLCertificateFile /etc/ssl/servercerts/control01.fqdn.cert.pem SSLCACertificateFile /etc/pki/trust/anchors/RHN-ORG-TRUSTED-SSL-CERT SSLCertificateKeyFile /etc/ssl/private/control01.fqdn.key.pem SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown # HTTP Strict Transport Security (HSTS) enforces that all communications # with a server go over SSL. This mitigates the threat from attacks such # as SSL-Strip which replaces links on the wire, stripping away https prefixes # and potentially allowing an attacker to view confidential information on the # wire Header add Strict-Transport-Security "max-age=15768000" [...] and then test it with: ---snip--- control01:~ # curl -v https://control.fqdn:5000/v3 [...] 
* ALPN, offering h2 * ALPN, offering http/1.1 * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server accepted to use http/1.1 * Server certificate: [...] * subjectAltName: host "control.fqdn" matched cert's "*.fqdn" * issuer: ******* * SSL certificate verify ok. > GET /v3 HTTP/1.1 > Host: control.fqdn:5000 > User-Agent: curl/7.66.0 > Accept: */* > * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * old SSL session ID is stale, removing * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK [...] * Connection #0 to host control.fqdn left intact {"version": {"id": "v3.14", "status": "stable", "updated": "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": "https://control.fqdn:5000/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}} ---snip--- To check the created certificate you could run something like this: openssl x509 -in /etc/ssl/servercerts/control01.fqdn.cert.pem -text -noout and see if the SANs match your control node(s) IP addresses and FQDNs. Zitat von wodel youchi : > Hi > > Thanks for your help. > > First I want to correct something, the *kolla_verify_tls_backend* was > positioned to *false* from the beginning, while doing the first deployment > with the commercial certificate. > > And yes I have *kolla_copy_ca_into_containers* positioned to *yes* from the > beginning. And I can see in the nodes that there is a directory named > certificates in every module's directory in /etc/kolla > > What do you mean by using openssl? Do you mean to execute the command > inside a container and try to connect to keystone? If yes what is the > correct command? > > It seems like something is missing to tell the client side to ignore the > certificate validity, something like the --insecure parameter in the > openstack cli. > > Regards. > > On Fri, Nov 11, 2022, 21:21 Eugen Block wrote: > >> Hi, >> >> I'm not familiar with kolla, but the docs also mention this option: >> >> kolla_copy_ca_into_containers: "yes" >> >> As I understand it the CA cert is required within the containers so >> they can trust the self-signed certs. At least that's how I configure >> it in a manually deployed openstack cloud. Do you have that option >> enabled? If it is enabled, did you verify it with openssl tools? >> >> Regards, >> Eugen >> >> Zitat von wodel youchi : >> >> > Some help please. >> > >> > On Tue, Nov 8, 2022, 14:44 wodel youchi wrote: >> > >> >> Hi, >> >> >> >> To deploy Openstack with a self-signed certificate, the documentation >> says >> >> to generate the certificates using kolla-ansible certificates, to >> configure >> >> the support of TLS in globals.yml and to deploy. >> >> >> >> I am facing a problem, my old certificate has expired, I want to use a >> >> self-signed certificate. 
>> >> I backported my servers to an older date, then generated a self-signed >> >> certificate using kolla, but the deploy/reconfigure won't work, they >> say : >> >> >> >> self._sslobj.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", >> line >> >> 648, in do_handshakeself._sslobj.do_handshake()\nssl.SSLError: [SSL: >> >> CERTIFICATE_VERIFY_FAILED certificate verify failed >> >> >> >> PS : in my globals.yml i have : *kolla_verify_tls_backend: "yes"* >> >> >> >> Regards. >> >> >> >> >> >> >> From smooney at redhat.com Mon Nov 14 14:36:10 2022 From: smooney at redhat.com (Sean Mooney) Date: Mon, 14 Nov 2022 14:36:10 +0000 Subject: Need information In-Reply-To: References: Message-ID: <0f45e15c89906b521a7f41639510fe641af862f8.camel@redhat.com> On Mon, 2022-11-14 at 18:33 +0530, Gk Gk wrote: > Hi All, > > At the core of nova-api, I am trying to trace the function which executes > the SQL query for listing > all the instances for a user. I want to know where in the code this query > is executed. I have traced it till this point in the code: > > ---- > if self.cells: > results = context.scatter_gather_cells(ctx, self.cells, > context.CELL_TIMEOUT, > query_wrapper, do_query) > --- we do not enbed sql directly in our code. thats generally bad pratice so most moderen opensrouce proejct will either use an orm or create a centralised db module that is called into to avoid poluting code with SQL nova does both https://github.com/openstack/nova/blob/master/nova/db/main/api.py#L1547-L1844 instance_get_all, instance_get_all_by_filters and instance_get_all_by_filters_sort are the primay ways to list instances in the cell db those function use sqlachemy to generate the sql > > in the file " > https://github.com/openstack/nova/blob/c97507dfcd57cce9d76670d3b0d48538900c00e9/nova/compute/multi_cell_list.py#L218 > " > > But where in the above file, is the SQL query executed ? 
Please help me so instance list starts here https://github.com/openstack/nova/blob/2eb358cdcec36fcfe5388ce6982d2961ca949d0a/nova/api/openstack/compute/servers.py#L116 which calls _getServers https://github.com/openstack/nova/blob/2eb358cdcec36fcfe5388ce6982d2961ca949d0a/nova/api/openstack/compute/servers.py#L174 after preparing the inputs for the get_all function call it invokes it here https://github.com/openstack/nova/blob/2eb358cdcec36fcfe5388ce6982d2961ca949d0a/nova/api/openstack/compute/servers.py#L327-L331 which is implemented here https://github.com/openstack/nova/blob/2eb358cdcec36fcfe5388ce6982d2961ca949d0a/nova/compute/api.py#L2991 initally that fucntion buils d a list of instance that currently have build request then it callse instance_list.get_instance_objects_sorted https://github.com/openstack/nova/blob/2eb358cdcec36fcfe5388ce6982d2961ca949d0a/nova/compute/api.py#L3138 that calls get_instances_sorted which just the InstanceLister to eventually call db.instance_get_all_by_filters_sort https://github.com/openstack/nova/blob/2eb358cdcec36fcfe5388ce6982d2961ca949d0a/nova/compute/instance_list.py#L83 fhat finally gets us to instance_get_all_by_filters_sort which is the funciton that generates the sql query https://github.com/openstack/nova/blob/master/nova/db/main/api.py#L1618 if that seams very complicated its because each nova cell has its own DB instance and we also need to supprot pagination of the results > > > Thanks > Kumar From wodel.youchi at gmail.com Mon Nov 14 15:02:12 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Mon, 14 Nov 2022 16:02:12 +0100 Subject: [kolla-ansible][Yoga] Install with self-signed certificate In-Reply-To: <20221114132143.Horde.9V0vWb4JClSAJIGN1QXAfBX@webmail.nde.ag> References: <20221111201329.Horde.5Jstm8Mvo6YfTcBDJsTx7T3@webmail.nde.ag> <20221114132143.Horde.9V0vWb4JClSAJIGN1QXAfBX@webmail.nde.ag> Message-ID: Hi, Thanks again, About your question : so with the previous cert it worked but only because you had the verification set to false, correct? The answer is : Not exactly. Let me explain, I deployed using a commercial valid certificate, but I configured kolla_verify_tls_backend to false exactly to avoid the problem I am facing now. From what I have understood : kolla_verify_tls_backend=false, means : accept the connection even if the verification fails, but apparently it is not the case. And kolla_copy_ca_into_containers was positioned to yes from the beginning. What happened is that my certificate expired, and now I am searching for a way to install a self-signed certificate while waiting to get the new certificate. I backported the platform a few days before the expiration of the certificate, then I generated the self-signed certificate and I tried to deploy it but without success. Regards. Le lun. 14 nov. 2022 ? 14:21, Eugen Block a ?crit : > Hi, > > > First I want to correct something, the *kolla_verify_tls_backend* was > > positioned to *false* from the beginning, while doing the first > deployment > > with the commercial certificate. > > so with the previous cert it worked but only because you had the > verification set to false, correct? > > > What do you mean by using openssl? Do you mean to execute the command > > inside a container and try to connect to keystone? If yes what is the > > correct command? > > That's one example, yes. Is apache configured correctly to use the > provided certs? 
In my manual deployment it looks like this (only the > relevant part): > > control01:~ # cat /etc/apache2/vhosts.d/keystone-public.conf > [...] > SSLEngine On > SSLCertificateFile /etc/ssl/servercerts/control01.fqdn.cert.pem > SSLCACertificateFile /etc/pki/trust/anchors/RHN-ORG-TRUSTED-SSL-CERT > SSLCertificateKeyFile /etc/ssl/private/control01.fqdn.key.pem > SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown > > # HTTP Strict Transport Security (HSTS) enforces that all > communications > # with a server go over SSL. This mitigates the threat from attacks > such > # as SSL-Strip which replaces links on the wire, stripping away > https prefixes > # and potentially allowing an attacker to view confidential > information on the > # wire > Header add Strict-Transport-Security "max-age=15768000" > [...] > > and then test it with: > > ---snip--- > control01:~ # curl -v https://control.fqdn:5000/v3 > [...] > * ALPN, offering h2 > * ALPN, offering http/1.1 > * TLSv1.3 (OUT), TLS handshake, Client hello (1): > * TLSv1.3 (IN), TLS handshake, Server hello (2): > * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): > * TLSv1.3 (IN), TLS handshake, Certificate (11): > * TLSv1.3 (IN), TLS handshake, CERT verify (15): > * TLSv1.3 (IN), TLS handshake, Finished (20): > * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): > * TLSv1.3 (OUT), TLS handshake, Finished (20): > * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 > * ALPN, server accepted to use http/1.1 > * Server certificate: > [...] > * subjectAltName: host "control.fqdn" matched cert's "*.fqdn" > * issuer: ******* > * SSL certificate verify ok. > > GET /v3 HTTP/1.1 > > Host: control.fqdn:5000 > > User-Agent: curl/7.66.0 > > Accept: */* > > > * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): > * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): > * old SSL session ID is stale, removing > * Mark bundle as not supporting multiuse > < HTTP/1.1 200 OK > [...] > * Connection #0 to host control.fqdn left intact > {"version": {"id": "v3.14", "status": "stable", "updated": > "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": > "https://control.fqdn:5000/v3/"}], "media-types": [{"base": > "application/json", "type": > "application/vnd.openstack.identity-v3+json"}]}} > ---snip--- > > To check the created certificate you could run something like this: > > openssl x509 -in /etc/ssl/servercerts/control01.fqdn.cert.pem -text -noout > > and see if the SANs match your control node(s) IP addresses and FQDNs. > > Zitat von wodel youchi : > > > Hi > > > > Thanks for your help. > > > > First I want to correct something, the *kolla_verify_tls_backend* was > > positioned to *false* from the beginning, while doing the first > deployment > > with the commercial certificate. > > > > And yes I have *kolla_copy_ca_into_containers* positioned to *yes* from > the > > beginning. And I can see in the nodes that there is a directory named > > certificates in every module's directory in /etc/kolla > > > > What do you mean by using openssl? Do you mean to execute the command > > inside a container and try to connect to keystone? If yes what is the > > correct command? > > > > It seems like something is missing to tell the client side to ignore the > > certificate validity, something like the --insecure parameter in the > > openstack cli. > > > > Regards. 
> > > > On Fri, Nov 11, 2022, 21:21 Eugen Block wrote: > > > >> Hi, > >> > >> I'm not familiar with kolla, but the docs also mention this option: > >> > >> kolla_copy_ca_into_containers: "yes" > >> > >> As I understand it the CA cert is required within the containers so > >> they can trust the self-signed certs. At least that's how I configure > >> it in a manually deployed openstack cloud. Do you have that option > >> enabled? If it is enabled, did you verify it with openssl tools? > >> > >> Regards, > >> Eugen > >> > >> Zitat von wodel youchi : > >> > >> > Some help please. > >> > > >> > On Tue, Nov 8, 2022, 14:44 wodel youchi > wrote: > >> > > >> >> Hi, > >> >> > >> >> To deploy Openstack with a self-signed certificate, the documentation > >> says > >> >> to generate the certificates using kolla-ansible certificates, to > >> configure > >> >> the support of TLS in globals.yml and to deploy. > >> >> > >> >> I am facing a problem, my old certificate has expired, I want to use > a > >> >> self-signed certificate. > >> >> I backported my servers to an older date, then generated a > self-signed > >> >> certificate using kolla, but the deploy/reconfigure won't work, they > >> say : > >> >> > >> >> self._sslobj.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", > >> line > >> >> 648, in do_handshakeself._sslobj.do_handshake()\nssl.SSLError: [SSL: > >> >> CERTIFICATE_VERIFY_FAILED certificate verify failed > >> >> > >> >> PS : in my globals.yml i have : *kolla_verify_tls_backend: "yes"* > >> >> > >> >> Regards. > >> >> > >> > >> > >> > >> > >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From franck.vedel at univ-grenoble-alpes.fr Mon Nov 14 13:26:42 2022 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Mon, 14 Nov 2022 14:26:42 +0100 Subject: [kolla-ansible]Reset Configuration In-Reply-To: References: <887D56B6-6190-463D-AED9-A4C4D09C7EFF@univ-grenoble-alpes.fr> <90DE6D6B-024E-44F4-93C3-478AE4E184A9@univ-grenoble-alpes.fr> Message-ID: <7EC464AB-100D-4755-955A-11FC2966A2C0@univ-grenoble-alpes.fr> Hello. Thanks a lot Erik my problem was the MTU. if I go back to a situation with MTU=1500 everywhere, all is working fine !!! Is the following configuration possible and if so, how to configure with kolla-ansible files ? : 3 networks: - external (2 externals, VLAN 10 and VLAN 20): MTU = 1500 - admin:MTU=1500 - management : MTU = 9000 (a scsi bay stores volumes, with mtu 9000 ok). Like this: ` Thanks a lot if you have a solution for this. If impossible, I stay with 1500? it?s working, no problem. Franck > Le 12 nov. 2022 ? 21:00, Franck VEDEL a ?crit : > >> 3) Networking issues like mismatched MTU > > My MTU (between nodes ) is 9000?. > > I believe my problem is the MTU. > > I modified /etc/kolla/config/neutron.conf and /etc/kolla/config/neutron/ml2_conf.ini.conf > then kolla-ansible -i multinode reconfigures > > (case 1 here: https://docs.openstack.org/newton/networking-guide/config-mtu.html ) > > I test again everything and functions that did not work work again but not all.... > > For example, instances get an ip through dhcp but can't ping the router, but on some networks it works. > However, before the reboot of the servers, I had not had a problem with the MTU of 9000. > > I'm going back to a 1500 MTU on Monday on site. > > Thank you Eric!!! > > Franck VEDEL > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Capture d?e?cran 2022-11-14 a? 14.25.19.png Type: image/png Size: 273266 bytes Desc: not available URL: From marios at redhat.com Mon Nov 14 15:58:14 2022 From: marios at redhat.com (Marios Andreou) Date: Mon, 14 Nov 2022 17:58:14 +0200 Subject: [tripleo] gate blocker /tripleo/+bug/1996482 - please hold rechecks Message-ID: Hi folks, per $subject we have a gate blocker at https://bugs.launchpad.net/tripleo/+bug/1996482 Please avoid recheck if you are hitting this issue - it fails during undercloud/standalone install and looks like: ERROR! Unexpected Exception, this is probably a bug: 'Task' object has no attribute '_valid_attrs' Fix is coming with https://review.opendev.org/c/openstack/tripleo-ansible/+/864392 thanks to Takashi for chasing that today regards, marios From fungi at yuggoth.org Mon Nov 14 18:35:06 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 14 Nov 2022 18:35:06 +0000 Subject: [tc] Fwd: Invitation: OpenInfra Board sync with OpenStack TC @ Wed Nov 16, 2022 20:00 - 21:00 (UTC) Message-ID: <20221114183505.e56fbzgfxvoefg6t@yuggoth.org> Just a reminder that a few months ago we arranged a November meeting time between the OpenStack community and the OpenInfra Board of Directors, Wednesday 2022-11-16 at 20:00 UTC. Julia has graciously supplied a conference call connection for the hour. I've also added the connection info to the pad we've been using for planning this call: https://etherpad.opendev.org/p/2022-11-board-openstack-sync -- Jeremy Stanley ----- Forwarded message from juliaashleykreger at gmail.com ----- Date: Mon, 14 Nov 2022 18:10:42 +0000 Subject: Invitation: OpenInfra Board sync with OpenStack TC @ Wed Nov 16, 2022 2pm - 3pm (CST) (jeremy at openinfra.dev) OpenInfra Board sync with OpenStack TC Wednesday Nov 16, 2022 ? 2pm ? 3pm Central Time - Chicago Location https://us02web.zoom.us/j/84584540710?pwd=aVdpaytPcG01NW9VWVZERDNtZURaZz09 https://www.google.com/url?q=https%3A%2F%2Fus02web.zoom.us%2Fj%2F84584540710%3Fpwd%3DaVdpaytPcG01NW9VWVZERDNtZURaZz09&sa=D&ust=1668881400000000&usg=AOvVaw2AsCJfeeuzfblhVKt8nLfW Greetings Directors & All, During the last meeting between the Board and the OpenStack TC, we determined we would attempt to meet on the 16th of November to have another general open discussion between the two groups. Mailing list posts were made, but a calendar invite was not sent out directly to the attendees. This is that invite. Please use the attached meeting link for call. -Julia ----- End forwarded message ----- -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: text/calendar Size: 2801 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From calestyo at scientia.org Mon Nov 14 19:49:28 2022 From: calestyo at scientia.org (Christoph Anton Mitterer) Date: Mon, 14 Nov 2022 20:49:28 +0100 Subject: how to remove image with still used volumes In-Reply-To: References: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> <5d15cc523af296fa3936884981a59e4ef3ff3ada.camel@scientia.org> Message-ID: Hey Eugen. > Basically, it's about flattening images. > For example, there are multiple VMs based on the same > image which are copy-on-write clones. We back up the most > important VMs with 'rbd export' so they become "flat" in > the backup store. Well that's effectively what I did, when I copied to a bare volume and tried booting from that. 
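For reference, the 'rbd export' flattening mentioned above can be done along these lines. This is only a sketch: the pool name and the volume-<uuid> image naming follow a common cinder-on-ceph convention and may differ in your cluster.

    rbd export volumes/volume-<uuid> /backup/volume-<uuid>.raw
    rbd import /backup/volume-<uuid>.raw volumes/volume-<uuid>-flat

The exported file is a full, parentless copy, so anything re-imported (or any new volume created) from it no longer carries the copy-on-write link back to the Glance image.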
But then the problem is, as I wrote in the other mail, that either I cannot remove the original volume as it's a "root" volume and leave just the copy behind. Or, if I create a fresh server, I cannot make it boot with UEFI, for unknown reasons. Thanks, Chris. From gmann at ghanshyammann.com Mon Nov 14 21:07:33 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 14 Nov 2022 13:07:33 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2022 Nov 16 at 1600 UTC Message-ID: <18477f8041d.1192d08b1125527.821139533984080728@ghanshyammann.com> Hello Everyone, The technical Committee's next weekly meeting is scheduled for 2022 Nov 16, at 1600 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Tuesday, Nov 15 at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From sandcruz666 at gmail.com Tue Nov 15 05:10:12 2022 From: sandcruz666 at gmail.com (K Santhosh) Date: Tue, 15 Nov 2022 10:40:12 +0530 Subject: No subject Message-ID: Hai , I am Santhosh, I do facing a problem with freezer deploymentnt After the deployment of freezer . The freezer_scheduler container is continuously restarting in kolla openstack can you help me out with this freezer_scheduler container -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image(2).png Type: image/png Size: 12286 bytes Desc: not available URL: From rdhasman at redhat.com Tue Nov 15 05:58:08 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Tue, 15 Nov 2022 11:28:08 +0530 Subject: Unable to snapshot instances on backend storage In-Reply-To: <1865485308.3717441.1668074559691@mail.yahoo.com> References: <1865485308.3717441.1668074559691.ref@mail.yahoo.com> <1865485308.3717441.1668074559691@mail.yahoo.com> Message-ID: Hi Derek, When you select the "Create new volume" in Horizon (assumption), there is a cinder volume created which contains the image (bootable) and the instance is backed by it. When you perform the snapshot creation operation on that instance, a cinder snapshot of that volume is created and a glance image is registered pointing to it. Based on the scenario you described, the cinder snapshot operation seems to be failing. I would suggest you to check the following: 1) Which OpenStack version are you using? 1) Which cinder backend are you using? 2) Checking the nova logs and cinder logs (api, sch, vol) for any possible errors 3) Try to create a cinder volume, attach it, and try to snapshot it (Note that we've removed the requirement of a force flag since xena so you will need to provide force=True if you're on a pre-xena version). We've faced a recent issue related to snapshot create operation[1], maybe that's related. [1] https://bugs.launchpad.net/python-cinderclient/+bug/1995883 - Rajat Dhasmana On Thu, Nov 10, 2022 at 3:46 PM Derek O keeffe wrote: > Hi all, > > When we create an instance and leave the "Create new volume" option as no > then we can manage the instance with no issues (migrate, snapshot, etc..) > These instances are saved locally on the compute nodes. > > When we create an instance and select "Create new volume" yes the instance > is spun up fine on our backend storage with no obvious issues (reachable > with ping & ssh. shutdown, restart, networking, etc.. all fine) however, > when we try to snapshot it or migrate it it fails. 
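Suggestion 3 above can be exercised with something like the following. The names here are placeholders, and --force is only needed on pre-Xena releases when snapshotting a volume that is attached to an instance.

    openstack volume create --size 1 test-vol
    openstack server add volume <server-id> test-vol
    openstack volume snapshot create --volume test-vol --force test-snap

If this also fails, the cinder-volume log on the backend usually shows the underlying error.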
We can however take > volume snapshots of volumes that we have created and are stored on the same > shared backend. > > Has anyone came across this or maybe a pointer as to what they may think > is causing it? It sounds to us as if nova try's to create a snapshot of the > VM but thinks it's a volume maybe? > > Any help greatly appreciated. > > Regards, > Derek > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From derekokeeffe85 at yahoo.ie Tue Nov 15 10:02:49 2022 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Tue, 15 Nov 2022 10:02:49 +0000 (UTC) Subject: Unable to snapshot instances on backend storage In-Reply-To: References: <1865485308.3717441.1668074559691.ref@mail.yahoo.com> <1865485308.3717441.1668074559691@mail.yahoo.com> Message-ID: <645356921.435715.1668506569770@mail.yahoo.com> Hi Rajat, Thanks for the reply, I will try to do as you say later today and check those logs to see if I can find anything. Regards,Derek On Tuesday 15 November 2022 at 05:58:23 GMT, Rajat Dhasmana wrote: Hi Derek, When you select the?"Create new volume" in Horizon (assumption), there is a cinder volume created which contains the image (bootable) and the instance is backed by it.When you perform the snapshot creation operation on that instance, a cinder snapshot of that volume is created and a glance image is registered pointing to it. Based on the scenario you described, the cinder snapshot operation seems to be failing. I would suggest you to check the following: 1) Which OpenStack version are you using??1) Which cinder backend are you using?2) Checking the nova logs and cinder logs (api, sch, vol) for any possible errors3) Try to create a cinder volume, attach it, and try to snapshot it (Note that we've removed the requirement of a force flag since xena so you will need to provide force=True if you're on a pre-xena version). We've faced a recent issue related to snapshot create operation[1], maybe that's related. [1]?https://bugs.launchpad.net/python-cinderclient/+bug/1995883 -Rajat Dhasmana On Thu, Nov 10, 2022 at 3:46 PM Derek O keeffe wrote: Hi all, When we create an instance and leave the "Create new volume" option as no then we can manage the instance with no issues (migrate, snapshot, etc..) These instances are saved locally on the compute nodes. When we create an instance and select "Create new volume" yes the instance is spun up fine on our backend storage with no obvious issues (reachable with ping & ssh. shutdown, restart, networking, etc.. all fine) however, when we try to snapshot it or migrate it it fails. We can however take volume snapshots of volumes that we have created and are stored on the same shared backend. Has anyone came across this or maybe a pointer as to what they may think is causing it? It sounds to us as if nova try's to create a snapshot of the VM but thinks it's a volume maybe? Any help greatly appreciated. Regards,Derek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From derekokeeffe85 at yahoo.ie Tue Nov 15 10:29:29 2022 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Tue, 15 Nov 2022 10:29:29 +0000 (UTC) Subject: EMC SAN with Openstack ansible References: <1592195710.486756.1668508169270.ref@mail.yahoo.com> Message-ID: <1592195710.486756.1668508169270@mail.yahoo.com> Hi all, I've been chatting on the IRC channel and I've gotten some feedback that I need to read up on regarding overrides, but just in case I'm about to reinvent the wheel (try at least) has anyone set up cinder volumes on an EMC SAN using OSA? and if so could yo give me some advice, examples, pointers, etc... We had it set up manually on an old cluster but are unsure how to do it through OSA (starting to look at it today based on the advice we got) This explains how to do it manually?https://www.delltechnologies.com/asset/en-us/products/storage/industry-market/sc-series-with-openstack-cinder-dell-emc-cml.pdf Any info would be great and hugely appreciated. Should we figure it out before then we will drop the config here for others that may be interested. Regards,Derek -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Nov 15 10:45:30 2022 From: eblock at nde.ag (Eugen Block) Date: Tue, 15 Nov 2022 10:45:30 +0000 Subject: how to remove image with still used volumes In-Reply-To: References: <2fe9482a4b308177d495f56485550668932f9e90.camel@scientia.org> <5d15cc523af296fa3936884981a59e4ef3ff3ada.camel@scientia.org> Message-ID: <20221115104530.Horde.CZ-Ia6B-Nu1OCrJAlWh0pUj@webmail.nde.ag> Hi, > But then the problem is, as I wrote in the other mail, that either I > cannot remove the original volume as it's a "root" volume and leave > just the copy behind. right, I forgot about the not removable root volume. Your workaround seems valid though, copying the original volume to a new volume, launch a new instance from the new volume and remove the old one. But did you also try to set --image-property (not --property as you wrote before) to the fresh volume? Zitat von Christoph Anton Mitterer : > Hey Eugen. > >> Basically, it's about flattening images. >> For example, there are multiple VMs based on the same >> image which are copy-on-write clones. We back up the most >> important VMs with 'rbd export' so they become "flat" in >> the backup store. > > Well that's effectively what I did, when I copied to a bare volume and > tried booting from that. > > But then the problem is, as I wrote in the other mail, that either I > cannot remove the original volume as it's a "root" volume and leave > just the copy behind. > Or, if I create a fresh server, I cannot make it boot with UEFI, for > unknown reasons. > > > Thanks, > Chris. From michal.arbet at ultimum.io Tue Nov 15 10:48:11 2022 From: michal.arbet at ultimum.io (Michal Arbet) Date: Tue, 15 Nov 2022 11:48:11 +0100 Subject: No subject In-Reply-To: References: Message-ID: What about logs from container ? What about log in /var/log/kolla..... Michal Arbet Openstack Engineer Ultimum Technologies a.s. Na Po???? 1047/26, 11000 Praha 1 Czech Republic +420 604 228 897 michal.arbet at ultimum.io *https://ultimum.io * LinkedIn | Twitter | Facebook ?t 15. 11. 2022 v 6:18 odes?latel K Santhosh napsal: > Hai , > I am Santhosh, > I do facing a problem with freezer deploymentnt > After the deployment of freezer . 
The freezer_scheduler > container is continuously restarting in kolla openstack > can you help me out with this freezer_scheduler container > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Nov 15 10:49:11 2022 From: eblock at nde.ag (Eugen Block) Date: Tue, 15 Nov 2022 10:49:11 +0000 Subject: [kolla-ansible][Yoga] Install with self-signed certificate In-Reply-To: References: <20221111201329.Horde.5Jstm8Mvo6YfTcBDJsTx7T3@webmail.nde.ag> <20221114132143.Horde.9V0vWb4JClSAJIGN1QXAfBX@webmail.nde.ag> Message-ID: <20221115104911.Horde.btGc5gANadpErM4Tmd9GuiO@webmail.nde.ag> Okay, I understand. Did you verify if the self-signed cert contains everything you require as I wrote in the previous email? Can you paste the openssl command output (and mask everything non-public)? Zitat von wodel youchi : > Hi, > Thanks again, > > About your question : so with the previous cert it worked but only because > you had the verification set to false, correct? > The answer is : Not exactly. > > Let me explain, I deployed using a commercial valid certificate, but I > configured kolla_verify_tls_backend to false exactly to avoid the problem I > am facing now. From what I have understood : > kolla_verify_tls_backend=false, means : accept the connection even if the > verification fails, but apparently it is not the case. > And kolla_copy_ca_into_containers was positioned to yes from the beginning. > > What happened is that my certificate expired, and now I am searching for a > way to install a self-signed certificate while waiting to get the new > certificate. > > I backported the platform a few days before the expiration of the > certificate, then I generated the self-signed certificate and I tried to > deploy it but without success. > > Regards. > > Le lun. 14 nov. 2022 ? 14:21, Eugen Block a ?crit : > >> Hi, >> >> > First I want to correct something, the *kolla_verify_tls_backend* was >> > positioned to *false* from the beginning, while doing the first >> deployment >> > with the commercial certificate. >> >> so with the previous cert it worked but only because you had the >> verification set to false, correct? >> >> > What do you mean by using openssl? Do you mean to execute the command >> > inside a container and try to connect to keystone? If yes what is the >> > correct command? >> >> That's one example, yes. Is apache configured correctly to use the >> provided certs? In my manual deployment it looks like this (only the >> relevant part): >> >> control01:~ # cat /etc/apache2/vhosts.d/keystone-public.conf >> [...] >> SSLEngine On >> SSLCertificateFile /etc/ssl/servercerts/control01.fqdn.cert.pem >> SSLCACertificateFile /etc/pki/trust/anchors/RHN-ORG-TRUSTED-SSL-CERT >> SSLCertificateKeyFile /etc/ssl/private/control01.fqdn.key.pem >> SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown >> >> # HTTP Strict Transport Security (HSTS) enforces that all >> communications >> # with a server go over SSL. This mitigates the threat from attacks >> such >> # as SSL-Strip which replaces links on the wire, stripping away >> https prefixes >> # and potentially allowing an attacker to view confidential >> information on the >> # wire >> Header add Strict-Transport-Security "max-age=15768000" >> [...] >> >> and then test it with: >> >> ---snip--- >> control01:~ # curl -v https://control.fqdn:5000/v3 >> [...] 
>> * ALPN, offering h2 >> * ALPN, offering http/1.1 >> * TLSv1.3 (OUT), TLS handshake, Client hello (1): >> * TLSv1.3 (IN), TLS handshake, Server hello (2): >> * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): >> * TLSv1.3 (IN), TLS handshake, Certificate (11): >> * TLSv1.3 (IN), TLS handshake, CERT verify (15): >> * TLSv1.3 (IN), TLS handshake, Finished (20): >> * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): >> * TLSv1.3 (OUT), TLS handshake, Finished (20): >> * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 >> * ALPN, server accepted to use http/1.1 >> * Server certificate: >> [...] >> * subjectAltName: host "control.fqdn" matched cert's "*.fqdn" >> * issuer: ******* >> * SSL certificate verify ok. >> > GET /v3 HTTP/1.1 >> > Host: control.fqdn:5000 >> > User-Agent: curl/7.66.0 >> > Accept: */* >> > >> * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): >> * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): >> * old SSL session ID is stale, removing >> * Mark bundle as not supporting multiuse >> < HTTP/1.1 200 OK >> [...] >> * Connection #0 to host control.fqdn left intact >> {"version": {"id": "v3.14", "status": "stable", "updated": >> "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": >> "https://control.fqdn:5000/v3/"}], "media-types": [{"base": >> "application/json", "type": >> "application/vnd.openstack.identity-v3+json"}]}} >> ---snip--- >> >> To check the created certificate you could run something like this: >> >> openssl x509 -in /etc/ssl/servercerts/control01.fqdn.cert.pem -text -noout >> >> and see if the SANs match your control node(s) IP addresses and FQDNs. >> >> Zitat von wodel youchi : >> >> > Hi >> > >> > Thanks for your help. >> > >> > First I want to correct something, the *kolla_verify_tls_backend* was >> > positioned to *false* from the beginning, while doing the first >> deployment >> > with the commercial certificate. >> > >> > And yes I have *kolla_copy_ca_into_containers* positioned to *yes* from >> the >> > beginning. And I can see in the nodes that there is a directory named >> > certificates in every module's directory in /etc/kolla >> > >> > What do you mean by using openssl? Do you mean to execute the command >> > inside a container and try to connect to keystone? If yes what is the >> > correct command? >> > >> > It seems like something is missing to tell the client side to ignore the >> > certificate validity, something like the --insecure parameter in the >> > openstack cli. >> > >> > Regards. >> > >> > On Fri, Nov 11, 2022, 21:21 Eugen Block wrote: >> > >> >> Hi, >> >> >> >> I'm not familiar with kolla, but the docs also mention this option: >> >> >> >> kolla_copy_ca_into_containers: "yes" >> >> >> >> As I understand it the CA cert is required within the containers so >> >> they can trust the self-signed certs. At least that's how I configure >> >> it in a manually deployed openstack cloud. Do you have that option >> >> enabled? If it is enabled, did you verify it with openssl tools? >> >> >> >> Regards, >> >> Eugen >> >> >> >> Zitat von wodel youchi : >> >> >> >> > Some help please. >> >> > >> >> > On Tue, Nov 8, 2022, 14:44 wodel youchi >> wrote: >> >> > >> >> >> Hi, >> >> >> >> >> >> To deploy Openstack with a self-signed certificate, the documentation >> >> says >> >> >> to generate the certificates using kolla-ansible certificates, to >> >> configure >> >> >> the support of TLS in globals.yml and to deploy. 
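For context, the globals.yml switches being discussed in this thread look roughly like the following. This is a sketch of commonly used kolla-ansible TLS options rather than this cluster's actual configuration, and the CA-bundle path assumes RHEL-family container images.

    kolla_enable_tls_internal: "yes"
    kolla_enable_tls_external: "yes"
    kolla_enable_tls_backend: "yes"
    kolla_copy_ca_into_containers: "yes"
    kolla_verify_tls_backend: "no"   # relax backend verification while testing a self-signed cert
    openstack_cacert: "/etc/pki/tls/certs/ca-bundle.crt"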
>> >> >> >> >> >> I am facing a problem, my old certificate has expired, I want to use >> a >> >> >> self-signed certificate. >> >> >> I backported my servers to an older date, then generated a >> self-signed >> >> >> certificate using kolla, but the deploy/reconfigure won't work, they >> >> say : >> >> >> >> >> >> self._sslobj.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", >> >> line >> >> >> 648, in do_handshakeself._sslobj.do_handshake()\nssl.SSLError: [SSL: >> >> >> CERTIFICATE_VERIFY_FAILED certificate verify failed >> >> >> >> >> >> PS : in my globals.yml i have : *kolla_verify_tls_backend: "yes"* >> >> >> >> >> >> Regards. >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> From noonedeadpunk at gmail.com Tue Nov 15 11:44:49 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Tue, 15 Nov 2022 12:44:49 +0100 Subject: EMC SAN with Openstack ansible In-Reply-To: <1592195710.486756.1668508169270@mail.yahoo.com> References: <1592195710.486756.1668508169270.ref@mail.yahoo.com> <1592195710.486756.1668508169270@mail.yahoo.com> Message-ID: Hey, Yes, so OpenStack-Ansible configuration is not much different from manual one. Any custom parameters can be added to appropriate config files using overrides. For example, to adjust cinder.conf with the custom config, you need to define in your user_variables.yml (or group_vars/cinder_volume.yml) smth like that: cinder_cinder_conf_overrides: DEFAULT: use_multipath_for_image_xfer: True enforce_multipath_for_image_xfer: True When you need to define a backend for cinder, you can do this either with overrides as well, or just leveraging cinder_backends variable, ie: cinder_backends: dellfc: volume_backend_name: dellfc volume_driver: cinder.volume.drivers.dell_emc.sc.storagecenter_fc.SCFCDriver ... You can also check for some examples here [1]. Despite it's configured not in user_vars but in openstack_user_config, idea and result of both these options are the same. For nova you have a variable named nova_nova_conf_overrides that can be leveraged to adjust nova.conf: nova_nova_conf_overrides: libvirt: iscsi_use_multipath: True The main thing with overrides, is that they should be proper YAML formatted. YAML will be converted into ini format when placing a config in place using config_template module. Hope this helps. [1]: https://opendev.org/openstack/openstack-ansible/src/branch/master/etc/openstack_deploy/openstack_user_config.yml.example#L626 ??, 15 ????. 2022 ?. ? 11:31, Derek O keeffe : > > Hi all, > > I've been chatting on the IRC channel and I've gotten some feedback that I need to read up on regarding overrides, but just in case I'm about to reinvent the wheel (try at least) has anyone set up cinder volumes on an EMC SAN using OSA? and if so could yo give me some advice, examples, pointers, etc... > > We had it set up manually on an old cluster but are unsure how to do it through OSA (starting to look at it today based on the advice we got) This explains how to do it manually https://www.delltechnologies.com/asset/en-us/products/storage/industry-market/sc-series-with-openstack-cinder-dell-emc-cml.pdf > > Any info would be great and hugely appreciated. Should we figure it out before then we will drop the config here for others that may be interested. 
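As a rough illustration of the YAML-to-ini conversion described earlier in this reply, the cinder_cinder_conf_overrides example would end up in cinder.conf as something like this (sketch only):

    [DEFAULT]
    use_multipath_for_image_xfer = True
    enforce_multipath_for_image_xfer = True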
> > Regards, > Derek From marios at redhat.com Tue Nov 15 12:48:09 2022 From: marios at redhat.com (Marios Andreou) Date: Tue, 15 Nov 2022 14:48:09 +0200 Subject: [tripleo] gate blocker /tripleo/+bug/1996482 - please hold rechecks In-Reply-To: References: Message-ID: On Mon, Nov 14, 2022 at 5:58 PM Marios Andreou wrote: > > Hi folks, > > per $subject we have a gate blocker at > https://bugs.launchpad.net/tripleo/+bug/1996482 > > Please avoid recheck if you are hitting this issue - it fails during > undercloud/standalone install and looks like: > > ERROR! Unexpected Exception, this is probably a bug: 'Task' object has > no attribute '_valid_attrs' > > Fix is coming with > https://review.opendev.org/c/openstack/tripleo-ansible/+/864392 > > thanks to Takashi for chasing that today > update: *proper* fix(es) still pending with https://review.opendev.org/c/openstack/tripleo-ansible/+/864392 and any other patches we will need. However since the fix needs some more work we decided to unblock with a pin https://review.opendev.org/c/openstack/tripleo-quickstart/+/864498 to be reverted asap in the next few days. you can use depends-on tripleo-quickstart/+/864498 or wait for it to go through the gate https://zuul.openstack.org/status#864498 thanks for your patience marios > regards, marios From tobias at caktusgroup.com Tue Nov 15 14:14:58 2022 From: tobias at caktusgroup.com (Tobias McNulty) Date: Tue, 15 Nov 2022 09:14:58 -0500 Subject: Kolla Ansible on Ubuntu 20.04 - cloud-init & other network issues In-Reply-To: References: Message-ID: As an update, I tried the non-HWE kernel with the same result. Could it be a hardware/driver issue with the 10G NICs? It's so repeatable. I'll look into finding some other hardware to test with. Has anyone else experienced such a complete failure with cloud-init and/or security groups, and do you have any advice on how I might continue to debug this? Many thanks, Tobias On Sat, Nov 12, 2022 at 12:12 PM Tobias McNulty wrote: > Hi, > > I'm attempting to use Kolla Ansible 14.6.0 to deploy OpenStack Yoga on a > small 3-node Ubuntu 20.04 cluster. The nodes have 128 GB RAM each, dual > Xeon processors, and dual 10G Intel NICs. The NICs are connected to access > ports on a 10G switch with separate VLANs for the local and external > networks. > > All the playbooks run cleanly, but cloud-init is failing in the > Ubuntu 20.04 and 22.04 VMs I attempt to boot. The VM images are unmodified > from https://cloud-images.ubuntu.com/, and cloud-init works fine if I > mount a second volume with user-data. The error is a timeout attempting to > reach 169.254.169.254. This occurs both when booting a VM in an internal > routed network and directly in an external network. > > I tried various neutron plugin agents (ovn, linuxbridge, and openvswitch > both with and without firewall_driver = openvswitch > ) > first with a clean install of the entire OS each time, all with the same > result. Running tcpdump looking for 169.254.169.254 shows nothing. As a > possible clue, the virtual NICs are unable to pass any traffic (e.g., to > reach an external DHCP server) unless I completely disable port security on > the interface (even if the associated security group is wide open). But > disabling port security does not fix cloud-init (not to mention I don't > really want to disable port security). > > Are there any additional requirements related to deploying OpenStack with > Kolla on Ubuntu 20.04? > > This is a fairly vanilla configuration using the multinode inventory as a > starting point. 
I tried to follow the Quick Start > as > closely as possible; the only material difference I see is that I'm using > the same 3 nodes for control + compute. I am using MAAS so it's easy to get > a clean OS install on all three nodes ahead of each attempt. I plan to try > again with the standard (non-HWE) kernel just in case, but otherwise I am > running out of ideas. In case of any additional clues, here are my > globals.yml and inventory file, along with the playbook I'm using to > configure the network, images, VMs, etc., after bootstrapping the cluster: > > https://gist.github.com/tobiasmcnulty/7dbbdbc67abc08cbb013bf5983852ed6 > > Thank you in advance for any advice! > > Cheers, > Tobias > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Nov 15 14:44:47 2022 From: eblock at nde.ag (Eugen Block) Date: Tue, 15 Nov 2022 14:44:47 +0000 Subject: Kolla Ansible on Ubuntu 20.04 - cloud-init & other network issues In-Reply-To: References: Message-ID: <20221115144447.Horde.gj_lVBrH_euIJbbrSkZRDQa@webmail.nde.ag> Hi, just one more thing to check: whenever I had troubles with the metadata it was usually apparmor blocking the access. For testing purposes (or if you're behind a firewall anyway) you could try to disable all the security related daemons and see if that helps. If you don't have it enabled, do you see any errors in the neutron logs? Zitat von Tobias McNulty : > As an update, I tried the non-HWE kernel with the same result. Could it be > a hardware/driver issue with the 10G NICs? It's so repeatable. I'll look > into finding some other hardware to test with. > > Has anyone else experienced such a complete failure with cloud-init and/or > security groups, and do you have any advice on how I might continue to > debug this? > > Many thanks, > Tobias > > > On Sat, Nov 12, 2022 at 12:12 PM Tobias McNulty > wrote: > >> Hi, >> >> I'm attempting to use Kolla Ansible 14.6.0 to deploy OpenStack Yoga on a >> small 3-node Ubuntu 20.04 cluster. The nodes have 128 GB RAM each, dual >> Xeon processors, and dual 10G Intel NICs. The NICs are connected to access >> ports on a 10G switch with separate VLANs for the local and external >> networks. >> >> All the playbooks run cleanly, but cloud-init is failing in the >> Ubuntu 20.04 and 22.04 VMs I attempt to boot. The VM images are unmodified >> from https://cloud-images.ubuntu.com/, and cloud-init works fine if I >> mount a second volume with user-data. The error is a timeout attempting to >> reach 169.254.169.254. This occurs both when booting a VM in an internal >> routed network and directly in an external network. >> >> I tried various neutron plugin agents (ovn, linuxbridge, and openvswitch >> both with and without firewall_driver = openvswitch >> ) >> first with a clean install of the entire OS each time, all with the same >> result. Running tcpdump looking for 169.254.169.254 shows nothing. As a >> possible clue, the virtual NICs are unable to pass any traffic (e.g., to >> reach an external DHCP server) unless I completely disable port security on >> the interface (even if the associated security group is wide open). But >> disabling port security does not fix cloud-init (not to mention I don't >> really want to disable port security). >> >> Are there any additional requirements related to deploying OpenStack with >> Kolla on Ubuntu 20.04? >> >> This is a fairly vanilla configuration using the multinode inventory as a >> starting point. 
I tried to follow the Quick Start >> as >> closely as possible; the only material difference I see is that I'm using >> the same 3 nodes for control + compute. I am using MAAS so it's easy to get >> a clean OS install on all three nodes ahead of each attempt. I plan to try >> again with the standard (non-HWE) kernel just in case, but otherwise I am >> running out of ideas. In case of any additional clues, here are my >> globals.yml and inventory file, along with the playbook I'm using to >> configure the network, images, VMs, etc., after bootstrapping the cluster: >> >> https://gist.github.com/tobiasmcnulty/7dbbdbc67abc08cbb013bf5983852ed6 >> >> Thank you in advance for any advice! >> >> Cheers, >> Tobias >> From erin at openstack.org Tue Nov 15 15:00:57 2022 From: erin at openstack.org (Erin Disney) Date: Tue, 15 Nov 2022 09:00:57 -0600 Subject: The CFP for the OpenInfra Summit 2023 is open! Message-ID: <4696B08E-D32F-46C5-9BCB-5C31A6FF637F@openstack.org> Hi Everyone! The CFP for the 2023 OpenInfra Summit (June 13-15, 2023) is NOW LIVE [1]! Check out the full list of tracks and submit a talk on your topic of expertise [2]. The CFP closes January 10, 2023, at 11:59 p.m. PT We are also now accepting submissions for Forum sessions [3]! What should you submit to the forum vs. the traditional CFP? For the Forum, submit discussion-oriented sessions, including challenges around different software components, working group progress or best practices to tackle common issues. Looking for other resources? Registration [4], sponsorships [5], travel support [6] and visa requests [7] are also all open! Find all the information on the OpenInfra Summit 2023 in one place [8]! Cheers, Erin [1] https://cfp.openinfra.dev/app/vancouver-2023/19/presentations [2] https://openinfra.dev/summit/vancouver-2023/summit-tracks/ [3] https://cfp.openinfra.dev/app/vancouver-2023/20/ [4] http://openinfra.dev/summit/registration [5] http://openinfra.dev/summit/vancouver-2023/summit-sponsor/ [6] https://openinfrafoundation.formstack.com/forms/openinfra_tsp [7] https://openinfrafoundation.formstack.com/forms/visa_yvrsummit2023 [8] https://openinfra.dev/summit/vancouver-2023 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Tue Nov 15 15:36:21 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 15 Nov 2022 16:36:21 +0100 Subject: [kolla-ansible][Yoga] Install with self-signed certificate In-Reply-To: <20221115104911.Horde.btGc5gANadpErM4Tmd9GuiO@webmail.nde.ag> References: <20221111201329.Horde.5Jstm8Mvo6YfTcBDJsTx7T3@webmail.nde.ag> <20221114132143.Horde.9V0vWb4JClSAJIGN1QXAfBX@webmail.nde.ag> <20221115104911.Horde.btGc5gANadpErM4Tmd9GuiO@webmail.nde.ag> Message-ID: Hi, This is the server certificate generated by kolla # openssl x509 -noout -text -in *backend-cert.pem* Certificate: Data: Version: 3 (0x2) Serial Number: 36:c4:48:24:e7:88:c4:f0:dd:32:b3:d8:e9:b7:c5:17:5c:4e:85:ff Signature Algorithm: sha256WithRSAEncryption *Issuer: CN = KollaTestCA Validity Not Before: Oct 14 13:13:04 2022 GMT Not After : Feb 26 13:13:04 2024 GMT* Subject: C = US, ST = NC, L = RTP, OU = kolla Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (2048 bit) Modulus: 00:b9:f6:f9:83:e6:8c:de:fb:3e:6f:df:23:b9:46: 53:04:52:7a:45:44:6e:9b:cb:cc:30:ab:df:bc:b2: .... 
Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: * IP Address:20.3.0.23, IP Address:20.3.0.27, IP Address:20.3.0.31* And this is the CA certificate generated by Kolla # openssl x509 -noout -text -in ca*.pem Certificate: Data: Version: 3 (0x2) Serial Number: 66:c9:c2:c8:fa:45:e7:48:26:a1:48:63:b6:a9:27:1d:dc:74:4a:c3 Signature Algorithm: sha256WithRSAEncryption * Issuer: CN = KollaTestCA Validity Not Before: Oct 14 13:12:59 2022 GMT Not After : Aug 3 13:12:59 2025 GMT Subject: CN = KollaTestCA* Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public-Key: (4096 bit) Modulus: 00:ce:6f:91:5a:bf:81:49:b6:eb:d9:99:60:bc:93: 80:ab:59:bb:20:09:33:b5:b0:75:ba:50:90:87:93: *# openssl verify -verbose -CAfile ca.pem backend-cert.pembackend-cert.pem: OK* >From the keystone container I got this : *(keystone)[root at controllera /]# curl -v https://dashint.example.com:5000/v3 * * Trying 20.3.0.1... * TCP_NODELAY set * *Connected to dashint.example.com (20.3.0.1) port 5000 (#0)* * ALPN, offering h2 * ALPN, offering http/1.1 ** successfully set certificate verify locations:* CAfile: /etc/pki/tls/certs/ca-bundle.crt* CApath: none * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, [no content] (0): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, [no content] (0): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, [no content] (0): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, [no content] (0): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, [no content] (0): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server did not agree to a protocol * Server certificate: ** subject: C=US; ST=NC; L=RTP; OU=kolla* start date: Oct 14 13:13:03 2022 GMT* expire date: Oct 14 13:13:03 2023 GMT* subjectAltName: host "dashint.example.com " matched cert's "dashint.example.com "* * issuer: CN=KollaTestCA * SSL certificate verify ok. 
* TLSv1.3 (OUT), TLS app data, [no content] (0): > GET /v3 HTTP/1.1 > Host: dashint.example.com:5000 > User-Agent: curl/7.61.1 > Accept: */* > * TLSv1.3 (IN), TLS handshake, [no content] (0): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS handshake, [no content] (0): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS app data, [no content] (0): *< HTTP/1.1 200 OK* < date: Sat, 22 Oct 2022 15:39:22 GMT < server: Apache < content-length: 262 < vary: X-Auth-Token < x-openstack-request-id: req-88c293c3-7efb-4a12-ac06-21f90e1fdc10 < content-type: application/json < * Connection #0 to host dashint.example.com left intact {"version": {"id": "v3.14", "status": "stable", "updated": "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": " https://dashint.example.com:5000/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}}curl ( https://dashint.example.com:5000/v3): response: 200, time: 0.012871, size: 262 When deploying with the self certificate it's in this task on the first controller where the problem is triggered : *TASK [service-ks-register : keystone | Creating services module_name=os_keystone_service, module_args={'name': '{{ item.name }}', 's$rvice_type': '{{ item.type }}', 'description': '{{ item.description }}', 'region_name': '{{ service_ks_register_region_name }}', 'au$h': '{{ service_ks_register_auth }}', 'interface': '{{ service_ks_register_interface }}', 'cacert': '{{ service_ks_cacert }}'}] **** FAILED - RETRYING: [controllera]: keystone | Creating services (5 retries left). FAILED - RETRYING: [controllera]: keystone | Creating services (4 retries left). FAILED - RETRYING: [controllera]: keystone | Creating services (3 retries left). FAILED - RETRYING: [controllera]: keystone | Creating services (2 retries left). FAILED - RETRYING: [controllera]: keystone | Creating services (1 retries left).failed: [controllera] (item={'name': 'keystone', 'service_type': 'identity'}) => {"action": "os_keystone_service", "ansible_loop_var" : "item", "attempts": 5, "changed": false, "item": {"description": "Openstack Identity Service", "endpoints": [{"interface": "admin", "url": "https://dashint.example.com:35357"}, {"interface": "internal", "url": "https://dashint.example.com:5000"}, {"interface": "public", "url": "https://dash.example.com:5000"}], "name": "keystone", "type": "identity"}, "module_stderr": "Failed to discover available identity versions when contacting https://dashint.example.com:35357. 
Attempting to parse version from URL.\nTraceback (mo st recent call last):\n File \"/opt/ansible/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 710, in urlopen\n chunk ed=chunked,\n File \"/opt/ansible/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 386, in _make_request\n self._val idate_conn(conn)\n File \"/opt/ansible/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 1040, in _validate_conn\n co nn.connect()\n File \"/opt/ansible/lib/python3.6/site-packages/urllib3/connection.py\", line 426, in connect\n tls_in_tls=tls_in_ tls,\n File \"/opt/ansible/lib/python3.6/site-packages/urllib3/util/ssl_.py\", line 450, in ssl_wrap_socket\n sock, context, tls_ in_tls, server_hostname=server_hostname\n File \"/opt/ansible/lib/python3.6/site-packages/urllib3/util/ssl_.py\", line 493, in _ssl_ wrap_socket_impl\n return ssl_context.wrap_socket(sock, server_hostname=server_hostname)\n File \"/usr/lib64/python3.6/ssl.py\", line 365, in wrap_socket\n _context=self, _session=session)\n File \"/usr/lib64/python3.6/ssl.py\", line 776, in __init__\n se lf.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", line 1036, in do_handshake\n self._sslobj.do_handshake()\n File \"/usr /lib64/python3.6/ssl.py\", line 648, in do_handshake\n *self._sslobj.do_handshake()\nssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed* (_ssl.c:897)\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most rec ent call last):\n File \"/opt/ansible/lib/python3.6/site-packages/requests/adapters.py\", line 450, in send\n timeout=timeout I don't know what this task is, the container is running, what does mean *service-ks-register : keystone* ? Regards. Le mar. 15 nov. 2022 ? 11:54, Eugen Block a ?crit : > Okay, I understand. Did you verify if the self-signed cert contains > everything you require as I wrote in the previous email? Can you paste > the openssl command output (and mask everything non-public)? > > Zitat von wodel youchi : > > > Hi, > > Thanks again, > > > > About your question : so with the previous cert it worked but only > because > > you had the verification set to false, correct? > > The answer is : Not exactly. > > > > Let me explain, I deployed using a commercial valid certificate, but I > > configured kolla_verify_tls_backend to false exactly to avoid the > problem I > > am facing now. From what I have understood : > > kolla_verify_tls_backend=false, means : accept the connection even if the > > verification fails, but apparently it is not the case. > > And kolla_copy_ca_into_containers was positioned to yes from the > beginning. > > > > What happened is that my certificate expired, and now I am searching for > a > > way to install a self-signed certificate while waiting to get the new > > certificate. > > > > I backported the platform a few days before the expiration of the > > certificate, then I generated the self-signed certificate and I tried to > > deploy it but without success. > > > > Regards. > > > > Le lun. 14 nov. 2022 ? 14:21, Eugen Block a ?crit : > > > >> Hi, > >> > >> > First I want to correct something, the *kolla_verify_tls_backend* was > >> > positioned to *false* from the beginning, while doing the first > >> deployment > >> > with the commercial certificate. > >> > >> so with the previous cert it worked but only because you had the > >> verification set to false, correct? > >> > >> > What do you mean by using openssl? 
Do you mean to execute the command > >> > inside a container and try to connect to keystone? If yes what is the > >> > correct command? > >> > >> That's one example, yes. Is apache configured correctly to use the > >> provided certs? In my manual deployment it looks like this (only the > >> relevant part): > >> > >> control01:~ # cat /etc/apache2/vhosts.d/keystone-public.conf > >> [...] > >> SSLEngine On > >> SSLCertificateFile /etc/ssl/servercerts/control01.fqdn.cert.pem > >> SSLCACertificateFile > /etc/pki/trust/anchors/RHN-ORG-TRUSTED-SSL-CERT > >> SSLCertificateKeyFile /etc/ssl/private/control01.fqdn.key.pem > >> SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown > >> > >> # HTTP Strict Transport Security (HSTS) enforces that all > >> communications > >> # with a server go over SSL. This mitigates the threat from attacks > >> such > >> # as SSL-Strip which replaces links on the wire, stripping away > >> https prefixes > >> # and potentially allowing an attacker to view confidential > >> information on the > >> # wire > >> Header add Strict-Transport-Security "max-age=15768000" > >> [...] > >> > >> and then test it with: > >> > >> ---snip--- > >> control01:~ # curl -v https://control.fqdn:5000/v3 > >> [...] > >> * ALPN, offering h2 > >> * ALPN, offering http/1.1 > >> * TLSv1.3 (OUT), TLS handshake, Client hello (1): > >> * TLSv1.3 (IN), TLS handshake, Server hello (2): > >> * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): > >> * TLSv1.3 (IN), TLS handshake, Certificate (11): > >> * TLSv1.3 (IN), TLS handshake, CERT verify (15): > >> * TLSv1.3 (IN), TLS handshake, Finished (20): > >> * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): > >> * TLSv1.3 (OUT), TLS handshake, Finished (20): > >> * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 > >> * ALPN, server accepted to use http/1.1 > >> * Server certificate: > >> [...] > >> * subjectAltName: host "control.fqdn" matched cert's "*.fqdn" > >> * issuer: ******* > >> * SSL certificate verify ok. > >> > GET /v3 HTTP/1.1 > >> > Host: control.fqdn:5000 > >> > User-Agent: curl/7.66.0 > >> > Accept: */* > >> > > >> * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): > >> * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): > >> * old SSL session ID is stale, removing > >> * Mark bundle as not supporting multiuse > >> < HTTP/1.1 200 OK > >> [...] > >> * Connection #0 to host control.fqdn left intact > >> {"version": {"id": "v3.14", "status": "stable", "updated": > >> "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": > >> "https://control.fqdn:5000/v3/"}], "media-types": [{"base": > >> "application/json", "type": > >> "application/vnd.openstack.identity-v3+json"}]}} > >> ---snip--- > >> > >> To check the created certificate you could run something like this: > >> > >> openssl x509 -in /etc/ssl/servercerts/control01.fqdn.cert.pem -text > -noout > >> > >> and see if the SANs match your control node(s) IP addresses and FQDNs. > >> > >> Zitat von wodel youchi : > >> > >> > Hi > >> > > >> > Thanks for your help. > >> > > >> > First I want to correct something, the *kolla_verify_tls_backend* was > >> > positioned to *false* from the beginning, while doing the first > >> deployment > >> > with the commercial certificate. > >> > > >> > And yes I have *kolla_copy_ca_into_containers* positioned to *yes* > from > >> the > >> > beginning. And I can see in the nodes that there is a directory named > >> > certificates in every module's directory in /etc/kolla > >> > > >> > What do you mean by using openssl? 
Do you mean to execute the command > >> > inside a container and try to connect to keystone? If yes what is the > >> > correct command? > >> > > >> > It seems like something is missing to tell the client side to ignore > the > >> > certificate validity, something like the --insecure parameter in the > >> > openstack cli. > >> > > >> > Regards. > >> > > >> > On Fri, Nov 11, 2022, 21:21 Eugen Block wrote: > >> > > >> >> Hi, > >> >> > >> >> I'm not familiar with kolla, but the docs also mention this option: > >> >> > >> >> kolla_copy_ca_into_containers: "yes" > >> >> > >> >> As I understand it the CA cert is required within the containers so > >> >> they can trust the self-signed certs. At least that's how I configure > >> >> it in a manually deployed openstack cloud. Do you have that option > >> >> enabled? If it is enabled, did you verify it with openssl tools? > >> >> > >> >> Regards, > >> >> Eugen > >> >> > >> >> Zitat von wodel youchi : > >> >> > >> >> > Some help please. > >> >> > > >> >> > On Tue, Nov 8, 2022, 14:44 wodel youchi > >> wrote: > >> >> > > >> >> >> Hi, > >> >> >> > >> >> >> To deploy Openstack with a self-signed certificate, the > documentation > >> >> says > >> >> >> to generate the certificates using kolla-ansible certificates, to > >> >> configure > >> >> >> the support of TLS in globals.yml and to deploy. > >> >> >> > >> >> >> I am facing a problem, my old certificate has expired, I want to > use > >> a > >> >> >> self-signed certificate. > >> >> >> I backported my servers to an older date, then generated a > >> self-signed > >> >> >> certificate using kolla, but the deploy/reconfigure won't work, > they > >> >> say : > >> >> >> > >> >> >> self._sslobj.do_handshake()\n File > \"/usr/lib64/python3.6/ssl.py\", > >> >> line > >> >> >> 648, in do_handshakeself._sslobj.do_handshake()\nssl.SSLError: > [SSL: > >> >> >> CERTIFICATE_VERIFY_FAILED certificate verify failed > >> >> >> > >> >> >> PS : in my globals.yml i have : *kolla_verify_tls_backend: "yes"* > >> >> >> > >> >> >> Regards. > >> >> >> > >> >> > >> >> > >> >> > >> >> > >> >> > >> > >> > >> > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Tue Nov 15 15:54:31 2022 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Tue, 15 Nov 2022 15:54:31 +0000 Subject: [mistral][keystone] Updating credentials and trusts Message-ID: Hey team, Mistral is using keystone "trusts" in order to perform action on behalf of another customer. If I understand correctly the trust mechanism in keystone, it's related to a specific user_id. Imagine I want to update the user (and password) of the mistral service (mistral-api), then the user ID may change, so the trust will be broken. Is there any way to update the user of mistral-api without breaking the trusts? One answer is to not change the user, but only the password. But doing so will break my service for a moment (while I update the passwords in both services - keystone and mistral). Is there any other option? Regards, Arnaud - OVHcloud. From cboylan at sapwetik.org Tue Nov 15 17:02:12 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 15 Nov 2022 09:02:12 -0800 Subject: Kolla Ansible on Ubuntu 20.04 - cloud-init & other network issues In-Reply-To: References: Message-ID: On Tue, Nov 15, 2022, at 6:14 AM, Tobias McNulty wrote: > As an update, I tried the non-HWE kernel with the same result. Could it > be a hardware/driver issue with the 10G NICs? It's so repeatable. 
I'll > look into finding some other hardware to test with. > > Has anyone else experienced such a complete failure with cloud-init > and/or security groups, and do you have any advice on how I might > continue to debug this? I'm not sure this will be helpful since you seem to have narrowed down the issue to VM networking, but here are some of the things that I do when debugging boot time VM setup failures: * Use config drive instead of metadata service. The metadata service hasn't always been reliable. * Bake information like DHCP config for interfaces and user ssh keys into an image and boot that. This way you don't need to rely on actions taken at boot time. * Use a different boot time configurator tool. Glean is the one the OpenDev team uses for test nodes. When I debug things there I tend to test with cloud-init to compare glean behavior. But you can do this in reverse. Again, I'm not sure this is helpful in this specific instance. But thought I'd send it out anyway to help those who may land here through Google search in the future. > > Many thanks, > Tobias From gmann at ghanshyammann.com Tue Nov 15 17:04:14 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 15 Nov 2022 09:04:14 -0800 Subject: [all][openstack-dev][ptls] Migrating devstack jobs to Jammy (Ubuntu LTS 22.04) In-Reply-To: References: Message-ID: <1847c3f9af5.ba43e35e213379.1687695354888052685@ghanshyammann.com> ---- On Thu, 13 Oct 2022 12:52:03 -0700 Dmitriy Rabotyagov wrote --- > Hi everyone, > > According to a 2023.1 community-wide goal [1], base-jobs including but [....] . On R-18, which is the first 2023.1 milestone that will happen on > the 18th of November 2022, base-jobs patches mentioned in step 1 will > be merged. Please ensure you have verified compatibility for your > projects and landed the required changes if any were needed before > this date otherwise, they might fail. Hello Everyone, The deadline for switching the CI/CD to Ubuntu Jammy (22.04) is approaching which is after 3 days (Nov 18). We will merge the OpenStack tox base, devstack, and tempest base jobs patches migrating them to Jammy on Nov 18(these will migrate most of the jobs to run on Jammy). Currently, there are two known failures, feel free to add more failures if you know and have not yet been fixed in the below etherpad https://etherpad.opendev.org/p/migrate-to-jammy 1. swift: https://bugs.launchpad.net/swift/+bug/1996627 2. devstack-plugin-ceph: https://bugs.launchpad.net/devstack-plugin-ceph/+bug/1996628 If projects need more time to fix the bugs then they can pin the nodeset to the focal for time being and fix them asap. -gmann > > Please, do not hesitate to raise any questions or concerns. 
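For projects that need the temporary pin mentioned above, a job-level nodeset override is usually enough. A sketch follows; the job name and parent are placeholders, and openstack-single-node-focal is the commonly used single-node Focal nodeset, so check what your project's jobs actually inherit from:

    - job:
        name: myproject-devstack-job
        parent: devstack-tempest
        nodeset: openstack-single-node-focal

Reverting the override once the Jammy incompatibility is fixed keeps the project aligned with the community-wide goal.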
> > > [1] https://governance.openstack.org/tc/goals/selected/migrate-ci-jobs-to-ubuntu-jammy.html > [2] https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/861116 > https://review.opendev.org/c/openstack/tempest/+/861110 > https://review.opendev.org/c/openstack/devstack/+/860795 > [3] https://review.opendev.org/c/openstack/nova/+/861111 > [4] https://etherpad.opendev.org/p/migrate-to-jammy > > From smooney at redhat.com Tue Nov 15 17:27:11 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 15 Nov 2022 17:27:11 +0000 Subject: Kolla Ansible on Ubuntu 20.04 - cloud-init & other network issues In-Reply-To: References: Message-ID: <1639e0eb3f9ed24067caae1a6816d8a107605305.camel@redhat.com> On Tue, 2022-11-15 at 09:02 -0800, Clark Boylan wrote: > On Tue, Nov 15, 2022, at 6:14 AM, Tobias McNulty wrote: > > As an update, I tried the non-HWE kernel with the same result. Could it > > be a hardware/driver issue with the 10G NICs? It's so repeatable. I'll > > look into finding some other hardware to test with. > > > > Has anyone else experienced such a complete failure with cloud-init > > and/or security groups, and do you have any advice on how I might > > continue to debug this? > > I'm not sure this will be helpful since you seem to have narrowed down the issue to VM networking, but here are some of the things that I do when debugging boot time VM setup failures: > > * Use config drive instead of metadata service. The metadata service hasn't always been reliable. > * Bake information like DHCP config for interfaces and user ssh keys into an image and boot that. This way you don't need to rely on actions taken at boot time. > * Use a different boot time configurator tool. Glean is the one the OpenDev team uses for test nodes. When I debug things there I tend to test with cloud-init to compare glean behavior. But you can do this in reverse. > > Again, I'm not sure this is helpful in this specific instance. But thought I'd send it out anyway to help those who may land here through Google search in the future. one thing that you shoudl check in addtion to considering ^ is make sure that the nova api is configured to use memcache. cloud init only retries request until the first request succceds. once the first request works it assumes that the rest will. if you are using a loadbalance and multipel nova-metadtaa-api process without memcache, and it take more then 10-30 seconds(cant recall how long cloud-init waits) to build the metadatta respocnce then cloud init can fail. basically if the second request need to rebuild everythign again because its not in a shared cache( memcache) then teh request can time out and cloud init wont try again. > > > > > Many thanks, > > Tobias > From andreocferreira at gmail.com Tue Nov 15 23:00:47 2022 From: andreocferreira at gmail.com (=?UTF-8?Q?Andr=C3=A9_Ferreira?=) Date: Tue, 15 Nov 2022 23:00:47 +0000 Subject: Issue installing OpenStack in CentOS VMs - cannot launch instances Message-ID: Hello, I'm trying to setup an openstack cluster by following the instructions on https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-yoga I'm using two CentOS8 VMs: - VM1: controller node - VM2: compute node After installing all the minimum services, I've tried to create a server instance but it's failing. 
>From the logs, looks like that nova is not able to find a compute node to launch the instance: 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager [req-c1aa668e-73db-4799-baa4-d782ec5986e9 e6529a38880d4efcb55308277aeabb88 6a857bb3fb7f47849ff5a11d97968344 - default default] Failed to schedule i host was found. Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 241, in inner return func(*args, **kwargs) File "/usr/lib/python3.6/site-packages/nova/scheduler/manager.py", line 209, in select_destinations raise exception.NoValidHost(reason="") nova.exception.NoValidHost: No valid host was found. 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager Traceback (most recent call last): 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1549, in schedule_and_build_instances 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager instance_uuids, return_alternates=True) 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 910, in _schedule_instances 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager return_alternates=return_alternates) 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File "/usr/lib/python3.6/site-packages/nova/scheduler/client/query.py", line 42, in select_destinations 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager instance_uuids, return_objects, return_alternates) 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File "/usr/lib/python3.6/site-packages/nova/scheduler/rpcapi.py", line 160, in select_destinations 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager return cctxt.call(ctxt, 'select_destinations', **msg_args) 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 192, in call 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager retry=self.retry, transport_options=self.transport_options) 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager transport_options=transport_options) 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 691, in send 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager transport_options=transport_options) 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager raise result 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager nova.exception_Remote.NoValidHost_Remote: No valid host was found. 
2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager Traceback (most recent call last): 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 241, in inner 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager return func(*args, **kwargs) 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File "/usr/lib/python3.6/site-packages/nova/scheduler/manager.py", line 209, in select_destinations 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager raise exception.NoValidHost(reason="") 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager nova.exception.NoValidHost: No valid host was found >From nova-scheduler: 2022-11-15 16:54:11.797 3978 INFO nova.scheduler.manager [req-c1aa668e-73db-4799-baa4-d782ec5986e9 e6529a38880d4efcb55308277aeabb88 6a857bb3fb7f47849ff5a11d97968344 - default default] Got no allocation can cient resources or a temporary occurrence as compute nodes start up. 2022-11-15 16:54:23.160 3978 DEBUG oslo_service.periodic_task [req-d477e747-537b-48b2-8913-ef84447d5a21 - - - - -] Running periodic task SchedulerManager._discover_hosts_in_cells run_periodic_tasks /usr/li 2022-11-15 16:54:23.169 3978 DEBUG oslo_concurrency.lockutils [req-4dbefd76-0b81-47d9-b6cf-382f43e3505e - - - - -] Lock "79648906-0ea8-4672-8c5f-73d5998a7b73" acquired by "nova.context.set_target_cell. From gmann at ghanshyammann.com Wed Nov 16 03:01:30 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 15 Nov 2022 19:01:30 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2022 Nov 16 at 1600 UTC In-Reply-To: <18477f8041d.1192d08b1125527.821139533984080728@ghanshyammann.com> References: <18477f8041d.1192d08b1125527.821139533984080728@ghanshyammann.com> Message-ID: <1847e626eec.b71c489b234236.2323728493118775762@ghanshyammann.com> Hello Everyone, Below is the agenda for the TC meeting scheduled on Nov 16 at 1600 UTC. Location:' IRC #openstack-tc Details: https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting * Roll call * Follow up on past action items * Gate health check * T2023.1 TC tracker checks: ** https://etherpad.opendev.org/p/tc-2023.1-tracker * TC stop using storyboard? ** https://storyboard.openstack.org/#!/project/923 * Recurring tasks check ** Bare 'recheck' state *** https://etherpad.opendev.org/p/recheck-weekly-summary * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 14 Nov 2022 13:07:33 -0800 Ghanshyam Mann wrote --- > Hello Everyone, > > The technical Committee's next weekly meeting is scheduled for 2022 Nov 16, at 1600 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Tuesday, Nov 15 at 2100 UTC. 
> > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > > From eblock at nde.ag Wed Nov 16 08:03:33 2022 From: eblock at nde.ag (Eugen Block) Date: Wed, 16 Nov 2022 08:03:33 +0000 Subject: Issue installing OpenStack in CentOS VMs - cannot launch instances In-Reply-To: Message-ID: <20221116080333.Horde.zq49VHkaQBgXxJ45wOXM3xU@webmail.nde.ag> Hi, these lines indicate that the compute node has been discovered successfully: [root at controller0 ~]# /bin/sh -c "nova-manage cell_v2 list_hosts" +-----------+--------------------------------------+---------------------+ | Cell Name | Cell UUID | Hostname | +-----------+--------------------------------------+---------------------+ | cell1 | 79648906-0ea8-4672-8c5f-73d5998a7b73 | compute0.os.lab.com | +-----------+--------------------------------------+---------------------+ The "No valid host was found" message can mean many things, you could turn on debug logs for nova and see what exactly it complains about. Zitat von Andr? Ferreira : > Hello, > > I'm trying to setup an openstack cluster by following the instructions on > https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-yoga > > I'm using two CentOS8 VMs: > - VM1: controller node > - VM2: compute node > > After installing all the minimum services, I've tried to create a server > instance but it's failing. > > From the logs, looks like that nova is not able to find a compute node to > launch the instance: > > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager > [req-c1aa668e-73db-4799-baa4-d782ec5986e9 e6529a38880d4efcb55308277aeabb88 > 6a857bb3fb7f47849ff5a11d97968344 - default default] Failed to schedule i > host was found. > Traceback (most recent call last): > > File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line > 241, in inner > return func(*args, **kwargs) > > File "/usr/lib/python3.6/site-packages/nova/scheduler/manager.py", line > 209, in select_destinations > raise exception.NoValidHost(reason="") > > nova.exception.NoValidHost: No valid host was found. 
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager Traceback (most > recent call last): > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File > "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1549, in > schedule_and_build_instances > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager instance_uuids, > return_alternates=True) > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File > "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 910, in > _schedule_instances > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager > return_alternates=return_alternates) > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File > "/usr/lib/python3.6/site-packages/nova/scheduler/client/query.py", line 42, > in select_destinations > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager instance_uuids, > return_objects, return_alternates) > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File > "/usr/lib/python3.6/site-packages/nova/scheduler/rpcapi.py", line 160, in > select_destinations > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager return > cctxt.call(ctxt, 'select_destinations', **msg_args) > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File > "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 192, > in call > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager retry=self.retry, > transport_options=self.transport_options) > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File > "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, > in _send > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager > transport_options=transport_options) > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File > "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", > line 691, in send > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager > transport_options=transport_options) > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File > "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", > line 681, in _send > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager raise result > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager > nova.exception_Remote.NoValidHost_Remote: No valid host was found. 
> 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager Traceback (most > recent call last): > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File > "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 241, > in inner > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager return > func(*args, **kwargs) > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager File > "/usr/lib/python3.6/site-packages/nova/scheduler/manager.py", line 209, in > select_destinations > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager raise > exception.NoValidHost(reason="") > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager > 2022-11-15 16:54:11.889 3963 ERROR nova.conductor.manager > nova.exception.NoValidHost: No valid host was found > > > From nova-scheduler: > 2022-11-15 16:54:11.797 3978 INFO nova.scheduler.manager > [req-c1aa668e-73db-4799-baa4-d782ec5986e9 e6529a38880d4efcb55308277aeabb88 > 6a857bb3fb7f47849ff5a11d97968344 - default default] Got no allocation can > cient resources or a temporary occurrence as compute nodes start up. > 2022-11-15 16:54:23.160 3978 DEBUG oslo_service.periodic_task > [req-d477e747-537b-48b2-8913-ef84447d5a21 - - - - -] Running periodic task > SchedulerManager._discover_hosts_in_cells run_periodic_tasks /usr/li > 2022-11-15 16:54:23.169 3978 DEBUG oslo_concurrency.lockutils > [req-4dbefd76-0b81-47d9-b6cf-382f43e3505e - - - - -] Lock > "79648906-0ea8-4672-8c5f-73d5998a7b73" acquired by > "nova.context.set_target_cell. .000s inner > /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:390 > 2022-11-15 16:54:23.170 3978 DEBUG oslo_concurrency.lockutils > [req-4dbefd76-0b81-47d9-b6cf-382f43e3505e - - - - -] Lock > "79648906-0ea8-4672-8c5f-73d5998a7b73" "released" by > "nova.context.set_target_cell. .001s inner > /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405 > 2022-11-15 16:54:59.097 3977 DEBUG oslo_service.periodic_task > [req-bb4ca11b-2089-414c-b8d5-6c45aa58c1bf - - - - -] Running periodic task > SchedulerManager._discover_hosts_in_cells run_periodic_tasks /usr/li > 2022-11-15 16:54:59.112 3977 DEBUG oslo_concurrency.lockutils > [req-cf62a2b4-6721-4f98-a97d-7b51e58a34b3 - - - - -] Lock > "79648906-0ea8-4672-8c5f-73d5998a7b73" acquired by > "nova.context.set_target_cell. .000s inner > /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:390 > 2022-11-15 16:54:59.113 3977 DEBUG oslo_concurrency.lockutils > [req-cf62a2b4-6721-4f98-a97d-7b51e58a34b3 - - - - -] Lock > "79648906-0ea8-4672-8c5f-73d5998a7b73" "released" by > "nova.context.set_target_cell. .001s inner > /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:405 > > > The problem is that the compute node is not placed on the cell. The list of > hypervisors is also empty. > I've searched online but I don't find a way to fix this. 
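As a starting point for Eugen's suggestion above to turn on debug logging and see what the scheduler actually rejects, a sketch (it assumes the osc-placement client plugin is installed; the resource amounts are placeholders):

  # /etc/nova/nova.conf on the controller, then restart nova-scheduler
  [DEFAULT]
  debug = True

  # check that the compute node registered a resource provider with inventory
  openstack resource provider list
  openstack resource provider inventory list <provider-uuid>
  openstack allocation candidate list --resource VCPU=1 --resource MEMORY_MB=512 --resource DISK_GB=1
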
> > (admin-rc) [andrefe at controller0 ~]$ openstack compute service list > +--------------------------------------+----------------+------------------------+----------+---------+-------+----------------------------+ > | ID | Binary | Host | Zone | Status | State | Updated At | > +--------------------------------------+----------------+------------------------+----------+---------+-------+----------------------------+ > | fa781cbb-732c-43db-8c9f-4c31bd73bbd2 | nova-scheduler | > controller0.os.lab.com | internal | enabled | up | > 2022-11-15T15:59:40.000000 | > | 106e9574-5ae3-45e2-a7c2-09ce3624a1d6 | nova-conductor | > controller0.os.lab.com | internal | enabled | up | > 2022-11-15T15:59:40.000000 | > | 28072820-609a-4066-abc8-affea51c3600 | nova-compute | compute0.os.lab.com > | nova | enabled | up | 2022-11-15T15:59:41.000000 | > +--------------------------------------+----------------+------------------------+----------+---------+-------+----------------------------+ > > [root at controller0 ~]# /bin/sh -c "nova-manage cell_v2 list_cells" nova > +-------+--------------------------------------+-------------------------------------------+--------------------------------------------------+----------+ > | Name | UUID | Transport URL | Database Connection | Disabled | > +-------+--------------------------------------+-------------------------------------------+--------------------------------------------------+----------+ > | cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | > mysql+pymysql://nova:****@controller0/nova_cell0 | False | > | cell1 | 79648906-0ea8-4672-8c5f-73d5998a7b73 | > rabbit://openstack:****@controller0:5672/ | > mysql+pymysql://nova:****@controller0/nova | False | > +-------+--------------------------------------+-------------------------------------------+--------------------------------------------------+----------+ > > [root at controller0 ~]# /bin/sh -c "nova-manage cell_v2 list_hosts" > +-----------+--------------------------------------+---------------------+ > | Cell Name | Cell UUID | Hostname | > +-----------+--------------------------------------+---------------------+ > | cell1 | 79648906-0ea8-4672-8c5f-73d5998a7b73 | compute0.os.lab.com | > +-----------+--------------------------------------+---------------------+ > > [root at controller0 ~]# /bin/sh -c "nova-manage cell_v2 discover_hosts > --verbose" nova > Found 2 cell mappings. > Skipping cell0 since it does not contain hosts. > Getting computes from cell 'cell1': 79648906-0ea8-4672-8c5f-73d5998a7b73 > Found 0 unmapped computes in cell: 79648906-0ea8-4672-8c5f-73d5998a7b73 > > (admin-rc) [andrefe at controller0 ~]$ nova hypervisor-list > +----+---------------------+-------+--------+ > | ID | Hypervisor hostname | State | Status | > +----+---------------------+-------+--------+ > +----+---------------------+-------+--------+ > > Any idea on how I can fix this and add the compute to the cell? > > Thanks. From eblock at nde.ag Wed Nov 16 08:15:48 2022 From: eblock at nde.ag (Eugen Block) Date: Wed, 16 Nov 2022 08:15:48 +0000 Subject: [kolla-ansible][Yoga] Install with self-signed certificate In-Reply-To: References: <20221111201329.Horde.5Jstm8Mvo6YfTcBDJsTx7T3@webmail.nde.ag> <20221114132143.Horde.9V0vWb4JClSAJIGN1QXAfBX@webmail.nde.ag> <20221115104911.Horde.btGc5gANadpErM4Tmd9GuiO@webmail.nde.ag> Message-ID: <20221116081548.Horde.Yot1odWZ9C_0jce9mLVBeRS@webmail.nde.ag> Hi, so the curl output looks correct. When you say you backported your servers, what exactly does that mean? 
If the servers are set back in time the commercial certificate CA would still refuse validation, wouldn't it? Or am I misunderstanding things here? I'm just guessing here because I'm not familiar with kolla. Is there a debug mode for the deployment to see which certs exactly it tries to validate? Is there a way to manually deploy the certs and restart the required services? Zitat von wodel youchi : > Hi, > > This is the server certificate generated by kolla > > # openssl x509 -noout -text -in *backend-cert.pem* > Certificate: > Data: > Version: 3 (0x2) > Serial Number: > 36:c4:48:24:e7:88:c4:f0:dd:32:b3:d8:e9:b7:c5:17:5c:4e:85:ff > Signature Algorithm: sha256WithRSAEncryption > > > > *Issuer: CN = KollaTestCA Validity Not Before: Oct 14 > 13:13:04 2022 GMT Not After : Feb 26 13:13:04 2024 GMT* > Subject: C = US, ST = NC, L = RTP, OU = kolla > Subject Public Key Info: > Public Key Algorithm: rsaEncryption > RSA Public-Key: (2048 bit) > Modulus: > 00:b9:f6:f9:83:e6:8c:de:fb:3e:6f:df:23:b9:46: > 53:04:52:7a:45:44:6e:9b:cb:cc:30:ab:df:bc:b2: > .... > Exponent: 65537 (0x10001) > X509v3 extensions: > X509v3 Subject Alternative Name: > * IP Address:20.3.0.23, IP Address:20.3.0.27, IP > Address:20.3.0.31* > > And this is the CA certificate generated by Kolla > # openssl x509 -noout -text -in ca*.pem > Certificate: > Data: > Version: 3 (0x2) > Serial Number: > 66:c9:c2:c8:fa:45:e7:48:26:a1:48:63:b6:a9:27:1d:dc:74:4a:c3 > Signature Algorithm: sha256WithRSAEncryption > > > > > * Issuer: CN = KollaTestCA Validity Not Before: > Oct 14 13:12:59 2022 GMT Not After : Aug 3 13:12:59 2025 GMT > Subject: CN = KollaTestCA* > Subject Public Key Info: > Public Key Algorithm: rsaEncryption > RSA Public-Key: (4096 bit) > Modulus: > 00:ce:6f:91:5a:bf:81:49:b6:eb:d9:99:60:bc:93: > 80:ab:59:bb:20:09:33:b5:b0:75:ba:50:90:87:93: > > > > *# openssl verify -verbose -CAfile ca.pem backend-cert.pembackend-cert.pem: > OK* > > > From the keystone container I got this : > *(keystone)[root at controllera /]# curl -v > https://dashint.example.com:5000/v3 * > * Trying 20.3.0.1... > * TCP_NODELAY set > * *Connected to dashint.example.com (20.3.0.1) > port 5000 (#0)* > * ALPN, offering h2 > * ALPN, offering http/1.1 > > ** successfully set certificate verify locations:* CAfile: > /etc/pki/tls/certs/ca-bundle.crt* > CApath: none > * TLSv1.3 (OUT), TLS handshake, Client hello (1): > * TLSv1.3 (IN), TLS handshake, Server hello (2): > * TLSv1.3 (IN), TLS handshake, [no content] (0): > * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): > * TLSv1.3 (IN), TLS handshake, [no content] (0): > * TLSv1.3 (IN), TLS handshake, Certificate (11): > * TLSv1.3 (IN), TLS handshake, [no content] (0): > * TLSv1.3 (IN), TLS handshake, CERT verify (15): > * TLSv1.3 (IN), TLS handshake, [no content] (0): > * TLSv1.3 (IN), TLS handshake, Finished (20): > * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): > * TLSv1.3 (OUT), TLS handshake, [no content] (0): > * TLSv1.3 (OUT), TLS handshake, Finished (20): > * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 > * ALPN, server did not agree to a protocol > * Server certificate: > > > > ** subject: C=US; ST=NC; L=RTP; OU=kolla* start date: Oct 14 13:13:03 > 2022 GMT* expire date: Oct 14 13:13:03 2023 GMT* subjectAltName: host > "dashint.example.com " matched cert's > "dashint.example.com "* > * issuer: CN=KollaTestCA > * SSL certificate verify ok. 
> * TLSv1.3 (OUT), TLS app data, [no content] (0): >> GET /v3 HTTP/1.1 >> Host: dashint.example.com:5000 >> User-Agent: curl/7.61.1 >> Accept: */* >> > * TLSv1.3 (IN), TLS handshake, [no content] (0): > * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): > * TLSv1.3 (IN), TLS handshake, [no content] (0): > * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): > * TLSv1.3 (IN), TLS app data, [no content] (0): > *< HTTP/1.1 200 OK* > < date: Sat, 22 Oct 2022 15:39:22 GMT > < server: Apache > < content-length: 262 > < vary: X-Auth-Token > < x-openstack-request-id: req-88c293c3-7efb-4a12-ac06-21f90e1fdc10 > < content-type: application/json > < > * Connection #0 to host dashint.example.com left intact > {"version": {"id": "v3.14", "status": "stable", "updated": > "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": " > https://dashint.example.com:5000/v3/"}], "media-types": [{"base": > "application/json", "type": > "application/vnd.openstack.identity-v3+json"}]}}curl ( > https://dashint.example.com:5000/v3): response: 200, time: 0.012871, size: > 262 > > > When deploying with the self certificate it's in this task on the first > controller where the problem is triggered : > > > > *TASK [service-ks-register : keystone | Creating services > module_name=os_keystone_service, module_args={'name': '{{ item.name > }}', 's$rvice_type': '{{ item.type }}', 'description': > '{{ item.description }}', 'region_name': '{{ > service_ks_register_region_name }}', 'au$h': '{{ service_ks_register_auth > }}', 'interface': '{{ service_ks_register_interface }}', 'cacert': '{{ > service_ks_cacert }}'}] **** > FAILED - RETRYING: [controllera]: keystone | Creating services (5 retries > left). > FAILED - RETRYING: [controllera]: keystone | Creating services (4 retries > left). > FAILED - RETRYING: [controllera]: keystone | Creating services (3 retries > left). > FAILED - RETRYING: [controllera]: keystone | Creating services (2 retries > left). > FAILED - RETRYING: [controllera]: keystone | Creating services (1 retries > left).failed: [controllera] (item={'name': 'keystone', 'service_type': > 'identity'}) => {"action": "os_keystone_service", "ansible_loop_var" > : "item", "attempts": 5, "changed": false, "item": {"description": > "Openstack Identity Service", "endpoints": [{"interface": "admin", > "url": "https://dashint.example.com:35357"}, {"interface": "internal", > "url": "https://dashint.example.com:5000"}, {"interface": > "public", "url": "https://dash.example.com:5000"}], "name": "keystone", > "type": "identity"}, "module_stderr": "Failed to discover > available identity versions when contacting > https://dashint.example.com:35357. 
Attempting to parse version from > URL.\nTraceback (mo > st recent call last):\n File > \"/opt/ansible/lib/python3.6/site-packages/urllib3/connectionpool.py\", > line 710, in urlopen\n chunk > ed=chunked,\n File > \"/opt/ansible/lib/python3.6/site-packages/urllib3/connectionpool.py\", > line 386, in _make_request\n self._val > idate_conn(conn)\n File > \"/opt/ansible/lib/python3.6/site-packages/urllib3/connectionpool.py\", > line 1040, in _validate_conn\n co > nn.connect()\n File > \"/opt/ansible/lib/python3.6/site-packages/urllib3/connection.py\", line > 426, in connect\n tls_in_tls=tls_in_ > tls,\n File > \"/opt/ansible/lib/python3.6/site-packages/urllib3/util/ssl_.py\", line > 450, in ssl_wrap_socket\n sock, context, tls_ > in_tls, server_hostname=server_hostname\n File > \"/opt/ansible/lib/python3.6/site-packages/urllib3/util/ssl_.py\", line > 493, in _ssl_ > wrap_socket_impl\n return ssl_context.wrap_socket(sock, > server_hostname=server_hostname)\n File \"/usr/lib64/python3.6/ssl.py\", > line 365, in wrap_socket\n _context=self, _session=session)\n File > \"/usr/lib64/python3.6/ssl.py\", line 776, in __init__\n se > lf.do_handshake()\n File \"/usr/lib64/python3.6/ssl.py\", line 1036, in > do_handshake\n self._sslobj.do_handshake()\n File \"/usr > /lib64/python3.6/ssl.py\", line 648, in do_handshake\n > *self._sslobj.do_handshake()\nssl.SSLError: [SSL: > CERTIFICATE_VERIFY_FAILED] certificate verify failed* > (_ssl.c:897)\n\nDuring handling of the above exception, another exception > occurred:\n\nTraceback (most rec > ent call last):\n File > \"/opt/ansible/lib/python3.6/site-packages/requests/adapters.py\", line > 450, in send\n timeout=timeout > > > I don't know what this task is, the container is running, what does > mean *service-ks-register > : keystone* ? > > Regards. > > Le mar. 15 nov. 2022 ? 11:54, Eugen Block a ?crit : > >> Okay, I understand. Did you verify if the self-signed cert contains >> everything you require as I wrote in the previous email? Can you paste >> the openssl command output (and mask everything non-public)? >> >> Zitat von wodel youchi : >> >> > Hi, >> > Thanks again, >> > >> > About your question : so with the previous cert it worked but only >> because >> > you had the verification set to false, correct? >> > The answer is : Not exactly. >> > >> > Let me explain, I deployed using a commercial valid certificate, but I >> > configured kolla_verify_tls_backend to false exactly to avoid the >> problem I >> > am facing now. From what I have understood : >> > kolla_verify_tls_backend=false, means : accept the connection even if the >> > verification fails, but apparently it is not the case. >> > And kolla_copy_ca_into_containers was positioned to yes from the >> beginning. >> > >> > What happened is that my certificate expired, and now I am searching for >> a >> > way to install a self-signed certificate while waiting to get the new >> > certificate. >> > >> > I backported the platform a few days before the expiration of the >> > certificate, then I generated the self-signed certificate and I tried to >> > deploy it but without success. >> > >> > Regards. >> > >> > Le lun. 14 nov. 2022 ? 14:21, Eugen Block a ?crit : >> > >> >> Hi, >> >> >> >> > First I want to correct something, the *kolla_verify_tls_backend* was >> >> > positioned to *false* from the beginning, while doing the first >> >> deployment >> >> > with the commercial certificate. >> >> >> >> so with the previous cert it worked but only because you had the >> >> verification set to false, correct? 
>> >> >> >> > What do you mean by using openssl? Do you mean to execute the command >> >> > inside a container and try to connect to keystone? If yes what is the >> >> > correct command? >> >> >> >> That's one example, yes. Is apache configured correctly to use the >> >> provided certs? In my manual deployment it looks like this (only the >> >> relevant part): >> >> >> >> control01:~ # cat /etc/apache2/vhosts.d/keystone-public.conf >> >> [...] >> >> SSLEngine On >> >> SSLCertificateFile /etc/ssl/servercerts/control01.fqdn.cert.pem >> >> SSLCACertificateFile >> /etc/pki/trust/anchors/RHN-ORG-TRUSTED-SSL-CERT >> >> SSLCertificateKeyFile /etc/ssl/private/control01.fqdn.key.pem >> >> SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown >> >> >> >> # HTTP Strict Transport Security (HSTS) enforces that all >> >> communications >> >> # with a server go over SSL. This mitigates the threat from attacks >> >> such >> >> # as SSL-Strip which replaces links on the wire, stripping away >> >> https prefixes >> >> # and potentially allowing an attacker to view confidential >> >> information on the >> >> # wire >> >> Header add Strict-Transport-Security "max-age=15768000" >> >> [...] >> >> >> >> and then test it with: >> >> >> >> ---snip--- >> >> control01:~ # curl -v https://control.fqdn:5000/v3 >> >> [...] >> >> * ALPN, offering h2 >> >> * ALPN, offering http/1.1 >> >> * TLSv1.3 (OUT), TLS handshake, Client hello (1): >> >> * TLSv1.3 (IN), TLS handshake, Server hello (2): >> >> * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): >> >> * TLSv1.3 (IN), TLS handshake, Certificate (11): >> >> * TLSv1.3 (IN), TLS handshake, CERT verify (15): >> >> * TLSv1.3 (IN), TLS handshake, Finished (20): >> >> * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): >> >> * TLSv1.3 (OUT), TLS handshake, Finished (20): >> >> * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 >> >> * ALPN, server accepted to use http/1.1 >> >> * Server certificate: >> >> [...] >> >> * subjectAltName: host "control.fqdn" matched cert's "*.fqdn" >> >> * issuer: ******* >> >> * SSL certificate verify ok. >> >> > GET /v3 HTTP/1.1 >> >> > Host: control.fqdn:5000 >> >> > User-Agent: curl/7.66.0 >> >> > Accept: */* >> >> > >> >> * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): >> >> * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): >> >> * old SSL session ID is stale, removing >> >> * Mark bundle as not supporting multiuse >> >> < HTTP/1.1 200 OK >> >> [...] >> >> * Connection #0 to host control.fqdn left intact >> >> {"version": {"id": "v3.14", "status": "stable", "updated": >> >> "2020-04-07T00:00:00Z", "links": [{"rel": "self", "href": >> >> "https://control.fqdn:5000/v3/"}], "media-types": [{"base": >> >> "application/json", "type": >> >> "application/vnd.openstack.identity-v3+json"}]}} >> >> ---snip--- >> >> >> >> To check the created certificate you could run something like this: >> >> >> >> openssl x509 -in /etc/ssl/servercerts/control01.fqdn.cert.pem -text >> -noout >> >> >> >> and see if the SANs match your control node(s) IP addresses and FQDNs. >> >> >> >> Zitat von wodel youchi : >> >> >> >> > Hi >> >> > >> >> > Thanks for your help. >> >> > >> >> > First I want to correct something, the *kolla_verify_tls_backend* was >> >> > positioned to *false* from the beginning, while doing the first >> >> deployment >> >> > with the commercial certificate. >> >> > >> >> > And yes I have *kolla_copy_ca_into_containers* positioned to *yes* >> from >> >> the >> >> > beginning. 
And I can see in the nodes that there is a directory named >> >> > certificates in every module's directory in /etc/kolla >> >> > >> >> > What do you mean by using openssl? Do you mean to execute the command >> >> > inside a container and try to connect to keystone? If yes what is the >> >> > correct command? >> >> > >> >> > It seems like something is missing to tell the client side to ignore >> the >> >> > certificate validity, something like the --insecure parameter in the >> >> > openstack cli. >> >> > >> >> > Regards. >> >> > >> >> > On Fri, Nov 11, 2022, 21:21 Eugen Block wrote: >> >> > >> >> >> Hi, >> >> >> >> >> >> I'm not familiar with kolla, but the docs also mention this option: >> >> >> >> >> >> kolla_copy_ca_into_containers: "yes" >> >> >> >> >> >> As I understand it the CA cert is required within the containers so >> >> >> they can trust the self-signed certs. At least that's how I configure >> >> >> it in a manually deployed openstack cloud. Do you have that option >> >> >> enabled? If it is enabled, did you verify it with openssl tools? >> >> >> >> >> >> Regards, >> >> >> Eugen >> >> >> >> >> >> Zitat von wodel youchi : >> >> >> >> >> >> > Some help please. >> >> >> > >> >> >> > On Tue, Nov 8, 2022, 14:44 wodel youchi >> >> wrote: >> >> >> > >> >> >> >> Hi, >> >> >> >> >> >> >> >> To deploy Openstack with a self-signed certificate, the >> documentation >> >> >> says >> >> >> >> to generate the certificates using kolla-ansible certificates, to >> >> >> configure >> >> >> >> the support of TLS in globals.yml and to deploy. >> >> >> >> >> >> >> >> I am facing a problem, my old certificate has expired, I want to >> use >> >> a >> >> >> >> self-signed certificate. >> >> >> >> I backported my servers to an older date, then generated a >> >> self-signed >> >> >> >> certificate using kolla, but the deploy/reconfigure won't work, >> they >> >> >> say : >> >> >> >> >> >> >> >> self._sslobj.do_handshake()\n File >> \"/usr/lib64/python3.6/ssl.py\", >> >> >> line >> >> >> >> 648, in do_handshakeself._sslobj.do_handshake()\nssl.SSLError: >> [SSL: >> >> >> >> CERTIFICATE_VERIFY_FAILED certificate verify failed >> >> >> >> >> >> >> >> PS : in my globals.yml i have : *kolla_verify_tls_backend: "yes"* >> >> >> >> >> >> >> >> Regards. >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> From wodel.youchi at gmail.com Wed Nov 16 09:00:06 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Wed, 16 Nov 2022 10:00:06 +0100 Subject: [kolla-ansible][Yoga] Install with self-signed certificate In-Reply-To: <20221116081548.Horde.Yot1odWZ9C_0jce9mLVBeRS@webmail.nde.ag> References: <20221111201329.Horde.5Jstm8Mvo6YfTcBDJsTx7T3@webmail.nde.ag> <20221114132143.Horde.9V0vWb4JClSAJIGN1QXAfBX@webmail.nde.ag> <20221115104911.Horde.btGc5gANadpErM4Tmd9GuiO@webmail.nde.ag> <20221116081548.Horde.Yot1odWZ9C_0jce9mLVBeRS@webmail.nde.ag> Message-ID: "so the curl output looks correct. When you say you backported your servers, what exactly does that mean?" It means : Servers are set back in time before the expiration of the commercial certificate, this permits Openstack to work. When set back in time, the commercial certificate works because it is still valid. My idea is to reconfigure my openstack to use a self-signed certificate that covers one year for example, so I created a the self-signed certificate while my servers are in the past, this self-signed will hold one year, so if I can deploy it I can bring my servers back to the current time. Regards. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Nov 16 11:32:23 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 16 Nov 2022 11:32:23 +0000 Subject: [cinder] Bug report from 11-08-2022 to 11-16-2022 Message-ID: This is a bug report from 11-08-2022 to 11-16-2022. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- High - https://bugs.launchpad.net/cinderlib/+bug/1996738 "Cinderlib Gate broken in Zed." Unassigned. Invalid - https://bugs.launchpad.net/cinder/+bug/1990257 "[OpenStack Yoga] Creating a VM fails when stopping only one rabbitmq." Moved to Nova. Cheers, Sofia -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.acosta at luizalabs.com Wed Nov 16 11:43:21 2022 From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta) Date: Wed, 16 Nov 2022 08:43:21 -0300 Subject: [neutron-dynamic-routing] BGPspeaker LOOP Message-ID: Hey folks, Please, I have a question here, the bgpspeaker should only "learn" and not "advertise" the AS_PATHs via BGP, right? In my tests, I can see that it is learning routes from BGP neighbors. This behavior can cause an AS_PATH loop because the bgpspeaker learns back its own advertised routes, and I see a message like this in the logs: 2022-11-11 19:45:41.967 7220 ERROR bgpspeaker.peer [-] AS_PATH on UPDATE message has loops. Ignoring this message: BGPUpdate(len=91,nlri=[],path_attributes=[BGPPathAttributeMpReachNLRI(afi=2,flags=144,length=46,next_hop='2001:db7:1::1',nlri=[IP6AddrPrefix(addr='2001:db9:1234::',length=64)],safi=1,type=14), BGPPathAttributeOrigin(flags=64,length=1,type=1,value=0), BGPPathAttributeAsPath(flags=80,length=10,type=2,value=[[65001, 65000]])],total_path_attribute_len=68,type=2,withdrawn_routes=[],withdrawn_routes_len=0) This can be fixed by suppressing the neighbor route advertisement (using route-map export), but have I misunderstood how neutron-dymanic-routing works or do we have a possible bug here? Regards -- _?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?._ *?**?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Nov 16 11:56:47 2022 From: smooney at redhat.com (Sean Mooney) Date: Wed, 16 Nov 2022 11:56:47 +0000 Subject: [neutron-dynamic-routing] BGPspeaker LOOP In-Reply-To: References: Message-ID: On Wed, 2022-11-16 at 08:43 -0300, Roberto Bartzen Acosta wrote: > Hey folks, > > Please, I have a question here, the bgpspeaker should only "learn" and not > "advertise" the AS_PATHs via BGP, right? its been a while since i looked at it but in the past it did not supprot learning at all it just advertised the routes for the neutron netowrks > > In my tests, I can see that it is learning routes from BGP neighbors. 
This > behavior can cause an AS_PATH loop because the bgpspeaker learns back its > own advertised routes, and I see a message like this in the logs: > > 2022-11-11 19:45:41.967 7220 ERROR bgpspeaker.peer [-] AS_PATH on UPDATE > message has loops. Ignoring this message: > BGPUpdate(len=91,nlri=[],path_attributes=[BGPPathAttributeMpReachNLRI(afi=2,flags=144,length=46,next_hop='2001:db7:1::1',nlri=[IP6AddrPrefix(addr='2001:db9:1234::',length=64)],safi=1,type=14), > BGPPathAttributeOrigin(flags=64,length=1,type=1,value=0), > BGPPathAttributeAsPath(flags=80,length=10,type=2,value=[[65001, > 65000]])],total_path_attribute_len=68,type=2,withdrawn_routes=[],withdrawn_routes_len=0) > > This can be fixed by suppressing the neighbor route advertisement (using > route-map export), but have I misunderstood how neutron-dymanic-routing > works or do we have a possible bug here? > > Regards > From roberto.acosta at luizalabs.com Wed Nov 16 12:05:54 2022 From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta) Date: Wed, 16 Nov 2022 09:05:54 -0300 Subject: [neutron-dynamic-routing] BGPspeaker LOOP In-Reply-To: References: Message-ID: Sorry for the mistake, I meant, the bgpspeaker should only "*advertise*!" and not "learn" the AS_PATHs via BGP. Regards Em qua., 16 de nov. de 2022 ?s 08:57, Sean Mooney escreveu: > On Wed, 2022-11-16 at 08:43 -0300, Roberto Bartzen Acosta wrote: > > Hey folks, > > > > Please, I have a question here, the bgpspeaker should only "learn" and > not > > "advertise" the AS_PATHs via BGP, right? > > its been a while since i looked at it but in the past it did not supprot > learning at all > > it just advertised the routes for the neutron netowrks > > > > > In my tests, I can see that it is learning routes from BGP neighbors. > This > > behavior can cause an AS_PATH loop because the bgpspeaker learns back its > > own advertised routes, and I see a message like this in the logs: > > > > 2022-11-11 19:45:41.967 7220 ERROR bgpspeaker.peer [-] AS_PATH on UPDATE > > message has loops. Ignoring this message: > > > BGPUpdate(len=91,nlri=[],path_attributes=[BGPPathAttributeMpReachNLRI(afi=2,flags=144,length=46,next_hop='2001:db7:1::1',nlri=[IP6AddrPrefix(addr='2001:db9:1234::',length=64)],safi=1,type=14), > > BGPPathAttributeOrigin(flags=64,length=1,type=1,value=0), > > BGPPathAttributeAsPath(flags=80,length=10,type=2,value=[[65001, > > > 65000]])],total_path_attribute_len=68,type=2,withdrawn_routes=[],withdrawn_routes_len=0) > > > > This can be fixed by suppressing the neighbor route advertisement (using > > route-map export), but have I misunderstood how neutron-dymanic-routing > > works or do we have a possible bug here? > > > > Regards > > > > -- _?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?._ *?**?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wodel.youchi at gmail.com Wed Nov 16 12:19:29 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Wed, 16 Nov 2022 13:19:29 +0100 Subject: [kolla-ansible][Yoga] Deployment stuck In-Reply-To: References: Message-ID: Any ideas? No matter what I tried to do with ansible.conf, the same problem : - using one -v the deployment goes till the end - using more -v the deployment gets stuck somewhere. Here is my ansible.conf [defaults] host_key_checking=False pipelining=True forks=100 log_path=/home/deployer/myansiblelogs/log.txt display_args_to_stdout = True I commented on all of them without success. I get the same behavior. Regards Le mar. 25 oct. 2022 ? 16:06, wodel youchi a ?crit : > Hi, > > I think I found what causes the problem, but I don't understand why. > > I removed the verbosity, i.e I removed -vvv I only kept just one and I > disabled ANSIBLE_DEBUG variable, and voila the deployment went till the end. > First I suspected the tmux process, some kind of buffer overflow because > of the quantity of the logs, but then I connected to the VM's console and > it is the behavior. > > With one -v the process goes without problem, but if I put more -vvv it > gets stuck somewhere. > If someone can explain this to me!!!!??? > > > > Regards. > > Le lun. 24 oct. 2022 ? 14:00, wodel youchi a > ?crit : > >> Anyone???? >> >> Le lun. 24 oct. 2022 ? 07:53, wodel youchi a >> ?crit : >> >>> Hi, >>> >>> My setup is simple, it's an hci deployment composed of 3 controllers >>> nodes and 6 compute and storage nodes. >>> I am using ceph-ansible for deploying the storage part and the >>> deployment goes well. >>> >>> My base OS is Rocky Linux 8 fully updated. >>> >>> My network is composed of a 1Gb management network for OS, application >>> deployment and server management. And a 40Gb with LACP (80Gb) data network. >>> I am using vlans to segregate openstack networks. >>> >>> I updated both Xena and Yoga kolla-ansible package I updated several >>> times the container images (I am using a local registry). >>> >>> No matter how many times I tried to deploy it's the same behavior. The >>> setup gets stuck somewhere. >>> >>> I tried to deploy the core modules without SSL, I tried to use an older >>> kernel, I tried to use the 40Gb network to deploy, nothing works. The >>> problem is the lack of error if there was one it would have been a starting >>> point but I have nothing. >>> >>> Regards. >>> >>> On Sun, Oct 23, 2022, 00:42 wodel youchi wrote: >>> >>>> Hi, >>>> >>>> Here you can find the kolla-ansible *deploy *log with ANSIBLE_DEBUG=1 >>>> >>>> Regards. >>>> >>>> Le sam. 22 oct. 2022 ? 23:55, wodel youchi a >>>> ?crit : >>>> >>>>> Hi, >>>>> >>>>> I am trying to deploy a new platform using kolla-ansible Yoga and I am >>>>> trying to upgrade another platform from Xena to yoga. >>>>> >>>>> On both platforms the prechecks went well, but when I start the >>>>> process of deployment for the first and upgrade for the second, the process >>>>> gets stuck. >>>>> >>>>> I tried to tail -f /var/log/kolla/*/*.log but I can't get hold of the >>>>> cause. >>>>> >>>>> In the first platform, some services get deployed, and at some point >>>>> the script gets stuck, several times in the modprobe phase. 
>>>>> >>>>> In the second platform, the upgrade gets stuck on : >>>>> >>>>> Escalation succeeded >>>>> [204/1859] >>>>> <20.3.0.28> (0, b'\n{"path": "/etc/kolla/cron", "changed": false, >>>>> "diff": {"before": {"path": "/etc/kolla/cro >>>>> n"}, "after": {"path": "/etc/kolla/cron"}}, "uid": 0, "gid": 0, >>>>> "owner": "root", "group": "root", "mode": "07 >>>>> 70", "state": "directory", "secontext": >>>>> "unconfined_u:object_r:etc_t:s0", "size": 70, "invocation": {"module_ >>>>> args": {"path": "/etc/kolla/cron", "owner": "root", "group": "root", >>>>> "mode": "0770", "recurse": false, "force >>>>> ": false, "follow": true, "modification_time_format": "%Y%m%d%H%M.%S", >>>>> "access_time_format": "%Y%m%d%H%M.%S", >>>>> "unsafe_writes": false, "state": "directory", "_original_basename": >>>>> null, "_diff_peek": null, "src": null, " >>>>> modification_time": null, "access_time": null, "seuser": null, >>>>> "serole": null, "selevel": null, "setype": nul >>>>> l, "attributes": null}}}\n', b'') >>>>> ok: [20.3.0.28] => (item={'key': 'cron', 'value': {'container_name': >>>>> 'cron', 'group': 'cron', 'enabled': True >>>>> , 'image': '20.3.0.34:4000/openstack.kolla/centos-source-cron:yoga', >>>>> 'environment': {'DUMMY_ENVIRONMENT': 'ko >>>>> lla_useless_env', 'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': >>>>> ['/etc/kolla/cron/:/var/lib/kolla/config_f >>>>> iles/:ro', '/etc/localtime:/etc/localtime:ro', '', >>>>> 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) => { >>>>> "ansible_loop_var": "item", >>>>> "changed": false, >>>>> "diff": { >>>>> "after": { >>>>> "path": "/etc/kolla/cron" >>>>> }, >>>>> "before": { >>>>> "path": "/etc/kolla/cron" >>>>> } >>>>> }, >>>>> "gid": 0, >>>>> "group": "root", >>>>> >>>>> How to start debugging the situation. >>>>> >>>>> Regards. >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.acosta at luizalabs.com Wed Nov 16 14:40:29 2022 From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta) Date: Wed, 16 Nov 2022 11:40:29 -0300 Subject: [neutron-dynamic-routing] BGPspeaker LOOP In-Reply-To: <1bf20b2fbc22ca185503ff8139113ebfef9f4b0d.camel@redhat.com> References: <1bf20b2fbc22ca185503ff8139113ebfef9f4b0d.camel@redhat.com> Message-ID: Thanks Sean. I believe that the os-ken driver is not coded to honor this premise because the below function can learn new paths from peer update messages. https://opendev.org/openstack/os-ken/src/branch/master/os_ken/services/protocols/bgp/peer.py#L1544 Em qua., 16 de nov. de 2022 ?s 09:14, Sean Mooney escreveu: > On Wed, 2022-11-16 at 09:05 -0300, Roberto Bartzen Acosta wrote: > > Sorry for the mistake, I meant, the bgpspeaker should only "*advertise*!" > > and not "learn" the AS_PATHs via BGP. > yes that used to be the scope of that project to advertise only and not > learn > so i would geuss either that has change recently and they broke backward > compaitbly > or they have refactord it to use an external bgp speaker like frr and it > learns by default > > i dont really see anything here > https://github.com/openstack/neutron-dynamic-routing/commits/master > im not really familar with the internals of the project but i dont see any > code to learn routs form > > > https://github.com/openstack/neutron-dynamic-routing/blob/master/neutron_dynamic_routing/services/bgp/agent/driver/os_ken/driver.py > > it just has code for advertizing and withdrawing routes. > > > > > > > Em qua., 16 de nov. 
de 2022 ?s 08:57, Sean Mooney > > escreveu: > > > > > On Wed, 2022-11-16 at 08:43 -0300, Roberto Bartzen Acosta wrote: > > > > Hey folks, > > > > > > > > Please, I have a question here, the bgpspeaker should only "learn" > and > > > not > > > > "advertise" the AS_PATHs via BGP, right? > > > > > > its been a while since i looked at it but in the past it did not > supprot > > > learning at all > > > > > > it just advertised the routes for the neutron netowrks > > > > > > > > > > > In my tests, I can see that it is learning routes from BGP neighbors. > > > This > > > > behavior can cause an AS_PATH loop because the bgpspeaker learns > back its > > > > own advertised routes, and I see a message like this in the logs: > > > > > > > > 2022-11-11 19:45:41.967 7220 ERROR bgpspeaker.peer [-] AS_PATH on > UPDATE > > > > message has loops. Ignoring this message: > > > > > > > > BGPUpdate(len=91,nlri=[],path_attributes=[BGPPathAttributeMpReachNLRI(afi=2,flags=144,length=46,next_hop='2001:db7:1::1',nlri=[IP6AddrPrefix(addr='2001:db9:1234::',length=64)],safi=1,type=14), > > > > BGPPathAttributeOrigin(flags=64,length=1,type=1,value=0), > > > > BGPPathAttributeAsPath(flags=80,length=10,type=2,value=[[65001, > > > > > > > > 65000]])],total_path_attribute_len=68,type=2,withdrawn_routes=[],withdrawn_routes_len=0) > > > > > > > > This can be fixed by suppressing the neighbor route advertisement > (using > > > > route-map export), but have I misunderstood how > neutron-dymanic-routing > > > > works or do we have a possible bug here? > > > > > > > > Regards > > > > > > > > > > > > > > -- _?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?._ *?**?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Wed Nov 16 16:55:32 2022 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Wed, 16 Nov 2022 13:55:32 -0300 Subject: [CloudKitty][os-api-ref][openstack-dev] v1 API docs In-Reply-To: References: Message-ID: There was a merge to add all missing methods from V1 to V2 [1]. Therefore, via the V2 context we can access all features from V1, I would not bother much to re-write it. However, of course, if you have the energy and time to do it, it would be great to review your patches. [1] https://review.opendev.org/c/openstack/cloudkitty/+/684734 On Thu, Nov 10, 2022 at 10:34 AM Mariusz Karpiarz wrote: > All, > CloudKitty docs for v1 APIs ( > https://docs.openstack.org/cloudkitty/latest/api-reference/v1/v1.html) > appear to be generated from the source code instead of using `os-api-ref` ( > https://opendev.org/openstack/os-api-ref), like in case of v2 docs. > I want to move both v1 and v2 API docs to a separate `api-ref/source` > directory in the root of the repository, where we will only be using > "os_api_ref" and "openstackdocstheme" Sphinx extensions, so we need to > decide what to do with v1 docs. 
> I started rewriting v1 docs to the format supported by `os-api-ref` but > they don't quite translate well to the new format, mainly because of > different ways results are presented (Python objects vs JSONs). How much do > we care about proper v1 API docs and would it be worth for someone (likely > me) to write them again from scratch? > There is also the option for carrying over the old extensions (they are > all listed here: > https://opendev.org/openstack/cloudkitty/src/branch/stable/zed/doc/source/conf.py#L42-L58) > but I'm not sure all of them are still supported by the system building > https://docs.openstack.org/api/ and this is a good opportunity to clean > this list up. :) > Please let me know if you have any ideas. > Mariusz > -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-francois.taltavull at elca.ch Wed Nov 16 17:16:16 2022 From: jean-francois.taltavull at elca.ch (=?utf-8?B?VGFsdGF2dWxsIEplYW4tRnJhbsOnb2lz?=) Date: Wed, 16 Nov 2022 17:16:16 +0000 Subject: [openstack-ansible] Designate: rndc config file not generated In-Reply-To: References: <237889f3239b475da5700ab5d2e4ef73@elca.ch> Message-ID: <73d6f949ace94680841f0b743d100ad6@elca.ch> Hello James, Sorry for this late response ? I used ?designate_rndc_keys? var and the key has been created at the right location. I keep on solving the Designate mysteries and I think I?ll be able to deploy it shortly ! Thank you for your help! Jean-Francois From: James Denton Sent: vendredi, 4 novembre 2022 20:22 To: Taltavull Jean-Fran?ois ; openstack-discuss Subject: Re: [openstack-ansible] Designate: rndc config file not generated EXTERNAL MESSAGE - This email comes from outside ELCA companies. Hello Jean-Francois, When I did this recently, I seem to recall generating the RNDC key and conf on the BIND server(s) and copying those over to the Designate hosts (controller nodes, in my case). But looking at the playbook variables, it looks like there is a ?designate_rndc_keys? var that you can define to have it create the keys in the specified location. Have you tried that? Regards, James Denton Rackspace Private Cloud From: Taltavull Jean-Fran?ois > Date: Friday, November 4, 2022 at 8:55 AM To: openstack-discuss > Subject: [openstack-ansible] Designate: rndc config file not generated CAUTION: This message originated externally, please use caution when clicking on links or opening attachments! Hello, I'm deploying Designate on OpenStack Wallaby/Ubuntu 20.04 with DNS servers located outside the OpenStack platform. After running 'os-designate-install.yml' playbook, 'bind9-utils' package is correctly installed but I can't find rndc config file anywhere inside the lxc container. This prevents rndc from running well and communicating with the DNS servers. Any idea ? Regards, Jean-Francois -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.fernandez at epfl.ch Wed Nov 16 16:01:54 2022 From: daniel.fernandez at epfl.ch (Fernandez Rodriguez Daniel) Date: Wed, 16 Nov 2022 16:01:54 +0000 Subject: [puppet] Configure openid with Keycloak IdP in Keystone Message-ID: Hello, this is a bit of a long shot but maybe some of you succesfully configured Openstack to use Keycloak as an Identity Provider so we can use Single Sign-On on Horizon. To install and configure OpenStack Keystone I am using 'stable/xena' version of the https://github.com/openstack/puppet-keystone . Likewise for Horizon. So far so good. 
I would like to enable openid in Keystone so I can have Single Sign-On via Horizon. I am pretty much following the official docs: https://docs.openstack.org/keystone/latest/admin/federation/configure_federation.html with the help of the puppet module. To do it I included the class: include ::keystone::federation::openidc And configured some hiera variables: keystone::federation::openidc::keystone_url: "https://openstackdev.loadbalancer:5000" keystone::federation::openidc::methods: 'password,token,oauth1,mapped,openid' keystone::federation::openidc::idp_name: 'keycloak' keystone::federation::openidc::openidc_provider_metadata_url: 'https://keycloak_server/auth/realms/BBP/.well-known/openid-configuration' keystone::federation::openidc::openidc_client_id: 'a_keycloak_client' keystone::federation::openidc::openidc_client_secret: keystone::federation::openidc::openidc_crypto_passphrase: keystone::federation::openidc::remote_id_attribute: 'HTTP_OIDC_ISS' And this is the resulting relevant configuration in /etc/httpd/conf.d/10-keystone_wsgi.conf [...] OIDCClaimPrefix "OIDC-" OIDCResponseType "id_token" OIDCScope "openid email profile" OIDCProviderMetadataURL "https://keycloak_server/auth/realms/BBP/.well-known/openid-configuration" OIDCClientID "a_keycloak_client" OIDCClientSecret OIDCCryptoPassphrase # The following directives are necessary to support websso from Horizon # (Per https://docs.openstack.org/keystone/pike/advanced-topics/federation/websso.html) OIDCRedirectURI "https://openstackdev.loadbalancer:5000/v3/auth/OS-FEDERATION/identity_providers/keycloak/protocols/openid/websso" OIDCRedirectURI "https://openstackdev.loadbalancer:5000/v3/auth/OS-FEDERATION/websso/openid" AuthType "openid-connect" Require valid-user AuthType "openid-connect" Require valid-user ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ But unfortunately this does not work. First of all, the OIDCRedirectURI the module set points to a valid URL with content. So I manually changed them by: OIDCRedirectURI "https://openstackdev.loadbalancer:5000/v3/auth/OS-FEDERATION/identity_providers/keycloak/protocols/openid/websso/redirect_url" OIDCRedirectURI "https://openstackdev.loadbalancer:5000/v3/auth/OS-FEDERATION/websso/openid/redirect_url" After changing that now I get redirected to the Keycloak login page and I am able to enter my username and pass, after the login is done I get redirected to: https://openstackdev.loadbalancer:5000/v3/auth/OS-FEDERATION/websso/openid?origin=https://openstackdev.loadbalancer/dashboard/auth/websso/ and it shows the following error: error code 404 message "Could not find Identity Provider: https://keycloak_server/auth/realms/BBP." 
title "Not Found" And in: /var/log/keystone/keystone.log {"message": "Could not find Identity Provider: https://keycloak_server/auth/realms/BBP.", "asctime": "2022-11-16 16:24:56", "name": "keystone.server.flask.application", "msg": "Could not find Identity Provider: https://keycloak_server/auth/realms/BBP.", "args": [], "levelname": "WARNING", "levelno": 30, "pathname": "/usr/lib/python3.6/site-packages/keystone/server/flask/application.py", "filename": "application.py", "module": "application", "lineno": 87, "funcname": "_handle_keystone_exception", "created": 1668612296.6284614, "msecs": 628.4613609313965, "relative_created": 32117.148637771606, "thread": 140579135473408, "thread_name": "Dummy-1", "process_name": "MainProcess", "process": 3051629, "traceback": null, "hostname": "bbpcb030.bbp.epfl.ch", "error_summary": "keystone.exception.IdentityProviderNotFound: Could not find Identity Provider: https://keycloak_server/auth/realms/BBP.", "context": {"user_name": null, "project_name": null, "domain_name": null, "user_domain_name": null, "project_domain_name": null, "user": null, "tenant": null, "system_scope": null, "project": null, "domain": null, "user_domain": null, "project_domain": null, "is_admin": false, "read_only": false, "show_deleted": false, "auth_token": null, "request_id": "req-5187f72d-cb4b-470f-9635-6c05565707eb", "global_request_id": null, "resource_uuid": null, "roles": [], "user_identity": "- - - - -", "is_admin_project": true}, "extra": {"project": null, "version": "unknown"}} And this is how I configured the identity provider, mapping and federation protocol. # openstack identity provider show keycloak +-------------------+-----------------------------------------+ | Field | Value | +-------------------+-----------------------------------------+ | authorization_ttl | None | | description | None | | domain_id | 96a75a2b29b5411497a9971c14a2167c | | enabled | True | | id | keycloak | | remote_ids | https://keycloak_server/auth/realms/BBP | +-------------------+-----------------------------------------+ # openstack mapping show openid_mapping +-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+ | id | openid_mapping | | rules | [{'local': [{'user': {'name': '{0}'}, 'group': {'domain': {'name': 'Default'}, 'name': 'federated_users'}}], 'remote': [{'type': 'OIDC-preferred_username'}]}] | +-------+----------------------------------------------------------------------------------------------------------------------------------------------------------------+ # openstack federation protocol show --identity-provider keycloak openid +---------+----------------+ | Field | Value | +---------+----------------+ | id | openid | | mapping | openid_mapping | +---------+----------------+ Can someone please give me a hand with this? Thank you very much, Daniel. -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.karpiarz at eschercloud.ai Wed Nov 16 17:23:40 2022 From: m.karpiarz at eschercloud.ai (Mariusz Karpiarz) Date: Wed, 16 Nov 2022 17:23:40 +0000 Subject: [Kolla][kolla-ansible][HAProxy] Splitting the load balancer into internal and external? In-Reply-To: References: Message-ID: Michal, Thank you for your message. 
To explain what I mean a little better, let?s look at a use case of a web-based service running in a cloud but not using a Database-as-a-Service offering. In this setup (a sample diagram: https://www.cozumpark.com/wp-content/uploads/2020/02/image-5.png) a good security practice is to use a different (?internal?) load balancer for database servers and different (?public?) - for all the web servers serving user requests. The database doesn?t need to be accessible from the outside world, so this split provides a physical separation of traffic and this is exactly what I?m suggesting here. As for how to archive this, we can keep one HAProxy process in one container (and use regular Kolla images) but there will simply be two HAProxy containers (one ?external? and one ?public?) running either on the same controllers or on different ones. I hope this explanation helps but please do let me know if you want me to elaborate on any particular aspect of it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Danny.Webb at thehutgroup.com Wed Nov 16 19:02:39 2022 From: Danny.Webb at thehutgroup.com (Danny Webb) Date: Wed, 16 Nov 2022 19:02:39 +0000 Subject: [Kolla][kolla-ansible][HAProxy] Splitting the load balancer into internal and external? In-Reply-To: References: Message-ID: In a way kolla already does this by separating certain things onto "internal" VIPs vs "external" VIPs. So even though a single haproxy instance is running both internal and external, they are separated into their own connection paths that enforce segregation. Ultimately running a separate haproxy won't really add much security as you'll essentially be doing what kolla is already doing. ________________________________ From: Mariusz Karpiarz Sent: 16 November 2022 17:23 To: Michal Arbet Cc: openstack-discuss Subject: Re: [Kolla][kolla-ansible][HAProxy] Splitting the load balancer into internal and external? CAUTION: This email originates from outside THG ________________________________ Michal, Thank you for your message. To explain what I mean a little better, let?s look at a use case of a web-based service running in a cloud but not using a Database-as-a-Service offering. In this setup (a sample diagram: https://www.cozumpark.com/wp-content/uploads/2020/02/image-5.png) a good security practice is to use a different (?internal?) load balancer for database servers and different (?public?) - for all the web servers serving user requests. The database doesn?t need to be accessible from the outside world, so this split provides a physical separation of traffic and this is exactly what I?m suggesting here. As for how to archive this, we can keep one HAProxy process in one container (and use regular Kolla images) but there will simply be two HAProxy containers (one ?external? and one ?public?) running either on the same controllers or on different ones. I hope this explanation helps but please do let me know if you want me to elaborate on any particular aspect of it. Danny Webb Principal OpenStack Engineer The Hut Group Tel: Email: Danny.Webb at thehutgroup.com For the purposes of this email, the "company" means The Hut Group Limited, a company registered in England and Wales (company number 6539496) whose registered office is at Fifth Floor, Voyager House, Chicago Avenue, Manchester Airport, M90 3DQ and/or any of its respective subsidiaries. Confidentiality Notice This e-mail is confidential and intended for the use of the named recipient only. 
If you are not the intended recipient please notify us by telephone immediately on +44(0)1606 811888 or return it to us by e-mail. Please then delete it from your system and note that any use, dissemination, forwarding, printing or copying is strictly prohibited. Any views or opinions are solely those of the author and do not necessarily represent those of the company. Encryptions and Viruses Please note that this e-mail and any attachments have not been encrypted. They may therefore be liable to be compromised. Please also note that it is your responsibility to scan this e-mail and any attachments for viruses. We do not, to the extent permitted by law, accept any liability (whether in contract, negligence or otherwise) for any virus infection and/or external compromise of security and/or confidentiality in relation to transmissions sent by e-mail. Monitoring Activity and use of the company's systems is monitored to secure its effective use and operation and for other lawful business purposes. Communications using these systems will also be monitored and may be recorded to secure effective use and operation and for other lawful business purposes. hgvyjuv -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonathan.rosser at rd.bbc.co.uk Thu Nov 17 08:37:14 2022 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Thu, 17 Nov 2022 08:37:14 +0000 Subject: [openstack-ansible] Designate: rndc config file not generated In-Reply-To: <73d6f949ace94680841f0b743d100ad6@elca.ch> References: <237889f3239b475da5700ab5d2e4ef73@elca.ch> <73d6f949ace94680841f0b743d100ad6@elca.ch> Message-ID: <4cfa7a40-61a3-5b88-d5bf-0af1444b1e14@rd.bbc.co.uk> Hi Jean-Francois, Each ansible role used in openstack-ansible should have some documentation associated with it, in addition to the docs for the main openstack-ansible project itself. Here is the docs for the designate role which contains a brief example for a BIND9 integration https://docs.openstack.org/openstack-ansible-os_designate/latest/ Hopefully this is helpful, contributions to documentation are always welcome if you feel that improvement could be made. Jonathan. On 16/11/2022 17:16, Taltavull Jean-Fran?ois wrote: > > Hello James, > > Sorry for this late response ? > > I used ?designate_rndc_keys? var and the key has been created at the > right location. > > I keep on solving the Designate mysteries and I think I?ll be able to > deploy it shortly ! > > Thank you for your help! > > Jean-Francois > > *From:*James Denton > *Sent:* vendredi, 4 novembre 2022 20:22 > *To:* Taltavull Jean-Fran?ois ; > openstack-discuss > *Subject:* Re: [openstack-ansible] Designate: rndc config file not > generated > > > > */EXTERNAL MESSAGE /*-?This email comes from *outside ELCA companies*. > > Hello Jean-Francois, > > When I did this recently, I seem to recall generating the RNDC key and > conf on the BIND server(s) and copying those over to the Designate > hosts (controller nodes, in my case). But looking at the playbook > variables, it looks like there is a ?designate_rndc_keys? var that you > can define to have it create the keys in the specified location. Have > you tried that? > > Regards, > > James Denton > Rackspace Private Cloud > > *From: *Taltavull Jean-Fran?ois > *Date: *Friday, November 4, 2022 at 8:55 AM > *To: *openstack-discuss > *Subject: *[openstack-ansible] Designate: rndc config file not generated > > CAUTION: This message originated externally, please use caution when > clicking on links or opening attachments! 
> > > Hello, > > I'm deploying Designate on OpenStack Wallaby/Ubuntu 20.04 with DNS > servers located outside the OpenStack platform. > > After running 'os-designate-install.yml' playbook, 'bind9-utils' > package is correctly installed but I can't find rndc config file > anywhere inside the lxc container. > This prevents rndc from running well and communicating with the DNS > servers. > > Any idea ? > > Regards, > > Jean-Francois > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sandcruz666 at gmail.com Thu Nov 17 09:14:54 2022 From: sandcruz666 at gmail.com (K Santhosh) Date: Thu, 17 Nov 2022 14:44:54 +0530 Subject: No subject In-Reply-To: References: Message-ID: can any one help me out with this freezer_scheduler container On Wed, Nov 16, 2022 at 6:00 PM K Santhosh wrote: > /var/log/kolla/freezer_scheduler.log > and > logs from container > > > > On Tue, Nov 15, 2022 at 4:18 PM Michal Arbet > wrote: > >> What about logs from container ? >> What about log in /var/log/kolla..... >> >> >> Michal Arbet >> Openstack Engineer >> >> Ultimum Technologies a.s. >> Na Po???? 1047/26, 11000 Praha 1 >> Czech Republic >> >> +420 604 228 897 >> michal.arbet at ultimum.io >> *https://ultimum.io * >> >> LinkedIn | >> Twitter | Facebook >> >> >> >> ?t 15. 11. 2022 v 6:18 odes?latel K Santhosh >> napsal: >> >>> Hai , >>> I am Santhosh, >>> I do facing a problem with freezer deploymentnt >>> After the deployment of freezer . The freezer_scheduler >>> container is continuously restarting in kolla openstack >>> can you help me out with this freezer_scheduler container >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kkchn.in at gmail.com Thu Nov 17 10:28:41 2022 From: kkchn.in at gmail.com (KK CHN) Date: Thu, 17 Nov 2022 15:58:41 +0530 Subject: Host node monitoring Message-ID: List, A general question: Can nagios core be used for HCI nodes and other nodes which are installed with Type1 hypervisors ? ( Eg: ESXi hypervisor doesn't have an OS to install the nagios on to it.. as it a bare metal hypervisor . ) How do people monitor these base machines with configurable alerts/notifications to email, and mobile phones of people who manage ?' Krish -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.arbet at ultimum.io Thu Nov 17 14:08:19 2022 From: michal.arbet at ultimum.io (Michal Arbet) Date: Thu, 17 Nov 2022 15:08:19 +0100 Subject: [Kolla][kolla-ansible][HAProxy] Splitting the load balancer into internal and external? In-Reply-To: References: Message-ID: Hi, Now i understand , but agree with Danny Webb, this is unnecessary ... Michal Arbet Openstack Engineer Ultimum Technologies a.s. Na Po???? 1047/26, 11000 Praha 1 Czech Republic +420 604 228 897 michal.arbet at ultimum.io *https://ultimum.io * LinkedIn | Twitter | Facebook st 16. 11. 2022 v 20:02 odes?latel Danny Webb napsal: > In a way kolla already does this by separating certain things onto > "internal" VIPs vs "external" VIPs. So even though a single haproxy > instance is running both internal and external, they are separated into > their own connection paths that enforce segregation. Ultimately running a > separate haproxy won't really add much security as you'll essentially be > doing what kolla is already doing. 
> ------------------------------ > *From:* Mariusz Karpiarz > *Sent:* 16 November 2022 17:23 > *To:* Michal Arbet > *Cc:* openstack-discuss > *Subject:* Re: [Kolla][kolla-ansible][HAProxy] Splitting the load > balancer into internal and external? > > > * CAUTION: This email originates from outside THG * > ------------------------------ > > Michal, > Thank you for your message. > > > To explain what I mean a little better, let?s look at a use case of a > web-based service running in a cloud but not using a Database-as-a-Service > offering. In this setup (a sample diagram: > https://www.cozumpark.com/wp-content/uploads/2020/02/image-5.png) a good > security practice is to use a different (?internal?) load balancer for > database servers and different (?public?) - for all the web servers serving > user requests. The database doesn?t need to be accessible from the outside > world, so this split provides a physical separation of traffic and this is > exactly what I?m suggesting here. > > > > As for how to archive this, we can keep one HAProxy process in one > container (and use regular Kolla images) but there will simply be two > HAProxy containers (one ?external? and one ?public?) running either on the > same controllers or on different ones. > > > > I hope this explanation helps but please do let me know if you want me to > elaborate on any particular aspect of it. > > Danny Webb > Principal OpenStack Engineer > The Hut Group > > Tel: > Email: Danny.Webb at thehutgroup.com > > > For the purposes of this email, the "company" means The Hut Group Limited, > a company registered in England and Wales (company number 6539496) whose > registered office is at Fifth Floor, Voyager House, Chicago Avenue, > Manchester Airport, M90 3DQ and/or any of its respective subsidiaries. > > *Confidentiality Notice* > This e-mail is confidential and intended for the use of the named > recipient only. If you are not the intended recipient please notify us by > telephone immediately on +44(0)1606 811888 or return it to us by e-mail. > Please then delete it from your system and note that any use, > dissemination, forwarding, printing or copying is strictly prohibited. Any > views or opinions are solely those of the author and do not necessarily > represent those of the company. > > *Encryptions and Viruses* > Please note that this e-mail and any attachments have not been encrypted. > They may therefore be liable to be compromised. Please also note that it is > your responsibility to scan this e-mail and any attachments for viruses. We > do not, to the extent permitted by law, accept any liability (whether in > contract, negligence or otherwise) for any virus infection and/or external > compromise of security and/or confidentiality in relation to transmissions > sent by e-mail. > > *Monitoring* > Activity and use of the company's systems is monitored to secure its > effective use and operation and for other lawful business purposes. > Communications using these systems will also be monitored and may be > recorded to secure effective use and operation and for other lawful > business purposes. > hgvyjuv > -------------- next part -------------- An HTML attachment was scrubbed... 
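To make Danny's point above about kolla-ansible's existing internal/external separation concrete, the split is driven by a handful of globals.yml settings; a minimal sketch with illustrative addresses and interface names (none of these values come from the thread):

    # /etc/kolla/globals.yml (illustrative values only)
    kolla_internal_vip_address: "10.0.0.250"     # VIP on the management network
    kolla_external_vip_address: "203.0.113.250"  # public-facing VIP
    network_interface: "eth1"                    # carries the internal VIP
    kolla_external_vip_interface: "eth0"         # carries the external VIP
    kolla_enable_tls_external: "yes"             # TLS terminated on the public side only

With the two VIPs on separate interfaces (and typically separate VLANs and firewall rules), internal-only endpoints such as the database are never published on the external side, which is the property the proposed second HAProxy instance is intended to provide.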
URL: From emccormick at cirrusseven.com Thu Nov 17 01:46:13 2022 From: emccormick at cirrusseven.com (Erik McCormick) Date: Wed, 16 Nov 2022 20:46:13 -0500 Subject: [kolla-ansible]Reset Configuration In-Reply-To: <7EC464AB-100D-4755-955A-11FC2966A2C0@univ-grenoble-alpes.fr> References: <887D56B6-6190-463D-AED9-A4C4D09C7EFF@univ-grenoble-alpes.fr> <90DE6D6B-024E-44F4-93C3-478AE4E184A9@univ-grenoble-alpes.fr> <7EC464AB-100D-4755-955A-11FC2966A2C0@univ-grenoble-alpes.fr> Message-ID: On Mon, Nov 14, 2022 at 8:26 AM Franck VEDEL < franck.vedel at univ-grenoble-alpes.fr> wrote: > Hello. > Thanks a lot Erik my problem was the MTU. > if I go back to a situation with MTU=1500 everywhere, all is working fine > !!! > > Is the following configuration possible and if so, how to configure with > kolla-ansible files ? : > > 3 networks: > - external (2 externals, VLAN 10 and VLAN 20): MTU = 1500 > - admin:MTU=1500 > - management : MTU = 9000 (a scsi bay stores volumes, with mtu 9000 ok). > > Like this: > ` > It is possible, but in some ways not advisable. Just from a general networking standpoint, I wouldn't set any interface used for traffic coming to / from the internet to use Jumbo Frames. Strange things happen when you start fragmenting to fit through standard internet routers, particularly when you run into something on the other end that is also using a large MTU. It's fine for internal management networks, storage networks, and the like. You could move your tenant / tunneling vlan over to a different interface and let that other one serve your internal needs. That being said, you need to account for VXLAN encapsulation overhead in your MTU considerations. Whatever your physical interface config is set to, your tenant networks need to use 50 bytes less. I think this is fine by default when using 1500, but can get weird when using Jumbo frames. If you put an override config file in /etc/kolla/config/neutron/ml2_ini.conf with something like: [ml2] path_mtu = 9000 it should tell Neutron to take that into account. -Erik Thanks a lot if you have a solution for this. > If impossible, I stay with 1500? it?s working, no problem. > > Franck > > > Le 12 nov. 2022 ? 21:00, Franck VEDEL > a ?crit : > > 3) Networking issues like mismatched MTU > > > My MTU (between nodes ) is 9000?. > > I believe my problem is the MTU. > > I modified /etc/kolla/config/neutron.conf and > /etc/kolla/config/neutron/ml2_conf.ini.conf > then kolla-ansible -i multinode reconfigures > > (case 1 here: > https://docs.openstack.org/newton/networking-guide/config-mtu.html) > > I test again everything and functions that did not work work again but not > all.... > > For example, instances get an ip through dhcp but can't ping the router, > but on some networks it works. > However, before the reboot of the servers, I had not had a problem with > the MTU of 9000. > > I'm going back to a 1500 MTU on Monday on site. > > Thank you Eric!!! > > Franck VEDEL > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Capture d?e?cran 2022-11-14 a? 
14.25.19.png Type: image/png Size: 273266 bytes Desc: not available URL: From jean-francois.taltavull at elca.ch Thu Nov 17 16:55:30 2022 From: jean-francois.taltavull at elca.ch (=?utf-8?B?VGFsdGF2dWxsIEplYW4tRnJhbsOnb2lz?=) Date: Thu, 17 Nov 2022 16:55:30 +0000 Subject: [openstack-ansible] Designate: rndc config file not generated In-Reply-To: <4cfa7a40-61a3-5b88-d5bf-0af1444b1e14@rd.bbc.co.uk> References: <237889f3239b475da5700ab5d2e4ef73@elca.ch> <73d6f949ace94680841f0b743d100ad6@elca.ch> <4cfa7a40-61a3-5b88-d5bf-0af1444b1e14@rd.bbc.co.uk> Message-ID: Hello Jonathan, Thanks for the link. I got your message about doc contributions. A couple of years ago, I was one of the I18N team members and it?s been a real pleasure to work and share with Ian, Ilya, Frank and other OpenStack fellows. I will contribute to the doc again, and will be happy to do, as soon as I can recover free time for that ? Regards, Jean-Francois From: Jonathan Rosser Sent: jeudi, 17 novembre 2022 09:37 To: openstack-discuss at lists.openstack.org Subject: Re: [openstack-ansible] Designate: rndc config file not generated EXTERNAL MESSAGE - This email comes from outside ELCA companies. Hi Jean-Francois, Each ansible role used in openstack-ansible should have some documentation associated with it, in addition to the docs for the main openstack-ansible project itself. Here is the docs for the designate role which contains a brief example for a BIND9 integration https://docs.openstack.org/openstack-ansible-os_designate/latest/ Hopefully this is helpful, contributions to documentation are always welcome if you feel that improvement could be made. Jonathan. On 16/11/2022 17:16, Taltavull Jean-Fran?ois wrote: Hello James, Sorry for this late response ? I used ?designate_rndc_keys? var and the key has been created at the right location. I keep on solving the Designate mysteries and I think I?ll be able to deploy it shortly ! Thank you for your help! Jean-Francois From: James Denton Sent: vendredi, 4 novembre 2022 20:22 To: Taltavull Jean-Fran?ois ; openstack-discuss Subject: Re: [openstack-ansible] Designate: rndc config file not generated EXTERNAL MESSAGE - This email comes from outside ELCA companies. Hello Jean-Francois, When I did this recently, I seem to recall generating the RNDC key and conf on the BIND server(s) and copying those over to the Designate hosts (controller nodes, in my case). But looking at the playbook variables, it looks like there is a ?designate_rndc_keys? var that you can define to have it create the keys in the specified location. Have you tried that? Regards, James Denton Rackspace Private Cloud From: Taltavull Jean-Fran?ois > Date: Friday, November 4, 2022 at 8:55 AM To: openstack-discuss > Subject: [openstack-ansible] Designate: rndc config file not generated CAUTION: This message originated externally, please use caution when clicking on links or opening attachments! Hello, I'm deploying Designate on OpenStack Wallaby/Ubuntu 20.04 with DNS servers located outside the OpenStack platform. After running 'os-designate-install.yml' playbook, 'bind9-utils' package is correctly installed but I can't find rndc config file anywhere inside the lxc container. This prevents rndc from running well and communicating with the DNS servers. Any idea ? Regards, Jean-Francois -------------- next part -------------- An HTML attachment was scrubbed... 
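For anyone following the rndc discussion above: the key James refers to is a standard BIND rndc key that can be generated on the BIND server and then distributed to the Designate hosts (for example through the designate_rndc_keys override mentioned earlier). A rough sketch, assuming BIND9 tooling and the /etc/designate/rndc.key path; the key name and algorithm are arbitrary choices:

    # On the BIND server: generate a key named "designate"
    rndc-confgen -a -A hmac-sha256 -k designate -c /etc/designate/rndc.key

    # The resulting file is an ordinary BIND key clause, e.g.:
    # key "designate" {
    #     algorithm hmac-sha256;
    #     secret "bWFkZS11cC1zZWNyZXQtZm9yLWlsbHVzdHJhdGlvbg==";
    # };

The same key name and secret then have to be referenced in named.conf (controls and key statements) and in the Designate pool target options, otherwise rndc commands sent from the Designate hosts will be rejected.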
URL: From gmann at ghanshyammann.com Thu Nov 17 19:36:47 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 17 Nov 2022 11:36:47 -0800 Subject: [all][tc] 2023.1 cycle updated testing runtime (recommended Upgrade Path testing) Message-ID: <1848717fec8.da23df1e101805.7455620180929964203@ghanshyammann.com> Hello Everyone, You might have seen the TC discussion[1] in PTG for the best possible testing for smooth upgrade. When there is a distro version change in testing runtime, then we need to test both the old and the new distro versions (for the cycle when we are bumping the distro version). We have documented the recommended upgrade path testing in PTI also[2], refer to that for details. In 2023.1 cycle, we are moving from Ubuntu Focal (20.04) to Jammy (22.04)[3] and as per the upgrade path testing we should test the Focal (20.04) also. TC has merged these updates in the 2023.1 cycle testing runtime document[4] which are basically: 1. Testing Ubuntu Focal (20.04) in a single job (one job is enough to test the old version) in project gate. For example, neutron did[5]. This can be a new job or one of the existing jobs. 2. Python minimum version to test is python 3.8 (default version in Ubuntu Focal). We will be testing python 3.8 and python 3.10 unit/functional jobs. Unit test job template change has been merged[6] which will automatically start running py38 unit test job and functional jobs can be modified (if needed) by project side. [1] https://etherpad.opendev.org/p/tc-2023-1-ptg#L422 [2] https://governance.openstack.org/tc/reference/project-testing-interface.html#upgrade-testing [3] https://governance.openstack.org/tc/goals/selected/migrate-ci-jobs-to-ubuntu-jammy.html [4] https://review.opendev.org/c/openstack/governance/+/860599 [5] https://review.opendev.org/c/openstack/neutron/+/862492 [6] https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/864464 -gmann From tobias at caktusgroup.com Fri Nov 18 01:49:11 2022 From: tobias at caktusgroup.com (Tobias McNulty) Date: Thu, 17 Nov 2022 20:49:11 -0500 Subject: Kolla Ansible on Ubuntu 20.04 - cloud-init & other network issues In-Reply-To: <1639e0eb3f9ed24067caae1a6816d8a107605305.camel@redhat.com> References: <1639e0eb3f9ed24067caae1a6816d8a107605305.camel@redhat.com> Message-ID: Thank you all for the helpful responses and suggestions. I tried these steps, but I am afraid the problem was user error. I thought I had adequately tested the internal network previously, but that was not the case. cloud-init and security groups now appear to work seamlessly on an internal subnet. Furthermore, floating IPs from the external subnet are properly allocated and are reachable from the LAN. I believe the issue was that I accidentally left DHCP disabled on the internal subnet previously. When I disable DHCP on the internal subnet now, a new instance will hang for ~400-500 seconds at this point in the boot process: Starting [0;1;39mLoad AppArmor pro???managed internally by snapd[0m... Starting [0;1;39mInitial cloud-init job (pre-networking)[0m... Mounting [0;1;39mArbitrary Executable File Formats File System[0m... [[0;32m OK [0m] Mounted [0;1;39mArbitrary Executable File Formats File System[0m. [ 7.673299] cloud-init[508]: Cloud-init v. 22.3.4-0ubuntu1~22.04.1 running 'init-local' at Fri, 18 Nov 2022 01:18:29 +0000. Up 7.61 seconds. [[0;32m OK [0m] Finished [0;1;39mLoad AppArmor pro???s managed internally by snapd[0m. 
Eventually the instance finishes booting and displays the timeout attempting to reach 169.254.169.254: [ 430.150383] cloud-init[551]: Cloud-init v. 22.3.4-0ubuntu1~22.04.1 running 'init' at Fri, 18 Nov 2022 01:25:31 +0000. Up 430.12 seconds. [ 430.210288] cloud-init[551]: 2022-11-18 01:25:31,748 - url_helper.py[ERROR]: Timed out, no response from urls: ['http://169.254.169.254/openstack'] [ 430.217100] cloud-init[551]: 2022-11-18 01:25:31,749 - util.py[WARNING]: No active metadata service found In summary, I believe that: * cloud-init will timeout if DHCP is disabled (presumably because it has no IP with which to make a request?) * Security groups may not work as expected for instances created in an external subnet. The proper configuration is to create instances in a virtual subnet and assign floating IPs from the external subnet. Hopefully this message is helpful to someone in the future, and thank you all for your patience and support! Tobias On Tue, Nov 15, 2022 at 12:27 PM Sean Mooney wrote: > > On Tue, 2022-11-15 at 09:02 -0800, Clark Boylan wrote: > > On Tue, Nov 15, 2022, at 6:14 AM, Tobias McNulty wrote: > > > As an update, I tried the non-HWE kernel with the same result. Could it > > > be a hardware/driver issue with the 10G NICs? It's so repeatable. I'll > > > look into finding some other hardware to test with. > > > > > > Has anyone else experienced such a complete failure with cloud-init > > > and/or security groups, and do you have any advice on how I might > > > continue to debug this? > > > > I'm not sure this will be helpful since you seem to have narrowed down the issue to VM networking, but here are some of the things that I do when debugging boot time VM setup failures: > > > > * Use config drive instead of metadata service. The metadata service hasn't always been reliable. > > * Bake information like DHCP config for interfaces and user ssh keys into an image and boot that. This way you don't need to rely on actions taken at boot time. > > * Use a different boot time configurator tool. Glean is the one the OpenDev team uses for test nodes. When I debug things there I tend to test with cloud-init to compare glean behavior. But you can do this in reverse. > > > > Again, I'm not sure this is helpful in this specific instance. But thought I'd send it out anyway to help those who may land here through Google search in the future. > > one thing that you shoudl check in addtion to considering ^ > is make sure that the nova api is configured to use memcache. > > cloud init only retries request until the first request succceds. > once the first request works it assumes that the rest will. if you are using a loadbalance and multipel nova-metadtaa-api process > without memcache, and it take more then 10-30 seconds(cant recall how long cloud-init waits) to build the metadatta respocnce then > cloud init can fail. basically if the second request need to rebuild everythign again because its not in a shared cache( memcache) > then teh request can time out and cloud init wont try again. > > > > > > > > > Many thanks, > > > Tobias > > > From gmann at ghanshyammann.com Fri Nov 18 05:51:32 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 17 Nov 2022 21:51:32 -0800 Subject: [tc][policy] RBAC (policy-popup team) meeting time Message-ID: <184894ad02d.b78838a7115594.3955022451177182234@ghanshyammann.com> Hello Everyone, I am starting the RBAC (policy-pop up) bi-weekly meeting on alternate Tuesday (starting Nov 22) at 17:00 UTC on #openstack-meeting IRC channel. 
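Returning to Sean's note above about the nova metadata API and memcache: the usual way to give the metadata API workers a shared cache is oslo.cache's memcached backend in nova.conf, so that whichever worker receives cloud-init's follow-up requests can serve them without rebuilding the response from scratch. A minimal sketch (the memcached addresses are placeholders):

    # nova.conf on the nodes running the (metadata) API
    [cache]
    enabled = true
    backend = dogpile.cache.memcached
    memcache_servers = 192.0.2.11:11211,192.0.2.12:11211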
Details: https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting ICS File: https://meetings.opendev.org/#Secure_Default_Policies_Popup-Team_Meeting -gmann From eblock at nde.ag Fri Nov 18 08:47:22 2022 From: eblock at nde.ag (Eugen Block) Date: Fri, 18 Nov 2022 08:47:22 +0000 Subject: [kolla-ansible][Yoga] Install with self-signed certificate In-Reply-To: References: <20221111201329.Horde.5Jstm8Mvo6YfTcBDJsTx7T3@webmail.nde.ag> <20221114132143.Horde.9V0vWb4JClSAJIGN1QXAfBX@webmail.nde.ag> <20221115104911.Horde.btGc5gANadpErM4Tmd9GuiO@webmail.nde.ag> <20221116081548.Horde.Yot1odWZ9C_0jce9mLVBeRS@webmail.nde.ag> Message-ID: <20221118084722.Horde.dph-ZVIZovIRm5BRSmfiZoH@webmail.nde.ag> At this point I running out of ideas, sorry. Hopefully someone else can chime in, or you get your new certificates soon. Zitat von wodel youchi : > "so the curl output looks correct. When you say you backported your > servers, what exactly does that mean?" > > It means : Servers are set back in time before the expiration of the > commercial certificate, this permits Openstack to work. > > When set back in time, the commercial certificate works because it is still > valid. > > My idea is to reconfigure my openstack to use a self-signed certificate > that covers one year for example, so I created a the self-signed > certificate while my servers are in the past, this self-signed will hold > one year, so if I can deploy it I can bring my servers back to the current > time. > > Regards. From ralonsoh at redhat.com Fri Nov 18 08:53:11 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 18 Nov 2022 09:53:11 +0100 Subject: [neutron] Neutron drivers meeting cancelled Message-ID: Hello Neutrinos: Due to the lack of agenda, today's drivers meeting is cancelled. Have a nice weekend! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Fri Nov 18 11:04:10 2022 From: mrunge at matthias-runge.de (Matthias Runge) Date: Fri, 18 Nov 2022 12:04:10 +0100 Subject: Host node monitoring In-Reply-To: References: Message-ID: On 17/11/2022 11:28, KK CHN wrote: > List, > > A general question: > > ?? Can nagios core be used for ? HCI nodes and other nodes which are > installed with Type1 hypervisors ? ( Eg:? ESXi? hypervisor doesn't have > an OS to install the nagios on to it.. as it a bare metal hypervisor . ) > > How do people monitor these base machines with configurable > alerts/notifications to email, and mobile phones? of people who manage ?' > > Krish Hi Krish, I'd say: usually collectd and send the metrics/events off to a central store. You could also use e.g node-exporter + prometheus. Matthias From roberto.acosta at luizalabs.com Fri Nov 18 13:45:17 2022 From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta) Date: Fri, 18 Nov 2022 10:45:17 -0300 Subject: [neutron] metadata IPv6 Message-ID: Hey folks, Can you confirm if the metadata should work in an ipv6-only environment? As I understand from this discussion on LP:1460177 and the fork of the discussion in many opendev reviews #315604 , #738205 #745705 , ..., it seems like it should work. However, this comment in the openstack doc [1] has me questioning if it really works. *"There are no provisions for an IPv6-based metadata service similar to what is provided for IPv4. In the case of dual-stacked guests though it is always possible to use the IPv4 metadata service instead. 
IPv6-only guests will have to use another method for metadata injection such as using a configuration drive, which is described in the Nova documentation on config-drive ."* Is anyone using metadata in an ipv6-only Openstack setup? Regards, Roberto [1] https://docs.openstack.org/neutron/latest/admin/config-ipv6.html#configuring-interfaces-of-the-guest -- _?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?._ *?**?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Nov 18 14:30:27 2022 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 18 Nov 2022 15:30:27 +0100 Subject: [release] Release countdown for week R-17, Nov 21 - 25 Message-ID: Development Focus ----------------- We are now past the Antelope-1 milestone. Teams should now be focused on feature development and completion of release cycle goals [0]. [0] https://governance.openstack.org/tc/goals/selected/ General Information ------------------- Our next milestone in this development cycle will be Antelope-2, on January 5, 2023. This milestone is when we freeze the list of deliverables that will be included in the 2023.1 final release, so if you plan to introduce new deliverables in this release, please propose a change to add an empty deliverable file in the deliverables/antelope directory of the openstack/releases repository. Now is also generally a good time to look at bugfixes that were introduced in the master branch that might make sense to be backported and released in a stable release. If you have any question around the OpenStack release process, feel free to ask on this mailing-list or on the #openstack-release channel on IRC. Upcoming Deadlines & Dates -------------------------- Antelope-2 Milestone: January 5, 2023 -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Nov 18 15:25:00 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 18 Nov 2022 16:25:00 +0100 Subject: [neutron] metadata IPv6 In-Reply-To: References: Message-ID: Hi Roberto: The documentation you are referring to must be updated. The LP#1460177 RFE implemented this feature. Actually there is a test class that is testing this functionality in the CI [1][2]. Regards. [1]https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/750355/ [2] https://github.com/openstack/neutron-tempest-plugin/blob/f10618eac3a12d35a35044443b63d144b71e0c6b/neutron_tempest_plugin/scenario/test_metadata.py#L36-L44 On Fri, Nov 18, 2022 at 2:45 PM Roberto Bartzen Acosta < roberto.acosta at luizalabs.com> wrote: > Hey folks, > > Can you confirm if the metadata should work in an ipv6-only environment? > > As I understand from this discussion on LP:1460177 > and the fork of the > discussion in many opendev reviews #315604 > , #738205 > #745705 > , ..., it seems > like it should work. 
> > However, this comment in the openstack doc [1] has me questioning if > it really works. > *"There are no provisions for an IPv6-based metadata service similar to > what is provided for IPv4. In the case of dual-stacked guests though it is > always possible to use the IPv4 metadata service instead. IPv6-only guests > will have to use another method for metadata injection such as using a > configuration drive, which is described in the Nova documentation > on config-drive > ."* > > Is anyone using metadata in an ipv6-only Openstack setup? > > Regards, > Roberto > > [1] > https://docs.openstack.org/neutron/latest/admin/config-ipv6.html#configuring-interfaces-of-the-guest > > > > > > *?Esta mensagem ? direcionada apenas para os endere?os constantes no > cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no > cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa > mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o > imediatamente anuladas e proibidas?.* > > *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para > assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o > poder? aceitar a responsabilidade por quaisquer perdas ou danos causados > por esse e-mail ou por seus anexos?.* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pdeore at redhat.com Fri Nov 18 15:25:22 2022 From: pdeore at redhat.com (Pranali Deore) Date: Fri, 18 Nov 2022 20:55:22 +0530 Subject: [Glance] No weekly meeting next week Message-ID: Hello, I will be on PTO from Tuesday, 22nd Nov to Friday, 25th Nov, so I won't be available for the weekly meeting next week. If anyone has anything important to be discussed feel free to add in the meeting agenda etherpad[1] and it would be nice if someone from the glance team volunteers to chair the meeting based on the agenda. [1]: https://etherpad.opendev.org/p/glance-team-meeting-agenda#L70 Thanks, Pranali -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.acosta at luizalabs.com Fri Nov 18 18:04:04 2022 From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta) Date: Fri, 18 Nov 2022 15:04:04 -0300 Subject: [neutron] metadata IPv6 In-Reply-To: References: Message-ID: Hi Rodolfo, Thanks for the feedback, we know it's supported by default in neutron metadata agent. My question came because I couldn't make it work with the neutron-ovn-metadata-agent. Checking some logs I believe that the problem is because the Port_Binding external_ids should have the "neutron:cidrs" [1],but this is empty. 
[1] - https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/agent.py#L432 I still don't know how to solve this (: Regards, neutron-ovn-metadata-agent logs: Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 17:38:52.996 188802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingChassisCreatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(parent_port=[], chassis=[], mac=['fa:16:3e:e8:92:d8 2001:db9:1234::35e'], options={'mcast_flood_reports': 'true', 'requested-chassis': 'compute2'}, ha_chassis_group=[], type=, tag=[], requested_chassis=[], tunnel_key=3, up=[False], logical_port=2beb4efd-23c1-4bf6-b57d-6c97a0277124, gateway_chassis=[], external_ids={'neutron:cidrs': '2001:db9:1234::35e/64', 'neutron:device_id': 'cfbbc54a-1772-495b-8fe4-864c717e22b4', 'neutron:device_owner': 'compute:nova', 'neutron:network_name': 'neutron-2af7badf-1958-4fc8-b13a-b2e8379e6531', 'neutron:port_name': '', 'neutron:project_id': 'd11daecfe9d847ddb7d9ce2932c2fe26', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf2e7d53-0db7-4873-82ab-cf67eceda937'}, encap=[], virtual_parent=[], nat_addresses=[], datapath=02e203c7-714a-417c-bc02-c2877ec758a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/event.py:43 Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 17:38:52.996 188802 INFO neutron.agent.ovn.metadata.agent [-] Port 2beb4efd-23c1-4bf6-b57d-6c97a0277124 in datapath 2af7badf-1958-4fc8-b13a-b2e8379e6531 bound to our chassis Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 17:38:52.996 188802 DEBUG neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2af7badf-1958-4fc8-b13a-b2e8379e6531 provision_datapath /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/agent.py:434 Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 17:38:52.997 188802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2af7badf-1958-4fc8-b13a-b2e8379e6531 or it has no MAC or IP addresses configured, tearing the namespace down if needed provision_datapath /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/agent.py:442 Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 17:38:52.997 188812 DEBUG oslo.privsep.daemon [-] privsep: reply[c6aff129-2417-45c3-bee1-7b01ff6298f9]: (4, False) _call_back /usr/local/lib/python3.10/dist-packages/oslo_privsep/daemon.py:501 Em sex., 18 de nov. de 2022 ?s 12:25, Rodolfo Alonso Hernandez < ralonsoh at redhat.com> escreveu: > Hi Roberto: > > The documentation you are referring to must be updated. The LP#1460177 RFE > implemented this feature. Actually there is a test class that is testing > this functionality in the CI [1][2]. > > Regards. > > [1]https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/750355/ > [2] > https://github.com/openstack/neutron-tempest-plugin/blob/f10618eac3a12d35a35044443b63d144b71e0c6b/neutron_tempest_plugin/scenario/test_metadata.py#L36-L44 > > On Fri, Nov 18, 2022 at 2:45 PM Roberto Bartzen Acosta < > roberto.acosta at luizalabs.com> wrote: > >> Hey folks, >> >> Can you confirm if the metadata should work in an ipv6-only environment? >> >> As I understand from this discussion on LP:1460177 >> and the fork of the >> discussion in many opendev reviews #315604 >> , #738205 >> #745705 >> , ..., it seems >> like it should work. 
>> >> However, this comment in the openstack doc [1] has me questioning if >> it really works. >> *"There are no provisions for an IPv6-based metadata service similar to >> what is provided for IPv4. In the case of dual-stacked guests though it is >> always possible to use the IPv4 metadata service instead. IPv6-only guests >> will have to use another method for metadata injection such as using a >> configuration drive, which is described in the Nova documentation >> on config-drive >> ."* >> >> Is anyone using metadata in an ipv6-only Openstack setup? >> >> Regards, >> Roberto >> >> [1] >> https://docs.openstack.org/neutron/latest/admin/config-ipv6.html#configuring-interfaces-of-the-guest >> >> >> >> >> >> *?Esta mensagem ? direcionada apenas para os endere?os constantes no >> cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no >> cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa >> mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o >> imediatamente anuladas e proibidas?.* >> >> *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para >> assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o >> poder? aceitar a responsabilidade por quaisquer perdas ou danos causados >> por esse e-mail ou por seus anexos?.* >> > -- _?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?._ *?**?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Nov 18 19:36:58 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 18 Nov 2022 11:36:58 -0800 Subject: [all][tc] Canceling next week TC meetings Message-ID: <1848c3e84ae.e6128d38182235.6072155693137450716@ghanshyammann.com> Hello Everyone, As many of us will not be available due to thanks giving, we are cancelling the next week's (Nov 23) TC meeting. -gmann From roberto.acosta at luizalabs.com Fri Nov 18 21:15:06 2022 From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta) Date: Fri, 18 Nov 2022 18:15:06 -0300 Subject: [neutron] metadata IPv6 In-Reply-To: References: Message-ID: Hello Rodolfo, With some hacks in the functions/lines below, I can perform tests with the neutron-ovn-metadata-agent IPv6-only. [1] https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/agent.py#L432 [2] https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/driver.py#L59 [3] https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/server.py#L101 [4] https://opendev.org/openstack/neutron/src/branch/master/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py#L922 However, I think the LLC address that the VM autoconfigures (needed by [3]), needs to be learned from the port_Binding table of the OVN southbound - or something to make this work on neutron-metadata side. 
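Until the OVN metadata gap described above is closed, the fallback the documentation points to for IPv6-only guests is a config drive, which sidesteps the HTTP metadata path entirely. A brief sketch (image, flavor and network names are placeholders):

    # Per instance:
    openstack server create --config-drive True \
        --image ubuntu-22.04 --flavor m1.small \
        --network ipv6-only-net test-vm

    # Or force it for all instances, in nova.conf on the compute nodes:
    # [DEFAULT]
    # force_config_drive = True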
Regards, Roberto ov 18 20:56:21 compute2 neutron-ovn-metadata-agent[206406]: 2022-11-18 20:56:21.575 206406 DEBUG eventlet.wsgi.server [-] (206406) accepted '' server /usr/local/lib/python3.10/dist-packages/eventlet/wsgi.py:1004 Nov 18 20:56:21 compute2 neutron-ovn-metadata-agent[206406]: 2022-11-18 20:56:21.576 206406 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET / HTTP/1.0 Accept: */* Connection: close Content-Type: text/plain Host: [fe80::a9fe:a9fe] User-Agent: curl/7.68.0 X-Forwarded-For: fe80::f816:3eff:fe22:d958 X-Ovn-Network-Id: 2af7badf-1958-4fc8-b13a-b2e8379e6531 __call__ /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/server.py:84 Nov 18 20:56:21 compute2 neutron-ovn-metadata-agent[206406]: 2022-11-18 20:56:21.587 206406 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/server.py:164 Nov 18 20:56:21 compute2 haproxy[206448]: fe80::f816:3eff:fe22:d958:37348 [18/Nov/2022:20:56:21.574] listener listener/metadata 0/0/0/13/13 200 218 - - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1" Nov 18 20:56:21 compute2 neutron-ovn-metadata-agent[206406]: 2022-11-18 20:56:21.588 206406 INFO eventlet.wsgi.server [-] fe80::f816:3eff:fe22:d958, "GET / HTTP/1.1" status: 200 len: 234 time: 0.0112894 root at ubuntu:~# curl [fe80::a9fe:a9fe%ens3] 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 2009-04-04 Em sex., 18 de nov. de 2022 ?s 15:04, Roberto Bartzen Acosta < roberto.acosta at luizalabs.com> escreveu: > Hi Rodolfo, > > Thanks for the feedback, we know it's supported by default in neutron > metadata agent. > > My question came because I couldn't make it work with > the neutron-ovn-metadata-agent. Checking some logs I believe that the > problem is because the Port_Binding external_ids should have the "neutron:cidrs" > [1],but this is empty. 
> [1] - > https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/agent.py#L432 > > I still don't know how to solve this (: > > Regards, > > neutron-ovn-metadata-agent logs: > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.996 188802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched > UPDATE: PortBindingChassisCreatedEvent(events=('update',), > table='Port_Binding', conditions=None, old_conditions=None), priority=20 to > row=Port_Binding(parent_port=[], chassis=[ 0x7f4e958ba770>], mac=['fa:16:3e:e8:92:d8 2001:db9:1234::35e'], > options={'mcast_flood_reports': 'true', 'requested-chassis': 'compute2'}, > ha_chassis_group=[], type=, tag=[], requested_chassis=[ object at 0x7f4e958ba770>], tunnel_key=3, up=[False], > logical_port=2beb4efd-23c1-4bf6-b57d-6c97a0277124, gateway_chassis=[], > external_ids={'neutron:cidrs': '2001:db9:1234::35e/64', > 'neutron:device_id': 'cfbbc54a-1772-495b-8fe4-864c717e22b4', > 'neutron:device_owner': 'compute:nova', 'neutron:network_name': > 'neutron-2af7badf-1958-4fc8-b13a-b2e8379e6531', 'neutron:port_name': '', > 'neutron:project_id': 'd11daecfe9d847ddb7d9ce2932c2fe26', > 'neutron:revision_number': '2', 'neutron:security_group_ids': > 'cf2e7d53-0db7-4873-82ab-cf67eceda937'}, encap=[], virtual_parent=[], > nat_addresses=[], datapath=02e203c7-714a-417c-bc02-c2877ec758a7) > old=Port_Binding(chassis=[]) matches > /usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/event.py:43 > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.996 188802 INFO neutron.agent.ovn.metadata.agent [-] Port > 2beb4efd-23c1-4bf6-b57d-6c97a0277124 in datapath > 2af7badf-1958-4fc8-b13a-b2e8379e6531 bound to our chassis > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.996 188802 DEBUG neutron.agent.ovn.metadata.agent [-] Provisioning > metadata for network 2af7badf-1958-4fc8-b13a-b2e8379e6531 > provision_datapath > /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/agent.py:434 > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.997 188802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no > metadata port for network 2af7badf-1958-4fc8-b13a-b2e8379e6531 or it has no > MAC or IP addresses configured, tearing the namespace down if needed > provision_datapath > /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/agent.py:442 > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.997 188812 DEBUG oslo.privsep.daemon [-] privsep: > reply[c6aff129-2417-45c3-bee1-7b01ff6298f9]: (4, False) _call_back > /usr/local/lib/python3.10/dist-packages/oslo_privsep/daemon.py:501 > > > > > > Em sex., 18 de nov. de 2022 ?s 12:25, Rodolfo Alonso Hernandez < > ralonsoh at redhat.com> escreveu: > >> Hi Roberto: >> >> The documentation you are referring to must be updated. The LP#1460177 >> RFE implemented this feature. Actually there is a test class that is >> testing this functionality in the CI [1][2]. >> >> Regards. >> >> [1] >> https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/750355/ >> [2] >> https://github.com/openstack/neutron-tempest-plugin/blob/f10618eac3a12d35a35044443b63d144b71e0c6b/neutron_tempest_plugin/scenario/test_metadata.py#L36-L44 >> >> On Fri, Nov 18, 2022 at 2:45 PM Roberto Bartzen Acosta < >> roberto.acosta at luizalabs.com> wrote: >> >>> Hey folks, >>> >>> Can you confirm if the metadata should work in an ipv6-only environment? 
>>> >>> As I understand from this discussion on LP:1460177 >>> and the fork of the >>> discussion in many opendev reviews #315604 >>> , #738205 >>> #745705 >>> , ..., it >>> seems like it should work. >>> >>> However, this comment in the openstack doc [1] has me questioning if >>> it really works. >>> *"There are no provisions for an IPv6-based metadata service similar to >>> what is provided for IPv4. In the case of dual-stacked guests though it is >>> always possible to use the IPv4 metadata service instead. IPv6-only guests >>> will have to use another method for metadata injection such as using a >>> configuration drive, which is described in the Nova documentation >>> on config-drive >>> ."* >>> >>> Is anyone using metadata in an ipv6-only Openstack setup? >>> >>> Regards, >>> Roberto >>> >>> [1] >>> https://docs.openstack.org/neutron/latest/admin/config-ipv6.html#configuring-interfaces-of-the-guest >>> >>> >>> >>> >>> >>> *?Esta mensagem ? direcionada apenas para os endere?os constantes no >>> cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no >>> cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa >>> mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o >>> imediatamente anuladas e proibidas?.* >>> >>> *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para >>> assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o >>> poder? aceitar a responsabilidade por quaisquer perdas ou danos causados >>> por esse e-mail ou por seus anexos?.* >>> >> -- _?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?._ *?**?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sat Nov 19 00:22:59 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 18 Nov 2022 16:22:59 -0800 Subject: [all][tc] What's happening in Technical Committee: summary 2022 Nov 18: Reading: 5 min Message-ID: <1848d4460b5.d39c2b5e189057.7939709114439810195@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * * We had this week's meeting on Nov 16. Most of the meeting discussions are summarized in this email. Meeting logs are available @ https://meetings.opendev.org/meetings/tc/2022/tc.2022-11-16-16.00.log.html * Next week (Nov 23) TC weekly meeting is canceled[1]. 2. What we completed this week: ========================= * Defined process for the TC chair nomination & election[2] * Updated the 2023.1 cycle testing runtime[3] * Documented the upgrade patch testing in PTI[4] * Added zookeeper role under OpenStack-Ansible governance[5] * TC stopped using Storybooard. We were not using it for many cycles but TC officially decided [6] to clear all story and close the governance repo in Storyboard[7]. We will not use any other task tracking tool, etherpad or tracking things in meetings are working fine. 3. 
Activities In progress:
==================

TC Tracker for 2023.1 cycle
---------------------------------
* Current cycle working items and their progress are present in the 2023.1 tracker etherpad[8].

Open Reviews
-----------------
* Four open reviews for ongoing activities[9].

OpenInfra Board + OpenStack Syncup Call
--------------------------------------------------
We continued the syncup call with Board members on Nov 16[10]. We discussed two topics in this call: 1. "Less Diversity" and 2. the i18n SIG. Most of the discussion was around the first topic. Multiple ideas were shared and discussed, including:
- Spreading and encouraging contribution to a wider audience, including large deployments, users, and company managers.
- Making the contribution process easier for part-time contributors, operators, etc. by helping them with testing requirements, process, and getting half-baked patches merged.
- Reaching out to platinum member companies and their executives to learn why they are not contributing to the community and to explain and encourage contribution.
There was no concrete action or next step to improve the diversity, but we all together need to put effort in all directions to improve it slowly.

For the i18n SIG, Brian proposed Weblate funding of ~1500 Euro/year, which sounds reasonable for the foundation to cover, but Brian needs to write the formal proposal to the foundation before the Board meeting on Dec 6th so that it can be discussed there if needed. Thanks to Brian for working on and following up on this.

The next syncup call will be on 2023-02-08 (Feb 8) at 20:00 UTC. If anyone is interested, feel free to join it.

Renovate translation SIG (i18n)
----------------------------------
* We discussed it in the Board syncup call and Brian will take it forward by putting the Weblate funding formal proposal to the foundation.

TC Video meeting discussion
----------------------------------
Based on the feedback, JayF has abandoned the resolution patch[11]. Thanks to JayF for the proposal and discussion.

Project updates
-------------------
* Add Skyline repository for OpenStack-Ansible[12]
* Add the cinder-infinidat charm to OpenStack charms[13]
* Add the infinidat-tools subordinate charm to OpenStack charms[14]
* Add the manila-infinidat charm to OpenStack charms[15]

4. How to contact the TC:
====================
If you would like to discuss or give feedback to the TC, you can reach out to us in multiple ways:
1. Email: you can send an email with the tag [tc] on the openstack-discuss ML[16].
2. Weekly meeting: the Technical Committee conducts a weekly meeting every Thursday at 15:00 UTC[17].
3. Ping us using the 'tc-members' nickname on the #openstack-tc IRC channel.

See you all next week in PTG!
[1] https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031240.html [2] https://governance.openstack.org/tc/reference/tc-chair-elections.html [3] https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031229.html [4] https://governance.openstack.org/tc/reference/project-testing-interface.html#upgrade-testing [5] https://review.opendev.org/c/openstack/governance/+/863161 [6] https://meetings.opendev.org/meetings/tc/2022/tc.2022-11-16-16.00.log.html#l-220 [7] https://review.opendev.org/c/openstack/project-config/+/864771 [8] https://etherpad.opendev.org/p/tc-2023.1-tracker [9] https://review.opendev.org/q/projects:openstack/governance+status:open [10] https://etherpad.opendev.org/p/2022-11-board-openstack-sync [11] https://review.opendev.org/c/openstack/governance/+/863685 [12] https://review.opendev.org/c/openstack/governance/+/863166 [13] https://review.opendev.org/c/openstack/governance/+/863958 [14] https://review.opendev.org/c/openstack/governance/+/864067 [15] https://review.opendev.org/c/openstack/governance/+/864068 [16] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [17] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From skaplons at redhat.com Sat Nov 19 14:45:32 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sat, 19 Nov 2022 15:45:32 +0100 Subject: [neutron] metadata IPv6 In-Reply-To: References: Message-ID: <2226367.aS3vNnzWXl@p1> Hi, Dnia pi?tek, 18 listopada 2022 19:04:04 CET Roberto Bartzen Acosta pisze: > Hi Rodolfo, > > Thanks for the feedback, we know it's supported by default in neutron > metadata agent. > > My question came because I couldn't make it work with > the neutron-ovn-metadata-agent. Checking some logs I believe that the > problem is because the Port_Binding external_ids should have the > "neutron:cidrs" > [1],but this is empty. > [1] - > https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/agent.py#L432 > > I still don't know how to solve this (: Unfortunately it's not yet supported by OVN backend. It will work only with the neutron-metadata-agent which is used e.g. in ML2/OVS and ML2/LB backends. Please also remember that AFAIK there is no support for that IPv6 metadata in cloud-init so You will probably need to have some own tool in the guest VMs which will send requests to the metadata server using IPv6. 
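To illustrate Slawek's last point: on backends that do serve the IPv6 metadata endpoint (e.g. ML2/OVS), a guest without IPv6-capable cloud-init can still fetch metadata manually over the well-known link-local address, scoped to the right interface. A hedged sketch; the interface name is an assumption and the zone ID is written in its percent-encoded URL form:

    #!/bin/sh
    # Fetch OpenStack metadata over the IPv6 link-local metadata address.
    IFACE=eth0   # adjust to the guest's primary interface
    curl -g -6 "http://[fe80::a9fe:a9fe%25${IFACE}]/openstack/latest/meta_data.json"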
> > Regards, > > neutron-ovn-metadata-agent logs: > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.996 188802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched > UPDATE: PortBindingChassisCreatedEvent(events=('update',), > table='Port_Binding', conditions=None, old_conditions=None), priority=20 to > row=Port_Binding(parent_port=[], chassis=[ 0x7f4e958ba770>], mac=['fa:16:3e:e8:92:d8 2001:db9:1234::35e'], > options={'mcast_flood_reports': 'true', 'requested-chassis': 'compute2'}, > ha_chassis_group=[], type=, tag=[], requested_chassis=[ object at 0x7f4e958ba770>], tunnel_key=3, up=[False], > logical_port=2beb4efd-23c1-4bf6-b57d-6c97a0277124, gateway_chassis=[], > external_ids={'neutron:cidrs': '2001:db9:1234::35e/64', > 'neutron:device_id': 'cfbbc54a-1772-495b-8fe4-864c717e22b4', > 'neutron:device_owner': 'compute:nova', 'neutron:network_name': > 'neutron-2af7badf-1958-4fc8-b13a-b2e8379e6531', 'neutron:port_name': '', > 'neutron:project_id': 'd11daecfe9d847ddb7d9ce2932c2fe26', > 'neutron:revision_number': '2', 'neutron:security_group_ids': > 'cf2e7d53-0db7-4873-82ab-cf67eceda937'}, encap=[], virtual_parent=[], > nat_addresses=[], datapath=02e203c7-714a-417c-bc02-c2877ec758a7) > old=Port_Binding(chassis=[]) matches > /usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/event.py:43 > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.996 188802 INFO neutron.agent.ovn.metadata.agent [-] Port > 2beb4efd-23c1-4bf6-b57d-6c97a0277124 in datapath > 2af7badf-1958-4fc8-b13a-b2e8379e6531 bound to our chassis > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.996 188802 DEBUG neutron.agent.ovn.metadata.agent [-] Provisioning > metadata for network 2af7badf-1958-4fc8-b13a-b2e8379e6531 > provision_datapath > /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/agent.py:434 > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.997 188802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no > metadata port for network 2af7badf-1958-4fc8-b13a-b2e8379e6531 or it has no > MAC or IP addresses configured, tearing the namespace down if needed > provision_datapath > /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/agent.py:442 > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.997 188812 DEBUG oslo.privsep.daemon [-] privsep: > reply[c6aff129-2417-45c3-bee1-7b01ff6298f9]: (4, False) _call_back > /usr/local/lib/python3.10/dist-packages/oslo_privsep/daemon.py:501 > > > > > > Em sex., 18 de nov. de 2022 ?s 12:25, Rodolfo Alonso Hernandez < > ralonsoh at redhat.com> escreveu: > > > Hi Roberto: > > > > The documentation you are referring to must be updated. The LP#1460177 RFE > > implemented this feature. Actually there is a test class that is testing > > this functionality in the CI [1][2]. > > > > Regards. > > > > [1]https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/750355/ > > [2] > > https://github.com/openstack/neutron-tempest-plugin/blob/f10618eac3a12d35a35044443b63d144b71e0c6b/neutron_tempest_plugin/scenario/test_metadata.py#L36-L44 > > > > On Fri, Nov 18, 2022 at 2:45 PM Roberto Bartzen Acosta < > > roberto.acosta at luizalabs.com> wrote: > > > >> Hey folks, > >> > >> Can you confirm if the metadata should work in an ipv6-only environment? 
> >> > >> As I understand from this discussion on LP:1460177 > >> and the fork of the > >> discussion in many opendev reviews #315604 > >> , #738205 > >> #745705 > >> , ..., it seems > >> like it should work. > >> > >> However, this comment in the openstack doc [1] has me questioning if > >> it really works. > >> *"There are no provisions for an IPv6-based metadata service similar to > >> what is provided for IPv4. In the case of dual-stacked guests though it is > >> always possible to use the IPv4 metadata service instead. IPv6-only guests > >> will have to use another method for metadata injection such as using a > >> configuration drive, which is described in the Nova documentation > >> on config-drive > >> ."* > >> > >> Is anyone using metadata in an ipv6-only Openstack setup? > >> > >> Regards, > >> Roberto > >> > >> [1] > >> https://docs.openstack.org/neutron/latest/admin/config-ipv6.html#configuring-interfaces-of-the-guest > >> > >> > >> > >> > >> > >> *?Esta mensagem ? direcionada apenas para os endere?os constantes no > >> cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no > >> cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa > >> mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o > >> imediatamente anuladas e proibidas?.* > >> > >> *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para > >> assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o > >> poder? aceitar a responsabilidade por quaisquer perdas ou danos causados > >> por esse e-mail ou por seus anexos?.* > >> > > > > -- > > > > > _?Esta mensagem ? direcionada apenas para os endere?os constantes no > cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no > cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa > mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o > imediatamente anuladas e proibidas?._ > > > * **?Apesar do Magazine Luiza tomar > todas as precau??es razo?veis para assegurar que nenhum v?rus esteja > presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por > quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* > > > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From fungi at yuggoth.org Sat Nov 19 15:11:48 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 19 Nov 2022 15:11:48 +0000 Subject: [neutron] metadata IPv6 In-Reply-To: <2226367.aS3vNnzWXl@p1> References: <2226367.aS3vNnzWXl@p1> Message-ID: <20221119151147.4xotvkmljkp44bhr@yuggoth.org> On 2022-11-19 15:45:32 +0100 (+0100), Slawek Kaplonski wrote: [...] > Please also remember that AFAIK there is no support for that IPv6 > metadata in cloud-init so You will probably need to have some own > tool in the guest VMs which will send requests to the metadata > server using IPv6. [...] While it doesn't appear to be reflected on your https://launchpad.net/bugs/1906849 feature request yet, this merged two days ago: Add Support for IPv6 metadata to `DataSourceOpenStack` - Add openstack IPv6 metadata url `fe80::a9fe:a9fe` - Enable requesting multiple metadata sources in parallel https://github.com/canonical/cloud-init/pull/1805 I guess it will be included in the next cloud-init release. 
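For illustration only, once a cloud-init release containing that change is available in guest images, the OpenStack datasource could presumably be pointed at the IPv6 endpoint explicitly with a drop-in along these lines; the file name and the assumption that the datasource honours a metadata_urls override in this form are mine, not something stated in this thread:

# sketch: run while building the guest image; the drop-in name is arbitrary,
# and the IPv6 URL only helps with a cloud-init build that includes the
# change referenced above
cat > /etc/cloud/cloud.cfg.d/99-openstack-ipv6-metadata.cfg <<'EOF'
datasource:
  OpenStack:
    metadata_urls:
      - "http://[fe80::a9fe:a9fe]"
      - "http://169.254.169.254"
EOF

Keeping the IPv4 URL in the list preserves the existing dual-stack behaviour while the IPv6 path is still being proven out.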
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From fkr at osb-alliance.com Fri Nov 18 23:34:35 2022
From: fkr at osb-alliance.com (Felix Kronlage-Dammers)
Date: Sat, 19 Nov 2022 00:34:35 +0100
Subject: Call for Participation: Sovereign Cloud DevRoom at FOSDEM 2023
Message-ID: <4EA03C8A-B6D1-4C69-846C-D01BD594AA95@osb-alliance.com>

# FOSDEM 2023 In Person - Sovereign Cloud DevRoom Call for Participation!

The twenty-third edition of FOSDEM takes place Saturday 4th and Sunday 5th February 2023 in Brussels, Belgium. For the first time there will be a DevRoom that focuses on the subject of Sovereign Cloud and its practical aspects for digital sovereignty.

## Key dates

tl;dr:
- Conference dates: 4-5 February 2023, in person
- Sovereign Cloud DevRoom date: Saturday 4th February
- Submission deadline: Friday December 9th, 2022 - 23:59 UTC
- Announcement of selected talks: Thursday 15th December
- You must be available in person to present your talk

## About the Sovereign Cloud DevRoom

### Overview

The Sovereign Cloud DevRoom is for discussing topics and issues of user privacy and sovereignty and their intersection with the needs of infrastructure providers in the cloud computing era. A few topic examples to illustrate (a longer list is below):

- Can all operations be open, or may there be a need for a two+ tier openness (country-specific laws or company secrets)?
- How can privacy-relevant data be anonymized instead of being deleted, without losing the useful information?
- Open Operations is more than just sharing runbooks; how is knowledge shared and fostered best across organizations?
- Navigating our personal data lakes -- we're going to need better tools that are easier to use by more people for managing and curating encrypted data lakes
- The role of machine learning in conducting data science on anonymized information
- Countering the risks of integrated platforms

These are just a few example suggestions; we welcome proposals on any aspects around these topics. More topic ideas and details on how to submit proposals are below.

### Discussing the Sovereign Cloud

The recent EU focus on digital sovereignty has brought fresh attention and innovation to a problem area many of us have been interested in and working on for a long time. This DevRoom focuses on a practical aspect of digital sovereignty: how it affects user-oriented and potentially privacy-containing infrastructure in the cloud computing era. This aspect is the _sovereign cloud_, the intersection of digital sovereignty and a modern cloud-centric way to create and sustain infrastructure, operations, and development to address the needs of users (which may be other cloud services or actual, real people).

With the rise of the Confidential Computing Consortium (https://confidentialcomputing.io/), some people feel we are beginning to achieve "sovereign enough" when it comes to the balance of the needs of the individual and the needs of tech companies et al. But there are just as many other voices calling attention to potential problems and unfulfillable needs of staying on that path. Looking at the entire landscape of the problem, it's clear that Confidential Computing also aligns with usage on a sovereign cloud.
It is a tool to ensure the least amount of provider trust, and it is still a good idea until we've moved new processor architectures into the mainstream that are less impacted by speculative execution vulnerabilities. So while it is not perfect, making it harder for an attacker is always a good idea where feasible.

In looking for other paths to fill more needs, we naturally come to the intersection of Free/Open, where interests from different backgrounds pursuing different goals suddenly find themselves working together in a common direction. Two of those groups are part of the organizing committee for this devroom:

As a way of addressing the holistic needs and supporting a truly balanced digital sovereignty, organizations such as the Sovereign Cloud Stack (https://scs.community/) have come together to provide a complete technology solution, standards as well as reference implementations, that can be proven and recognized by everyone as a truly sovereign cloud.

In this area where the requirements of a sovereign cloud are being met is another intersection from the ecosystem of Free and Open Source software development. The idea of Open Operations is essential to projects such as Operate First that provide a developer-friendly post-CI/CD platform for running, testing, and proving Open Source services.

Longer list of topic ideas - this is not an exclusive list, feel free to submit further ideas:

- Can all operations be open, or may there be a need for a two+ tier openness (country-specific laws or company secrets)?
- How can privacy-relevant data be anonymized instead of being deleted, without losing the useful information?
- Discussion: How open can software be called if it relies on closed operations and closed infrastructure?
- Do we need to refine the definition of upstream in the idea of Open Operations?
- Is hybrid accelerating Open Operations, or can it be a slow-downer/separator? Or the solution to possible privacy issues?
- Interoperability, transparency, and independence are the go-to goals; what can be accepted on our way until we are there?
- Open Operations is more than just sharing runbooks; how is knowledge shared and fostered best across organizations?
- Share experiences and stories on creating environments of psychological safety so that failures indeed make us experts
- Building communities of practice across organizations
- Navigating our personal data lakes -- we're going to need better tools that are easier to use by more people for managing and curating encrypted data lakes
- The role of machine learning in conducting data science on anonymized information
- Countering the risks of integrated platforms

Again, these are just suggestions. We welcome proposals on any aspects around these topics.

Format and lengths of submissions:
- Long (40 minutes, including Q&A)
- Short (20 minutes, including Q&A)
- Lightning (5 minutes, no Q&A)

Aside from presentations, meetings/discussions are also welcome for topics where a BoF-style (Birds of a Feather) session is appropriate.

HOW TO SUBMIT A TALK

- Head to the FOSDEM 2023 Pentabarf website.
- If you already have a Pentabarf account, please don't create a new one.
- If you forgot your password, reset it.
- Otherwise, follow the instructions to create an account.
- Once logged in, select "New Event" and click on "Show All" in the top right corner to display the full form.
Your submission must include the following information:
- First and last name / Nickname (optional) / Image
- Email address
- Mobile phone number (this is a very hard requirement as there will be no other reliable form of emergency communication on the day)
- Title and subtitle of your talk (please be descriptive, as titles will be listed with ~500 from other projects)
- Track: Select "Sovereign Cloud DevRoom" as the track
- Event type:
  - Lightning Talk OR
  - Meeting or Discussion OR
  - Presentation
- Persons: Add yourself as the speaker with your bio
- Description: Abstract (required) / Full Description (optional)
- Links to related websites / blogs etc.
- Beyond giving us the above, let us know if there's anything else you'd like to share as part of your submission - Twitter handle, GitHub activity history - whatever works for you. We especially welcome videos of you speaking elsewhere, or even just a list of talks you have done previously. First-time speakers are, of course, welcome!
- For issues with Pentabarf, please contact cloud-devroom-manager at fosdem.org. Feel free to send a notification of your submission to that email.

If you need to get in touch with the organisers or program committee of the Sovereign Cloud DevRoom, email us at cloud-devroom-manager at fosdem.org

FOSDEM website / FOSDEM code of conduct
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sahahmadi96 at gmail.com Sat Nov 19 09:58:28 2022
From: sahahmadi96 at gmail.com (Seyed Amir Hossein Ahmadi)
Date: Sat, 19 Nov 2022 13:28:28 +0330
Subject: OpenStack: Can't create instances using KVM on host.
Message-ID:

I have a Dockerized installation of Devstack all-in-one. My goal is to connect to the host's KVM and create instances there. Nova was configured as follows for this purpose:

# /etc/nova/nova.conf
# /etc/nova/nova-cpu.conf
[libvirt]
connection_uri = qemu+ssh://root@172.10.1.1/system

When I try to build the instance, I get the following error:

Build of instance cdd6f8b4-6dcf-4a43-b96a-fb6166b20235 aborted: Failed to allocate the network(s), not rescheduling.

The ovs-vsctl commands cause the error. What is the problem? Does this need to be done differently?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From skaplons at redhat.com Sun Nov 20 07:25:09 2022
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Sun, 20 Nov 2022 08:25:09 +0100
Subject: [neutron] metadata IPv6
In-Reply-To: <20221119151147.4xotvkmljkp44bhr@yuggoth.org>
References: <2226367.aS3vNnzWXl@p1> <20221119151147.4xotvkmljkp44bhr@yuggoth.org>
Message-ID: <13148264.uLZWGnKmhe@p1>

Hi,

On Saturday, 19 November 2022 16:11:48 CET, Jeremy Stanley wrote:
> On 2022-11-19 15:45:32 +0100 (+0100), Slawek Kaplonski wrote:
> [...]
> > Please also remember that AFAIK there is no support for that IPv6
> > metadata in cloud-init so You will probably need to have some own
> > tool in the guest VMs which will send requests to the metadata
> > server using IPv6.
> [...]
>
> While it doesn't appear to be reflected on your
> https://launchpad.net/bugs/1906849 feature request yet, this merged
> two days ago:
>
> Add Support for IPv6 metadata to `DataSourceOpenStack`
> - Add openstack IPv6 metadata url `fe80::a9fe:a9fe`
> - Enable requesting multiple metadata sources in parallel
>
> https://github.com/canonical/cloud-init/pull/1805
>
> I guess it will be included in the next cloud-init release.
> --
> Jeremy Stanley
>

That's great.
Thx for info fungi :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From nguyenhuukhoinw at gmail.com Sun Nov 20 13:43:45 2022 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Sun, 20 Nov 2022 20:43:45 +0700 Subject: Kolla Ansible Image add python package. Message-ID: Hello guys. I read https://docs.openstack.org/kolla/latest/admin/image-building.html but I would like to know how to install python packages for all images at building time. I would be very glad if I can get some guides. Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcafarel at redhat.com Mon Nov 21 09:08:35 2022 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 21 Nov 2022 10:08:35 +0100 Subject: [neutron] Bug deputy report (week starting on Nov-14-2022) Message-ID: Hey neutrinos, last bug deputy rotation of the year started: https://wiki.openstack.org/wiki/Network/Meetings#Bug_deputy and I was deputy last week, here are the reported bugs. Overall all bugs have patches or at least assignees Critical * [neutron-lib] Stable py3x jobs failing - https://bugs.launchpad.net/neutron/+bug/1996776 Patch and backports by ralonsoh merged - https://review.opendev.org/q/I3a4a27ec4672f8ea8848d7c04651730dae6f40ff * [CI] "test_live_migration_with_trunk" failing - https://bugs.launchpad.net/neutron/+bug/1997025 Caused by recent os-vif bump, workaround suggested by ralonsoh to block this newer version - https://review.opendev.org/c/openstack/neutron/+/865026 High * With new RBAC enabled (enforce_scope and enforce_new_defaults): 'router:external' field is missing in network list response - https://bugs.launchpad.net/neutron/+bug/1996836 Assigned to slaweq, fixed with https://review.opendev.org/c/openstack/neutron/+/865032 Also a related neutron-lib patch merged https://review.opendev.org/c/openstack/neutron-lib/+/864809 * With new RBAC enabled (enforce_scope and enforce_new_defaults): some security groups aren't visible for admin user - https://bugs.launchpad.net/neutron/+bug/1997089 Also assigned to slaweq, patch in progress https://review.opendev.org/c/openstack/neutron/+/865040 * Metadata service broken after minor neutron update when OVN 21.09+ is used - https://bugs.launchpad.net/neutron/+bug/1997092 Fix by ihrachys merged, backports in progress - https://review.opendev.org/c/openstack/neutron/+/864777 Medium * Unit test failure with Python 3.11 - https://bugs.launchpad.net/neutron/+bug/1996527 Fix by haleyb merged - https://review.opendev.org/c/openstack/neutron/+/864448 * OVN metadata randomly stops working - https://bugs.launchpad.net/neutron/+bug/1996594 Happening with xena and OVS 2.16, otherwiseguy investigating (may be caused by OVS version too low for RAFT) * QoS rules policies do not work for "owners" - https://bugs.launchpad.net/neutron/+bug/1996606 Assigned to ralonsoh * [OVN] support update fixed_ips of metadata port - https://bugs.launchpad.net/neutron/+bug/1996677 Patch in progress by huanghailun - https://review.opendev.org/c/openstack/neutron/+/864715 * Add support for DHCP option 119 (domain-search) for IPv4 in ML2/OVN - https://bugs.launchpad.net/neutron/+bug/1996759 Fix by lucasagomes merged - https://review.opendev.org/c/openstack/neutron/+/864740 * [OVN] Enabling and disabling networking log objects doesn't work as 
expected - https://bugs.launchpad.net/neutron/+bug/1996780 Patch in progress by elvira - https://review.opendev.org/c/openstack/neutron/+/864152 * [ovn-octavia-provider] HM created at fully populated loadbalancer stuck in PENDING_CREATE - https://bugs.launchpad.net/neutron/+bug/1997094 Assigned to froyo * ML2 When OVN mech driver is enabled dhcp extension is disabled - https://bugs.launchpad.net/neutron/+bug/1997185 When OVN is enabled with another driver (linuxbridge here), DHCP extension is disabled. Fix suggested by author to mark extension as supported in OVN - https://review.opendev.org/c/openstack/neutron/+/865081 Low * Replace the Linux Bridge references with Open vSwitch in the installation manuals - https://bugs.launchpad.net/neutron/+bug/1996772 Discussed in team meeting, this part of the doc is quite outdated Patch by ralonsoh merged - https://review.opendev.org/c/openstack/neutron/+/864748 * Install and configure controller node in Neutron - https://bugs.launchpad.net/neutron/+bug/1996889 Doc fix on domain names case, patch by haleyb - https://review.opendev.org/c/openstack/neutron/+/864921 RFE * add address scope to OVN Southbound/Northbound - https://bugs.launchpad.net/neutron/+bug/1996741 Enhancement to use ovn-bgp-agent with non-SNATed tenant networks Patch in review: https://review.opendev.org/c/openstack/neutron/+/861719 Opinion * The virtual network is broken on the node after neutron-openvswitch-agent is restarted if RPC requests return an error for a while - https://bugs.launchpad.net/neutron/+bug/1996788 ralonsoh++ jumped in on that one, issue on OVS agent start when RPC fails. Not much we can do here probably, but left as opinion for potential further discussion Incomplete * [OpenStack-OVN] Poor network performance - https://bugs.launchpad.net/neutron/+bug/1996593 Reported much lower performance with security groups enabled between ML2/OVS and ML2/OVN, more details asked on deployment setup -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Nov 21 10:35:55 2022 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 21 Nov 2022 11:35:55 +0100 Subject: [largescale-sig] Next meeting: Nov 23, 15utc Message-ID: Hi everyone, The Large Scale SIG will be meeting this Wednesday in #openstack-operators on OFTC IRC, at 15UTC. You can doublecheck how that UTC time translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20221123T15 Feel free to add topics to the agenda: https://etherpad.opendev.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From rlandy at redhat.com Mon Nov 21 12:06:20 2022 From: rlandy at redhat.com (Ronelle Landy) Date: Mon, 21 Nov 2022 07:06:20 -0500 Subject: [TripleO] Intermittent gate failure - centos-9-content-provider jobs Message-ID: Hello All, We are investigating an intermittent check/gate failure on content provider jobs related to DNS and mirror access/resolution: tripleo-ci-centos-9-content-provider and the -zed and -wallaby versions. Details are in the related Launchpad bug: https://bugs.launchpad.net/tripleo/+bug/1997202. We will update the list as we know more. Thanks, TripleO CI Team. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From justin.lamp at netways.de Mon Nov 21 13:05:26 2022 From: justin.lamp at netways.de (Justin Lamp) Date: Mon, 21 Nov 2022 13:05:26 +0000 Subject: [neutron] metadata IPv6 In-Reply-To: References: Message-ID: <4406724495d1d753d45d0342e36f9b19e1ddba5c.camel@netways.de> Hi Roberto, thank you for your findings. Those are all great news, especially the recently merged commit in cloud-init! Do you already have a patchset that works? Is anyone working on it upstream? Best regards, Justin Am Freitag, dem 18.11.2022 um 18:15 -0300 schrieb Roberto Bartzen Acosta: Hello Rodolfo, With some hacks in the functions/lines below, I can perform tests with the neutron-ovn-metadata-agent IPv6-only. [1] https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/agent.py#L432 [2] https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/driver.py#L59 [3] https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/server.py#L101 [4] https://opendev.org/openstack/neutron/src/branch/master/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py#L922 However, I think the LLC address that the VM autoconfigures (needed by [3]), needs to be learned from the port_Binding table of the OVN southbound - or something to make this work on neutron-metadata side. Regards, Roberto ov 18 20:56:21 compute2 neutron-ovn-metadata-agent[206406]: 2022-11-18 20:56:21.575 206406 DEBUG eventlet.wsgi.server [-] (206406) accepted '' server /usr/local/lib/python3.10/dist-packages/eventlet/wsgi.py:1004 Nov 18 20:56:21 compute2 neutron-ovn-metadata-agent[206406]: 2022-11-18 20:56:21.576 206406 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET / HTTP/1.0 Accept: / Connection: close Content-Type: text/plain Host: [fe80::a9fe:a9fe] User-Agent: curl/7.68.0 X-Forwarded-For: fe80::f816:3eff:fe22:d958 X-Ovn-Network-Id: 2af7badf-1958-4fc8-b13a-b2e8379e6531 call /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/server.py:84 Nov 18 20:56:21 compute2 neutron-ovn-metadata-agent[206406]: 2022-11-18 20:56:21.587 206406 DEBUG neutron.agent.ovn.metadata.server [-] _proxy_request /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/server.py:164 Nov 18 20:56:21 compute2 haproxy[206448]: fe80::f816:3eff:fe22:d958:37348 [18/Nov/2022:20:56:21.574] listener listener/metadata 0/0/0/13/13 200 218 - - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1" Nov 18 20:56:21 compute2 neutron-ovn-metadata-agent[206406]: 2022-11-18 20:56:21.588 206406 INFO eventlet.wsgi.server [-] fe80::f816:3eff:fe22:d958, "GET / HTTP/1.1" status: 200 len: 234 time: 0.0112894 root at ubuntu:~# curl [fe80::a9fe:a9fe%ens3] 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 2009-04-04 Em sex., 18 de nov. de 2022 ?s 15:04, Roberto Bartzen Acosta <[roberto.acosta at luizalabs.com](mailto:roberto.acosta at luizalabs.com)> escreveu: > Hi Rodolfo,> > Thanks for the feedback, we know it's supported by default in neutron metadata agent. > > > My question came because I couldn't make it work with the neutron-ovn-metadata-agent. Checking some logs I believe that the problem is because the Port_Binding external_ids should have the "neutron:cidrs" [1],but this is empty. 
[1] - [https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/agent.py#L432](https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/agent.py#L432) > > I still don't know how to solve this (: Regards, > > > neutron-ovn-metadata-agent logs: Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 17:38:52.996 188802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingChassisCreatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(parent_port=[], chassis=[], mac=['fa:16:3e:e8:92:d8 2001:db9:1234::35e'], options={'mcast_flood_reports': 'true', 'requested-chassis': 'compute2'}, ha_chassis_group=[], type=, tag=[], requested_chassis=[], tunnel_key=3, up=[False], logical_port=2beb4efd-23c1-4bf6-b57d-6c97a0277124, gateway_chassis=[], external_ids={'neutron:cidrs': '2001:db9:1234::35e/64', 'neutron:device_id': 'cfbbc54a-1772-495b-8fe4-864c717e22b4', 'neutron:device_owner': 'compute:nova', 'neutron:network_name': 'neutron-2af7badf-1958-4fc8-b13a-b2e8379e6531', 'neutron:port_name': '', 'neutron:project_id': 'd11daecfe9d847ddb7d9ce2932c2fe26', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf2e7d53-0db7-4873-82ab-cf67eceda937'}, encap=[], virtual_parent=[], nat_addresses=[], datapath=02e203c7-714a-417c-bc02-c2877ec758a7) old=Port_Binding(chassis=[]) matches /usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/event.py:43 Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 17:38:52.996 188802 INFO neutron.agent.ovn.metadata.agent [-] Port 2beb4efd-23c1-4bf6-b57d-6c97a0277124 in datapath 2af7badf-1958-4fc8-b13a-b2e8379e6531 bound to our chassis Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 17:38:52.996 188802 DEBUG neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2af7badf-1958-4fc8-b13a-b2e8379e6531 provision_datapath /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/agent.py:434 Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 17:38:52.997 188802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 2af7badf-1958-4fc8-b13a-b2e8379e6531 or it has no MAC or IP addresses configured, tearing the namespace down if needed provision_datapath /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/agent.py:442 Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 17:38:52.997 188812 DEBUG oslo.privsep.daemon [-] privsep: reply[c6aff129-2417-45c3-bee1-7b01ff6298f9]: (4, False) _call_back /usr/local/lib/python3.10/dist-packages/oslo_privsep/daemon.py:501 > > Em sex., 18 de nov. de 2022 ?s 12:25, Rodolfo Alonso Hernandez <[ralonsoh at redhat.com](mailto:ralonsoh at redhat.com)> escreveu: > > > > > Hi Roberto: > > > > > > The documentation you are referring to must be updated. The LP#1460177 RFE implemented this feature. Actually there is a test class that is testing this functionality in the CI [1][2]. > > > > > > Regards. 
> > > > [1][https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/750355/](https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/750355/) > > > > [2][https://github.com/openstack/neutron-tempest-plugin/blob/f10618eac3a12d35a35044443b63d144b71e0c6b/neutron_tempest_plugin/scenario/test_metadata.py#L36-L44](https://github.com/openstack/neutron-tempest-plugin/blob/f10618eac3a12d35a35044443b63d144b71e0c6b/neutron_tempest_plugin/scenario/test_metadata.py#L36-L44) > > > > On Fri, Nov 18, 2022 at 2:45 PM Roberto Bartzen Acosta <[roberto.acosta at luizalabs.com](mailto:roberto.acosta at luizalabs.com)> wrote: > > > > > Hey folks, > > > > > > Can you confirm if the metadata should work in an ipv6-only environment? As I understand from this discussion on [LP:1460177](https://bugs.launchpad.net/neutron/+bug/1460177) and the fork of the discussion in many opendev reviews [#315604](https://review.opendev.org/c/openstack/neutron-specs/+/315604), [#738205](https://review.opendev.org/c/openstack/neutron-lib/+/738205) [#745705](https://review.opendev.org/c/openstack/neutron/+/745705), ..., it seems like it should work. > > > > > > However, this comment in the openstack doc [1] has me questioning if it really works. > > > **"There are no provisions for an IPv6-based metadata service similar to what is provided for IPv4. In the case of dual-stacked guests though it is always possible to use the IPv4 metadata service instead. IPv6-only guests will have to use another method for metadata injection such as using a configuration drive, which is described in the Nova documentation on [config-drive](https://docs.openstack.org/nova/latest/user/config-drive.html)."** > > > > > > Is anyone using metadata in an ipv6-only Openstack setup? > > > > > > Regards, > > > Roberto > > > > > > > > > > > > [1] [https://docs.openstack.org/neutron/latest/admin/config-ipv6.html#configuring-interfaces-of-the-guest](https://docs.openstack.org/neutron/latest/admin/config-ipv6.html#configuring-interfaces-of-the-guest) > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > *?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?.* > > > > > > > > > * **?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* > > > ?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?. * **?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* --? Justin Lamp Systems Engineer NETWAYS Managed Services GmbH | Deutschherrnstr. 
15-19 | D-90429 Nuernberg Tel: +49 911 92885-0 | Fax: +49 911 92885-77 CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 https://www.netways.de | justin.lamp at netways.de ** stackconf 2023 - September - https://stackconf.eu ** ** OSMC 2023 - November - https://osmc.de ** ** New at NWS: Managed Database - https://nws.netways.de/managed-database ** ** NETWAYS Web Services - https://nws.netways.de ** -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincentlee676 at gmail.com Mon Nov 21 02:39:23 2022 From: vincentlee676 at gmail.com (vincent lee) Date: Sun, 20 Nov 2022 20:39:23 -0600 Subject: Unable to access Internet from an instance and accessing instance using floating-point IPs from external network Message-ID: Hi all, I have an OpenStack deployment using kolla-ansible with the Yoga version. Our network setup is provided in the attachment ( *network.jpg*). The configuration file (*globals.yml)* and inventory ( *multinode*) are also attached to this email. After the deployment steps, I try to run the script *init-runonce* to create the images and the networks by modifying the external network parameters according to our network setup. I am successful in launching instances from the horizon dashboard. However, the instances were not able to connect to the Internet. I have tried attaching the floating point IPs to the instance, where the IPs are successfully allocated from the range I have specified in the* init-runonce* script. When I launched the instance and tried to ping the external network, it failed. It will be very helpful if anyone can advise me to resolve this issue. I am currently using Ubuntu 20.04.05 LTS version on the controller and compute nodes. Best regards, Vincent -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: globals.yml Type: application/octet-stream Size: 31973 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: multinode Type: application/octet-stream Size: 9993 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: network.jpg Type: image/jpeg Size: 29084 bytes Desc: not available URL: From jsuazo at whitestack.com Mon Nov 21 14:45:59 2022 From: jsuazo at whitestack.com (Juan Pablo Suazo) Date: Mon, 21 Nov 2022 11:45:59 -0300 Subject: In Need of Reviewer for Proposal Message-ID: Hello All, I'm in need of a reviewer to take a look at my kolla-ansible proposal, which consists of the modification of a couple of cinder tasks to support placing multiple ceph conf files in a cinder/ceph/ directory to make integrating multiple volume services easier. I have already received insight from another reviewer ( radoslaw.piliszek at gmail.com) and made changes to comply with conventions and fix bugs. I couldn't tend to this proposal for a while, and in that time my reviewer has stopped reviewing patches altogether so I'm unable to get the approval for merging. All Zuul tasks are currently successful and the change has been tested locally to ensure it works correctly, so the review should be pretty straight forward. Best wishes, *Juan Pablo Suazo* Cloud Developer Trainee jsuazo at whitestack.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rafaelweingartner at gmail.com Mon Nov 21 15:04:45 2022 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 21 Nov 2022 12:04:45 -0300 Subject: In Need of Reviewer for Proposal In-Reply-To: References: Message-ID: Jello Juan, can you share the link for your proposal and patch? On Mon, Nov 21, 2022 at 12:03 PM Juan Pablo Suazo wrote: > Hello All, > > I'm in need of a reviewer to take a look at my kolla-ansible proposal, > which consists of the modification of a couple of cinder tasks to support > placing multiple ceph conf files in a cinder/ceph/ directory to make > integrating multiple volume services easier. > > I have already received insight from another reviewer ( > radoslaw.piliszek at gmail.com) and made changes to > comply with conventions and fix bugs. I couldn't tend to this proposal for > a while, and in that time my reviewer has stopped reviewing patches > altogether so I'm unable to get the approval for merging. > > All Zuul tasks are currently successful and the change has been tested > locally to ensure it works correctly, so the review should be pretty > straight forward. > > Best wishes, > > > *Juan Pablo Suazo* > Cloud Developer Trainee > jsuazo at whitestack.com > -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsuazo at whitestack.com Mon Nov 21 15:06:57 2022 From: jsuazo at whitestack.com (Juan Pablo Suazo) Date: Mon, 21 Nov 2022 12:06:57 -0300 Subject: In Need of Reviewer for Proposal In-Reply-To: References: Message-ID: Of course, 848029: Allows for multiple Ceph Conf files | https://review.opendev.org/c/openstack/kolla-ansible/+/848029 Saludos cordiales, *Juan Pablo Suazo* Cloud Developer Trainee jsuazo at whitestack.com On Mon, Nov 21, 2022 at 12:05 PM Rafael Weing?rtner < rafaelweingartner at gmail.com> wrote: > Jello Juan, can you share the link for your proposal and patch? > > On Mon, Nov 21, 2022 at 12:03 PM Juan Pablo Suazo > wrote: > >> Hello All, >> >> I'm in need of a reviewer to take a look at my kolla-ansible proposal, >> which consists of the modification of a couple of cinder tasks to support >> placing multiple ceph conf files in a cinder/ceph/ directory to make >> integrating multiple volume services easier. >> >> I have already received insight from another reviewer ( >> radoslaw.piliszek at gmail.com) and made changes to >> comply with conventions and fix bugs. I couldn't tend to this proposal for >> a while, and in that time my reviewer has stopped reviewing patches >> altogether so I'm unable to get the approval for merging. >> >> All Zuul tasks are currently successful and the change has been tested >> locally to ensure it works correctly, so the review should be pretty >> straight forward. >> >> Best wishes, >> >> >> *Juan Pablo Suazo* >> Cloud Developer Trainee >> jsuazo at whitestack.com >> > > > -- > Rafael Weing?rtner > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tobias at caktusgroup.com Mon Nov 21 15:54:48 2022 From: tobias at caktusgroup.com (Tobias McNulty) Date: Mon, 21 Nov 2022 10:54:48 -0500 Subject: Unable to access Internet from an instance and accessing instance using floating-point IPs from external network In-Reply-To: References: Message-ID: Not sure if this helps, but I asked a similar question recently and found I was launching instances in the wrong subnet: https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031230.html There were some other helpful potential fixes suggested in that thread as well. If it's still not working, sharing some information about what (local) network access the instances do have, if any, would be helpful. Also, is cloud-init working? Tobias On Mon, Nov 21, 2022, 9:50 AM vincent lee wrote: > Hi all, > I have an OpenStack deployment using kolla-ansible > with > the Yoga version. Our network setup is provided in the attachment ( > *network.jpg*). The configuration file (*globals.yml)* and inventory ( > *multinode*) are also attached to this email. After the deployment steps, > I try to run the script *init-runonce* to create the images and the > networks by modifying the external network parameters according to our > network setup. I am successful in launching instances from the horizon > dashboard. However, the instances were not able to connect to the Internet. > I have tried attaching the floating point IPs to the instance, where the > IPs are successfully allocated from the range I have specified in the > * init-runonce* script. When I launched the instance and tried to ping > the external network, it failed. It will be very helpful if anyone can > advise me to resolve this issue. I am currently using Ubuntu 20.04.05 LTS > version on the controller and compute nodes. > > Best regards, > Vincent > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Mon Nov 21 16:54:47 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 21 Nov 2022 17:54:47 +0100 Subject: [neutron-dynamic-routing] BGPspeaker LOOP In-Reply-To: References: <1bf20b2fbc22ca185503ff8139113ebfef9f4b0d.camel@redhat.com> Message-ID: Hi Roberto: Please open a launchpad bug documenting this issue in os-ken. Thanks! On Wed, Nov 16, 2022 at 3:41 PM Roberto Bartzen Acosta < roberto.acosta at luizalabs.com> wrote: > Thanks Sean. > > I believe that the os-ken driver is not coded to honor this premise > because the below function can learn new paths from peer update messages. > > https://opendev.org/openstack/os-ken/src/branch/master/os_ken/services/protocols/bgp/peer.py#L1544 > > Em qua., 16 de nov. de 2022 ?s 09:14, Sean Mooney > escreveu: > >> On Wed, 2022-11-16 at 09:05 -0300, Roberto Bartzen Acosta wrote: >> > Sorry for the mistake, I meant, the bgpspeaker should only >> "*advertise*!" >> > and not "learn" the AS_PATHs via BGP. 
>> yes that used to be the scope of that project to advertise only and not >> learn >> so i would geuss either that has change recently and they broke backward >> compaitbly >> or they have refactord it to use an external bgp speaker like frr and it >> learns by default >> >> i dont really see anything here >> https://github.com/openstack/neutron-dynamic-routing/commits/master >> im not really familar with the internals of the project but i dont see >> any code to learn routs form >> >> >> https://github.com/openstack/neutron-dynamic-routing/blob/master/neutron_dynamic_routing/services/bgp/agent/driver/os_ken/driver.py >> >> it just has code for advertizing and withdrawing routes. >> >> >> >> > >> > Em qua., 16 de nov. de 2022 ?s 08:57, Sean Mooney >> > escreveu: >> > >> > > On Wed, 2022-11-16 at 08:43 -0300, Roberto Bartzen Acosta wrote: >> > > > Hey folks, >> > > > >> > > > Please, I have a question here, the bgpspeaker should only "learn" >> and >> > > not >> > > > "advertise" the AS_PATHs via BGP, right? >> > > >> > > its been a while since i looked at it but in the past it did not >> supprot >> > > learning at all >> > > >> > > it just advertised the routes for the neutron netowrks >> > > >> > > > >> > > > In my tests, I can see that it is learning routes from BGP >> neighbors. >> > > This >> > > > behavior can cause an AS_PATH loop because the bgpspeaker learns >> back its >> > > > own advertised routes, and I see a message like this in the logs: >> > > > >> > > > 2022-11-11 19:45:41.967 7220 ERROR bgpspeaker.peer [-] AS_PATH on >> UPDATE >> > > > message has loops. Ignoring this message: >> > > > >> > > >> BGPUpdate(len=91,nlri=[],path_attributes=[BGPPathAttributeMpReachNLRI(afi=2,flags=144,length=46,next_hop='2001:db7:1::1',nlri=[IP6AddrPrefix(addr='2001:db9:1234::',length=64)],safi=1,type=14), >> > > > BGPPathAttributeOrigin(flags=64,length=1,type=1,value=0), >> > > > BGPPathAttributeAsPath(flags=80,length=10,type=2,value=[[65001, >> > > > >> > > >> 65000]])],total_path_attribute_len=68,type=2,withdrawn_routes=[],withdrawn_routes_len=0) >> > > > >> > > > This can be fixed by suppressing the neighbor route advertisement >> (using >> > > > route-map export), but have I misunderstood how >> neutron-dymanic-routing >> > > > works or do we have a possible bug here? >> > > > >> > > > Regards >> > > > >> > > >> > > >> > >> >> > > *?Esta mensagem ? direcionada apenas para os endere?os constantes no > cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no > cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa > mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o > imediatamente anuladas e proibidas?.* > > *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para > assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o > poder? aceitar a responsabilidade por quaisquer perdas ou danos causados > por esse e-mail ou por seus anexos?.* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roberto.acosta at luizalabs.com Mon Nov 21 17:53:53 2022 From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta) Date: Mon, 21 Nov 2022 14:53:53 -0300 Subject: [neutron] metadata IPv6 In-Reply-To: <4406724495d1d753d45d0342e36f9b19e1ddba5c.camel@netways.de> References: <4406724495d1d753d45d0342e36f9b19e1ddba5c.camel@netways.de> Message-ID: Hey folks, Thank you so much for the information. Em seg., 21 de nov. 
de 2022 ?s 10:05, Justin Lamp escreveu: > Hi Roberto, > > thank you for your findings. Those are all great news, especially the > recently merged commit in cloud-init! Do you already have a patchset that > works? Is anyone working on it upstream? > I don't have a patch to fix it at this moment. There is a nebulous question here in the function [4]. With IPv4, neutron-metadata-agent manages the VM address via DHCP and the ovn-southbound knows this address (in the Port_Binding table). In the IPv6-only case, the VM generates an LLA address automatically, and this local scope address is not known by ovn-southbound and neutron. At this point we have a hard time! In the current architecture, the metadata makes a proxy and uses the local address of the VM to find the corresponding port to forward and receive the traffic (Port_Binding table). It may be that the neutron-ovn-metadata-agent logic needs some modification to contemplate this case in which the VM address is not known (because it is dynamic). [4] https://opendev.org/openstack/neutron/src/branch/master/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py#L922 Regards > Best regards, Justin > > Am Freitag, dem 18.11.2022 um 18:15 -0300 schrieb Roberto Bartzen Acosta: > > Hello Rodolfo, > With some hacks in the functions/lines below, I can perform tests with the > neutron-ovn-metadata-agent IPv6-only. > [1] > https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/agent.py#L432 > [2] > https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/driver.py#L59 > [3] > https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/server.py#L101 > [4] > https://opendev.org/openstack/neutron/src/branch/master/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/impl_idl_ovn.py#L922 > > However, I think the LLC address that the VM autoconfigures (needed by > [3]), needs to be learned from the port_Binding table of the OVN southbound > - or something to make this work on neutron-metadata side. 
> > Regards, > Roberto > > ov 18 20:56:21 compute2 neutron-ovn-metadata-agent[206406]: 2022-11-18 > 20:56:21.575 206406 DEBUG eventlet.wsgi.server [-] (206406) accepted '' > server /usr/local/lib/python3.10/dist-packages/eventlet/wsgi.py:1004 > Nov 18 20:56:21 compute2 neutron-ovn-metadata-agent[206406]: 2022-11-18 > 20:56:21.576 206406 DEBUG neutron.agent.ovn.metadata.server [-] Request: > GET / HTTP/1.0 > Accept: */* > Connection: > close > Content-Type: > text/plain > Host: > [fe80::a9fe:a9fe] > User-Agent: > curl/7.68.0 > > X-Forwarded-For: fe80::f816:3eff:fe22:d958 > > X-Ovn-Network-Id: 2af7badf-1958-4fc8-b13a-b2e8379e6531 *call* > /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/server.py:84 > Nov 18 20:56:21 compute2 neutron-ovn-metadata-agent[206406]: 2022-11-18 > 20:56:21.587 206406 DEBUG neutron.agent.ovn.metadata.server [-] [200]> _proxy_request > /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/server.py:164 > Nov 18 20:56:21 compute2 haproxy[206448]: fe80::f816:3eff:fe22:d958:37348 > [18/Nov/2022:20:56:21.574] listener listener/metadata 0/0/0/13/13 200 218 - > - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1" > Nov 18 20:56:21 compute2 neutron-ovn-metadata-agent[206406]: 2022-11-18 > 20:56:21.588 206406 INFO eventlet.wsgi.server [-] fe80::f816:3eff:fe22:d958, > "GET / HTTP/1.1" status: 200 len: 234 time: 0.0112894 > > > root at ubuntu:~# curl [fe80::a9fe:a9fe%ens3] > 1.0 > 2007-01-19 > 2007-03-01 > 2007-08-29 > 2007-10-10 > 2007-12-15 > 2008-02-01 > 2008-09-01 > 2009-04-04 > > Em sex., 18 de nov. de 2022 ?s 15:04, Roberto Bartzen Acosta <[ > roberto.acosta at luizalabs.com](mailto:roberto.acosta at luizalabs.com)> > escreveu: > > Hi Rodolfo,> > Thanks for the feedback, we know it's supported by > default in neutron metadata agent. > > > > My question came because I couldn't make it work with > the neutron-ovn-metadata-agent. Checking some logs I believe that the > problem is because the Port_Binding external_ids should have the > "neutron:cidrs" [1],but this is empty. 
> [1] - [ > https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/agent.py#L432](https://opendev.org/openstack/neutron/src/branch/master/neutron/agent/ovn/metadata/agent.py#L432) > > > I still don't know how to solve this (: > > Regards, > > > > neutron-ovn-metadata-agent logs: > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.996 188802 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched > UPDATE: PortBindingChassisCreatedEvent(events=('update',), > table='Port_Binding', conditions=None, old_conditions=None), priority=20 to > row=Port_Binding(parent_port=[], chassis=[], mac=['fa:16:3e:e8:92:d8 > 2001:db9:1234::35e'], options={'mcast_flood_reports': 'true', > 'requested-chassis': 'compute2'}, ha_chassis_group=[], type=, tag=[], > requested_chassis=[], tunnel_key=3, up=[False], > logical_port=2beb4efd-23c1-4bf6-b57d-6c97a0277124, gateway_chassis=[], > external_ids={'neutron:cidrs': '2001:db9:1234::35e/64', > 'neutron:device_id': 'cfbbc54a-1772-495b-8fe4-864c717e22b4', > 'neutron:device_owner': 'compute:nova', 'neutron:network_name': > 'neutron-2af7badf-1958-4fc8-b13a-b2e8379e6531', 'neutron:port_name': '', > 'neutron:project_id': 'd11daecfe9d847ddb7d9ce2932c2fe26', > 'neutron:revision_number': '2', 'neutron:security_group_ids': > 'cf2e7d53-0db7-4873-82ab-cf67eceda937'}, encap=[], virtual_parent=[], > nat_addresses=[], datapath=02e203c7-714a-417c-bc02-c2877ec758a7) > old=Port_Binding(chassis=[]) matches > /usr/lib/python3/dist-packages/ovsdbapp/backend/ovs_idl/event.py:43 > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.996 188802 INFO neutron.agent.ovn.metadata.agent [-] Port > 2beb4efd-23c1-4bf6-b57d-6c97a0277124 in datapath > 2af7badf-1958-4fc8-b13a-b2e8379e6531 bound to our chassis > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.996 188802 DEBUG neutron.agent.ovn.metadata.agent [-] Provisioning > metadata for network 2af7badf-1958-4fc8-b13a-b2e8379e6531 > provision_datapath > /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/agent.py:434 > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.997 188802 DEBUG neutron.agent.ovn.metadata.agent [-] There is no > metadata port for network 2af7badf-1958-4fc8-b13a-b2e8379e6531 or it has no > MAC or IP addresses configured, tearing the namespace down if needed > provision_datapath > /usr/lib/python3/dist-packages/neutron/agent/ovn/metadata/agent.py:442 > Nov 18 17:38:52 compute2 neutron-ovn-metadata-agent[188802]: 2022-11-18 > 17:38:52.997 188812 DEBUG oslo.privsep.daemon [-] privsep: > reply[c6aff129-2417-45c3-bee1-7b01ff6298f9]: (4, False) _call_back > /usr/local/lib/python3.10/dist-packages/oslo_privsep/daemon.py:501 > > > > > > > > Em sex., 18 de nov. de 2022 ?s 12:25, Rodolfo Alonso Hernandez <[ > ralonsoh at redhat.com](mailto:ralonsoh at redhat.com)> escreveu: > > > > > > Hi Roberto: > > > > > > > The documentation you are referring to must be updated. The > LP#1460177 RFE implemented this feature. Actually there is a test class > that is testing this functionality in the CI [1][2]. > > > > > > > Regards. 
> > > > > [1][ > https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/750355/](https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/750355/) > > > > > > [2][ > https://github.com/openstack/neutron-tempest-plugin/blob/f10618eac3a12d35a35044443b63d144b71e0c6b/neutron_tempest_plugin/scenario/test_metadata.py#L36-L44](https://github.com/openstack/neutron-tempest-plugin/blob/f10618eac3a12d35a35044443b63d144b71e0c6b/neutron_tempest_plugin/scenario/test_metadata.py#L36-L44) > > > > > > > On Fri, Nov 18, 2022 at 2:45 PM Roberto Bartzen Acosta <[ > roberto.acosta at luizalabs.com](mailto:roberto.acosta at luizalabs.com)> wrote: > > > > > > Hey folks, > > > > > > > Can you confirm if the metadata should work in an ipv6-only > environment? > > As I understand from this discussion on [LP:1460177]( > https://bugs.launchpad.net/neutron/+bug/1460177) and the fork of the > discussion in many opendev reviews [#315604]( > https://review.opendev.org/c/openstack/neutron-specs/+/315604), [#738205]( > https://review.opendev.org/c/openstack/neutron-lib/+/738205) [#745705]( > https://review.opendev.org/c/openstack/neutron/+/745705), ..., it seems > like it should work. > > > > > > > However, this comment in the openstack doc [1] has me > questioning if it really works. > > > > **"There are no provisions for an IPv6-based metadata service > similar to what is provided for IPv4. In the case of dual-stacked guests > though it is always possible to use the IPv4 metadata service instead. > IPv6-only guests will have to use another method for metadata injection > such as using a configuration drive, which is described in the Nova > documentation on [config-drive]( > https://docs.openstack.org/nova/latest/user/config-drive.html)."** > > > > > > > > Is anyone using metadata in an ipv6-only Openstack setup? > > > > > > > Regards, > > > > Roberto > > > > > > > > > > > > > [1] [ > https://docs.openstack.org/neutron/latest/admin/config-ipv6.html#configuring-interfaces-of-the-guest](https://docs.openstack.org/neutron/latest/admin/config-ipv6.html#configuring-interfaces-of-the-guest) > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > *?Esta mensagem ? > direcionada apenas para os endere?os constantes no cabe?alho inicial. Se > voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe > que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, > encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas > e proibidas?.* > > > > > > > > > > * **?Apesar do Magazine Luiza tomar todas as precau??es > razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a > empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos > causados por esse e-mail ou por seus anexos?.* > > > > > > > > > > > > *?Esta mensagem ? direcionada apenas para os endere?os constantes no > cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no > cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa > mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o > imediatamente anuladas e proibidas?.* > * **?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para > assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o > poder? aceitar a responsabilidade por quaisquer perdas ou danos causados > por esse e-mail ou por seus anexos?.* > > > > > > -- > Justin Lamp > Systems Engineer > > NETWAYS Managed Services GmbH | Deutschherrnstr. 
15-19 | D-90429 Nuernberg > Tel: +49 911 92885-0 | Fax: +49 911 92885-77 > CEO: Julian Hein, Bernd Erk | AG Nuernberg HRB25207 > https://www.netways.de | justin.lamp at netways.de > > ** stackconf 2023 - September - https://stackconf.eu ** > ** OSMC 2023 - November - https://osmc.de ** > ** New at NWS: Managed Database - https://nws.netways.de/managed-database > ** > ** NETWAYS Web Services - https://nws.netways.de ** > -- _?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?._ *?**?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Nov 21 19:16:40 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 21 Nov 2022 11:16:40 -0800 Subject: [all][openstack-dev][ptls] Migrating devstack jobs to Jammy (Ubuntu LTS 22.04) In-Reply-To: <1847c3f9af5.ba43e35e213379.1687695354888052685@ghanshyammann.com> References: <1847c3f9af5.ba43e35e213379.1687695354888052685@ghanshyammann.com> Message-ID: <1849b9f0427.11e4cbbae113176.4863879298841147523@ghanshyammann.com> ---- On Tue, 15 Nov 2022 09:04:14 -0800 Ghanshyam Mann wrote --- > ---- On Thu, 13 Oct 2022 12:52:03 -0700 Dmitriy Rabotyagov wrote --- > > Hi everyone, > > > > According to a 2023.1 community-wide goal [1], base-jobs including but > [....] > . On R-18, which is the first 2023.1 milestone that will happen on > > the 18th of November 2022, base-jobs patches mentioned in step 1 will > > be merged. Please ensure you have verified compatibility for your > > projects and landed the required changes if any were needed before > > this date otherwise, they might fail. > > Hello Everyone, > > The deadline for switching the CI/CD to Ubuntu Jammy (22.04) is approaching which > is after 3 days (Nov 18). We will merge the OpenStack tox base, devstack, and tempest > base jobs patches migrating them to Jammy on Nov 18(these will migrate most of the > jobs to run on Jammy). > > Currently, there are two known failures, feel free to add more failures if you know and > have not yet been fixed in the below etherpad > > https://etherpad.opendev.org/p/migrate-to-jammy > > 1. swift: https://bugs.launchpad.net/swift/+bug/1996627 > 2. devstack-plugin-ceph: https://bugs.launchpad.net/devstack-plugin-ceph/+bug/1996628 > > If projects need more time to fix the bugs then they can pin the nodeset to the focal for > time being and fix them asap. All the base patches (tox base job, devstack, tempest) are merged now and you might have observed most of the jobs are running on Jammy (unless you are overriding the nodeset). As the next step, let's fix and remove the nodeset pin (or move the nodeset overriding to Jammy) for the failing jobs. -gmann > > -gmann > > > > > Please, do not hesitate to raise any questions or concerns. 
> > > > > > [1] https://governance.openstack.org/tc/goals/selected/migrate-ci-jobs-to-ubuntu-jammy.html > > [2] https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/861116 > > https://review.opendev.org/c/openstack/tempest/+/861110 > > https://review.opendev.org/c/openstack/devstack/+/860795 > > [3] https://review.opendev.org/c/openstack/nova/+/861111 > > [4] https://etherpad.opendev.org/p/migrate-to-jammy > > > > > > From roberto.acosta at luizalabs.com Mon Nov 21 19:49:35 2022 From: roberto.acosta at luizalabs.com (Roberto Bartzen Acosta) Date: Mon, 21 Nov 2022 16:49:35 -0300 Subject: [neutron-dynamic-routing] BGPspeaker LOOP In-Reply-To: References: <1bf20b2fbc22ca185503ff8139113ebfef9f4b0d.camel@redhat.com> Message-ID: Hi Rodolfo, Are you sure it would be on os-ken? I believe that the os-ken is a multi-purpose driver for BGP (bgpspeaker uses for advertising and withdrawal routes). What about other projects that use os-ken and need to learn routes? Shouldn't bgpspeaker be responsible for programming os-ken to not learn routes? Regards Em seg., 21 de nov. de 2022 ?s 13:55, Rodolfo Alonso Hernandez < ralonsoh at redhat.com> escreveu: > Hi Roberto: > > Please open a launchpad bug documenting this issue in os-ken. > > Thanks! > > On Wed, Nov 16, 2022 at 3:41 PM Roberto Bartzen Acosta < > roberto.acosta at luizalabs.com> wrote: > >> Thanks Sean. >> >> I believe that the os-ken driver is not coded to honor this premise >> because the below function can learn new paths from peer update messages. >> >> https://opendev.org/openstack/os-ken/src/branch/master/os_ken/services/protocols/bgp/peer.py#L1544 >> >> Em qua., 16 de nov. de 2022 ?s 09:14, Sean Mooney >> escreveu: >> >>> On Wed, 2022-11-16 at 09:05 -0300, Roberto Bartzen Acosta wrote: >>> > Sorry for the mistake, I meant, the bgpspeaker should only >>> "*advertise*!" >>> > and not "learn" the AS_PATHs via BGP. >>> yes that used to be the scope of that project to advertise only and not >>> learn >>> so i would geuss either that has change recently and they broke backward >>> compaitbly >>> or they have refactord it to use an external bgp speaker like frr and it >>> learns by default >>> >>> i dont really see anything here >>> https://github.com/openstack/neutron-dynamic-routing/commits/master >>> im not really familar with the internals of the project but i dont see >>> any code to learn routs form >>> >>> >>> https://github.com/openstack/neutron-dynamic-routing/blob/master/neutron_dynamic_routing/services/bgp/agent/driver/os_ken/driver.py >>> >>> it just has code for advertizing and withdrawing routes. >>> >>> >>> >>> > >>> > Em qua., 16 de nov. de 2022 ?s 08:57, Sean Mooney >>> > escreveu: >>> > >>> > > On Wed, 2022-11-16 at 08:43 -0300, Roberto Bartzen Acosta wrote: >>> > > > Hey folks, >>> > > > >>> > > > Please, I have a question here, the bgpspeaker should only "learn" >>> and >>> > > not >>> > > > "advertise" the AS_PATHs via BGP, right? >>> > > >>> > > its been a while since i looked at it but in the past it did not >>> supprot >>> > > learning at all >>> > > >>> > > it just advertised the routes for the neutron netowrks >>> > > >>> > > > >>> > > > In my tests, I can see that it is learning routes from BGP >>> neighbors. >>> > > This >>> > > > behavior can cause an AS_PATH loop because the bgpspeaker learns >>> back its >>> > > > own advertised routes, and I see a message like this in the logs: >>> > > > >>> > > > 2022-11-11 19:45:41.967 7220 ERROR bgpspeaker.peer [-] AS_PATH on >>> UPDATE >>> > > > message has loops. 
Ignoring this message: >>> > > > >>> > > >>> BGPUpdate(len=91,nlri=[],path_attributes=[BGPPathAttributeMpReachNLRI(afi=2,flags=144,length=46,next_hop='2001:db7:1::1',nlri=[IP6AddrPrefix(addr='2001:db9:1234::',length=64)],safi=1,type=14), >>> > > > BGPPathAttributeOrigin(flags=64,length=1,type=1,value=0), >>> > > > BGPPathAttributeAsPath(flags=80,length=10,type=2,value=[[65001, >>> > > > >>> > > >>> 65000]])],total_path_attribute_len=68,type=2,withdrawn_routes=[],withdrawn_routes_len=0) >>> > > > >>> > > > This can be fixed by suppressing the neighbor route advertisement >>> (using >>> > > > route-map export), but have I misunderstood how >>> neutron-dymanic-routing >>> > > > works or do we have a possible bug here? >>> > > > >>> > > > Regards >>> > > > >>> > > >>> > > >>> > >>> >>> >> >> *?Esta mensagem ? direcionada apenas para os endere?os constantes no >> cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no >> cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa >> mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o >> imediatamente anuladas e proibidas?.* >> >> *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para >> assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o >> poder? aceitar a responsabilidade por quaisquer perdas ou danos causados >> por esse e-mail ou por seus anexos?.* >> > -- _?Esta mensagem ? direcionada apenas para os endere?os constantes no cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o imediatamente anuladas e proibidas?._ *?**?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o poder? aceitar a responsabilidade por quaisquer perdas ou danos causados por esse e-mail ou por seus anexos?.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias at caktusgroup.com Tue Nov 22 03:47:59 2022 From: tobias at caktusgroup.com (Tobias McNulty) Date: Mon, 21 Nov 2022 22:47:59 -0500 Subject: Unable to access Internet from an instance and accessing instance using floating-point IPs from external network In-Reply-To: References: Message-ID: On Mon, Nov 21, 2022 at 7:39 PM vincent lee wrote: > After reviewing the post you shared, I believe that we have the correct > subnet. Besides, we did not modify anything related to the cloud-init for > openstack. > I didn't either. But I found it's a good test of the network! If you are using an image that doesn't rely on it you might not notice (but I would not recommend that). > After launching the instances, we are able to ping between the instances > of the same subnet. However, we are not able to receive any internet > connection within those instances. From the instance, we are able to ping > the router IP addresses 10.42.0.56 and 10.0.0.1. > To make sure I understand: - 10.42.0.56 is the IP of the router external to OpenStack that provides internet access - This router is tested and working for devices outside of OpenStack - OpenStack compute instances can ping this router - OpenStack compute instances cannot reach the internet If that is correct, it does not sound like an OpenStack issue necessarily, but perhaps a missing default route on your compute instances. 
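(As an aside, and purely as an illustration rather than anything OpenStack-specific: on a Linux guest an IPv4 default route shows up as a destination of 00000000 in /proc/net/route, so a quick scripted check from inside the instance can look like the minimal sketch below; the file format is standard Linux, everything else here is just an example.)

def has_default_route(path="/proc/net/route"):
    # /proc/net/route lists IPv4 routes with hex destination fields;
    # "00000000" is 0.0.0.0/0, i.e. the default route.
    with open(path) as routes:
        next(routes)  # skip the header line
        for line in routes:
            fields = line.split()
            if len(fields) > 1 and fields[1] == "00000000":
                return True
    return False

if __name__ == "__main__":
    print("default route present:", has_default_route())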
I would check that DHCP is enabled on the internal subnet and that it's providing everything necessary for an internet connection to the instances. Tobias -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Tue Nov 22 05:17:50 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Tue, 22 Nov 2022 10:47:50 +0530 Subject: Unable to access Internet from an instance and accessing instance using floating-point IPs from external network In-Reply-To: References: Message-ID: it should be missing a default route most of the time. or check IP tables on router namespace the DNAT and SNAT are working properly On Tue, Nov 22, 2022 at 9:40 AM Tobias McNulty wrote: > On Mon, Nov 21, 2022 at 7:39 PM vincent lee > wrote: > >> After reviewing the post you shared, I believe that we have the correct >> subnet. Besides, we did not modify anything related to the cloud-init for >> openstack. >> > > I didn't either. But I found it's a good test of the network! If you are > using an image that doesn't rely on it you might not notice (but I > would not recommend that). > > >> After launching the instances, we are able to ping between the instances >> of the same subnet. However, we are not able to receive any internet >> connection within those instances. From the instance, we are able to ping >> the router IP addresses 10.42.0.56 and 10.0.0.1. >> > > To make sure I understand: > - 10.42.0.56 is the IP of the router external to OpenStack that provides > internet access > - This router is tested and working for devices outside of OpenStack > - OpenStack compute instances can ping this router > - OpenStack compute instances cannot reach the internet > > If that is correct, it does not sound like an OpenStack issue necessarily, > but perhaps a missing default route on your compute instances. I would > check that DHCP is enabled on the internal subnet and that it's providing > everything necessary for an internet connection to the instances. > > Tobias > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Nov 22 09:49:59 2022 From: eblock at nde.ag (Eugen Block) Date: Tue, 22 Nov 2022 09:49:59 +0000 Subject: Unable to access Internet from an instance and accessing instance using floating-point IPs from external network In-Reply-To: References: Message-ID: <20221122094959.Horde._DNW37_4CRcsBFAHQUP7ZG_@webmail.nde.ag> Just one more thing to check, did you edit the security-group rules to allow access to the outside world? Zitat von Adivya Singh : > it should be missing a default route most of the time. > or check IP tables on router namespace the DNAT and SNAT are working > properly > > > > On Tue, Nov 22, 2022 at 9:40 AM Tobias McNulty > wrote: > >> On Mon, Nov 21, 2022 at 7:39 PM vincent lee >> wrote: >> >>> After reviewing the post you shared, I believe that we have the correct >>> subnet. Besides, we did not modify anything related to the cloud-init for >>> openstack. >>> >> >> I didn't either. But I found it's a good test of the network! If you are >> using an image that doesn't rely on it you might not notice (but I >> would not recommend that). >> >> >>> After launching the instances, we are able to ping between the instances >>> of the same subnet. However, we are not able to receive any internet >>> connection within those instances. From the instance, we are able to ping >>> the router IP addresses 10.42.0.56 and 10.0.0.1. 
>>> >> >> To make sure I understand: >> - 10.42.0.56 is the IP of the router external to OpenStack that provides >> internet access >> - This router is tested and working for devices outside of OpenStack >> - OpenStack compute instances can ping this router >> - OpenStack compute instances cannot reach the internet >> >> If that is correct, it does not sound like an OpenStack issue necessarily, >> but perhaps a missing default route on your compute instances. I would >> check that DHCP is enabled on the internal subnet and that it's providing >> everything necessary for an internet connection to the instances. >> >> Tobias >> >> >> From xek at redhat.com Tue Nov 22 10:21:32 2022 From: xek at redhat.com (Grzegorz Grasza) Date: Tue, 22 Nov 2022 11:21:32 +0100 Subject: [barbican][release] Transition Queens, Rocky & Stein to EOL Message-ID: Hi all, The barbican team no longer plans to maintain branches older than Train. The patch to mark these branches as EOL is in review [1], but according to the steps to achieve this [2], we need first to announce it. Please send your feedback by leaving a comment on the patch. I will unblock it next week. / Greg [1] https://review.opendev.org/c/openstack/releases/+/862515 [2] https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Tue Nov 22 10:35:41 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 22 Nov 2022 11:35:41 +0100 Subject: [neutron-dynamic-routing] BGPspeaker LOOP In-Reply-To: References: <1bf20b2fbc22ca185503ff8139113ebfef9f4b0d.camel@redhat.com> Message-ID: Hello Roberto: I didn't investigate the os-ken code in depth but I don't see how to configure it not to learn these advertised routes. This is why I was asking this. If you know how to do this (and that implies not modifying the os-ken code), perfect. Let me know and I'll help you (if needed) to change the n-d-r code. If you need it, ping me in IRC (ralonsoh, #openstack-neutron channel) Regards. On Mon, Nov 21, 2022 at 8:49 PM Roberto Bartzen Acosta < roberto.acosta at luizalabs.com> wrote: > Hi Rodolfo, > Are you sure it would be on os-ken? I believe that the os-ken is a > multi-purpose driver for BGP (bgpspeaker uses for advertising and > withdrawal routes). What about other projects that use os-ken and need to > learn routes? > Shouldn't bgpspeaker be responsible for programming os-ken to not learn > routes? > > Regards > > Em seg., 21 de nov. de 2022 ?s 13:55, Rodolfo Alonso Hernandez < > ralonsoh at redhat.com> escreveu: > >> Hi Roberto: >> >> Please open a launchpad bug documenting this issue in os-ken. >> >> Thanks! >> >> On Wed, Nov 16, 2022 at 3:41 PM Roberto Bartzen Acosta < >> roberto.acosta at luizalabs.com> wrote: >> >>> Thanks Sean. >>> >>> I believe that the os-ken driver is not coded to honor this premise >>> because the below function can learn new paths from peer update messages. >>> >>> https://opendev.org/openstack/os-ken/src/branch/master/os_ken/services/protocols/bgp/peer.py#L1544 >>> >>> Em qua., 16 de nov. de 2022 ?s 09:14, Sean Mooney >>> escreveu: >>> >>>> On Wed, 2022-11-16 at 09:05 -0300, Roberto Bartzen Acosta wrote: >>>> > Sorry for the mistake, I meant, the bgpspeaker should only >>>> "*advertise*!" >>>> > and not "learn" the AS_PATHs via BGP. 
>>>> yes that used to be the scope of that project to advertise only and not >>>> learn >>>> so i would geuss either that has change recently and they broke >>>> backward compaitbly >>>> or they have refactord it to use an external bgp speaker like frr and >>>> it learns by default >>>> >>>> i dont really see anything here >>>> https://github.com/openstack/neutron-dynamic-routing/commits/master >>>> im not really familar with the internals of the project but i dont see >>>> any code to learn routs form >>>> >>>> >>>> https://github.com/openstack/neutron-dynamic-routing/blob/master/neutron_dynamic_routing/services/bgp/agent/driver/os_ken/driver.py >>>> >>>> it just has code for advertizing and withdrawing routes. >>>> >>>> >>>> >>>> > >>>> > Em qua., 16 de nov. de 2022 ?s 08:57, Sean Mooney >>> > >>>> > escreveu: >>>> > >>>> > > On Wed, 2022-11-16 at 08:43 -0300, Roberto Bartzen Acosta wrote: >>>> > > > Hey folks, >>>> > > > >>>> > > > Please, I have a question here, the bgpspeaker should only >>>> "learn" and >>>> > > not >>>> > > > "advertise" the AS_PATHs via BGP, right? >>>> > > >>>> > > its been a while since i looked at it but in the past it did not >>>> supprot >>>> > > learning at all >>>> > > >>>> > > it just advertised the routes for the neutron netowrks >>>> > > >>>> > > > >>>> > > > In my tests, I can see that it is learning routes from BGP >>>> neighbors. >>>> > > This >>>> > > > behavior can cause an AS_PATH loop because the bgpspeaker learns >>>> back its >>>> > > > own advertised routes, and I see a message like this in the logs: >>>> > > > >>>> > > > 2022-11-11 19:45:41.967 7220 ERROR bgpspeaker.peer [-] AS_PATH on >>>> UPDATE >>>> > > > message has loops. Ignoring this message: >>>> > > > >>>> > > >>>> BGPUpdate(len=91,nlri=[],path_attributes=[BGPPathAttributeMpReachNLRI(afi=2,flags=144,length=46,next_hop='2001:db7:1::1',nlri=[IP6AddrPrefix(addr='2001:db9:1234::',length=64)],safi=1,type=14), >>>> > > > BGPPathAttributeOrigin(flags=64,length=1,type=1,value=0), >>>> > > > BGPPathAttributeAsPath(flags=80,length=10,type=2,value=[[65001, >>>> > > > >>>> > > >>>> 65000]])],total_path_attribute_len=68,type=2,withdrawn_routes=[],withdrawn_routes_len=0) >>>> > > > >>>> > > > This can be fixed by suppressing the neighbor route advertisement >>>> (using >>>> > > > route-map export), but have I misunderstood how >>>> neutron-dymanic-routing >>>> > > > works or do we have a possible bug here? >>>> > > > >>>> > > > Regards >>>> > > > >>>> > > >>>> > > >>>> > >>>> >>>> >>> >>> *?Esta mensagem ? direcionada apenas para os endere?os constantes no >>> cabe?alho inicial. Se voc? n?o est? listado nos endere?os constantes no >>> cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa >>> mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o >>> imediatamente anuladas e proibidas?.* >>> >>> *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para >>> assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o >>> poder? aceitar a responsabilidade por quaisquer perdas ou danos causados >>> por esse e-mail ou por seus anexos?.* >>> >> > > *?Esta mensagem ? direcionada apenas para os endere?os constantes no > cabe?alho inicial. Se voc? n?o est? 
listado nos endere?os constantes no > cabe?alho, pedimos-lhe que desconsidere completamente o conte?do dessa > mensagem e cuja c?pia, encaminhamento e/ou execu??o das a??es citadas est?o > imediatamente anuladas e proibidas?.* > > *?Apesar do Magazine Luiza tomar todas as precau??es razo?veis para > assegurar que nenhum v?rus esteja presente nesse e-mail, a empresa n?o > poder? aceitar a responsabilidade por quaisquer perdas ou danos causados > por esse e-mail ou por seus anexos?.* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lennart.vangijtenbeek at routz.nl Tue Nov 22 10:55:23 2022 From: lennart.vangijtenbeek at routz.nl (Lennart van Gijtenbeek | Routz) Date: Tue, 22 Nov 2022 10:55:23 +0000 Subject: Strange situation regarding Availability Zones Message-ID: <192bad0002ab40fea243dd1a432c08f0@routz.nl> Hello everyone, We have a strange situation regarding Availability Zones in our Openstack environment (version: Queens). We have 3 AZs: room1, room2, and room3. There are duplicate entries in the output of this command (openstack-pythonclient version 6.0.0): ? openstack availability zone list +-----------+-------------+ | Zone Name | Zone Status | +-----------+-------------+ | internal | available | | room2 | available | | room1 | available | | room3 | available | | nova | available | | room2 | available | | room3 | available | | room1 | available | | room2 | available | | room3 | available | +-----------+-------------+ However, in our database, I cannot find any reference to these duplicate entries. What's going on here? And how should we go about removing the duplicate entries if they don't show in the database? MariaDB [nova_api]> select * from aggregate_metadata; +---------------------+------------+----+--------------+-------------------+-----------------+ | created_at | updated_at | id | aggregate_id | key | value | +---------------------+------------+----+--------------+-------------------+-----------------+ | 2019-08-07 06:11:47 | NULL | 1 | 1 | hypervisor | standard | | 2019-08-07 06:22:39 | NULL | 10 | 10 | availability_zone | room1 | | 2019-08-07 06:22:43 | NULL | 13 | 13 | availability_zone | room2 | | 2019-08-15 11:31:39 | NULL | 16 | 16 | availability_zone | room3 | | 2019-09-11 07:50:22 | NULL | 17 | 17 | hypervisor | test | | 2021-08-06 08:43:28 | NULL | 20 | 20 | hypervisor | ephemeral_local | +---------------------+------------+----+--------------+-------------------+-----------------+ 6 rows in set (0.00 sec) MariaDB [nova_api]> select * from aggregates; +---------------------+------------+----+--------------------------------------+------------------+ | created_at | updated_at | id | uuid | name | +---------------------+------------+----+--------------------------------------+------------------+ | 2019-08-07 06:11:34 | NULL | 1 | 5f6baa56-252d-4f06-998f-5d166957490d | compute_standard | | 2019-08-07 06:22:39 | NULL | 10 | f7805dfb-00d2-458b-bb30-24919b8c9cd2 | room1 | | 2019-08-07 06:22:43 | NULL | 13 | b03f7028-f303-47a2-b6ca-128065f23030 | room2 | | 2019-08-15 11:31:39 | NULL | 16 | 1aff7a43-695a-4a10-9b24-116af73d18c1 | room3 | | 2019-09-11 07:50:02 | NULL | 17 | 952fa0aa-2ef6-443e-a906-76021acced0a | compute_test | | 2021-08-06 08:42:40 | NULL | 20 | bab386c4-f1a3-4482-b0ea-74c74fb172bd | ephemeral_local | +---------------------+------------+----+--------------------------------------+------------------+ 6 rows in set (0.00 sec) Thank you. 
Best regards, Lennart -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Tue Nov 22 11:45:54 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 22 Nov 2022 11:45:54 +0000 Subject: Strange situation regarding Availability Zones In-Reply-To: <192bad0002ab40fea243dd1a432c08f0@routz.nl> References: <192bad0002ab40fea243dd1a432c08f0@routz.nl> Message-ID: <23f55f5e05dbbeb3616f6811604d0442f679940d.camel@redhat.com> On Tue, 2022-11-22 at 10:55 +0000, Lennart van Gijtenbeek | Routz wrote: > Hello everyone, > > We have a strange situation regarding Availability Zones?in our Openstack > environment (version:?Queens). > > We have 3 AZs: room1, room2, and room3. > > There are?duplicate entries in the output of this command?(openstack- > pythonclient version 6.0.0): > > ? openstack availability zone list > +-----------+-------------+ > | Zone Name | Zone Status | > +-----------+-------------+ > | internal | available | > | room2 | available | > | room1 | available | > | room3 | available | > | nova | available | > | room2 | available | > | room3 | available | > | room1 | available | > | room2 | available | > | room3 | available | > +-----------+-------------+ > Please run this command with debug mode enabled (--debug). That should give you far more information regarding what's going on here. Stephen > However, in our database, I cannot find any reference to these duplicate > entries. > > What's going on here? > > And how should we go about removing the duplicate entries if?they don't show > in the database? > > MariaDB [nova_api]> select * from aggregate_metadata; > +---------------------+------------+----+--------------+-------------------+-- > ---------------+ > | created_at | updated_at | id | aggregate_id | key | value | > +---------------------+------------+----+--------------+-------------------+-- > ---------------+ > | 2019-08-07 06:11:47 | NULL | 1 | 1 | hypervisor | standard | > | 2019-08-07 06:22:39 | NULL | 10 | 10 | availability_zone | room1 | > | 2019-08-07 06:22:43 | NULL | 13 | 13 | availability_zone | room2 | > | 2019-08-15 11:31:39 | NULL | 16 | 16 | availability_zone | room3 | > | 2019-09-11 07:50:22 | NULL | 17 | 17 | hypervisor | test | > | 2021-08-06 08:43:28 | NULL | 20 | 20 | hypervisor | ephemeral_local | > +---------------------+------------+----+--------------+-------------------+-- > ---------------+ > 6 rows in set (0.00 sec) > > MariaDB [nova_api]> select * from aggregates; > +---------------------+------------+----+------------------------------------- > -+------------------+ > | created_at | updated_at | id | uuid | name | > +---------------------+------------+----+------------------------------------- > -+------------------+ > | 2019-08-07 06:11:34 | NULL | 1 | 5f6baa56-252d-4f06-998f-5d166957490d | > compute_standard | > | 2019-08-07 06:22:39 | NULL | 10 | f7805dfb-00d2-458b-bb30-24919b8c9cd2 | > room1 | > | 2019-08-07 06:22:43 | NULL | 13 | b03f7028-f303-47a2-b6ca-128065f23030 | > room2 | > | 2019-08-15 11:31:39 | NULL | 16 | 1aff7a43-695a-4a10-9b24-116af73d18c1 | > room3 | > | 2019-09-11 07:50:02 | NULL | 17 | 952fa0aa-2ef6-443e-a906-76021acced0a | > compute_test | > | 2021-08-06 08:42:40 | NULL | 20 | bab386c4-f1a3-4482-b0ea-74c74fb172bd | > ephemeral_local | > +---------------------+------------+----+------------------------------------- > -+------------------+ > 6 rows in set (0.00 sec) > > > Thank you. 
> > Best regards, > Lennart -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Nov 22 12:11:39 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 22 Nov 2022 13:11:39 +0100 Subject: [neutron] CI meeting 22.11 cancelled Message-ID: <1921217.i3G9dK0l4Y@p1> Hi, I don't feel well today and I will not be able to chair neutron CI meeting. See You on the meeting next week. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From lennart.vangijtenbeek at routz.nl Tue Nov 22 12:32:12 2022 From: lennart.vangijtenbeek at routz.nl (Lennart van Gijtenbeek | Routz) Date: Tue, 22 Nov 2022 12:32:12 +0000 Subject: Strange situation regarding Availability Zones In-Reply-To: <23f55f5e05dbbeb3616f6811604d0442f679940d.camel@redhat.com> References: <192bad0002ab40fea243dd1a432c08f0@routz.nl>, <23f55f5e05dbbeb3616f6811604d0442f679940d.camel@redhat.com> Message-ID: Thanks for the tip. I will investigate further. It seems that the AZs Zone Resource are of type 'network' and 'router'. I was not aware of that distinction. ? openstack availability zone list --network --long +-----------+-------------+---------------+-----------+--------------+----------------+ | Zone Name | Zone Status | Zone Resource | Host Name | Service Name | Service Status | +-----------+-------------+---------------+-----------+--------------+----------------+ | room2 | available | network | | | | | room3 | available | router | | | | | room1 | available | network | | | | | room2 | available | router | | | | | room3 | available | network | | | | +-----------+-------------+---------------+-----------+--------------+----------------+ ________________________________ From: Stephen Finucane Sent: Tuesday, November 22, 2022 12:45 PM To: Lennart van Gijtenbeek | Routz; openstack-discuss Subject: Re: Strange situation regarding Availability Zones CAUTION: This email originated from outside the organization. On Tue, 2022-11-22 at 10:55 +0000, Lennart van Gijtenbeek | Routz wrote: Hello everyone, We have a strange situation regarding Availability Zones in our Openstack environment (version: Queens). We have 3 AZs: room1, room2, and room3. There are duplicate entries in the output of this command (openstack-pythonclient version 6.0.0): ? openstack availability zone list +-----------+-------------+ | Zone Name | Zone Status | +-----------+-------------+ | internal | available | | room2 | available | | room1 | available | | room3 | available | | nova | available | | room2 | available | | room3 | available | | room1 | available | | room2 | available | | room3 | available | +-----------+-------------+ Please run this command with debug mode enabled (--debug). That should give you far more information regarding what's going on here. Stephen However, in our database, I cannot find any reference to these duplicate entries. What's going on here? And how should we go about removing the duplicate entries if they don't show in the database? 
MariaDB [nova_api]> select * from aggregate_metadata;
+---------------------+------------+----+--------------+-------------------+-----------------+
| created_at | updated_at | id | aggregate_id | key | value |
+---------------------+------------+----+--------------+-------------------+-----------------+
| 2019-08-07 06:11:47 | NULL | 1 | 1 | hypervisor | standard |
| 2019-08-07 06:22:39 | NULL | 10 | 10 | availability_zone | room1 |
| 2019-08-07 06:22:43 | NULL | 13 | 13 | availability_zone | room2 |
| 2019-08-15 11:31:39 | NULL | 16 | 16 | availability_zone | room3 |
| 2019-09-11 07:50:22 | NULL | 17 | 17 | hypervisor | test |
| 2021-08-06 08:43:28 | NULL | 20 | 20 | hypervisor | ephemeral_local |
+---------------------+------------+----+--------------+-------------------+-----------------+
6 rows in set (0.00 sec)

MariaDB [nova_api]> select * from aggregates;
+---------------------+------------+----+--------------------------------------+------------------+
| created_at | updated_at | id | uuid | name |
+---------------------+------------+----+--------------------------------------+------------------+
| 2019-08-07 06:11:34 | NULL | 1 | 5f6baa56-252d-4f06-998f-5d166957490d | compute_standard |
| 2019-08-07 06:22:39 | NULL | 10 | f7805dfb-00d2-458b-bb30-24919b8c9cd2 | room1 |
| 2019-08-07 06:22:43 | NULL | 13 | b03f7028-f303-47a2-b6ca-128065f23030 | room2 |
| 2019-08-15 11:31:39 | NULL | 16 | 1aff7a43-695a-4a10-9b24-116af73d18c1 | room3 |
| 2019-09-11 07:50:02 | NULL | 17 | 952fa0aa-2ef6-443e-a906-76021acced0a | compute_test |
| 2021-08-06 08:42:40 | NULL | 20 | bab386c4-f1a3-4482-b0ea-74c74fb172bd | ephemeral_local |
+---------------------+------------+----+--------------------------------------+------------------+
6 rows in set (0.00 sec)

Thank you.

Best regards,
Lennart
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com Tue Nov 22 12:59:10 2022
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 22 Nov 2022 12:59:10 +0000
Subject: Strange situation regarding Availability Zones
In-Reply-To:
References: <192bad0002ab40fea243dd1a432c08f0@routz.nl> , <23f55f5e05dbbeb3616f6811604d0442f679940d.camel@redhat.com>
Message-ID: <321198511b72e583841719c225ff1c8b6f157da7.camel@redhat.com>

On Tue, 2022-11-22 at 12:32 +0000, Lennart van Gijtenbeek | Routz wrote:
> Thanks for the tip.
> I will investigate further.
>
> It seems that the AZs Zone Resource are of type 'network' and 'router'.
> I was not aware of that distinction.
>
> ? openstack availability zone list --network --long
> +-----------+-------------+---------------+-----------+--------------+----------------+
> | Zone Name | Zone Status | Zone Resource | Host Name | Service Name | Service Status |
> +-----------+-------------+---------------+-----------+--------------+----------------+
> | room2 | available | network | | | |
> | room3 | available | router | | | |
> | room1 | available | network | | | |
> | room2 | available | router | | | |
> | room3 | available | network | | | |
> +-----------+-------------+---------------+-----------+--------------+----------------+

That sounds like two different types of AZ have been put under the same command. "openstack availability zone list" is meant to return just the nova availability zones; other services like cinder can declare that their services are aligned to the nova AZs, but there is no such thing as a zone resource or zone availability from a nova perspective.
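For illustration, listing each service's zones separately makes the overlap obvious. A minimal openstacksdk sketch (assuming openstacksdk is installed and that "mycloud" is a placeholder clouds.yaml entry for this deployment):

import openstack

conn = openstack.connect(cloud="mycloud")  # "mycloud" is just a placeholder name

print("compute availability zones:")
for az in conn.compute.availability_zones():
    print("  %s" % az.name)

print("network availability zones:")
for az in conn.network.availability_zones():
    # neutron reports a zone once per resource type (network / router)
    print("  %s (resource=%s, state=%s)" % (az.name, az.resource, az.state))

Listed that way, the repeated room1/room2/room3 entries in the combined "openstack availability zone list" output line up with the neutron network and router zones rather than with anything in the nova aggregates.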
So it sounds like osc has incorrectly started including other concepts in the output of the "openstack availability zone list" command. It was added a long time ago, however:
https://github.com/openstack/python-openstackclient/commit/4d332defbc4231f77b7459d4abda88a36a65d37d
so it's probably too late to undo that now. But the concept of an availability zone in cinder and neutron is just a reference to the nova AZs; you should not have independently listable network or volume AZs.

It's true that neutron has an AZ API now:
https://docs.openstack.org/api-ref/network/v2/?expanded=#list-all-availability-zones
I hope that is admin only, because it should not be leaked to normal users. Cinder does not expose AZ as a top-level API, although I think nova can get the info from the backend as part of the attachment/connection info.

My guess was that there had been a regression somewhere and osc started including the network info by default, however that seems to have always been the case:
https://github.com/openstack/python-openstackclient/blob/master/openstackclient/common/availability_zone.py#L180

You can work around this by passing --compute to osc, but it feels wrong to be mixing two different concepts under the same command.

> ________________________________
> From: Stephen Finucane
> Sent: Tuesday, November 22, 2022 12:45 PM
> To: Lennart van Gijtenbeek | Routz; openstack-discuss
> Subject: Re: Strange situation regarding Availability Zones
>
> CAUTION: This email originated from outside the organization.
>
> On Tue, 2022-11-22 at 10:55 +0000, Lennart van Gijtenbeek | Routz wrote:
> > Hello everyone,
> >
> > We have a strange situation regarding Availability Zones in our Openstack environment (version: Queens).
> >
> > We have 3 AZs: room1, room2, and room3.
> >
> > There are duplicate entries in the output of this command (openstack-pythonclient version 6.0.0):
> >
> > ? openstack availability zone list
> > +-----------+-------------+
> > | Zone Name | Zone Status |
> > +-----------+-------------+
> > | internal | available |
> > | room2 | available |
> > | room1 | available |
> > | room3 | available |
> > | nova | available |
> > | room2 | available |
> > | room3 | available |
> > | room1 | available |
> > | room2 | available |
> > | room3 | available |
> > +-----------+-------------+
>
> Please run this command with debug mode enabled (--debug). That should give you far more information regarding what's going on here.
>
> Stephen
>
> > However, in our database, I cannot find any reference to these duplicate entries.
> >
> > What's going on here?
> >
> > And how should we go about removing the duplicate entries if they don't show in the database?
> > > MariaDB [nova_api]> select * from aggregate_metadata; > +---------------------+------------+----+--------------+-------------------+-----------------+ > > created_at | updated_at | id | aggregate_id | key | value | > +---------------------+------------+----+--------------+-------------------+-----------------+ > > 2019-08-07 06:11:47 | NULL | 1 | 1 | hypervisor | standard | > > 2019-08-07 06:22:39 | NULL | 10 | 10 | availability_zone | room1 | > > 2019-08-07 06:22:43 | NULL | 13 | 13 | availability_zone | room2 | > > 2019-08-15 11:31:39 | NULL | 16 | 16 | availability_zone | room3 | > > 2019-09-11 07:50:22 | NULL | 17 | 17 | hypervisor | test | > > 2021-08-06 08:43:28 | NULL | 20 | 20 | hypervisor | ephemeral_local | > +---------------------+------------+----+--------------+-------------------+-----------------+ > 6 rows in set (0.00 sec) > > MariaDB [nova_api]> select * from aggregates; > +---------------------+------------+----+--------------------------------------+------------------+ > > created_at | updated_at | id | uuid | name | > +---------------------+------------+----+--------------------------------------+------------------+ > > 2019-08-07 06:11:34 | NULL | 1 | 5f6baa56-252d-4f06-998f-5d166957490d | compute_standard | > > 2019-08-07 06:22:39 | NULL | 10 | f7805dfb-00d2-458b-bb30-24919b8c9cd2 | room1 | > > 2019-08-07 06:22:43 | NULL | 13 | b03f7028-f303-47a2-b6ca-128065f23030 | room2 | > > 2019-08-15 11:31:39 | NULL | 16 | 1aff7a43-695a-4a10-9b24-116af73d18c1 | room3 | > > 2019-09-11 07:50:02 | NULL | 17 | 952fa0aa-2ef6-443e-a906-76021acced0a | compute_test | > > 2021-08-06 08:42:40 | NULL | 20 | bab386c4-f1a3-4482-b0ea-74c74fb172bd | ephemeral_local | > +---------------------+------------+----+--------------------------------------+------------------+ > 6 rows in set (0.00 sec) > > Thank you. > > Best regards, > Lennart > From sandcruz666 at gmail.com Tue Nov 22 06:21:09 2022 From: sandcruz666 at gmail.com (K Santhosh) Date: Tue, 22 Nov 2022 11:51:09 +0530 Subject: Problem in freezer deploymentnt Message-ID: Hai , I am Santhosh, I do facing a problem with freezer deploymentnt After the deployment of freezer . The freezer_scheduler container is continuously restarting in kolla openstack can you help me out with this freezer_scheduler container -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- (xaas-openstack) root at srv1:~# docker logs freezer_scheduler + sudo -E kolla_set_configs INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json INFO:__main__:Validating config file INFO:__main__:Kolla config strategy set to: COPY_ALWAYS INFO:__main__:Copying service configuration files INFO:__main__:Copying /var/lib/kolla/config_files/freezer.conf to /etc/freezer/freezer.conf INFO:__main__:Setting permission for /etc/freezer/freezer.conf INFO:__main__:Writing out command to execute INFO:__main__:Setting permission for /var/log/kolla/freezer INFO:__main__:Setting permission for /var/log/kolla/freezer/freezer-api_access.log INFO:__main__:Setting permission for /var/log/kolla/freezer/freezer-manage.log INFO:__main__:Setting permission for /var/log/kolla/freezer/freezer-api.log INFO:__main__:Setting permission for /var/log/kolla/freezer/apache-error.log INFO:__main__:Setting permission for /var/log/kolla/freezer/apache-access.log ++ cat /run_command + CMD='freezer-scheduler --config-file /etc/freezer/freezer.conf start' + ARGS= + sudo kolla_copy_cacerts + [[ ! -n '' ]] + . 
kolla_extend_start ++ LOG_DIR=/var/log/kolla/freezer ++ [[ ! -d /var/log/kolla/freezer ]] +++ stat -c %U:%G /var/log/kolla/freezer ++ [[ freezer:freezer != \f\r\e\e\z\e\r\:\k\o\l\l\a ]] ++ chown freezer:kolla /var/log/kolla/freezer +++ stat -c %a /var/log/kolla/freezer ++ [[ 2755 != \7\5\5 ]] ++ chmod 755 /var/log/kolla/freezer ++ . /usr/local/bin/kolla_freezer_extend_start + echo 'Running command: '\''freezer-scheduler --config-file /etc/freezer/freezer.conf start'\''' Running command: 'freezer-scheduler --config-file /etc/freezer/freezer.conf start' + exec freezer-scheduler --config-file /etc/freezer/freezer.conf start + sudo -E kolla_set_configs INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json INFO:__main__:Validating config file INFO:__main__:Kolla config strategy set to: COPY_ALWAYS INFO:__main__:Copying service configuration files INFO:__main__:Deleting /etc/freezer/freezer.conf INFO:__main__:Copying /var/lib/kolla/config_files/freezer.conf to /etc/freezer/freezer.conf INFO:__main__:Setting permission for /etc/freezer/freezer.conf INFO:__main__:Writing out command to execute INFO:__main__:Setting permission for /var/log/kolla/freezer INFO:__main__:Setting permission for /var/log/kolla/freezer/freezer-scheduler.log INFO:__main__:Setting permission for /var/log/kolla/freezer/freezer-api_access.log INFO:__main__:Setting permission for /var/log/kolla/freezer/freezer-manage.log INFO:__main__:Setting permission for /var/log/kolla/freezer/freezer-api.log INFO:__main__:Setting permission for /var/log/kolla/freezer/apache-error.log INFO:__main__:Setting permission for /var/log/kolla/freezer/apache-access.log ++ cat /run_command + CMD='freezer-scheduler --config-file /etc/freezer/freezer.conf start' + ARGS= + sudo kolla_copy_cacerts + [[ ! -n '' ]] + . kolla_extend_start ++ LOG_DIR=/var/log/kolla/freezer ++ [[ ! -d /var/log/kolla/freezer ]] +++ stat -c %U:%G /var/log/kolla/freezer ++ [[ freezer:freezer != \f\r\e\e\z\e\r\:\k\o\l\l\a ]] ++ chown freezer:kolla /var/log/kolla/freezer +++ stat -c %a /var/log/kolla/freezer Running command: 'freezer-scheduler --config-file /etc/freezer/freezer.conf start' ++ [[ 2755 != \7\5\5 ]] ++ chmod 755 /var/log/kolla/freezer ++ . 
/usr/local/bin/kolla_freezer_extend_start + echo 'Running command: '\''freezer-scheduler --config-file /etc/freezer/freezer.conf start'\''' Running command: 'freezer-scheduler --config-file /etc/freezer/freezer.conf start' + exec freezer-scheduler --config-file /etc/freezer/freezer.conf start + sudo -E kolla_set_configs INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json INFO:__main__:Validating config file INFO:__main__:Kolla config strategy set to: COPY_ALWAYS INFO:__main__:Copying service configuration files INFO:__main__:Deleting /etc/freezer/freezer.conf INFO:__main__:Copying /var/lib/kolla/config_files/freezer.conf to /etc/freezer/freezer.conf INFO:__main__:Setting permission for /etc/freezer/freezer.conf INFO:__main__:Writing out command to execute INFO:__main__:Setting permission for /var/log/kolla/freezer INFO:__main__:Setting permission for /var/log/kolla/freezer/freezer-scheduler.log INFO:__main__:Setting permission for /var/log/kolla/freezer/freezer-api_access.log INFO:__main__:Setting permission for /var/log/kolla/freezer/freezer-manage.log INFO:__main__:Setting permission for /var/log/kolla/freezer/freezer-api.log INFO:__main__:Setting permission for /var/log/kolla/freezer/apache-error.log INFO:__main__:Setting permission for /var/log/kolla/freezer/apache-access.log ++ cat /run_command + CMD='freezer-scheduler --config-file /etc/freezer/freezer.conf start' + ARGS= + sudo kolla_copy_cacerts + [[ ! -n '' ]] + .
kolla_extend_start ++ LOG_DIR=/var/log/kolla/freezer ++ [[ ! -d /var/log/kolla/freezer ]] +++ stat -c %U:%G /var/log/kolla/freezer ++ [[ freezer:freezer != \f\r\e\e\z\e\r\:\k\o\l\l\a ]] ++ chown freezer:kolla /var/log/kolla/freezer +++ stat -c %a /var/log/kolla/freezer ++ [[ 2755 != \7\5\5 ]] ++ chmod 755 /var/log/kolla/freezer ++ . /usr/local/bin/kolla_freezer_extend_start + echo 'Running command: '\''freezer-scheduler --config-file /etc/freezer/freezer.conf start'\''' Running command: 'freezer-scheduler --config-file /etc/freezer/freezer.conf start' + exec freezer-scheduler --config-file /etc/freezer/freezer.conf start -------------- next part -------------- (xaas-openstack) root at srv1:/var/log/kolla/freezer# cat freezer-scheduler.log 2022-11-16 12:07:19.316 7 CRITICAL freezer-scheduler [-] Unhandled error: OSError: [Errno 88] Socket operation on non-socket 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler Traceback (most recent call last): 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler File "/var/lib/kolla/venv/bin/freezer-scheduler", line 8, in 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler sys.exit(main()) 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler File "/var/lib/kolla/venv/lib/python3.8/site-packages/freezer/scheduler/freezer_scheduler.py", line 255, in main 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler daemon.start() 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler File "/var/lib/kolla/venv/lib/python3.8/site-packages/freezer/scheduler/daemon.py", line 178, in start 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler with DaemonContext(pidfile=pidfile, signal_map=self.signal_map, 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler File "/var/lib/kolla/venv/lib/python3.8/site-packages/freezer/lib/pep3143daemon/daemon.py", line 132, in __init__ 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler self.detach_process = detach_required() 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler File "/var/lib/kolla/venv/lib/python3.8/site-packages/freezer/lib/pep3143daemon/daemon.py", line 416, in detach_required 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler if parent_is_inet() or parent_is_init(): 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler File "/var/lib/kolla/venv/lib/python3.8/site-packages/freezer/lib/pep3143daemon/daemon.py", line 394, in parent_is_inet 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler sock = socket.fromfd( 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler File "/usr/lib/python3.8/socket.py", line 544, in fromfd 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler return socket(family, type, proto, nfd) 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler File "/usr/lib/python3.8/socket.py", line 231, in __init__ 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler _socket.socket.__init__(self, family, type, proto, fileno) 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler OSError: [Errno 88] Socket operation on non-socket 2022-11-16 12:07:19.316 7 ERROR freezer-scheduler 2022-11-16 12:07:22.023 7 CRITICAL freezer-scheduler [-] Unhandled error: OSError: [Errno 88] Socket operation on non-socket 2022-11-16 12:07:22.023 7 ERROR freezer-scheduler Traceback (most recent call last): 2022-11-16 12:07:22.023 7 ERROR freezer-scheduler File "/var/lib/kolla/venv/bin/freezer-scheduler", line 8, in 2022-11-16 12:07:22.023 7 ERROR freezer-scheduler sys.exit(main()) 2022-11-16 12:07:22.023 7 ERROR freezer-scheduler File "/var/lib/kolla/venv/lib/python3.8/site-packages/freezer/scheduler/freezer_scheduler.py", line 255, in main 2022-11-16 12:07:22.023 7 ERROR 
[... the identical traceback is logged on every restart attempt (12:07:22, 12:07:24, 12:07:27, 12:07:30, 12:07:34, 12:07:40, 12:07:49, 12:08:04, 12:08:32, then roughly once a minute from 12:09:26 through 12:24:04) ...]

-------------- next part --------------
A non-text attachment was scrubbed...
Name: image(2)(2).png
Type: image/png
Size: 12286 bytes
Desc: not available
URL:

From eblock at nde.ag Tue Nov 22 14:33:17 2022
From: eblock at nde.ag (Eugen Block)
Date: Tue, 22 Nov 2022 14:33:17 +0000
Subject: Problem in freezer deployment
In-Reply-To:
Message-ID: <20221122143317.Horde.S-e2XMd8UKHEsusX5p_KdQ2@webmail.nde.ag>

Hi,

a quick internet search reveals this bug [1]. Apparently, freezer is not
(well) maintained, but [2] might be a potential fix. Can you check?

[1] https://bugs.launchpad.net/kolla-ansible/+bug/1901698
[2] https://review.opendev.org/c/openstack/freezer/+/795715

Quoting K Santhosh:

> Hi,
> I am Santhosh, and I am facing a problem with the freezer deployment.
> After deploying freezer, the freezer_scheduler container keeps
> restarting in Kolla OpenStack. Can you help me out with this
> freezer_scheduler container?
[1] https://bugs.launchpad.net/kolla-ansible/+bug/1901698
[2] https://review.opendev.org/c/openstack/freezer/+/795715

Quoting K Santhosh:

> Hi, I am Santhosh.
> I am facing a problem with the freezer deployment.
> After the deployment of freezer, the freezer_scheduler
> container is continuously restarting in Kolla OpenStack.
> Can you help me out with this freezer_scheduler container?

From dwilde at redhat.com Tue Nov 22 14:56:08 2022
From: dwilde at redhat.com (Dave Wilde)
Date: Tue, 22 Nov 2022 08:56:08 -0600
Subject: [keystone] Weekly Meeting Cancelled
Message-ID:

Sorry for the late notice, but the weekly meeting is cancelled this week as I'm AFK for the U.S. holiday. Please let me know if you need anything, and have a wonderful holiday if you're celebrating.

/Dave

Sent from my iPad

From ralonsoh at redhat.com Tue Nov 22 15:09:06 2022
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Tue, 22 Nov 2022 16:09:06 +0100
Subject: [neutron] networking-midonet maintainers
Message-ID:

Hello Neutrinos:

We have recently found some Zuul errors related to networking-midonet. In order to fix them, we have pushed [1]. However, the CI status of this project is not in good shape.

This mail is a kind request for maintainers of this project. We need to ensure that the stable branches are still accepting patches and that the CI jobs are passing.

Thank you in advance.

[1] https://review.opendev.org/q/project:openstack%252Fnetworking-midonet+status:open
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From katonalala at gmail.com Tue Nov 22 15:52:28 2022
From: katonalala at gmail.com (Lajos Katona)
Date: Tue, 22 Nov 2022 16:52:28 +0100
Subject: [neutron][tap-as-a-service][release] Deletion of old branches of taas
Message-ID:

Hi,

Tap-as-a-service is a Neutron stadium project, and recently we decided it is time to EOL its old branches, as we did for other stadium projects (see for example [1]).
The branches which we would like to delete are:
- stable/ocata
- stable/pike
- stable/queens
- stable/rocky
- stable/stein

If you would like to keep these branches, please reply to this mail.

As tap-as-a-service was moved in and out of the stadium projects, it has no yaml files in the releases repo before Zed, so I would like to ask the release team for help to tag and delete these branches. I list the hashes here to have everything as reference:
- stable/ocata: 23536669c161be786185ecb6fb8831458476f74b
- stable/pike: 0c974fe28b31453ab02eb597b4242f059b274c95
- stable/queens: 33f3e6d0432241e2846288291f1a8229c95b1425
- stable/rocky: 0eae0a540d16192c583232b8dfc574f233e60199
- stable/stein: 951390c12655f296460a3210c221da47f9cb5a3b

[1]: https://review.opendev.org/c/openstack/releases/+/846188

Best wishes
Lajos (lajoskatona)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From adivya1.singh at gmail.com Wed Nov 23 05:22:14 2022
From: adivya1.singh at gmail.com (Adivya Singh)
Date: Wed, 23 Nov 2022 10:52:14 +0530
Subject: (Openstack-nova or neutron error) in Canonical Openstack
Message-ID:

Hi,

What could be the reason for this error? Any guesses?

Went to status ERROR due to "Message: Build of instance 29ff9e37-e619-4406-adb8-200eec3aa1c7 aborted: Failed to allocate the network(s), not rescheduling., Code: 500"

I am planning to restart the libvirt services on the compute node, and maybe restart the Nova services and the Neutron OVS services on the compute node.

Maybe also change the Neutron timeout parameter in the plugin from 10 sec to 60 sec.
Any guesses what more I can do?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From katonalala at gmail.com Wed Nov 23 07:52:04 2022
From: katonalala at gmail.com (Lajos Katona)
Date: Wed, 23 Nov 2022 08:52:04 +0100
Subject: (Openstack-nova or neutron error) in Canonical Openstack
In-Reply-To:
References:
Message-ID:

Hi,

Do you perhaps have debug logs from Neutron (I suppose you use the OVS agent, so from that for sure, but from neutron-server as well) and from Nova?

Lajos

Adivya Singh wrote on Wed, 23 Nov 2022 at 6:33:

> Hi,
>
> What could be the reason for this error? Any guesses?
>
> Went to status ERROR due to "Message: Build of instance
> 29ff9e37-e619-4406-adb8-200eec3aa1c7 aborted: Failed to allocate the
> network(s), not rescheduling., Code: 500"
>
> I am planning to restart the libvirt services on the compute node, and maybe
> restart the Nova services and the Neutron OVS services on the compute node.
>
> Maybe also change the Neutron timeout parameter in the plugin
> from 10 sec to 60 sec.
>
> Any guesses what more I can do?
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eblock at nde.ag Wed Nov 23 07:59:09 2022
From: eblock at nde.ag (Eugen Block)
Date: Wed, 23 Nov 2022 07:59:09 +0000
Subject: (Openstack-nova or neutron error) in Canonical Openstack
In-Reply-To:
Message-ID: <20221123075909.Horde.VeiVkk2iCHPopdauyXqvKQX@webmail.nde.ag>

Hi,

you don't give us much to work on. Which versions are you using? Did it work before and then stop, or has it never worked? This message can have multiple different root causes, e.g. flavors with NUMA settings that are not applicable, Neutron services not being reachable, etc. What's in the Neutron logs? Are all agents up and running ('openstack network agent list')? What else have you tried so far?

Regards,
Eugen

Quoting Adivya Singh:

> Hi,
>
> What could be the reason for this error? Any guesses?
>
> Went to status ERROR due to "Message: Build of instance
> 29ff9e37-e619-4406-adb8-200eec3aa1c7 aborted: Failed to allocate the
> network(s), not rescheduling., Code: 500"
>
> I am planning to restart the libvirt services on the compute node, and maybe
> restart the Nova services and the Neutron OVS services on the compute node.
>
> Maybe also change the Neutron timeout parameter in the plugin
> from 10 sec to 60 sec.
>
> Any guesses what more I can do?

From christian.rohmann at inovex.de Wed Nov 23 10:26:58 2022
From: christian.rohmann at inovex.de (Christian Rohmann)
Date: Wed, 23 Nov 2022 11:26:58 +0100
Subject: [neutron] Switching the ML2 driver in-place from linuxbridge to OVN for an existing Cloud
In-Reply-To:
References: <2446920.D5JjJbiaP6@p1> <4318fbe5-f0f7-34eb-f852-15a6fb6810a6@inovex.de>
Message-ID: <45707ec7-4279-a691-ced5-1d6dd302a163@inovex.de>

Hey James,

I am really sorry I am only getting back to you now.

On 29/08/2022 19:54, James Denton wrote:
>
> In my experience, it is possible to perform an in-place migration from
> ML2/LXB -> ML2/OVN, albeit with a shutdown or hard reboot of the
> instance(s) to complete the VIF plugging and some other needed
> operations. I have a very rough outline of the required steps if you're
> interested, but they're geared towards an openstack-ansible based
> deployment. I'll try to put a writeup together in the next week or two
> demonstrating the process in a multi-node environment; the only one I
> have done recently was an all-in-one.
>
> James Denton
>
> Rackspace Private Cloud
>

Thanks for replying, I'd really love to see your outline / list of steps. BTW, we are actively working on switching to openstack-ansible, so that would suit us well.

We also came to the conclusion that a shutdown of all instances might be required. The question is whether that has to happen all at once or whether one could do it on a project-by-project basis. Our cloud is small enough to still make this feasible, but I suppose this topic is, or will become, more important to other, larger clouds as well.

Regards

Christian
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zigo at debian.org Wed Nov 23 11:05:01 2022
From: zigo at debian.org (Thomas Goirand)
Date: Wed, 23 Nov 2022 12:05:01 +0100
Subject: (Openstack-nova or neutron error) in Canonical Openstack
In-Reply-To:
References:
Message-ID: <8c29f5eb-e2a8-1fd0-7575-0743ab3988ad@debian.org>

On 11/23/22 06:22, Adivya Singh wrote:
> Hi,
>
> What could be the reason for this error? Any guesses?
>
> Went to status ERROR due to "Message: Build of instance
> 29ff9e37-e619-4406-adb8-200eec3aa1c7 aborted: Failed to allocate the
> network(s), not rescheduling., Code: 500"

Basically, this nova-compute.log tells you there was an error in Neutron. So please look at your Neutron logs to find the root cause.

Cheers,

Thomas Goirand (zigo)

From senrique at redhat.com Wed Nov 23 11:30:47 2022
From: senrique at redhat.com (Sofia Enriquez)
Date: Wed, 23 Nov 2022 11:30:47 +0000
Subject: Bug report from 11-16-2022 to 11-23-2022
Message-ID:

This is a bug report from 11-16-2022 to 11-23-2022.
Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting
-----------------------------------------------------------------------------------------
Low
- https://bugs.launchpad.net/cinder/+bug/1997088 "Implement TODOs from change 94dfad99c2b." Assigned to Sofia Enriquez.

Cheers,
Sofia

--
Sofía Enriquez
she/her
Software Engineer, Red Hat PnT
IRC: @enriquetaso
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From christian.rohmann at inovex.de Wed Nov 23 12:11:51 2022
From: christian.rohmann at inovex.de (Christian Rohmann)
Date: Wed, 23 Nov 2022 13:11:51 +0100
Subject: [designate] How to avoid NXDOMAIN or stale data during cold start of a (new) machine
In-Reply-To: <81a3d69e-f96b-7607-6625-06fb465cd8f9@inovex.de>
References: <69ab8e54-f419-4cd1-f289-a0b5efb7f723@inovex.de> <81a3d69e-f96b-7607-6625-06fb465cd8f9@inovex.de>
Message-ID:

Hello again,

On 01/07/2022 09:10, Christian Rohmann wrote:
> On 07/06/2022 02:04, Michael Johnson wrote:
>> There are two ways zones can be resynced:
>> 1. Using the "designate-manage pool update" command. This will force
>> an update/recreate of all of the zones.
>> [...]
> When playing with this issue of a cold start with no zones and
> "designate-manage pool update" not fixing it,
> we found that somebody just ran into the issue of
> (https://bugs.launchpad.net/designate/+bug/1958409/)
> and proposed a fix (rndc modzone -> rndc addzone).
>
> With this patch the "pool update" does cause all the missing zones to
> be created in a BIND instance that has either lost its zones
> or has just been added to the pool.

Yet another update on this "cold start" and "resync" of the secondary nameserver topic:

Since we really did not like the scaling of calling "rndc modzone" and "rndc addzone" for each and every zone of a pool and for every pool member, we looked around for other solutions.
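Just to illustrate the scaling problem: with the per-zone approach, every cold-started or newly added secondary has to be fed something roughly like the following for each and every zone (the zone name and primary address here are made up, and the exact zone options of course depend on the BIND configuration):

    # repeated for every zone, on every pool member
    rndc addzone example.com '{ type slave; masters { 192.0.2.10; }; file "slave.example.com.db"; };'
    # or, if the zone already exists but needs to be pointed elsewhere
    rndc modzone example.com '{ type slave; masters { 192.0.2.10; }; file "slave.example.com.db"; };'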
We then ran into Catalog Zones (https://datatracker.ietf.org/doc/draft-ietf-dnsop-dns-catalog-zones/), supported by major DNS servers (BIND, NSD, Knot, PowerDNS, ...), which can provide just a list of zones to the secondaries for their kind consideration, and they shall then provision themselves.

Shameless pointer to the spec I proposed to add support for catalog zones to Designate: https://review.opendev.org/c/openstack/designate-specs/+/849109

Regards

Christian

From stephenfin at redhat.com Wed Nov 23 14:11:01 2022
From: stephenfin at redhat.com (Stephen Finucane)
Date: Wed, 23 Nov 2022 14:11:01 +0000
Subject: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients
Message-ID: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com>

tl;dr: $subject

I reviewed a patch against openstackclient (OSC) today [1] and left a rather lengthy comment that I thought worthy of bringing up to a wider audience. The patch itself doesn't matter so much as what it was trying to achieve, namely modifying an existing OSC command to better match the structure of the equivalent legacy client command. The review provides more detail than I do here, but the tl;dr is that this is a big no-no and OSC will and must maintain consistency between OSC commands over consistency with legacy clients. As I noted in the review, consistency is one of the biggest advantages of OSC over the legacy clients: if you know the name of the resource type you wish to work with, you can pretty accurately guess the command and its structure. This is a thing that operators have consistently said they love about OSC, and it's one of the key reasons we're trying to get every command to provide full current API implementations in OSC (and SDK).

Now I get that the way some of these consistent commands have been implemented has been the cause of contention in the past. I don't imagine it remains any less contentious today. However, these patterns are well-understood, well-known patterns that have for the most part worked just fine for close to a decade now. The kind of patterns I'm thinking about include:

* The command to create a new resource should always take the format
  '<resource> create <name>'
* The command to modify some property of a resource should always take the
  format '<resource> set --property=value <resource>'
* The command to list, fetch or delete resources should always take the format
  '<resource> list', '<resource> get <resource>', and '<resource> delete <resource>', respectively.
* Boolean options should always take the form of flags with an alternate
  negative option like '--flag' and '--no-flag', rather than '--flag=<value>'
* And a couple of other things that we tend to highlight in reviews.

We want to preserve this behavior, lest OSC lose this huge USP and devolve into a muddled mess of different ideas and various individuals'/teams' preferences. I'm more than happy to discuss and debate this stuff with anyone who's interested, and we'll continue reviewing each patch on its merit and providing exceptions to these rules where they make sense, but it will remain an ongoing goal and it's something we'd like people to consider when working on OSC itself or any of its plugins.

I will now get off my pedestal/ivory tower.

Thanks!
Stephen

PS: I'm talking here about the commands themselves, not their implementations. We do some things extra in OSC that are user helpful, like allowing users to identify resources by their name in addition to by their UUIDs.
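For example (the volume name here is made up), the same verbs and shapes apply regardless of the resource type, and a name can be used anywhere a UUID is accepted:

    openstack volume create --size 10 scratch-vol
    openstack volume set --property purpose=testing scratch-vol
    openstack volume list
    openstack volume show scratch-vol
    openstack volume delete scratch-vol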
We also currently do things that are not so user helpful, like crashing and burning if the name lookups fail (I'm thinking about the various Glance-related commands that error out if a project name/ID is passed to a command and the user can't look up that project). These are things we're more than willing to fix and will happily accept patches for :)

[1] https://review.opendev.org/c/openstack/python-openstackclient/+/865377

From hanguangyu2 at gmail.com Wed Nov 23 14:38:59 2022
From: hanguangyu2 at gmail.com (Han Guangyu)
Date: Wed, 23 Nov 2022 22:38:59 +0800
Subject: "Can not allocate kernel buffer" when I use official ubuntu image to create instance
Message-ID:

Hi, all

I downloaded ubuntu focal-server-cloudimg-amd64.img from the official website [1].

When I use it to create an instance, I can access the GRUB interface in the Horizon VNC console. But if I choose "Ubuntu", I get:
error: cannot allocate kernel buffer.
error: you need to load the kernel first.

Can I get some advice on how I should use it properly to create instances?

[1] https://cloud-images.ubuntu.com/focal/20221121/focal-server-cloudimg-amd64.img

From smooney at redhat.com Wed Nov 23 14:44:01 2022
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 23 Nov 2022 14:44:01 +0000
Subject: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients
In-Reply-To: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com>
References: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com>
Message-ID: <99d60554d77b14bfc5cdf798cfe625b239930422.camel@redhat.com>

On Wed, 2022-11-23 at 14:11 +0000, Stephen Finucane wrote:
> tl;dr: $subject
>
> I reviewed a patch against openstackclient (OSC) today [1] and left a rather
> lengthy comment that I thought worthy of bringing up to a wider audience. The
> patch itself doesn't matter so much as what it was trying to achieve, namely
> modifying an existing OSC command to better match the structure of the
> equivalent legacy client command. The review provides more detail than I do here,
> but the tl;dr is that this is a big no-no and OSC will and must maintain
> consistency between OSC commands over consistency with legacy clients. As I
> noted in the review, consistency is one of the biggest advantages of OSC over
> the legacy clients: if you know the name of the resource type you wish to work
> with, you can pretty accurately guess the command and its structure. This is a
> thing that operators have consistently said they love about OSC, and it's one of
> the key reasons we're trying to get every command to provide full current API
> implementations in OSC (and SDK).
>
> Now I get that the way some of these consistent commands have been implemented
> has been the cause of contention in the past. I don't imagine it remains any
> less contentious today. However, these patterns are well-understood, well-known
> patterns that have for the most part worked just fine for close to a decade now.
> The kind of patterns I'm thinking about include:
>
> * The command to create a new resource should always take the format
>   '<resource> create <name>'
> * The command to modify some property of a resource should always take the
>   format '<resource> set --property=value <resource>'
> * The command to list, fetch or delete resources should always take the format
>   '<resource> list', '<resource> get <resource>', and '<resource> delete <resource>', respectively.

You have listed '<resource> get <resource>' to fetch a resource, but in my experience "show" is the more common action:

openstack server show
openstack image show
openstack volume show

Also network, port and subnet - basically all the resources from the core services. "get" does not really seem to be used.

> * Boolean options should always take the form of flags with an alternate
>   negative option like '--flag' and '--no-flag', rather than '--flag=<value>'

I personally don't like this, but I agree with being consistent. I strongly prefer the '--flag=<value>' approach as something that is more readable, but it's not the pattern in use in OSC. I would prefer to keep things consistent rather than change this at this point.

> * And a couple of other things that we tend to highlight in reviews.
>
> We want to preserve this behavior, lest OSC lose this huge USP and devolve into
> a muddled mess of different ideas and various individuals'/teams' preferences.
> I'm more than happy to discuss and debate this stuff with anyone who's
> interested, and we'll continue reviewing each patch on its merit and providing
> exceptions to these rules where they make sense, but it will remain an ongoing
> goal and it's something we'd like people to consider when working on OSC itself
> or any of its plugins.

I agree with what you said in general, but there is one divergence already that we might need to reconsider.

I hate to bring this up, but one of the design guidelines of OSC was that commands must not auto-negotiate the latest microversion. That, again, was for consistency, so that a command would work the same across different clouds with different API versions. Many plugins have broken this design requirement, but the core OSC client still maintains its original design.

To level set: OSC intentionally does not support microversion negotiation; it was a design choice, not an oversight.

Since many of the plugins have ignored that and implemented it anyway, I think it would be good to provide a way to opt into the desired behavior, i.e. provide a --latest global flag or change the default for the --os-compute-api-version etc. options to latest in a major version of OSC. We can provide a common implementation in OSC and the plugins can just reuse that, instead of all of them that chose to support it implementing it themselves.

Again, this goes directly against the original design intent of OSC to provide a stable command-line interface across clouds with different versions of OpenStack. However, since most of the people that cared about that have now moved on from OpenStack and OSC, and since the community seems to have changed its mind on providing a stable API experience, we should probably address this divergence.

I see the fact that some plugins added microversion negotiation in direct breach of this design principle as more problematic from a consistency point of view than any other divergence.

>
> I will now get off my pedestal/ivory tower.
>
> Thanks!
> Stephen
>
> PS: I'm talking here about the commands themselves, not their implementations. We
> do some things extra in OSC that are user helpful, like allowing users to
> identify resources by their name in addition to by their UUIDs. We also
> currently do things that are not so user helpful, like crashing and burning if the
> name lookups fail (I'm thinking about the various Glance-related commands that
> error out if a project name/ID is passed to a command and the user can't look up
> that project). These are things we're more than willing to fix and will happily
These are things we're more than willing to fix and will happily > accept patches for :) > > [1] https://review.opendev.org/c/openstack/python-openstackclient/+/865377 > > From fungi at yuggoth.org Wed Nov 23 14:55:51 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 23 Nov 2022 14:55:51 +0000 Subject: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients In-Reply-To: <99d60554d77b14bfc5cdf798cfe625b239930422.camel@redhat.com> References: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com> <99d60554d77b14bfc5cdf798cfe625b239930422.camel@redhat.com> Message-ID: <20221123145550.5uar4ovwup5dvfot@yuggoth.org> On 2022-11-23 14:44:01 +0000 (+0000), Sean Mooney wrote: [...] > i hate that i bring this up but one of the design guidlines of OSC > was commands must not auto negociagte the latest micorverion. that > again was for consitency so that command would work the same > across different clouds with different api versions. many plugins > have broken this design requirement btu the core osc client still > maintains its orginal design. > > to level set osc intentionally does not support microverion > negocaitation, it was a desgin choice not an oversight. > > since many of the plugins have ignored that and implemnted it > anyway i think it would be good to provide a way to opt into the > desired behavior. i.e. provide a --latest global flag or change > the default for the --os-compute-api ectr command to latest in a > major version fo osc. [...] Remind me what you mean specifically by microversion negotiation and why it's a bad thing? Is detecting the latest supported microversion and only making calls it will support considered negotiation? Yesterday in working to try to get recent versions of the SDK to boot servers in Rackspace, we discovered that 0.99.0 started supplying network:auto in boot calls which is only supported after a specific nova microversion, but wasn't checking whether the API was sufficiently new enough to have that microversion. Is detecting that condition what you're saying is a bad idea? Or are you saying specifically doing it in the openstackclient/plugin code is wrong but it's okay to do it in the SDK? Sometimes it's unclear to me when people talk about the client whether they're also referring to the SDK or vice versa, especially since the client uses the SDK increasingly and both are now maintained by the same team. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From thierry at openstack.org Wed Nov 23 15:24:37 2022 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 23 Nov 2022 16:24:37 +0100 Subject: [largescale-sig] Next meeting: Nov 23, 15utc In-Reply-To: References: Message-ID: Here is the summary of our SIG meeting today. We failed to finalize guests for our Dec 8 OpenInfra Live episode, but we did confirm the hosts. amorin mentioned wanting to start a discussion (and doc) around the database connection, fine tuning the number of connections needed per service. You can help with that by replying to his thread at: https://lists.openstack.org/pipermail/openstack-discuss/2022-October/030935.html You can read the detailed meeting logs at: https://meetings.opendev.org/meetings/large_scale_sig/2022/large_scale_sig.2022-11-23-15.00.html In two weeks we'll have our OpenInfra Live! episode. Our next IRC meeting will be January 4, at 1500utc on #openstack-operators on OFTC. 
Regards, -- Thierry Carrez (ttx) From artem.goncharov at gmail.com Wed Nov 23 15:46:50 2022 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Wed, 23 Nov 2022 16:46:50 +0100 Subject: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients In-Reply-To: <20221123145550.5uar4ovwup5dvfot@yuggoth.org> References: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com> <99d60554d77b14bfc5cdf798cfe625b239930422.camel@redhat.com> <20221123145550.5uar4ovwup5dvfot@yuggoth.org> Message-ID: > > On 2022-11-23 14:44:01 +0000 (+0000), Sean Mooney wrote: > [...] >> i hate that i bring this up but one of the design guidlines of OSC >> was commands must not auto negociagte the latest micorverion. that >> again was for consitency so that command would work the same >> across different clouds with different api versions. many plugins >> have broken this design requirement btu the core osc client still >> maintains its orginal design. >> My problem with that (and the reason we started moving away) was forcing user to explicitly know which micro version added some feature, which broke it again and which repaired it finally and questioning which version is actually available on a certain cloud. This is/was causing quite a mess for regular users (not advanced users). I would even state it like that: why should a regular user ever care in which microversion it became possible to get tags fetched together with servers or how to figure out which micro version you need to set once you depend on features from different micro versions. This is not user friendly at all. Chances a single person is using OSC to communicate to different clouds (and as such get inconsistent results) are much lower then needing to remember (and hardcode) micro versions for you talking to a single cloud. On the other side - there is nothing blocking you to specify ?os-X-api-version to get the expected result. From the consistency pov I would say it is better to have same behaviour in SDK/OSC/Ansible rather than CLI explicitly doing an opposite to Ansible. And again - for me it is very important to make tools user friendly while having full flexibility and consistency. Last but no least, I personally never knew of such design requirement in OSC. Artem From johnsomor at gmail.com Wed Nov 23 16:55:22 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 23 Nov 2022 08:55:22 -0800 Subject: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients In-Reply-To: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com> References: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com> Message-ID: Stephen, I 100% agree. This is what has made the OpenStack client so much better than the legacy clients. Michael On Wed, Nov 23, 2022 at 6:11 AM Stephen Finucane wrote: > > ? > > tl;dr: $subject > > I reviewed a patch against openstackclient (OSC) today [1] and left a rather > lengthy comment that I thought worthy of bringing up to a wider audience. The > patch itself doesn't matter so much as what is was trying to achieve, namely > modifying an existing OSC command to better match the structure of the > equivalent legacy client command. The review provides more detail than I do here > but the tl;dr: is that this is a big no-no and OSC will and must maintain > consistency between OSC commands over consistency with legacy clients. 
As I > noted in the review, consistency is one of the biggest advantages of OSC over > the legacy clients: if you know the name of the resource type you wish to work > with, you can pretty accurately guess the command and its structure. This is a > thing that operators have consistently said they love about OSC and its one of > the key reasons we're trying to get every command to provide full current API > implementations in OSC (and SDK). > > Now I get that the way some of these consistent commands have been implemented > has been the cause of contention in the past. I don't imagine it remains any > less contentious today. However, these patterns are well-understood, well-known > patterns that have for the most part worked just fine for close to a decade now. > The kind of patterns I'm thinking about include: > > * The command to create a new resource should always take the format > ' create > * The command to modify some property of a resource should always take the > format ' set --property=value ' > * The command to list, fetch or delete resources should always take the format > ' list', ' get ', and ' > delete ', respectively. > * Boolean options should always take the form of flags with an alternate > negative option like '--flag' and '--no-flag', rather than '-- > flag=' > * And a couple of other things that we tend to highlight in reviews. > > We want to preserve this behavior, lest OSC lose this huge USP and devolve into > a muddle mess of different ideas and various individuals'/teams' preferences. > I'm more than happy to discuss and debate this stuff with anyone who's > interested and we'll continue reviewing each patch on its merit and providing > exceptions to these rules where they make sense, but it will remain an ongoing > goal and it's something we'd like people to consider when working on OSC itself > or any of its plugins. > > I will now get off my pedestal/ivory tower ? > > Thanks! > Stephen > > PS: I'm talking here about the command themselves, not their implementations. We > do somethings extra in OSC that are user helpful, like allowing users to > identify resources by their name in addition to by their UUIDs. We also > currently do things that no so user helpful, like crashing and burning if the > name lookups fail (I'm thinking about the various Glance-related commands that > error out if a project name/ID is passed to a command and the user can't look up > that project). These are things we're more than willing to fix and will happily > accept patches for :) > > [1] https://review.opendev.org/c/openstack/python-openstackclient/+/865377 > > From ces.eduardo98 at gmail.com Wed Nov 23 16:58:36 2022 From: ces.eduardo98 at gmail.com (Carlos Silva) Date: Wed, 23 Nov 2022 13:58:36 -0300 Subject: [manila] Weekly meeting cancelled (Nov 24th 2022) Message-ID: Hello, Zorillas! Since there are some holidays in the US in the next few days, and some of our usual crowd will be offline, we are cancelling tomorrow's weekly meeting. The next Manila weekly meeting will be on December 1st. If you would like to chat about something urgent, please ping me on IRC. Sorry for the late notice. Thanks, carloss -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From smooney at redhat.com Wed Nov 23 17:02:54 2022
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 23 Nov 2022 17:02:54 +0000
Subject: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients
In-Reply-To: <20221123145550.5uar4ovwup5dvfot@yuggoth.org>
References: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com> <99d60554d77b14bfc5cdf798cfe625b239930422.camel@redhat.com> <20221123145550.5uar4ovwup5dvfot@yuggoth.org>
Message-ID:

On Wed, 2022-11-23 at 14:55 +0000, Jeremy Stanley wrote:
> On 2022-11-23 14:44:01 +0000 (+0000), Sean Mooney wrote:
> [...]
> > I hate to bring this up, but one of the design guidelines of OSC
> > was that commands must not auto-negotiate the latest microversion. That,
> > again, was for consistency, so that a command would work the same
> > across different clouds with different API versions. Many plugins
> > have broken this design requirement, but the core OSC client still
> > maintains its original design.
> >
> > To level set: OSC intentionally does not support microversion
> > negotiation; it was a design choice, not an oversight.
> >
> > Since many of the plugins have ignored that and implemented it
> > anyway, I think it would be good to provide a way to opt into the
> > desired behavior, i.e. provide a --latest global flag or change
> > the default for the --os-compute-api-version etc. options to latest in a
> > major version of OSC.
> [...]
>
> Remind me what you mean specifically by microversion negotiation and
> why it's a bad thing? Is detecting the latest supported microversion
> and only making calls it will support considered negotiation?

Detecting and refusing to send if I request 2.100 and it only supports 2.50 would not really be negotiation; sending the request automatically with 2.50 is negotiation. If we did not want to break the original design but improve the UX, we could mark each command with the minimum microversion that is required and only request that. That would provide stable behavior, provided the cloud supported that minimum.

> Yesterday in working to try to get recent versions of the SDK to
> boot servers in Rackspace, we discovered that 0.99.0 started
> supplying network:auto in boot calls which is only supported after a
> specific nova microversion, but wasn't checking whether the API was
> sufficiently new enough to have that microversion. Is detecting that
> condition what you're saying is a bad idea? Or are you saying
> specifically doing it in the openstackclient/plugin code is wrong
> but it's okay to do it in the SDK?

The SDK is not auto-negotiating if it is using a hard-coded value. Newer microversions are generally not backwards compatible. They may be additive, but there can be subtractive changes, so in general you need to look at the parameters passed to the SDK function to know what range of microversions is valid and then choose an appropriate one. Just using latest in the SDK, without at a minimum back-levelling to the maximum supported in the cloud, is a bad idea, since the SDK is meant to support older OpenStack releases. It is also a bad idea in general, as the SDK is meant to provide stable behavior for applications, and a newer microversion can remove functionality that is still available if you use the older microversions. So the client and the SDK should default to the oldest microversion that is supported for a given request, not the newest, to provide stable behavior.
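To make that concrete on the command line (the key names here are made up, and this assumes a cloud whose compute API already exposes 2.92):

    # old minimum microversion: nova will still generate a keypair for you
    openstack --os-compute-api-version 2.1 keypair create my-generated-key
    # at 2.92 (or "latest" on a zed cloud) generation is gone and you have to import a public key
    openstack --os-compute-api-version 2.92 keypair create --public-key ~/.ssh/id_rsa.pub my-imported-key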
As an example, before 2.92 Nova supported keypair generation for RSA SSH keys: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#microversion-2-92

With 2.92+ or latest (2.93) that functionality is not available; however, the same request using the old microversion will work, since the code still exists in Nova. Defaulting to auto for this API will break users of a Zed cloud. Defaulting to the oldest version that supported this API, 2.1, will still work with Zed.

> Sometimes it's unclear to me when people talk about the client
> whether they're also referring to the SDK or vice versa, especially
> since the client uses the SDK increasingly and both are now
> maintained by the same team.

The SDK really should not default to latest. The unified client does not today, to provide a stable CLI, but it could if we don't want the OpenStack client to provide a stable scripting interface. The comment I have heard a lot is that if you want to script against the Nova API you should use the Python client or the SDK, but that is not useful if you are writing said script in bash or another language. For example, in devstack we use OSC directly where needed, and I don't think we are ever going to rewrite that to use Python, so I don't like saying a stable command-line interface is out of scope for the unified OpenStack client.

From hanguangyu2 at gmail.com Thu Nov 24 06:27:09 2022
From: hanguangyu2 at gmail.com (Han Guangyu)
Date: Thu, 24 Nov 2022 14:27:09 +0800
Subject: Re: "Can not allocate kernel buffer" when I use official ubuntu image to create instance
In-Reply-To:
References:
Message-ID:

Hi,

Sorry to bother you. I found the reason, and it was a silly one: I was giving the instance too little RAM. If I use a normal amount of RAM, I can create the instance.

Thank you.

Han Guangyu wrote on Wed, 23 Nov 2022 at 22:38:
>
> Hi, all
>
> I downloaded ubuntu focal-server-cloudimg-amd64.img from the official website [1].
>
> When I use it to create an instance, I can access the GRUB interface
> in the Horizon VNC console. But if I choose "Ubuntu", I get:
> error: cannot allocate kernel buffer.
> error: you need to load the kernel first.
>
> Can I get some advice on how I should use it properly to create instances?
>
> [1] https://cloud-images.ubuntu.com/focal/20221121/focal-server-cloudimg-amd64.img

From christian.rohmann at inovex.de Thu Nov 24 08:48:53 2022
From: christian.rohmann at inovex.de (Christian Rohmann)
Date: Thu, 24 Nov 2022 09:48:53 +0100
Subject: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients
In-Reply-To:
References: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com>
Message-ID:

On 23/11/2022 17:55, Michael Johnson wrote:
> Stephen, I 100% agree. This is what has made the OpenStack client so
> much better than the legacy clients.

Yes -> this!

The major part of users' bad experiences and support effort when using OpenStack clouds is about the clients. Yes, users need to understand the concepts of how the resources themselves work and how they can interact. But if different clients come on top of that, or if they have different names for similar things and different approaches - that is just totally avoidable. And to me that is what the generic openstackclient (via openstacksdk) is all about: reducing complexity for humans.

Quite honestly, for my liking there should be even more conventions. Why are there still commands that lack generic filters such as "--project"? Why is that not something that every command has to support? If the backend API or the SDK cannot do it -> raise a bug.
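Just to illustrate the kind of consistency I mean (the project name is made up, and listing other projects' resources of course requires suitable admin rights) - some list commands already take such a filter, and it would be great if all of them did:

    openstack server list --project customer-a
    openstack network list --project customer-a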
But I am getting of course topic myself ... In short: Please keep the focus on strongly aligning things with the clients. Regards Christian From hjensas at redhat.com Thu Nov 24 09:31:41 2022 From: hjensas at redhat.com (Harald Jensas) Date: Thu, 24 Nov 2022 10:31:41 +0100 Subject: [OVB - openstack-virtual-baremetal] - Douglas Viroel and Chandan Kumar as core Message-ID: Hi, After discussions with Douglas, Chandan and Ronelle Landy I would like to suggest adding Douglas and Chandan to the OVB core team. The repository have very little activity, i.e there is not a lot of review history to base the decision on. I did work with both individuals when onboarding new clouds to run TripleO CI jobs utilizing OVB, they have a good understanding of how the thing works. If there are no objections, I will add them to them as core reviewers next week. Regards, Harald From rafaelweingartner at gmail.com Thu Nov 24 11:23:56 2022 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Thu, 24 Nov 2022 08:23:56 -0300 Subject: [CloudKitty] use of Monasca In-Reply-To: References: Message-ID: Hello guys, as we did not have any feedback on this matter we will proceed with the deprecation notice in Antelope of the Monasca support removal, and then in the next release, we will be removing it. On Tue, Nov 1, 2022 at 1:49 PM Rafael Weing?rtner < rafaelweingartner at gmail.com> wrote: > Hello guys, > As discussed in the PTG [1], in October, we wanted to check with the > community if there are people using CloudKitty with Monasca. This > discussion was brought up during the PTG that Kolla-ansible is deprecating > support to Monasca, and we wanted to check if others are using CloudKitty > with Monasca. This integration is not being actively tested and maintained; > therefore, we are considering the issue of a deprecation notice and further > removal of the integration. > > What do you guys think? > > Are there people using CloudKitty with Monasca? > > [1] https://etherpad.opendev.org/p/oct2022-ptg-cloudkitty > > -- > Rafael Weing?rtner > -- Rafael Weing?rtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Thu Nov 24 12:17:41 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 24 Nov 2022 12:17:41 +0000 Subject: [openstackclient] Autonegotiation of microversions (Was: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients) In-Reply-To: <99d60554d77b14bfc5cdf798cfe625b239930422.camel@redhat.com> References: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com> <99d60554d77b14bfc5cdf798cfe625b239930422.camel@redhat.com> Message-ID: <884ebfd58c4fee24288ed1d64202e9b8fc86bd4b.camel@redhat.com> On Wed, 2022-11-23 at 14:44 +0000, Sean Mooney wrote: > On Wed, 2022-11-23 at 14:11 +0000, Stephen Finucane wrote: > > ? > > > > tl;dr: $subject > > > > I reviewed a patch against openstackclient (OSC) today [1] and left a rather > > lengthy comment that I thought worthy of bringing up to a wider audience. The > > patch itself doesn't matter so much as what is was trying to achieve, namely > > modifying an existing OSC command to better match the structure of the > > equivalent legacy client command. The review provides more detail than I do here > > but the tl;dr: is that this is a big no-no and OSC will and must maintain > > consistency between OSC commands over consistency with legacy clients. 
As I > > noted in the review, consistency is one of the biggest advantages of OSC over > > the legacy clients: if you know the name of the resource type you wish to work > > with, you can pretty accurately guess the command and its structure. This is a > > thing that operators have consistently said they love about OSC and its one of > > the key reasons we're trying to get every command to provide full current API > > implementations in OSC (and SDK). > > > > Now I get that the way some of these consistent commands have been implemented > > has been the cause of contention in the past. I don't imagine it remains any > > less contentious today. However, these patterns are well-understood, well-known > > patterns that have for the most part worked just fine for close to a decade now. > > The kind of patterns I'm thinking about include: > > > > * The command to create a new resource should always take the format > > ' create > > * The command to modify some property of a resource should always take the > > format ' set --property=value ' > > * The command to list, fetch or delete resources should always take the format > > ' list', ' get ', and ' > > delete ', respectively. > you have listed ' get ' to fetch a resouce but in my experince > "show" is the more common action > > openstack server show > openstack image show > openstack volume show > > also network and port and subnet baiscaly all the resouce form the core services > > get does not really seam to be used. Whoops, typo. ' show ' is what I meant. > > ?* Boolean options should always take the form of flags with an alternate > > ?negative option like '--flag' and '--no-flag', rather than '-- > > ?flag=' > i personally dont like this but i agree with being consitant. i strongly > prefer the > '--flag=' approch as something that is more readble? > but its not the pattern in use in osc. i would prefer to keep things consitent > then change this at this point. > > > ?* And a couple of other things that we tend to highlight in reviews. > > > > We want to preserve this behavior, lest OSC lose this huge USP and devolve > > into > > a muddle mess of different ideas and various individuals'/teams' > > preferences. > > I'm more than happy to discuss and debate this stuff with anyone who's > > interested and we'll continue reviewing each patch on its merit and > > providing > > exceptions to these rules where they make sense, but it will remain an > > ongoing > > goal and it's something we'd like people to consider when working on OSC > > itself > > or any of its plugins. > > i agree with what you said in general but there is one digerance already that > we might need to reconsider. > > i hate that i bring this up but one of the design guidlines of OSC was > commands must not auto negociagte the latest micorverion. > that again was for consitency so that command would work the same across > different clouds with different api versions. > many plugins have broken this design requirement btu the core osc client still > maintains its orginal design. > > to level set osc intentionally does not support microverion negocaitation, it > was a desgin choice not an oversight. Like gtema, I'm not aware of any such design decision in OSC. Looking through the docs and git logs, I'm also unable to find any references to it. I _suspect_ that you might be confusing OSC with the legacy clients, where this behavior was very much a design choice. 
OSC has traditionally inherited this behavior owing to its use of the API bindings from the legacy clients but this wasn't intentional. We must remember that OSC is designed for humans first and foremost, while machines should use SDK or the clients directly. When auto- negotiation is done correctly (i.e. without the bugs that fungi highlighted), it allows a human to get the best possible functionality from their deployment (if we are to assume that each new microversion is an improvement on its predecessors) which ultimately results in a better user experience. As we replace use of these clients with SDK, we are slowly fixing this in core OSC and we'd like to eventually see all commands in OSC auto-negotiating microversions, where this makes sense for the underlying service. > since many of the plugins have ignored that and implemnted it anyway i think it would be good to provide a way to opt into the > desired behavior. i.e. provide a --latest global flag or change the default for the --os-compute-api ectr command to latest in a major > version fo osc. As you know, you can manually set API versions in three ways: * Via command-line arguments * Via environment variables * Via clouds.yaml Any of these will override the version negotiation done by OSC. For a power user like yourself, I suspect this might be what you want and you're free to do it. Nothing changes with these. We'll just start doing auto-negotiation by default. > we can provide a common impelmatiton in osc and the plugins can just reuse that instead of all of them that chose to suport it implemneting it them > selves. > > again this goes directly against the orginial design intent fo osc to provide a stable comandline interface across > clouds with differnt versions of openstack, however since most of the peopel that cared about that have now moved on form openstack > and osc and since the comunity seam to have change its mind in providing a stable api expirence we should proably adress this divergance. > > i see the fact that some plugins added micorversion negocation in direct breach of this design principal to be more problematic > form a consticy point of view then any other divergance. See above :) Stephen > > > > I will now get off my pedestal/ivory tower ? > > > > Thanks! > > Stephen > > > > PS: I'm talking here about the command themselves, not their > > implementations. We > > do somethings extra in OSC that are user helpful, like allowing users to > > identify resources by their name in addition to by their UUIDs. We also > > currently do things that no so user helpful, like crashing and burning if > > the > > name lookups fail (I'm thinking about the various Glance-related commands > > that > > error out if a project name/ID is passed to a command and the user can't > > look up > > that project). 
These are things we're more than willing to fix and will > > happily > > accept patches for :) > > > > [1] https://review.opendev.org/c/openstack/python-openstackclient/+/865377 From smooney at redhat.com Thu Nov 24 13:36:25 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 24 Nov 2022 13:36:25 +0000 Subject: [openstackclient] Autonegotiation of microversions (Was: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients) In-Reply-To: <884ebfd58c4fee24288ed1d64202e9b8fc86bd4b.camel@redhat.com> References: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com> <99d60554d77b14bfc5cdf798cfe625b239930422.camel@redhat.com> <884ebfd58c4fee24288ed1d64202e9b8fc86bd4b.camel@redhat.com> Message-ID: <2bf3b12d2bda1de4d037cd56f951be2be9ff1bda.camel@redhat.com> On Thu, 2022-11-24 at 12:17 +0000, Stephen Finucane wrote: > On Wed, 2022-11-23 at 14:44 +0000, Sean Mooney wrote: > > On Wed, 2022-11-23 at 14:11 +0000, Stephen Finucane wrote: > > > ? > > > > > > tl;dr: $subject > > > > > > I reviewed a patch against openstackclient (OSC) today [1] and left a rather > > > lengthy comment that I thought worthy of bringing up to a wider audience. The > > > patch itself doesn't matter so much as what is was trying to achieve, namely > > > modifying an existing OSC command to better match the structure of the > > > equivalent legacy client command. The review provides more detail than I do here > > > but the tl;dr: is that this is a big no-no and OSC will and must maintain > > > consistency between OSC commands over consistency with legacy clients. As I > > > noted in the review, consistency is one of the biggest advantages of OSC over > > > the legacy clients: if you know the name of the resource type you wish to work > > > with, you can pretty accurately guess the command and its structure. This is a > > > thing that operators have consistently said they love about OSC and its one of > > > the key reasons we're trying to get every command to provide full current API > > > implementations in OSC (and SDK). > > > > > > Now I get that the way some of these consistent commands have been implemented > > > has been the cause of contention in the past. I don't imagine it remains any > > > less contentious today. However, these patterns are well-understood, well-known > > > patterns that have for the most part worked just fine for close to a decade now. > > > The kind of patterns I'm thinking about include: > > > > > > * The command to create a new resource should always take the format > > > ' create > > > * The command to modify some property of a resource should always take the > > > format ' set --property=value ' > > > * The command to list, fetch or delete resources should always take the format > > > ' list', ' get ', and ' > > > delete ', respectively. > > you have listed ' get ' to fetch a resouce but in my experince > > "show" is the more common action > > > > openstack server show > > openstack image show > > openstack volume show > > > > also network and port and subnet baiscaly all the resouce form the core services > > > > get does not really seam to be used. > > Whoops, typo. ' show ' is what I meant. > > > > ?* Boolean options should always take the form of flags with an alternate > > > ?negative option like '--flag' and '--no-flag', rather than '-- > > > ?flag=' > > i personally dont like this but i agree with being consitant. i strongly > > prefer the > > '--flag=' approch as something that is more readble? > > but its not the pattern in use in osc. 
i would prefer to keep things consitent > > then change this at this point. > > > > > ?* And a couple of other things that we tend to highlight in reviews. > > > > > > We want to preserve this behavior, lest OSC lose this huge USP and devolve > > > into > > > a muddle mess of different ideas and various individuals'/teams' > > > preferences. > > > I'm more than happy to discuss and debate this stuff with anyone who's > > > interested and we'll continue reviewing each patch on its merit and > > > providing > > > exceptions to these rules where they make sense, but it will remain an > > > ongoing > > > goal and it's something we'd like people to consider when working on OSC > > > itself > > > or any of its plugins. > > > > i agree with what you said in general but there is one digerance already that > > we might need to reconsider. > > > > i hate that i bring this up but one of the design guidlines of OSC was > > commands must not auto negociagte the latest micorverion. > > that again was for consitency so that command would work the same across > > different clouds with different api versions. > > many plugins have broken this design requirement btu the core osc client still > > maintains its orginal design. > > > > to level set osc intentionally does not support microverion negocaitation, it > > was a desgin choice not an oversight. > > Like gtema, I'm not aware of any such design decision in OSC. Looking through > the docs and git logs, I'm also unable to find any references to it. I _suspect_ > that you might be confusing OSC with the legacy clients, where this behavior was > very much a design choice. OSC has traditionally inherited this behavior > owing to its use of the API bindings from the legacy clients but this wasn't > intentional.? > that is not the case if you want to understand the history dean has captured it here https://youtu.be/D-4Avtxjby0?t=310 i was stitting in the room at the time. the commitment ot provideing a stable comandline interface for puppet ansible and other scripting was intoduced in the 1.0 release. this was one of the big departure form the project cleint that do not provide a stabel command line gurantee for scripting. we try not to break people intentually but osc was ment to be the stable client that had consitenty behavior. > We must remember that OSC is designed for humans first and > foremost, while machines should use SDK or the clients directly. > again that is wrong machine parsable output is a core part of the openstack client phiosophy https://youtu.be/D-4Avtxjby0?t=911 https://www.youtube.com/watch?v=EMy9IsRHY-o&t=1528s > When auto- > negotiation is done correctly (i.e. without the bugs that fungi highlighted), it > allows a human to get the best possible functionality from their deployment (if > we are to assume that each new microversion is an improvement on its > predecessors) which ultimately results in a better user experience. > its a better user experince only if its correct and to have it be correct and consitent to be both you would need to use the oldest microversoin that supports the parmaters you passed using the oldest ensure the behavior is consitent across clouds. > As we > replace use of these clients with SDK, we are slowly fixing this in core OSC and > we'd like to eventually see all commands in OSC auto-negotiating microversions, > where this makes sense for the underlying service. that will directly break existing users and documentation. 
so if you eant to enabel that it needs to be a majory version as it will break the api gurantees of the openstack client. im not arguring that auto negociation would not be a better ux but we dont get it for free. the cost will be in consitent behavior across openstack clouds and versiosn. > > > since many of the plugins have ignored that and implemnted it anyway i think it would be good to provide a way to opt into the > > desired behavior. i.e. provide a --latest global flag or change the default for the --os-compute-api ectr command to latest in a major > > version fo osc. > > As you know, you can manually set API versions in three ways: > > * Via command-line arguments > * Via environment variables > * Via clouds.yaml > > Any of these will override the version negotiation done by OSC. For a power user > like yourself, I suspect this might be what you want and you're free to do it. > Nothing changes with these. We'll just start doing auto-negotiation by default. im concerned about the ecosystem of user and docs that have been built based on teh stable command line api gurenetee that the openstack cleint team commited to in the 1.0 release. i will be fine in any case but you have reniforced my perspective that teh people that gave that guarnteee have left the proejct and the fact that was a fundemental part fo the desgin has been lost for the knowlage of the current team. we use a lazy concent model in openstack as with may opensource proejct and if we decied to revoke that guarenttee and decied that as a comunity its better to focuse on human users i am fine with that if its a deliberate decsion. i dont think we should sleep walk into that because there is no one to advocate for the orginal design commitments. the oringal design has a user centric approch to ensuring that all command are inutitve and hiding the implmantion detail of which project provide a given logic resouce. i woudl like it if my command just worked too but we need to be very very carful with this type of change. > > > we can provide a common impelmatiton in osc and the plugins can just reuse that instead of all of them that chose to suport it implemneting it them > > selves. > > > > again this goes directly against the orginial design intent fo osc to provide a stable comandline interface across > > clouds with differnt versions of openstack, however since most of the peopel that cared about that have now moved on form openstack > > and osc and since the comunity seam to have change its mind in providing a stable api expirence we should proably adress this divergance. > > > > i see the fact that some plugins added micorversion negocation in direct breach of this design principal to be more problematic > > form a consticy point of view then any other divergance. > > See above :) > > Stephen > > > > > > > I will now get off my pedestal/ivory tower ? > > > > > > Thanks! > > > Stephen > > > > > > PS: I'm talking here about the command themselves, not their > > > implementations. We > > > do somethings extra in OSC that are user helpful, like allowing users to > > > identify resources by their name in addition to by their UUIDs. We also > > > currently do things that no so user helpful, like crashing and burning if > > > the > > > name lookups fail (I'm thinking about the various Glance-related commands > > > that > > > error out if a project name/ID is passed to a command and the user can't > > > look up > > > that project). 
These are things we're more than willing to fix and will > > > happily > > > accept patches for :) > > > > > > [1] https://review.opendev.org/c/openstack/python-openstackclient/+/865377 > From smooney at redhat.com Thu Nov 24 13:43:12 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 24 Nov 2022 13:43:12 +0000 Subject: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients In-Reply-To: References: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com> Message-ID: <03c960fbdecc8e06e9dbe1eb802b565eca2d77ec.camel@redhat.com> On Thu, 2022-11-24 at 09:48 +0100, Christian Rohmann wrote: > On 23/11/2022 17:55, Michael Johnson wrote: > > Stephen, I 100% agree. This is what has made the OpenStack client so > > much better than the legacy clients. > > Yes -> this! > > The major part of users' bad experiences and support efforts when using > OpenStack clouds is about clients. > Yes, users need to understand the concepts of how the resources > themselves work and how they can interact. > > But if different clients come on top or if they have different names for > similar things and different approaches - this is just totally avoidable. > And to me that's is what the generic openstackclient (via openstacksdk) > is all about: Reducing complexity for humans. > > Quite honestly, for my liking there should even be more conventions. Why > are there still commands that lack generic filters such > as "--project". Why is that not something that every command has to > support? If the backend API or the sdk cannot do it -> raise a bug. > But I am getting of course topic myself ... cross project request requrie admin prvialdges. and osc at the very start did not want to have admin commands. that changed as the goal of eventually replacing the project-commands for all usecase became more important. also api change are not bugs. they have to be take very carfully because once we commit to an api we have to support it for a very very very long time. addign --project to everything is not correct as not all resouce are proejct owned. for example keyparis in nova are owned by the user not the proejct, the same is ture for secrets in barbican flavor are not owend by any project or user. so it woudl be incorrect for --proejct to exsit on everything. for proejct scoped resouce like images, server, volumes it might make sense but only if the service support domain scoped tokesn or the request is made by an admin. out side of keystone i dont think domain scoped tokens are really a thing. most reqcust via osc will be using a project scoped token. > > > In short: Please keep the focus on strongly aligning things with the > clients. > > > > Regards > > > Christian > > From nguyenhuukhoinw at gmail.com Thu Nov 24 14:40:04 2022 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Thu, 24 Nov 2022 21:40:04 +0700 Subject: [Openstack-Magnum] Health Check without Floating IP Message-ID: Hello guys. I have used Magnum to setup k8s cluster and I dont use floating ip then my cluster show as below: [image: image.png] I used provider network only. Could you tell me if there is any way to have a health check? And Is there any way to setup k8s multi master without loadbalancer? Many thanks Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 8850 bytes Desc: not available URL: From stephenfin at redhat.com Thu Nov 24 16:15:58 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 24 Nov 2022 16:15:58 +0000 Subject: [openstackclient] Autonegotiation of microversions (Was: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients) In-Reply-To: <2bf3b12d2bda1de4d037cd56f951be2be9ff1bda.camel@redhat.com> References: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com> <99d60554d77b14bfc5cdf798cfe625b239930422.camel@redhat.com> <884ebfd58c4fee24288ed1d64202e9b8fc86bd4b.camel@redhat.com> <2bf3b12d2bda1de4d037cd56f951be2be9ff1bda.camel@redhat.com> Message-ID: On Thu, 2022-11-24 at 13:36 +0000, Sean Mooney wrote: > On Thu, 2022-11-24 at 12:17 +0000, Stephen Finucane wrote: > > On Wed, 2022-11-23 at 14:44 +0000, Sean Mooney wrote: > > > On Wed, 2022-11-23 at 14:11 +0000, Stephen Finucane wrote: > > > > ? > > > > > > > > tl;dr: $subject > > > > > > > > I reviewed a patch against openstackclient (OSC) today [1] and left a rather > > > > lengthy comment that I thought worthy of bringing up to a wider audience. The > > > > patch itself doesn't matter so much as what is was trying to achieve, namely > > > > modifying an existing OSC command to better match the structure of the > > > > equivalent legacy client command. The review provides more detail than I do here > > > > but the tl;dr: is that this is a big no-no and OSC will and must maintain > > > > consistency between OSC commands over consistency with legacy clients. As I > > > > noted in the review, consistency is one of the biggest advantages of OSC over > > > > the legacy clients: if you know the name of the resource type you wish to work > > > > with, you can pretty accurately guess the command and its structure. This is a > > > > thing that operators have consistently said they love about OSC and its one of > > > > the key reasons we're trying to get every command to provide full current API > > > > implementations in OSC (and SDK). > > > > > > > > Now I get that the way some of these consistent commands have been implemented > > > > has been the cause of contention in the past. I don't imagine it remains any > > > > less contentious today. However, these patterns are well-understood, well-known > > > > patterns that have for the most part worked just fine for close to a decade now. > > > > The kind of patterns I'm thinking about include: > > > > > > > > * The command to create a new resource should always take the format > > > > ' create > > > > * The command to modify some property of a resource should always take the > > > > format ' set --property=value ' > > > > * The command to list, fetch or delete resources should always take the format > > > > ' list', ' get ', and ' > > > > delete ', respectively. > > > you have listed ' get ' to fetch a resouce but in my experince > > > "show" is the more common action > > > > > > openstack server show > > > openstack image show > > > openstack volume show > > > > > > also network and port and subnet baiscaly all the resouce form the core services > > > > > > get does not really seam to be used. > > > > Whoops, typo. ' show ' is what I meant. > > > > > > ?* Boolean options should always take the form of flags with an alternate > > > > ?negative option like '--flag' and '--no-flag', rather than '-- > > > > ?flag=' > > > i personally dont like this but i agree with being consitant. 
i strongly > > > prefer the > > > '--flag=' approch as something that is more readble? > > > but its not the pattern in use in osc. i would prefer to keep things consitent > > > then change this at this point. > > > > > > > ?* And a couple of other things that we tend to highlight in reviews. > > > > > > > > We want to preserve this behavior, lest OSC lose this huge USP and devolve > > > > into > > > > a muddle mess of different ideas and various individuals'/teams' > > > > preferences. > > > > I'm more than happy to discuss and debate this stuff with anyone who's > > > > interested and we'll continue reviewing each patch on its merit and > > > > providing > > > > exceptions to these rules where they make sense, but it will remain an > > > > ongoing > > > > goal and it's something we'd like people to consider when working on OSC > > > > itself > > > > or any of its plugins. > > > > > > i agree with what you said in general but there is one digerance already that > > > we might need to reconsider. > > > > > > i hate that i bring this up but one of the design guidlines of OSC was > > > commands must not auto negociagte the latest micorverion. > > > that again was for consitency so that command would work the same across > > > different clouds with different api versions. > > > many plugins have broken this design requirement btu the core osc client still > > > maintains its orginal design. > > > > > > to level set osc intentionally does not support microverion negocaitation, it > > > was a desgin choice not an oversight. > > > > Like gtema, I'm not aware of any such design decision in OSC. Looking through > > the docs and git logs, I'm also unable to find any references to it. I _suspect_ > > that you might be confusing OSC with the legacy clients, where this behavior was > > very much a design choice. OSC has traditionally inherited this behavior > > owing to its use of the API bindings from the legacy clients but this wasn't > > intentional.? > > > that is not the case if you want to understand the history dean has captured it here https://youtu.be/D-4Avtxjby0?t=310 > i was stitting in the room at the time. the commitment ot provideing a stable comandline interface for puppet ansible and other > scripting was intoduced in the 1.0 release. > > this was one of the big departure form the project cleint that do not provide a stabel command line gurantee for scripting. > we try not to break people intentually but osc was ment to be the stable client that had consitenty behavior.# This is a different thing. What Dean is talking about there is the command structure itself. We still do this. For example, you can create a volume based on an existing volume snapshot. Older versions of OSC did this like so: openstack volume create --snapshot-id ... However, we allow users to specify either a name or ID for the snapshot so this name was misleading. As a result, at some point this option was renamed and you'd now create a volume from a snapshot like so: openstack volume create --snapshot ... Crucially though, we did not remove '--snapshot-id'. It's no longer emitted in the help message (we do this using 'help=argparse.SUPPRESS') but if you pass this, OSC will continue to honour it. There have been exceptions to this. The old '--live ' parameter for 'server migrate' jumps to mind, but that was removed because it was actively harmful: users almost never want to bypass the scheduler when live migrating. They are exceptions though. 
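To make that concrete, both spellings still work today; only the newer one appears in 'openstack volume create --help' (the resource names below are just placeholders):

    openstack volume create --snapshot nightly-snap restored-volume
    openstack volume create --snapshot-id <snapshot-uuid> restored-volume

Keeping the old option accepted but hidden lets existing scripts keep working while new users are steered toward the clearer name.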
> > We must remember that OSC is designed for humans first and > > foremost, while machines should use SDK or the clients directly. > > > again that is wrong machine parsable output is a core part of the openstack client phiosophy > https://youtu.be/D-4Avtxjby0?t=911 > https://www.youtube.com/watch?v=EMy9IsRHY-o&t=1528s I'm not suggesting that machine parseable output isn't a concern and I've invested time in fixing bugs with the machine readable formats. However, the human-readable output is our primary concern since that's what most people that talk to us about OSC care about. I suspect most others are using something like Ansible nowadays... > > When auto- > > negotiation is done correctly (i.e. without the bugs that fungi highlighted), it > > allows a human to get the best possible functionality from their deployment (if > > we are to assume that each new microversion is an improvement on its > > predecessors) which ultimately results in a better user experience. > > > its a better user experince only if its correct and to have it be correct and consitent > to be both you would need to use the oldest microversoin that supports the parmaters you passed > using the oldest ensure the behavior is consitent across clouds. We have to ensure it's correct, yes, but I don't think it has to be consistent. We should strive to provide the best experience to a user and this means using the latest API versions. If a user wants to be consistent then they can explicitly request an API version. > > As we > > replace use of these clients with SDK, we are slowly fixing this in core OSC and > > we'd like to eventually see all commands in OSC auto-negotiating microversions, > > where this makes sense for the underlying service. > that will directly break existing users and documentation. > so if you eant to enabel that it needs to be a majory version as it will break the api gurantees of the openstack client. > im not arguring that auto negociation would not be a better ux but we dont get it for free. > the cost will be in consitent behavior across openstack clouds and versiosn. We're already way down the road and have released multiple major versions since we started using SDK (for glance initially, I think). '--os-{server}-api- version' is an option to. > > > since many of the plugins have ignored that and implemnted it anyway i think it would be good to provide a way to opt into the > > > desired behavior. i.e. provide a --latest global flag or change the default for the --os-compute-api ectr command to latest in a major > > > version fo osc. > > > > As you know, you can manually set API versions in three ways: > > > > * Via command-line arguments > > * Via environment variables > > * Via clouds.yaml > > > > Any of these will override the version negotiation done by OSC. For a power user > > like yourself, I suspect this might be what you want and you're free to do it. > > Nothing changes with these. We'll just start doing auto-negotiation by default. > im concerned about the ecosystem of user and docs that have been built based on teh stable command line api gurenetee > that the openstack cleint team commited to in the 1.0 release. > > i will be fine in any case but you have reniforced my perspective that teh people that gave that guarnteee have left the proejct > and the fact that was a fundemental part fo the desgin has been lost for the knowlage of the current team. 
> > we use a lazy concent model in openstack as with may opensource proejct and if we decied to revoke that guarenttee and decied that as a comunity its > better to focuse on human users i am fine with that if its a deliberate decsion. i dont think we should sleep walk into that because > there is no one to advocate for the orginal design commitments. the oringal design has a user centric approch to ensuring that > all command are inutitve and hiding the implmantion detail of which project provide a given logic resouce. > > i woudl like it if my command just worked too but we need to be very very carful with this type of change. As noted above, I think we're talking about different things and I don't think we're planning on blindly removing commands themselves. A given command invocation should continue working for a long-time to come. Stephen > > > > > > we can provide a common impelmatiton in osc and the plugins can just reuse that instead of all of them that chose to suport it implemneting it them > > > selves. > > > > > > again this goes directly against the orginial design intent fo osc to provide a stable comandline interface across > > > clouds with differnt versions of openstack, however since most of the peopel that cared about that have now moved on form openstack > > > and osc and since the comunity seam to have change its mind in providing a stable api expirence we should proably adress this divergance. > > > > > > i see the fact that some plugins added micorversion negocation in direct breach of this design principal to be more problematic > > > form a consticy point of view then any other divergance. > > > > See above :) > > > > Stephen > > > > > > > > > > I will now get off my pedestal/ivory tower ? > > > > > > > > Thanks! > > > > Stephen > > > > > > > > PS: I'm talking here about the command themselves, not their > > > > implementations. We > > > > do somethings extra in OSC that are user helpful, like allowing users to > > > > identify resources by their name in addition to by their UUIDs. We also > > > > currently do things that no so user helpful, like crashing and burning if > > > > the > > > > name lookups fail (I'm thinking about the various Glance-related commands > > > > that > > > > error out if a project name/ID is passed to a command and the user can't > > > > look up > > > > that project). These are things we're more than willing to fix and will > > > > happily > > > > accept patches for :) > > > > > > > > [1] https://review.opendev.org/c/openstack/python-openstackclient/+/865377 > > > From smooney at redhat.com Thu Nov 24 17:57:03 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 24 Nov 2022 17:57:03 +0000 Subject: [openstackclient] Autonegotiation of microversions (Was: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients) In-Reply-To: References: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com> <99d60554d77b14bfc5cdf798cfe625b239930422.camel@redhat.com> <884ebfd58c4fee24288ed1d64202e9b8fc86bd4b.camel@redhat.com> <2bf3b12d2bda1de4d037cd56f951be2be9ff1bda.camel@redhat.com> Message-ID: On Thu, 2022-11-24 at 16:15 +0000, Stephen Finucane wrote: > On Thu, 2022-11-24 at 13:36 +0000, Sean Mooney wrote: > > On Thu, 2022-11-24 at 12:17 +0000, Stephen Finucane wrote: > > > On Wed, 2022-11-23 at 14:44 +0000, Sean Mooney wrote: > > > > On Wed, 2022-11-23 at 14:11 +0000, Stephen Finucane wrote: > > > > > ? 
> > > > > > > > > > tl;dr: $subject > > > > > > > > > > I reviewed a patch against openstackclient (OSC) today [1] and left a rather > > > > > lengthy comment that I thought worthy of bringing up to a wider audience. The > > > > > patch itself doesn't matter so much as what is was trying to achieve, namely > > > > > modifying an existing OSC command to better match the structure of the > > > > > equivalent legacy client command. The review provides more detail than I do here > > > > > but the tl;dr: is that this is a big no-no and OSC will and must maintain > > > > > consistency between OSC commands over consistency with legacy clients. As I > > > > > noted in the review, consistency is one of the biggest advantages of OSC over > > > > > the legacy clients: if you know the name of the resource type you wish to work > > > > > with, you can pretty accurately guess the command and its structure. This is a > > > > > thing that operators have consistently said they love about OSC and its one of > > > > > the key reasons we're trying to get every command to provide full current API > > > > > implementations in OSC (and SDK). > > > > > > > > > > Now I get that the way some of these consistent commands have been implemented > > > > > has been the cause of contention in the past. I don't imagine it remains any > > > > > less contentious today. However, these patterns are well-understood, well-known > > > > > patterns that have for the most part worked just fine for close to a decade now. > > > > > The kind of patterns I'm thinking about include: > > > > > > > > > > * The command to create a new resource should always take the format > > > > > ' create > > > > > * The command to modify some property of a resource should always take the > > > > > format ' set --property=value ' > > > > > * The command to list, fetch or delete resources should always take the format > > > > > ' list', ' get ', and ' > > > > > delete ', respectively. > > > > you have listed ' get ' to fetch a resouce but in my experince > > > > "show" is the more common action > > > > > > > > openstack server show > > > > openstack image show > > > > openstack volume show > > > > > > > > also network and port and subnet baiscaly all the resouce form the core services > > > > > > > > get does not really seam to be used. > > > > > > Whoops, typo. ' show ' is what I meant. > > > > > > > > ?* Boolean options should always take the form of flags with an alternate > > > > > ?negative option like '--flag' and '--no-flag', rather than '-- > > > > > ?flag=' > > > > i personally dont like this but i agree with being consitant. i strongly > > > > prefer the > > > > '--flag=' approch as something that is more readble? > > > > but its not the pattern in use in osc. i would prefer to keep things consitent > > > > then change this at this point. > > > > > > > > > ?* And a couple of other things that we tend to highlight in reviews. > > > > > > > > > > We want to preserve this behavior, lest OSC lose this huge USP and devolve > > > > > into > > > > > a muddle mess of different ideas and various individuals'/teams' > > > > > preferences. > > > > > I'm more than happy to discuss and debate this stuff with anyone who's > > > > > interested and we'll continue reviewing each patch on its merit and > > > > > providing > > > > > exceptions to these rules where they make sense, but it will remain an > > > > > ongoing > > > > > goal and it's something we'd like people to consider when working on OSC > > > > > itself > > > > > or any of its plugins. 
> > > > > > > > i agree with what you said in general but there is one digerance already that > > > > we might need to reconsider. > > > > > > > > i hate that i bring this up but one of the design guidlines of OSC was > > > > commands must not auto negociagte the latest micorverion. > > > > that again was for consitency so that command would work the same across > > > > different clouds with different api versions. > > > > many plugins have broken this design requirement btu the core osc client still > > > > maintains its orginal design. > > > > > > > > to level set osc intentionally does not support microverion negocaitation, it > > > > was a desgin choice not an oversight. > > > > > > Like gtema, I'm not aware of any such design decision in OSC. Looking through > > > the docs and git logs, I'm also unable to find any references to it. I _suspect_ > > > that you might be confusing OSC with the legacy clients, where this behavior was > > > very much a design choice. OSC has traditionally inherited this behavior > > > owing to its use of the API bindings from the legacy clients but this wasn't > > > intentional.? > > > > > that is not the case if you want to understand the history dean has captured it here https://youtu.be/D-4Avtxjby0?t=310 > > i was stitting in the room at the time. the commitment ot provideing a stable comandline interface for puppet ansible and other > > scripting was intoduced in the 1.0 release. > > > > this was one of the big departure form the project cleint that do not provide a stabel command line gurantee for scripting. > > we try not to break people intentually but osc was ment to be the stable client that had consitenty behavior.# > > This is a different thing. What Dean is talking about there is the command > structure itself. We still do this. For example, you can create a volume based > on an existing volume snapshot. Older versions of OSC did this like so: > > openstack volume create --snapshot-id ... > > However, we allow users to specify either a name or ID for the snapshot so this > name was misleading. As a result, at some point this option was renamed and > you'd now create a volume from a snapshot like so: > > openstack volume create --snapshot ... > > Crucially though, we did not remove '--snapshot-id'. It's no longer emitted in > the help message (we do this using 'help=argparse.SUPPRESS') but if you pass > this, OSC will continue to honour it. > > There have been exceptions to this. The old '--live ' parameter for > 'server migrate' jumps to mind, but that was removed because it was actively > harmful: users almost never want to bypass the scheduler when live migrating. > They are exceptions though. > > > > We must remember that OSC is designed for humans first and > > > foremost, while machines should use SDK or the clients directly. > > > > > again that is wrong machine parsable output is a core part of the openstack client phiosophy > > https://youtu.be/D-4Avtxjby0?t=911 > > https://www.youtube.com/watch?v=EMy9IsRHY-o&t=1528s > > I'm not suggesting that machine parseable output isn't a concern and I've > invested time in fixing bugs with the machine readable formats. However, the > human-readable output is our primary concern since that's what most people that > talk to us about OSC care about. I suspect most others are using something like > Ansible nowadays... > > > > When auto- > > > negotiation is done correctly (i.e. 
without the bugs that fungi highlighted), it > > > allows a human to get the best possible functionality from their deployment (if > > > we are to assume that each new microversion is an improvement on its > > > predecessors) which ultimately results in a better user experience. > > > > > its a better user experince only if its correct and to have it be correct and consitent > > to be both you would need to use the oldest microversoin that supports the parmaters you passed > > using the oldest ensure the behavior is consitent across clouds. > > We have to ensure it's correct, yes, but I don't think it has to be consistent. > We should strive to provide the best experience to a user and this means using > the latest API versions. If a user wants to be consistent then they can > explicitly request an API version. > > > > As we > > > replace use of these clients with SDK, we are slowly fixing this in core OSC and > > > we'd like to eventually see all commands in OSC auto-negotiating microversions, > > > where this makes sense for the underlying service. > > that will directly break existing users and documentation. > > so if you eant to enabel that it needs to be a majory version as it will break the api gurantees of the openstack client. > > im not arguring that auto negociation would not be a better ux but we dont get it for free. > > the cost will be in consitent behavior across openstack clouds and versiosn. > > We're already way down the road and have released multiple major versions since > we started using SDK (for glance initially, I think). '--os-{server}-api- > version' is an option to. sure so theere shoudl be no issue with doing another one if/when we start enabling version negocation in osc the sdk is currently working on its 1.0.0 release but it has already broken other consumer liek the ansible openstack collections in advance of that release. https://github.com/openstack/ansible-collections-openstack#breaking-backward-compatibility-warning fortunetly that also means that the ansibel collections shoudl not be affected by osc change since they shoudl already be using the sdk. for other integrations liek the openstack puppet moduels for creating opentack flavors that is implemtned in ruby and cant use the sdk so they use the openstack client and that module will start having flavor extraspec validation enabled by default if we negociate the latest micoversion. https://github.com/openstack/puppet-nova/blob/master/lib/puppet/provider/nova_flavor/openstack.rb there service list inteface will start using uuid ids instead of int https://github.com/openstack/puppet-nova/blob/master/lib/puppet/provider/nova_service/openstack.rb#L17 i honestly dont really read puppet/ruby and dont really know if any of those changes will break them but if we follow semver and treat this as a breaking change in terms of the default behavor at least puppet can pin the osc version and take the same approch ansibel took with pinning the sdk version to <0.99.0 to allow them to adapt. the api for interacting with glance https://github.com/openstack/puppet-glance/blob/master/lib/puppet/provider/glance_image/openstack.rb via puppet seam pretty small and it seams to still default to v2... but hopefully the were not broken the previous changes. the did howeer need to adapt to osc v4.0.0 previously https://github.com/openstack/puppet-glance/commit/13653589788c6ebe4f0d129968161216fd53f161 i would expect they will have to do something simialr if we adopt version autonegocation in general. 
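to sketch the flavor case (the flavor name is a placeholder and the exact microversion is from memory, so treat the number as approximate rather than authoritative):

    # pinned to the base microversion, a misspelled extra spec key is stored silently
    openstack --os-compute-api-version 2.1 flavor set --property hw:cpu_polcy=dedicated m1.small
    # with a recent microversion (roughly 2.86 onwards) nova validates extra specs and rejects the typo
    openstack --os-compute-api-version 2.86 flavor set --property hw:cpu_polcy=dedicated m1.small

so a tool that has always relied on the old, permissive behavior starts seeing errors the moment the default moves to the negotiated latest.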
> > > > > since many of the plugins have ignored that and implemnted it anyway i think it would be good to provide a way to opt into the > > > > desired behavior. i.e. provide a --latest global flag or change the default for the --os-compute-api ectr command to latest in a major > > > > version fo osc. > > > > > > As you know, you can manually set API versions in three ways: > > > > > > * Via command-line arguments > > > * Via environment variables > > > * Via clouds.yaml > > > > > > Any of these will override the version negotiation done by OSC. For a power user > > > like yourself, I suspect this might be what you want and you're free to do it. > > > Nothing changes with these. We'll just start doing auto-negotiation by default. > > im concerned about the ecosystem of user and docs that have been built based on teh stable command line api gurenetee > > that the openstack cleint team commited to in the 1.0 release. > > > > i will be fine in any case but you have reniforced my perspective that teh people that gave that guarnteee have left the proejct > > and the fact that was a fundemental part fo the desgin has been lost for the knowlage of the current team. > > > > we use a lazy concent model in openstack as with may opensource proejct and if we decied to revoke that guarenttee and decied that as a comunity its > > better to focuse on human users i am fine with that if its a deliberate decsion. i dont think we should sleep walk into that because > > there is no one to advocate for the orginal design commitments. the oringal design has a user centric approch to ensuring that > > all command are inutitve and hiding the implmantion detail of which project provide a given logic resouce. > > > > i woudl like it if my command just worked too but we need to be very very carful with this type of change. > > As noted above, I think we're talking about different things and I don't think > we're planning on blindly removing commands themselves. A given command > invocation should continue working for a long-time to come. coreect and im not talkign about command removal. what i dont think shoudl happen without a osc major version release is for the beahivor of an exsiting command to change from defaulting to oldest microversion to latest. that would break things like rsa keypair generateion which is nolonger supported in the latest microverion anythin that used to get stats form teh hypervior api would break, there are severall other examples. so if we do this all im really asking for is a major veriosn of the openstack client to signal to peopel and project that there is semantic api breakage that is not backward compatible and that they shoudl pause before using it. there may not be a syntaxtic breakage which woudl happen if a command was remvoed or renamemed but there is a sematic change. > > Stephen > > > > > > > > > > we can provide a common impelmatiton in osc and the plugins can just reuse that instead of all of them that chose to suport it implemneting it them > > > > selves. > > > > > > > > again this goes directly against the orginial design intent fo osc to provide a stable comandline interface across > > > > clouds with differnt versions of openstack, however since most of the peopel that cared about that have now moved on form openstack > > > > and osc and since the comunity seam to have change its mind in providing a stable api expirence we should proably adress this divergance. 
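one concrete sketch of that sort of semantic break, with the microversion number again quoted from memory:

    # at the old default microversion nova will generate a keypair and hand back the private key
    openstack keypair create my-key
    # from roughly 2.92 onwards generation is gone, so the same command needs an existing public key
    openstack --os-compute-api-version 2.92 keypair create --public-key ~/.ssh/id_ed25519.pub my-key

the command name and syntax survive, but what it does (and whether the first form works at all) depends on the version that gets picked.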
> > > > > > > > i see the fact that some plugins added micorversion negocation in direct breach of this design principal to be more problematic > > > > form a consticy point of view then any other divergance. > > > > > > See above :) > > > > > > Stephen > > > > > > > > > > > > > I will now get off my pedestal/ivory tower ? > > > > > > > > > > Thanks! > > > > > Stephen > > > > > > > > > > PS: I'm talking here about the command themselves, not their > > > > > implementations. We > > > > > do somethings extra in OSC that are user helpful, like allowing users to > > > > > identify resources by their name in addition to by their UUIDs. We also > > > > > currently do things that no so user helpful, like crashing and burning if > > > > > the > > > > > name lookups fail (I'm thinking about the various Glance-related commands > > > > > that > > > > > error out if a project name/ID is passed to a command and the user can't > > > > > look up > > > > > that project). These are things we're more than willing to fix and will > > > > > happily > > > > > accept patches for :) > > > > > > > > > > [1] https://review.opendev.org/c/openstack/python-openstackclient/+/865377 > > > > > > From tsrini.alr at gmail.com Tue Nov 22 15:38:17 2022 From: tsrini.alr at gmail.com (Srinivasan T) Date: Tue, 22 Nov 2022 21:08:17 +0530 Subject: ISO file as user-data Message-ID: Hi Team, I'm trying to understand if it is possible to pass an ISO file as user-data or config-drive while creating an instance. Ex., *openstack server create --image --flavor --user-data /path/to/myiso.iso --network --config-drive true * Basically we need to mount & get the files in our ISO that is passed to the VM instance for initial configuration. When trying the above OpenStack CLI command with ISO as user-data, below error is thrown (where as plain text file works), *'utf8' codec can't decode byte 0xbb in position 32848: invalid start byte* Regards, Srini -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rishat.azizov at gmail.com Thu Nov 24 12:50:15 2022 From: rishat.azizov at gmail.com (=?UTF-8?B?0KDQuNGI0LDRgiDQkNC30LjQt9C+0LI=?=) Date: Thu, 24 Nov 2022 18:50:15 +0600 Subject: [swift] Periodically crashes of proxy-server In-Reply-To: References: Message-ID: In systemd log we get the following: proxy-server[396412]: STDERR: Traceback (most recent call last): proxy-server[396412]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/hubs/poll.py", line 111, in wait#012 listener.cb(fileno) proxy-server[396412]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/greenthread.py", line 221, in main#012 result = function(*args, **kwargs) proxy-server[396412]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 825, in process_request#012 proto.__init__(conn_state, self) proxy-server[396412]: STDERR: File "/usr/lib/python3/dist-packages/swift/common/wsgi.py", line 395, in __init__#012 wsgi.HttpProtocol.__init__(self, *args, **kwargs) proxy-server[396412]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 357, in __init__#012 self.handle() proxy-server[396412]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 390, in handle#012 self.handle_one_request() proxy-server[396412]: STDERR: File "/usr/lib/python3/dist-packages/swift/common/wsgi.py", line 521, in handle_one_request#012 got = wsgi.HttpProtocol.handle_one_request(self) proxy-server[396412]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 419, in handle_one_request#012 self.raw_requestline = self._read_request_line() proxy-server[396412]: STDERR: File "/usr/lib/python3/dist-packages/swift/common/wsgi.py", line 513, in _read_request_line#012 got = wsgi.HttpProtocol._read_request_line(self) proxy-server[396412]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/wsgi.py", line 402, in _read_request_line#012 return self.rfile.readline(self.server.url_length_limit) proxy-server[396412]: STDERR: File "/usr/lib/python3.8/socket.py", line 669, in readinto#012 return self._sock.recv_into(b) proxy-server[396412]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/greenio/base.py", line 374, in recv_into#012 return self._recv_loop(self.fd.recv_into, 0, buffer, nbytes, flags) proxy-server[396412]: STDERR: File "/usr/lib/python3/dist-packages/eventlet/greenio/base.py", line 352, in _recv_loop#012 return recv_meth(*args) proxy-server[396412]: STDERR: TimeoutError: [Errno 110] Connection timed out proxy-server[396412]: STDERR: Removing descriptor: 65 ??, 3 ????. 2022 ?. ? 20:27, ????? ?????? : > Hello! > > Periodically we get "UNCAUGHT EXCEPTION#012Traceback" errors on > swift-proxy, log attached to this email. After that the swift-proxy > process crashes, clients get 502 errors. Could you please help with this? > Swift version - yoga 2.29.1. > > Thank you. Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbaker at redhat.com Fri Nov 25 01:05:33 2022 From: sbaker at redhat.com (Steve Baker) Date: Fri, 25 Nov 2022 14:05:33 +1300 Subject: [OVB - openstack-virtual-baremetal] - Douglas Viroel and Chandan Kumar as core In-Reply-To: References: Message-ID: <5b46e28e-4985-191e-18bd-f650b9b48358@redhat.com> +1! On 24/11/22 22:31, Harald Jensas wrote: > Hi, > > After discussions with Douglas, Chandan and Ronelle Landy I would like > to suggest adding Douglas and Chandan to the OVB core team. The > repository have very little activity, i.e there is not a lot of review > history to base the decision on. 
I did work with both individuals when > onboarding new clouds to run TripleO CI jobs utilizing OVB, they have > a good understanding of how the thing works. > > If there are no objections, I will add them to them as core reviewers > next week. > > > Regards, > Harald > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yamamoto at midokura.com Fri Nov 25 05:42:06 2022 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Fri, 25 Nov 2022 14:42:06 +0900 Subject: [neutron] networking-midonet mantainers In-Reply-To: References: Message-ID: sorry for the inconvenience. On Wed, Nov 23, 2022 at 12:11 AM Rodolfo Alonso Hernandez wrote: > > Hello Neutrinos: > > We have recently found some zuul errors related to networking-midonet. In order to fix them, we have pushed [1]. However, the CI status of this project is not in good shape. > > This mail is a kind request for maintainers of this project. We need to ensure that the stable branches are still accepting patches and the CI jobs are passing. > > Thank you in advance. > > [1]https://review.opendev.org/q/project:openstack%252Fnetworking-midonet+status:open > From ueha.ayumu at fujitsu.com Fri Nov 25 07:40:52 2022 From: ueha.ayumu at fujitsu.com (Ayumu Ueha (Fujitsu)) Date: Fri, 25 Nov 2022 07:40:52 +0000 Subject: [tacker] Tacker's Legacy API deprecation Message-ID: Hi Tacker users, Tacker team is going to mark the Tacker's Legacy APIs [1] (excluding VIM feature) to "deprecated" in Antelope cycle. Please see etherpad [2] about list of target APIs. We follow deprecation guideline [3] of OpenStack, and need to know how many users use Tacker's Legacy API. Please let me know if anyone still uses the Legacy API that will be marked "deprecated" via Rest-API or CLI(python-tackerclient) or GUI(tacker-horizon). If no one is using it, we will obsolete it in C cycle as planned. Thanks. [1] https://docs.openstack.org/api-ref/nfv-orchestration/v1/legacy.html [2] https://etherpad.opendev.org/p/tacker-legacy-deprecation [3] https://docs.openstack.org/project-team-guide/deprecation.html#guidelines Best Regards, Ueha -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Nov 25 08:19:07 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 25 Nov 2022 09:19:07 +0100 Subject: [neutron] Neutron drivers meeting cancelled Message-ID: Hello Neutrinos: Due to the lack of agenda and quorum this week, the Neutron drivers meeting is cancelled today. Have a nice weekend and see you next week! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jean-francois.taltavull at elca.ch Fri Nov 25 11:44:02 2022 From: jean-francois.taltavull at elca.ch (=?iso-8859-1?Q?Taltavull_Jean-Fran=E7ois?=) Date: Fri, 25 Nov 2022 11:44:02 +0000 Subject: [openstack-ansible] Designate: role seems trying to update DNS server pools before syncing database Message-ID: <27b50913162d497192325a9d65b1bed0@elca.ch> Hello, During the first run, the playbook 'os-designate-install.yml' fails and the 'designate-manage pool update' command produces the log line below: 'Nov 25 11:50:06 pp3controller1a-designate-container-53d945bb designate-manage[2287]: 2022-11-25 11:50:06.518 2287 CRITICAL designate [designate-manage - - - - -] Unhandled error: oslo_messaging.rpc.client.RemoteError: Remote error: ProgrammingError (pymysql.err.ProgrammingError) (1146, "Table 'designate.pools' doesn't exist")' Looking at the 'os_designate' role code shows that the handler ` Perform Designate pools update` is flushed before tables are created in the 'designate' database. O.S.: Ubuntu 20.04 OpenStack release: Wallaby OSA tag: 23.2.0 Regards, Jean-Francois From hemant.sonawane at itera.io Fri Nov 25 12:08:01 2022 From: hemant.sonawane at itera.io (Hemant Sonawane) Date: Fri, 25 Nov 2022 13:08:01 +0100 Subject: [cinder] volume_attachement entries are not getting deleted from DB Message-ID: Hello I am using wallaby release openstack and having issues with cinder volumes as once I try to delete, resize or unshelve the shelved vms the volume_attachement entries do not get deleted in cinder db and therefore the above mentioned operations fail every time. I have to delete these volume_attachement entries manually then it works. Is there any way to fix this issue ? nova-compute logs: cinderclient.exceptions.ClientException: Unable to update attachment.(Invalid volume: duplicate connectors detected on volume Help will be really appreciated Thanks ! -- Thanks and Regards, Hemant Sonawane -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Fri Nov 25 12:20:30 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Fri, 25 Nov 2022 17:50:30 +0530 Subject: [cinder] volume_attachement entries are not getting deleted from DB In-Reply-To: References: Message-ID: Hi Hemant, If your final goal is to delete the attachment entries in the cinder DB, we have attachment APIs to perform these tasks. The command useful for you is attachment list[1] and attachment delete[2]. Make sure you pass the right microversion i.e. 3.27 to be able to execute these operations. Eg: cinder --os-volume-api-version 3.27 attachment-list [1] https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-attachment-list [2] https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-attachment-delete On Fri, Nov 25, 2022 at 5:44 PM Hemant Sonawane wrote: > Hello > I am using wallaby release openstack and having issues with cinder > volumes as once I try to delete, resize or unshelve the shelved vms the > volume_attachement entries do not get deleted in cinder db and therefore > the above mentioned operations fail every time. I have to delete these > volume_attachement entries manually then it works. Is there any way to fix > this issue ? > > nova-compute logs: > > cinderclient.exceptions.ClientException: Unable to update > attachment.(Invalid volume: duplicate connectors detected on volume > > Help will be really appreciated Thanks ! 
> -- > Thanks and Regards, > > Hemant Sonawane > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Fri Nov 25 12:57:52 2022 From: eblock at nde.ag (Eugen Block) Date: Fri, 25 Nov 2022 12:57:52 +0000 Subject: [neutron] neutron-db-manage multiple heads Message-ID: <20221125125752.Horde.zo4M3KjkRUtEQ5cGx0sKDjv@webmail.nde.ag> Hi *, I'd like to ask you for advice on how to clean up my neutron db. At some point (which I don't know exactly, probably train) my neutron database got inconsistent, apparently one of the upgrades did not go as planned. The interesting thing is that the database still works, I just upgraded from ussuri to victoria where that issue popped up again during 'neutron-db-manage upgrade --expand', I'll add the information at the end of this email. Apparently, I have multiple heads, and one of them is from train, it seems as if I never ran --contract (or it failed and I didn't notice). Just some additional information what I did with this database: this cloud started out as a test environment with a single control node and then became a production environment. About two and a half years ago we decided to reinstall this cloud with version ussuri and import the databases. I had a virtual machine in which I upgraded the database dump from production to the latest versions at that time. That all worked quite well, I only didn't notice that something was missing. Now that I finished the U --> V upgrade I want to fix this inconsistency, I just have no idea how to do it. As I'm not sure how all the neutron-db-manage commands work exactly I'd like to ask for some guidance. For example, could the "stamp" command possibly help? Or how else can I get rid of the train head and/or how to get the train revision to "contract" so I can finish the upgrade and contract the victoria revision? I can paste the whole neutron-db history if necessary (neutron-db-manage history), please let me know what information would be required to get to the bottom of this. Any help is greatly appreciated! Thanks! Eugen ---snip--- controller01:~ # neutron-db-manage upgrade --expand [...] alembic.script.revision.MultipleHeads: Multiple heads are present for given argument 'expand at head'; 633d74ebbc4b, I38991de2b4 controller01:~ # neutron-db-manage current --verbose Running current for neutron ... INFO [alembic.runtime.migration] Context impl MySQLImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. 
Current revision(s) for mysql+pymysql://neutron:XXXXX at controller.fqdn/neutron: Rev: bebe95aae4d4 (head) Parent: b5344a66e818 Branch names: contract Path: /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract/bebe95aae4d4_.py Rev: 633d74ebbc4b (head) Parent: 6c9eb0469914 Branch names: expand Path: /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py Rev: I38991de2b4 (head) Parent: 49d8622c5221 Branch names: expand Path: /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/victoria/expand/I38991de2b4_source_and_destination_ip_prefix_neutron_metering_rule.py OK ---snip--- From hemant.sonawane at itera.io Fri Nov 25 13:12:39 2022 From: hemant.sonawane at itera.io (Hemant Sonawane) Date: Fri, 25 Nov 2022 14:12:39 +0100 Subject: [cinder] volume_attachement entries are not getting deleted from DB In-Reply-To: References: Message-ID: Hi Rajat, It's not about deleting attachments entries but the normal operations from horizon or via cli does not work because of that. So it really needs to be fixed to perform resize, shelve unshelve operations. Here are the detailed attachment entries you can see for the shelved instance. +--------------------------------------+--------------------------------------+--------------------------+---------------------------------- *----+---------------+-----------------------------------------**??* *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ ??* *| id | volume_id | attached_host | instance_uuid | attach_status | connector ??* * | ??* *+--------------------------------------+--------------------------------------+--------------------------+--------------------------------------+---------------+-----------------------------------------??* *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ ??* *| 8daddacc-8fc8-4d2b-a738-d05deb20049f | 67ea3a39-78b8-4d04-a280-166acdc90b8a | nfv1compute43.nfv1.o2.cz | 9266a2d7-9721-4994-a6b5-6b3290862dc6 | attached | {"platform": "x86_64", "os_type": "linux??* *", "ip": "10.42.168.87", "host": "nfv1compute43.nfv1.o2.cz ", "multipath": false, "do_local_attach": false, "system uuid": "65917e4f-c8c4-a2af-ec11-fe353e13f4dd", "mountpoint": "/dev/vda"} | ??* *| d3278543-4920-42b7-b217-0858e986fcce | 67ea3a39-78b8-4d04-a280-166acdc90b8a | NULL | 9266a2d7-9721-4994-a6b5-6b3290862dc6 | reserved | NULL ??* * | ??* *+--------------------------------------+--------------------------------------+--------------------------+--------------------------------------+---------------+-----------------------------------------??* *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ ??* *2 rows in set (0.00 sec) * for e.g if I would like to unshelve this instance it wont work as it has a duplicate entry in cinder db for the attachment. 
So i have to delete it manually from db or via cli *root at master01:/home/hemant# cinder --os-volume-api-version 3.27 attachment-list --all | grep 67ea3a39-78b8-4d04-a280-166acdc90b8a ??* *| 8daddacc-8fc8-4d2b-a738-d05deb20049f | 67ea3a39-78b8-4d04-a280-166acdc90b8a | attached | 9266a2d7-9721-4994-a6b5-6b3290862dc6 | ??* *| d3278543-4920-42b7-b217-0858e986fcce | 67ea3a39-78b8-4d04-a280-166acdc90b8a** | reserved | 9266a2d7-9721-4994-a6b5-6b3290862dc6 |* *cinder --os-volume-api-version 3.27 attachment-delete 8daddacc-8fc8-4d2b-a738-d05deb20049f* this is the only choice I have if I would like to unshelve vm. But this is not a good approach for production envs. I hope you understand me. Please feel free to ask me anything if you don't understand. On Fri, 25 Nov 2022 at 13:20, Rajat Dhasmana wrote: > Hi Hemant, > > If your final goal is to delete the attachment entries in the cinder DB, > we have attachment APIs to perform these tasks. The command useful for you > is attachment list[1] and attachment delete[2]. > Make sure you pass the right microversion i.e. 3.27 to be able to execute > these operations. > > Eg: > cinder --os-volume-api-version 3.27 attachment-list > > [1] > https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-attachment-list > [2] > https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-attachment-delete > > On Fri, Nov 25, 2022 at 5:44 PM Hemant Sonawane > wrote: > >> Hello >> I am using wallaby release openstack and having issues with cinder >> volumes as once I try to delete, resize or unshelve the shelved vms the >> volume_attachement entries do not get deleted in cinder db and therefore >> the above mentioned operations fail every time. I have to delete these >> volume_attachement entries manually then it works. Is there any way to fix >> this issue ? >> >> nova-compute logs: >> >> cinderclient.exceptions.ClientException: Unable to update >> attachment.(Invalid volume: duplicate connectors detected on volume >> >> Help will be really appreciated Thanks ! >> -- >> Thanks and Regards, >> >> Hemant Sonawane >> >> -- Thanks and Regards, Hemant Sonawane -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Fri Nov 25 13:49:24 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 25 Nov 2022 13:49:24 +0000 Subject: [openstackclient] Autonegotiation of microversions (Was: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients) In-Reply-To: References: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com> <99d60554d77b14bfc5cdf798cfe625b239930422.camel@redhat.com> <884ebfd58c4fee24288ed1d64202e9b8fc86bd4b.camel@redhat.com> <2bf3b12d2bda1de4d037cd56f951be2be9ff1bda.camel@redhat.com> Message-ID: <68e2635f6c3f705e8a7f64739adc41d3fa673e13.camel@redhat.com> On Thu, 2022-11-24 at 17:57 +0000, Sean Mooney wrote: > On Thu, 2022-11-24 at 16:15 +0000, Stephen Finucane wrote: > > On Thu, 2022-11-24 at 13:36 +0000, Sean Mooney wrote: > > > On Thu, 2022-11-24 at 12:17 +0000, Stephen Finucane wrote: > > > > On Wed, 2022-11-23 at 14:44 +0000, Sean Mooney wrote: > > > > > On Wed, 2022-11-23 at 14:11 +0000, Stephen Finucane wrote: > > > > > > ? > > > > > > > > > > > > tl;dr: $subject > > > > > > > > > > > > I reviewed a patch against openstackclient (OSC) today [1] and left a rather > > > > > > lengthy comment that I thought worthy of bringing up to a wider audience. 
The > > > > > > patch itself doesn't matter so much as what is was trying to achieve, namely > > > > > > modifying an existing OSC command to better match the structure of the > > > > > > equivalent legacy client command. The review provides more detail than I do here > > > > > > but the tl;dr: is that this is a big no-no and OSC will and must maintain > > > > > > consistency between OSC commands over consistency with legacy clients. As I > > > > > > noted in the review, consistency is one of the biggest advantages of OSC over > > > > > > the legacy clients: if you know the name of the resource type you wish to work > > > > > > with, you can pretty accurately guess the command and its structure. This is a > > > > > > thing that operators have consistently said they love about OSC and its one of > > > > > > the key reasons we're trying to get every command to provide full current API > > > > > > implementations in OSC (and SDK). > > > > > > > > > > > > Now I get that the way some of these consistent commands have been implemented > > > > > > has been the cause of contention in the past. I don't imagine it remains any > > > > > > less contentious today. However, these patterns are well-understood, well-known > > > > > > patterns that have for the most part worked just fine for close to a decade now. > > > > > > The kind of patterns I'm thinking about include: > > > > > > > > > > > > * The command to create a new resource should always take the format > > > > > > ' create > > > > > > * The command to modify some property of a resource should always take the > > > > > > format ' set --property=value ' > > > > > > * The command to list, fetch or delete resources should always take the format > > > > > > ' list', ' get ', and ' > > > > > > delete ', respectively. > > > > > you have listed ' get ' to fetch a resouce but in my experince > > > > > "show" is the more common action > > > > > > > > > > openstack server show > > > > > openstack image show > > > > > openstack volume show > > > > > > > > > > also network and port and subnet baiscaly all the resouce form the core services > > > > > > > > > > get does not really seam to be used. > > > > > > > > Whoops, typo. ' show ' is what I meant. > > > > > > > > > > ?* Boolean options should always take the form of flags with an alternate > > > > > > ?negative option like '--flag' and '--no-flag', rather than '-- > > > > > > ?flag=' > > > > > i personally dont like this but i agree with being consitant. i strongly > > > > > prefer the > > > > > '--flag=' approch as something that is more readble? > > > > > but its not the pattern in use in osc. i would prefer to keep things consitent > > > > > then change this at this point. > > > > > > > > > > > ?* And a couple of other things that we tend to highlight in reviews. > > > > > > > > > > > > We want to preserve this behavior, lest OSC lose this huge USP and devolve > > > > > > into > > > > > > a muddle mess of different ideas and various individuals'/teams' > > > > > > preferences. > > > > > > I'm more than happy to discuss and debate this stuff with anyone who's > > > > > > interested and we'll continue reviewing each patch on its merit and > > > > > > providing > > > > > > exceptions to these rules where they make sense, but it will remain an > > > > > > ongoing > > > > > > goal and it's something we'd like people to consider when working on OSC > > > > > > itself > > > > > > or any of its plugins. 
> > > > > > > > > > i agree with what you said in general but there is one digerance already that > > > > > we might need to reconsider. > > > > > > > > > > i hate that i bring this up but one of the design guidlines of OSC was > > > > > commands must not auto negociagte the latest micorverion. > > > > > that again was for consitency so that command would work the same across > > > > > different clouds with different api versions. > > > > > many plugins have broken this design requirement btu the core osc client still > > > > > maintains its orginal design. > > > > > > > > > > to level set osc intentionally does not support microverion negocaitation, it > > > > > was a desgin choice not an oversight. > > > > > > > > Like gtema, I'm not aware of any such design decision in OSC. Looking through > > > > the docs and git logs, I'm also unable to find any references to it. I _suspect_ > > > > that you might be confusing OSC with the legacy clients, where this behavior was > > > > very much a design choice. OSC has traditionally inherited this behavior > > > > owing to its use of the API bindings from the legacy clients but this wasn't > > > > intentional.? > > > > > > > that is not the case if you want to understand the history dean has captured it here https://youtu.be/D-4Avtxjby0?t=310 > > > i was stitting in the room at the time. the commitment ot provideing a stable comandline interface for puppet ansible and other > > > scripting was intoduced in the 1.0 release. > > > > > > this was one of the big departure form the project cleint that do not provide a stabel command line gurantee for scripting. > > > we try not to break people intentually but osc was ment to be the stable client that had consitenty behavior.# > > > > This is a different thing. What Dean is talking about there is the command > > structure itself. We still do this. For example, you can create a volume based > > on an existing volume snapshot. Older versions of OSC did this like so: > > > > openstack volume create --snapshot-id ... > > > > However, we allow users to specify either a name or ID for the snapshot so this > > name was misleading. As a result, at some point this option was renamed and > > you'd now create a volume from a snapshot like so: > > > > openstack volume create --snapshot ... > > > > Crucially though, we did not remove '--snapshot-id'. It's no longer emitted in > > the help message (we do this using 'help=argparse.SUPPRESS') but if you pass > > this, OSC will continue to honour it. > > > > There have been exceptions to this. The old '--live ' parameter for > > 'server migrate' jumps to mind, but that was removed because it was actively > > harmful: users almost never want to bypass the scheduler when live migrating. > > They are exceptions though. > > > > > > We must remember that OSC is designed for humans first and > > > > foremost, while machines should use SDK or the clients directly. > > > > > > > again that is wrong machine parsable output is a core part of the openstack client phiosophy > > > https://youtu.be/D-4Avtxjby0?t=911 > > > https://www.youtube.com/watch?v=EMy9IsRHY-o&t=1528s > > > > I'm not suggesting that machine parseable output isn't a concern and I've > > invested time in fixing bugs with the machine readable formats. However, the > > human-readable output is our primary concern since that's what most people that > > talk to us about OSC care about. I suspect most others are using something like > > Ansible nowadays... 
> > > > When auto-negotiation is done correctly (i.e. without the bugs that fungi highlighted), it allows a human to get the best possible functionality from their deployment (if we are to assume that each new microversion is an improvement on its predecessors), which ultimately results in a better user experience.
> > > its a better user experience only if its correct, and to have it be both correct and consistent you would need to use the oldest microversion that supports the parameters you passed. using the oldest ensures the behavior is consistent across clouds.
> > We have to ensure it's correct, yes, but I don't think it has to be consistent. We should strive to provide the best experience to a user and this means using the latest API versions. If a user wants to be consistent then they can explicitly request an API version.
> > > > As we replace use of these clients with SDK, we are slowly fixing this in core OSC and we'd like to eventually see all commands in OSC auto-negotiating microversions, where this makes sense for the underlying service.
> > > that will directly break existing users and documentation. so if you want to enable that, it needs to be a major version, as it will break the api guarantees of the openstack client. im not arguing that auto negotiation would not be a better ux, but we dont get it for free. the cost will be in consistent behavior across openstack clouds and versions.
> > We're already way down the road and have released multiple major versions since we started using SDK (for glance initially, I think). '--os-{server}-api-version' is an option too.
> sure, so there should be no issue with doing another one if/when we start enabling version negotiation in osc.
> the sdk is currently working on its 1.0.0 release but it has already broken other consumers like the ansible openstack collections in advance of that release.
> https://github.com/openstack/ansible-collections-openstack#breaking-backward-compatibility-warning
> fortunately that also means that the ansible collections should not be affected by osc changes, since they should already be using the sdk.
> for other integrations like the openstack puppet modules for creating openstack flavors, that is implemented in ruby and cant use the sdk, so they use the openstack client, and that module will start having flavor extra-spec validation enabled by default if we negotiate the latest microversion.
> https://github.com/openstack/puppet-nova/blob/master/lib/puppet/provider/nova_flavor/openstack.rb
> their service list interface will start using uuid ids instead of int
> https://github.com/openstack/puppet-nova/blob/master/lib/puppet/provider/nova_service/openstack.rb#L17
> i honestly dont really read puppet/ruby and dont really know if any of those changes will break them, but if we follow semver and treat this as a breaking change in terms of the default behavior, at least puppet can pin the osc version and take the same approach ansible took with pinning the sdk version to <0.99.0 to allow them to adapt.
> the api for interacting with glance https://github.com/openstack/puppet-glance/blob/master/lib/puppet/provider/glance_image/openstack.rb via puppet seems pretty small, and it seems to still default to v2... but hopefully they were not broken by the previous changes.
> they did however need to adapt to osc v4.0.0 previously: https://github.com/openstack/puppet-glance/commit/13653589788c6ebe4f0d129968161216fd53f161 i would expect they will have to do something similar if we adopt version autonegotiation in general.
> > > > > since many of the plugins have ignored that and implemented it anyway, i think it would be good to provide a way to opt into the desired behavior, i.e. provide a --latest global flag, or change the default for the --os-compute-api-version etc. options to latest in a major version of osc.
> > > > As you know, you can manually set API versions in three ways:
> > > >
> > > > * Via command-line arguments
> > > > * Via environment variables
> > > > * Via clouds.yaml
> > > >
> > > > Any of these will override the version negotiation done by OSC. For a power user like yourself, I suspect this might be what you want and you're free to do it. Nothing changes with these. We'll just start doing auto-negotiation by default.
> > > im concerned about the ecosystem of users and docs that have been built based on the stable command line api guarantee that the openstack client team committed to in the 1.0 release.
> > > i will be fine in any case, but you have reinforced my perspective that the people that gave that guarantee have left the project, and the fact that it was a fundamental part of the design has been lost from the knowledge of the current team.
> > > we use a lazy consent model in openstack, as with many opensource projects, and if we decide to revoke that guarantee and decide that as a community it's better to focus on human users, i am fine with that if it's a deliberate decision. i dont think we should sleepwalk into that because there is no one to advocate for the original design commitments. the original design has a user-centric approach to ensuring that all commands are intuitive and hiding the implementation detail of which project provides a given logical resource.
> > > i would like it if my command just worked too, but we need to be very, very careful with this type of change.
> > As noted above, I think we're talking about different things and I don't think we're planning on blindly removing commands themselves. A given command invocation should continue working for a long time to come.
> correct, and im not talking about command removal.
> what i dont think should happen without an osc major version release is for the behavior of an existing command to change from defaulting to the oldest microversion to the latest.
> that would break things like rsa keypair generation, which is no longer supported in the latest microversion. anything that used to get stats from the hypervisor api would break, and there are several other examples.
> so if we do this, all im really asking for is a major version of the openstack client to signal to people and projects that there is semantic api breakage that is not backward compatible and that they should pause before using it. there may not be a syntactic breakage, which would happen if a command was removed or renamed, but there is a semantic change.
Ah, in that case sure, no problem. We're rather judicious with our major version bumps so this will happen naturally even if we don't plan it. For scripting, I would suggest tools like puppet pass '--os-{service}-api-version' options to their commands to ensure things remain unchanged.
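To make those three pinning mechanisms concrete, here is a rough sketch using the compute service (the microversion and cloud name are only examples; 2.53 is mentioned because it is the point where 'compute service list' switches from integer IDs to UUIDs, i.e. exactly the kind of semantic change a scripted consumer cares about; the clouds.yaml key name is taken from os-client-config usage and should be treated as an assumption):

    # 1. Command-line argument
    openstack --os-compute-api-version 2.52 compute service list

    # 2. Environment variable
    export OS_COMPUTE_API_VERSION=2.52

    # 3. clouds.yaml entry (read by openstacksdk / os-client-config)
    clouds:
      mycloud:
        compute_api_version: '2.52'

A tool like the puppet providers mentioned above could rely on any one of these to keep seeing today's behaviour even if OSC later defaults to negotiating the latest microversion.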
Stephen > > > > > > Stephen > > > > > > > > > > > > > > we can provide a common impelmatiton in osc and the plugins can just reuse that instead of all of them that chose to suport it implemneting it them > > > > > selves. > > > > > > > > > > again this goes directly against the orginial design intent fo osc to provide a stable comandline interface across > > > > > clouds with differnt versions of openstack, however since most of the peopel that cared about that have now moved on form openstack > > > > > and osc and since the comunity seam to have change its mind in providing a stable api expirence we should proably adress this divergance. > > > > > > > > > > i see the fact that some plugins added micorversion negocation in direct breach of this design principal to be more problematic > > > > > form a consticy point of view then any other divergance. > > > > > > > > See above :) > > > > > > > > Stephen > > > > > > > > > > > > > > > > I will now get off my pedestal/ivory tower ? > > > > > > > > > > > > Thanks! > > > > > > Stephen > > > > > > > > > > > > PS: I'm talking here about the command themselves, not their > > > > > > implementations. We > > > > > > do somethings extra in OSC that are user helpful, like allowing users to > > > > > > identify resources by their name in addition to by their UUIDs. We also > > > > > > currently do things that no so user helpful, like crashing and burning if > > > > > > the > > > > > > name lookups fail (I'm thinking about the various Glance-related commands > > > > > > that > > > > > > error out if a project name/ID is passed to a command and the user can't > > > > > > look up > > > > > > that project). These are things we're more than willing to fix and will > > > > > > happily > > > > > > accept patches for :) > > > > > > > > > > > > [1] https://review.opendev.org/c/openstack/python-openstackclient/+/865377 > > > > > > > > > > From stephenfin at redhat.com Fri Nov 25 13:50:33 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 25 Nov 2022 13:50:33 +0000 Subject: [openstackclient] Consistency between OSC commands >>> consistency with legacy clients In-Reply-To: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com> References: <7833de9b8507310f2e56e46b5ac5c2aa2afac38e.camel@redhat.com> Message-ID: On Wed, 2022-11-23 at 14:11 +0000, Stephen Finucane wrote: > ? > > tl;dr: $subject > > I reviewed a patch against openstackclient (OSC) today [1] and left a rather > lengthy comment that I thought worthy of bringing up to a wider audience. The > patch itself doesn't matter so much as what is was trying to achieve, namely > modifying an existing OSC command to better match the structure of the > equivalent legacy client command. The review provides more detail than I do here > but the tl;dr: is that this is a big no-no and OSC will and must maintain > consistency between OSC commands over consistency with legacy clients. As I > noted in the review, consistency is one of the biggest advantages of OSC over > the legacy clients: if you know the name of the resource type you wish to work > with, you can pretty accurately guess the command and its structure. This is a > thing that operators have consistently said they love about OSC and its one of > the key reasons we're trying to get every command to provide full current API > implementations in OSC (and SDK). > > Now I get that the way some of these consistent commands have been implemented > has been the cause of contention in the past. I don't imagine it remains any > less contentious today. 
However, these patterns are well-understood, well-known > patterns that have for the most part worked just fine for close to a decade now. > The kind of patterns I'm thinking about include: > > * The command to create a new resource should always take the format > ' create > * The command to modify some property of a resource should always take the > format ' set --property=value ' > * The command to list, fetch or delete resources should always take the format > ' list', ' get ', and ' > delete ', respectively. > * Boolean options should always take the form of flags with an alternate > negative option like '--flag' and '--no-flag', rather than '-- > flag=' > * And a couple of other things that we tend to highlight in reviews. > > We want to preserve this behavior, lest OSC lose this huge USP and devolve into > a muddle mess of different ideas and various individuals'/teams' preferences. > I'm more than happy to discuss and debate this stuff with anyone who's > interested and we'll continue reviewing each patch on its merit and providing > exceptions to these rules where they make sense, but it will remain an ongoing > goal and it's something we'd like people to consider when working on OSC itself > or any of its plugins. > > I will now get off my pedestal/ivory tower ? > > Thanks! > Stephen I went ahead and proposed a patch to add this information to the OSC documentation. If you're interested, I'd suggest taking a look. https://review.opendev.org/c/openstack/python-openstackclient/+/865690 https://review.opendev.org/c/openstack/python-openstackclient/+/865691 Stephen > PS: I'm talking here about the command themselves, not their implementations. We > do somethings extra in OSC that are user helpful, like allowing users to > identify resources by their name in addition to by their UUIDs. We also > currently do things that no so user helpful, like crashing and burning if the > name lookups fail (I'm thinking about the various Glance-related commands that > error out if a project name/ID is passed to a command and the user can't look up > that project). These are things we're more than willing to fix and will happily > accept patches for :) > > [1] https://review.opendev.org/c/openstack/python-openstackclient/+/865377 > From ralonsoh at redhat.com Fri Nov 25 14:36:28 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 25 Nov 2022 15:36:28 +0100 Subject: [neutron] neutron-db-manage multiple heads In-Reply-To: <20221125125752.Horde.zo4M3KjkRUtEQ5cGx0sKDjv@webmail.nde.ag> References: <20221125125752.Horde.zo4M3KjkRUtEQ5cGx0sKDjv@webmail.nde.ag> Message-ID: Hi Eugen: In Neutron we don't support contract operations since Newton. If you are in Victoria and you correctly finished the DB migration, your HEADs should be: * contract: 5c85685d616d (from Newton) * expand: I38991de2b4 (from the last DB change in Victoria, source_and_destination_ip_prefix_neutron_metering_rule) Please check what you have in the DB table neutron.alembic_version. The first register should be the expand number, the second the contract one. If not, update them with the ones I've provided. Before executing the migration tool again, be sure the DB schema matches the latest migration patch for your version. You can deploy a VM with devstack and run this version. Regards. On Fri, Nov 25, 2022 at 1:58 PM Eugen Block wrote: > Hi *, > > I'd like to ask you for advice on how to clean up my neutron db. 
At > some point (which I don't know exactly, probably train) my neutron > database got inconsistent, apparently one of the upgrades did not go > as planned. The interesting thing is that the database still works, I > just upgraded from ussuri to victoria where that issue popped up again > during 'neutron-db-manage upgrade --expand', I'll add the information > at the end of this email. Apparently, I have multiple heads, and one > of them is from train, it seems as if I never ran --contract (or it > failed and I didn't notice). > Just some additional information what I did with this database: this > cloud started out as a test environment with a single control node and > then became a production environment. About two and a half years ago > we decided to reinstall this cloud with version ussuri and import the > databases. I had a virtual machine in which I upgraded the database > dump from production to the latest versions at that time. That all > worked quite well, I only didn't notice that something was missing. > Now that I finished the U --> V upgrade I want to fix this > inconsistency, I just have no idea how to do it. As I'm not sure how > all the neutron-db-manage commands work exactly I'd like to ask for > some guidance. For example, could the "stamp" command possibly help? > Or how else can I get rid of the train head and/or how to get the > train revision to "contract" so I can finish the upgrade and contract > the victoria revision? I can paste the whole neutron-db history if > necessary (neutron-db-manage history), please let me know what > information would be required to get to the bottom of this. > Any help is greatly appreciated! > > Thanks! > Eugen > > > ---snip--- > controller01:~ # neutron-db-manage upgrade --expand > [...] > alembic.script.revision.MultipleHeads: Multiple heads are present for > given argument 'expand at head'; 633d74ebbc4b, I38991de2b4 > > controller01:~ # neutron-db-manage current --verbose > Running current for neutron ... > INFO [alembic.runtime.migration] Context impl MySQLImpl. > INFO [alembic.runtime.migration] Will assume non-transactional DDL. > Current revision(s) for mysql+pymysql://neutron:XXXXX at controller.fqdn > /neutron: > Rev: bebe95aae4d4 (head) > Parent: b5344a66e818 > Branch names: contract > Path: > > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract/bebe95aae4d4_.py > > Rev: 633d74ebbc4b (head) > Parent: 6c9eb0469914 > Branch names: expand > Path: > > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py > > Rev: I38991de2b4 (head) > Parent: 49d8622c5221 > Branch names: expand > Path: > > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/victoria/expand/I38991de2b4_source_and_destination_ip_prefix_neutron_metering_rule.py > > OK > ---snip--- > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Fri Nov 25 14:51:45 2022 From: eblock at nde.ag (Eugen Block) Date: Fri, 25 Nov 2022 14:51:45 +0000 Subject: [neutron] neutron-db-manage multiple heads In-Reply-To: References: <20221125125752.Horde.zo4M3KjkRUtEQ5cGx0sKDjv@webmail.nde.ag> Message-ID: <20221125145145.Horde.XfDn1LvyK2AIN76ZlNGTjqZ@webmail.nde.ag> Hi, thanks for your quick response. > In Neutron we don't support contract operations since Newton. 
> > If you are in Victoria and you correctly finished the DB migration, your > HEADs should be: > * contract: 5c85685d616d (from Newton) > * expand: I38991de2b4 (from the last DB change in Victoria, > source_and_destination_ip_prefix_neutron_metering_rule) That explains why I saw the newton revision in a new victoria cluster :-) > Please check what you have in the DB table neutron.alembic_version. The > first register should be the expand number, the second the contract one. If > not, update them with the ones I've provided. The table alembic_versions contains the three versions I provided at the end of my email: MariaDB [neutron]> select * from alembic_version; +--------------+ | version_num | +--------------+ | 633d74ebbc4b | | bebe95aae4d4 | | I38991de2b4 | +--------------+ I already tried to manipulate the table so I would only have those two versions you already mentioned, but then the upgrade --expand command alternates the database again with the mentioned error message ("Multiple heads are present"). > Before executing the > migration tool again, be sure the DB schema matches the latest migration > patch for your version. You can deploy a VM with devstack and run this > version. That's what I wanted to try next, export only the db schema (no data) from a working victoria neutron database, then export only data from our production db and merge those, then import that into the production and try to run upgrade --expand and --contract again. But I didn't want to fiddle around too much in the production, that's why I wanted to ask for your guidance first. But IIUC even if I changed the table alembic_versions again and import the merged db, wouldn't upgrade --expand somehow try to alternate the table again? I don't see where the train revision comes from exactly, could you clarify, please? It seems like I always get back to square one when running the --expand command. Thanks! Eugen Zitat von Rodolfo Alonso Hernandez : > Hi Eugen: > > In Neutron we don't support contract operations since Newton. > > If you are in Victoria and you correctly finished the DB migration, your > HEADs should be: > * contract: 5c85685d616d (from Newton) > * expand: I38991de2b4 (from the last DB change in Victoria, > source_and_destination_ip_prefix_neutron_metering_rule) > > Please check what you have in the DB table neutron.alembic_version. The > first register should be the expand number, the second the contract one. If > not, update them with the ones I've provided. Before executing the > migration tool again, be sure the DB schema matches the latest migration > patch for your version. You can deploy a VM with devstack and run this > version. > > Regards. > > > On Fri, Nov 25, 2022 at 1:58 PM Eugen Block wrote: > >> Hi *, >> >> I'd like to ask you for advice on how to clean up my neutron db. At >> some point (which I don't know exactly, probably train) my neutron >> database got inconsistent, apparently one of the upgrades did not go >> as planned. The interesting thing is that the database still works, I >> just upgraded from ussuri to victoria where that issue popped up again >> during 'neutron-db-manage upgrade --expand', I'll add the information >> at the end of this email. Apparently, I have multiple heads, and one >> of them is from train, it seems as if I never ran --contract (or it >> failed and I didn't notice). >> Just some additional information what I did with this database: this >> cloud started out as a test environment with a single control node and >> then became a production environment. 
About two and a half years ago >> we decided to reinstall this cloud with version ussuri and import the >> databases. I had a virtual machine in which I upgraded the database >> dump from production to the latest versions at that time. That all >> worked quite well, I only didn't notice that something was missing. >> Now that I finished the U --> V upgrade I want to fix this >> inconsistency, I just have no idea how to do it. As I'm not sure how >> all the neutron-db-manage commands work exactly I'd like to ask for >> some guidance. For example, could the "stamp" command possibly help? >> Or how else can I get rid of the train head and/or how to get the >> train revision to "contract" so I can finish the upgrade and contract >> the victoria revision? I can paste the whole neutron-db history if >> necessary (neutron-db-manage history), please let me know what >> information would be required to get to the bottom of this. >> Any help is greatly appreciated! >> >> Thanks! >> Eugen >> >> >> ---snip--- >> controller01:~ # neutron-db-manage upgrade --expand >> [...] >> alembic.script.revision.MultipleHeads: Multiple heads are present for >> given argument 'expand at head'; 633d74ebbc4b, I38991de2b4 >> >> controller01:~ # neutron-db-manage current --verbose >> Running current for neutron ... >> INFO [alembic.runtime.migration] Context impl MySQLImpl. >> INFO [alembic.runtime.migration] Will assume non-transactional DDL. >> Current revision(s) for mysql+pymysql://neutron:XXXXX at controller.fqdn >> /neutron: >> Rev: bebe95aae4d4 (head) >> Parent: b5344a66e818 >> Branch names: contract >> Path: >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract/bebe95aae4d4_.py >> >> Rev: 633d74ebbc4b (head) >> Parent: 6c9eb0469914 >> Branch names: expand >> Path: >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py >> >> Rev: I38991de2b4 (head) >> Parent: 49d8622c5221 >> Branch names: expand >> Path: >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/victoria/expand/I38991de2b4_source_and_destination_ip_prefix_neutron_metering_rule.py >> >> OK >> ---snip--- >> >> >> From nguyenhuukhoinw at gmail.com Fri Nov 25 14:56:59 2022 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Fri, 25 Nov 2022 21:56:59 +0700 Subject: [Magnum] ls /etc/cni/net.d/ is emty Message-ID: Hello guys. I use Magnum on Xena and I custom k8s cluster by labels. But My cluster is not ready and there is nothing in /etc/cni/net.d/ and my cluster said: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized And this is my labels kube_tag=v1.21.8-rancher1,container_runtime=containerd,containerd_version=1.6.10,containerd_tarball_sha256=507f47716d7b932e58aa1dc7e2b3f2b8779ee9a2988aa46ad58e09e2e47063d8,calico_tag=v3.21.2,hyperkube_prefix= docker.io/rancher/ Note: I use Fedora Core OS 31 for images. Thank you. Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralonsoh at redhat.com Fri Nov 25 16:56:54 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 25 Nov 2022 17:56:54 +0100 Subject: [neutron] neutron-db-manage multiple heads In-Reply-To: <20221125145145.Horde.XfDn1LvyK2AIN76ZlNGTjqZ@webmail.nde.ag> References: <20221125125752.Horde.zo4M3KjkRUtEQ5cGx0sKDjv@webmail.nde.ag> <20221125145145.Horde.XfDn1LvyK2AIN76ZlNGTjqZ@webmail.nde.ag> Message-ID: Hi Eugen: I don't know how it is possible that you have 3 registers in this table. And the first two are not IDs of any Neutron revision. I would suggest you to (1) check the DB schema deployed against a fresh deployed system (in Victoria version) and (2) fix this table to point to the correct revision numbers. Regards. On Fri, Nov 25, 2022 at 3:51 PM Eugen Block wrote: > Hi, > > thanks for your quick response. > > > In Neutron we don't support contract operations since Newton. > > > > If you are in Victoria and you correctly finished the DB migration, your > > HEADs should be: > > * contract: 5c85685d616d (from Newton) > > * expand: I38991de2b4 (from the last DB change in Victoria, > > source_and_destination_ip_prefix_neutron_metering_rule) > > That explains why I saw the newton revision in a new victoria cluster :-) > > > Please check what you have in the DB table neutron.alembic_version. The > > first register should be the expand number, the second the contract one. > If > > not, update them with the ones I've provided. > > The table alembic_versions contains the three versions I provided at > the end of my email: > > MariaDB [neutron]> select * from alembic_version; > +--------------+ > | version_num | > +--------------+ > | 633d74ebbc4b | > | bebe95aae4d4 | > | I38991de2b4 | > +--------------+ > > I already tried to manipulate the table so I would only have those two > versions you already mentioned, but then the upgrade --expand command > alternates the database again with the mentioned error message > ("Multiple heads are present"). > > > Before executing the > > migration tool again, be sure the DB schema matches the latest migration > > patch for your version. You can deploy a VM with devstack and run this > > version. > > That's what I wanted to try next, export only the db schema (no data) > from a working victoria neutron database, then export only data from > our production db and merge those, then import that into the > production and try to run upgrade --expand and --contract again. But I > didn't want to fiddle around too much in the production, that's why I > wanted to ask for your guidance first. > But IIUC even if I changed the table alembic_versions again and import > the merged db, wouldn't upgrade --expand somehow try to alternate the > table again? I don't see where the train revision comes from exactly, > could you clarify, please? It seems like I always get back to square > one when running the --expand command. > > Thanks! > Eugen > > Zitat von Rodolfo Alonso Hernandez : > > > Hi Eugen: > > > > In Neutron we don't support contract operations since Newton. > > > > If you are in Victoria and you correctly finished the DB migration, your > > HEADs should be: > > * contract: 5c85685d616d (from Newton) > > * expand: I38991de2b4 (from the last DB change in Victoria, > > source_and_destination_ip_prefix_neutron_metering_rule) > > > > Please check what you have in the DB table neutron.alembic_version. The > > first register should be the expand number, the second the contract one. > If > > not, update them with the ones I've provided. 
Before executing the > > migration tool again, be sure the DB schema matches the latest migration > > patch for your version. You can deploy a VM with devstack and run this > > version. > > > > Regards. > > > > > > On Fri, Nov 25, 2022 at 1:58 PM Eugen Block wrote: > > > >> Hi *, > >> > >> I'd like to ask you for advice on how to clean up my neutron db. At > >> some point (which I don't know exactly, probably train) my neutron > >> database got inconsistent, apparently one of the upgrades did not go > >> as planned. The interesting thing is that the database still works, I > >> just upgraded from ussuri to victoria where that issue popped up again > >> during 'neutron-db-manage upgrade --expand', I'll add the information > >> at the end of this email. Apparently, I have multiple heads, and one > >> of them is from train, it seems as if I never ran --contract (or it > >> failed and I didn't notice). > >> Just some additional information what I did with this database: this > >> cloud started out as a test environment with a single control node and > >> then became a production environment. About two and a half years ago > >> we decided to reinstall this cloud with version ussuri and import the > >> databases. I had a virtual machine in which I upgraded the database > >> dump from production to the latest versions at that time. That all > >> worked quite well, I only didn't notice that something was missing. > >> Now that I finished the U --> V upgrade I want to fix this > >> inconsistency, I just have no idea how to do it. As I'm not sure how > >> all the neutron-db-manage commands work exactly I'd like to ask for > >> some guidance. For example, could the "stamp" command possibly help? > >> Or how else can I get rid of the train head and/or how to get the > >> train revision to "contract" so I can finish the upgrade and contract > >> the victoria revision? I can paste the whole neutron-db history if > >> necessary (neutron-db-manage history), please let me know what > >> information would be required to get to the bottom of this. > >> Any help is greatly appreciated! > >> > >> Thanks! > >> Eugen > >> > >> > >> ---snip--- > >> controller01:~ # neutron-db-manage upgrade --expand > >> [...] > >> alembic.script.revision.MultipleHeads: Multiple heads are present for > >> given argument 'expand at head'; 633d74ebbc4b, I38991de2b4 > >> > >> controller01:~ # neutron-db-manage current --verbose > >> Running current for neutron ... > >> INFO [alembic.runtime.migration] Context impl MySQLImpl. > >> INFO [alembic.runtime.migration] Will assume non-transactional DDL. > >> Current revision(s) for mysql+pymysql://neutron:XXXXX at controller.fqdn > >> /neutron: > >> Rev: bebe95aae4d4 (head) > >> Parent: b5344a66e818 > >> Branch names: contract > >> Path: > >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract/bebe95aae4d4_.py > >> > >> Rev: 633d74ebbc4b (head) > >> Parent: 6c9eb0469914 > >> Branch names: expand > >> Path: > >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py > >> > >> Rev: I38991de2b4 (head) > >> Parent: 49d8622c5221 > >> Branch names: expand > >> Path: > >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/victoria/expand/I38991de2b4_source_and_destination_ip_prefix_neutron_metering_rule.py > >> > >> OK > >> ---snip--- > >> > >> > >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From noonedeadpunk at gmail.com Fri Nov 25 17:22:05 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 25 Nov 2022 18:22:05 +0100 Subject: [openstack-ansible] Designate: role seems trying to update DNS server pools before syncing database In-Reply-To: <27b50913162d497192325a9d65b1bed0@elca.ch> References: <27b50913162d497192325a9d65b1bed0@elca.ch> Message-ID: Hey, That looks like a totally valid bug and regression has been introduced in Wallaby. I've just placed a patch that should cover this issue [1] and it would be awesome if you could test it. [1] https://review.opendev.org/c/openstack/openstack-ansible-os_designate/+/865701 ??, 25 ????. 2022 ?. ? 12:46, Taltavull Jean-Fran?ois : > > Hello, > > During the first run, the playbook 'os-designate-install.yml' fails and the 'designate-manage pool update' command produces the log line below: > > 'Nov 25 11:50:06 pp3controller1a-designate-container-53d945bb designate-manage[2287]: 2022-11-25 11:50:06.518 2287 CRITICAL designate [designate-manage - - - - -] Unhandled error: oslo_messaging.rpc.client.RemoteError: Remote error: ProgrammingError (pymysql.err.ProgrammingError) (1146, "Table 'designate.pools' doesn't exist")' > > Looking at the 'os_designate' role code shows that the handler ` Perform Designate pools update` is flushed before tables are created in the 'designate' database. > > O.S.: Ubuntu 20.04 > OpenStack release: Wallaby > OSA tag: 23.2.0 > > Regards, > > Jean-Francois > From jean-francois.taltavull at elca.ch Fri Nov 25 17:33:29 2022 From: jean-francois.taltavull at elca.ch (=?utf-8?B?VGFsdGF2dWxsIEplYW4tRnJhbsOnb2lz?=) Date: Fri, 25 Nov 2022 17:33:29 +0000 Subject: [openstack-ansible] Designate: role seems trying to update DNS server pools before syncing database In-Reply-To: References: <27b50913162d497192325a9d65b1bed0@elca.ch> Message-ID: Hi Dmitriy, Sure, I will test your patch asap ! JF > -----Original Message----- > From: Dmitriy Rabotyagov > Sent: vendredi, 25 novembre 2022 18:22 > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [openstack-ansible] Designate: role seems trying to update DNS > server pools before syncing database > > > > EXTERNAL MESSAGE - This email comes from outside ELCA companies. > > Hey, > > That looks like a totally valid bug and regression has been introduced in Wallaby. > I've just placed a patch that should cover this issue [1] and it would be awesome > if you could test it. > > [1] https://review.opendev.org/c/openstack/openstack-ansible- > os_designate/+/865701 > > ??, 25 ????. 2022 ?. ? 12:46, Taltavull Jean-Fran?ois > : > > > > Hello, > > > > During the first run, the playbook 'os-designate-install.yml' fails and the > 'designate-manage pool update' command produces the log line below: > > > > 'Nov 25 11:50:06 pp3controller1a-designate-container-53d945bb designate- > manage[2287]: 2022-11-25 11:50:06.518 2287 CRITICAL designate [designate- > manage - - - - -] Unhandled error: oslo_messaging.rpc.client.RemoteError: > Remote error: ProgrammingError (pymysql.err.ProgrammingError) (1146, "Table > 'designate.pools' doesn't exist")' > > > > Looking at the 'os_designate' role code shows that the handler ` Perform > Designate pools update` is flushed before tables are created in the 'designate' > database. 
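As an illustrative aside on the ordering being described (the commands only sketch the idea, they are not taken from the role): the pools table is created by the database sync step, so the pool update can only run afterwards, roughly

    designate-manage database sync   # creates the designate schema, including the 'pools' table
    designate-manage pool update     # safe only once the schema exists

which is the ordering the role has to guarantee before flushing that handler.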
> > > > O.S.: Ubuntu 20.04 > > OpenStack release: Wallaby > > OSA tag: 23.2.0 > > > > Regards, > > > > Jean-Francois > > From eblock at nde.ag Fri Nov 25 19:56:56 2022 From: eblock at nde.ag (Eugen Block) Date: Fri, 25 Nov 2022 19:56:56 +0000 Subject: [neutron] neutron-db-manage multiple heads In-Reply-To: References: <20221125125752.Horde.zo4M3KjkRUtEQ5cGx0sKDjv@webmail.nde.ag> <20221125145145.Horde.XfDn1LvyK2AIN76ZlNGTjqZ@webmail.nde.ag> Message-ID: <20221125195656.Horde.HUgznZWx9ug640CD7yJQveQ@webmail.nde.ag> Hi, I believe they are neutron revisions, here's the output from yesterday's neutron-db-manage attempt: ---snip--- Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO [alembic.runtime.migration] Context impl MySQLImpl. Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO [alembic.runtime.migration] Will assume non-transactional DDL. Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO [alembic.runtime.migration] Context impl MySQLImpl. Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO [alembic.runtime.migration] Will assume non-transactional DDL. Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO [alembic.runtime.migration] Running upgrade 5c85685d616d -> c43a0ddb6a03 Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO [alembic.runtime.migration] Running upgrade c43a0ddb6a03 -> b5344a66e818 Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO [alembic.runtime.migration] Running upgrade b5344a66e818 -> bebe95aae4d4 Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO [alembic.runtime.migration] Running upgrade c613d0b82681 -> 6c9eb0469914 Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO [alembic.runtime.migration] Running upgrade 6c9eb0469914 -> 633d74ebbc4b Nov 23 12:51:52 controller01 neutron-db-manage[25913]: Running upgrade for neutron ... Nov 23 12:51:52 controller01 neutron-db-manage[25913]: OK ---snip--- And here's where they are located, apparently from train version: ---snip--- controller01:~ # grep -r 633d74ebbc4b /usr/lib/python3.6/site-packages/ ?bereinstimmungen in Bin?rdatei /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/__pycache__/633d74ebbc4b_.cpython-36.pyc /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:Revision ID: 633d74ebbc4b /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:revision = '633d74ebbc4b' ---snip--- Next week we'll try it with merging fresh victoria schema with our production data, then run the upgrade command again. Thanks, Eugen Zitat von Rodolfo Alonso Hernandez : > Hi Eugen: > > I don't know how it is possible that you have 3 registers in this table. > And the first two are not IDs of any Neutron revision. I would suggest you > to (1) check the DB schema deployed against a fresh deployed system (in > Victoria version) and (2) fix this table to point to the correct revision > numbers. > > Regards. > > > On Fri, Nov 25, 2022 at 3:51 PM Eugen Block wrote: > >> Hi, >> >> thanks for your quick response. >> >> > In Neutron we don't support contract operations since Newton. 
>> > >> > If you are in Victoria and you correctly finished the DB migration, your >> > HEADs should be: >> > * contract: 5c85685d616d (from Newton) >> > * expand: I38991de2b4 (from the last DB change in Victoria, >> > source_and_destination_ip_prefix_neutron_metering_rule) >> >> That explains why I saw the newton revision in a new victoria cluster :-) >> >> > Please check what you have in the DB table neutron.alembic_version. The >> > first register should be the expand number, the second the contract one. >> If >> > not, update them with the ones I've provided. >> >> The table alembic_versions contains the three versions I provided at >> the end of my email: >> >> MariaDB [neutron]> select * from alembic_version; >> +--------------+ >> | version_num | >> +--------------+ >> | 633d74ebbc4b | >> | bebe95aae4d4 | >> | I38991de2b4 | >> +--------------+ >> >> I already tried to manipulate the table so I would only have those two >> versions you already mentioned, but then the upgrade --expand command >> alternates the database again with the mentioned error message >> ("Multiple heads are present"). >> >> > Before executing the >> > migration tool again, be sure the DB schema matches the latest migration >> > patch for your version. You can deploy a VM with devstack and run this >> > version. >> >> That's what I wanted to try next, export only the db schema (no data) >> from a working victoria neutron database, then export only data from >> our production db and merge those, then import that into the >> production and try to run upgrade --expand and --contract again. But I >> didn't want to fiddle around too much in the production, that's why I >> wanted to ask for your guidance first. >> But IIUC even if I changed the table alembic_versions again and import >> the merged db, wouldn't upgrade --expand somehow try to alternate the >> table again? I don't see where the train revision comes from exactly, >> could you clarify, please? It seems like I always get back to square >> one when running the --expand command. >> >> Thanks! >> Eugen >> >> Zitat von Rodolfo Alonso Hernandez : >> >> > Hi Eugen: >> > >> > In Neutron we don't support contract operations since Newton. >> > >> > If you are in Victoria and you correctly finished the DB migration, your >> > HEADs should be: >> > * contract: 5c85685d616d (from Newton) >> > * expand: I38991de2b4 (from the last DB change in Victoria, >> > source_and_destination_ip_prefix_neutron_metering_rule) >> > >> > Please check what you have in the DB table neutron.alembic_version. The >> > first register should be the expand number, the second the contract one. >> If >> > not, update them with the ones I've provided. Before executing the >> > migration tool again, be sure the DB schema matches the latest migration >> > patch for your version. You can deploy a VM with devstack and run this >> > version. >> > >> > Regards. >> > >> > >> > On Fri, Nov 25, 2022 at 1:58 PM Eugen Block wrote: >> > >> >> Hi *, >> >> >> >> I'd like to ask you for advice on how to clean up my neutron db. At >> >> some point (which I don't know exactly, probably train) my neutron >> >> database got inconsistent, apparently one of the upgrades did not go >> >> as planned. The interesting thing is that the database still works, I >> >> just upgraded from ussuri to victoria where that issue popped up again >> >> during 'neutron-db-manage upgrade --expand', I'll add the information >> >> at the end of this email. 
Apparently, I have multiple heads, and one >> >> of them is from train, it seems as if I never ran --contract (or it >> >> failed and I didn't notice). >> >> Just some additional information what I did with this database: this >> >> cloud started out as a test environment with a single control node and >> >> then became a production environment. About two and a half years ago >> >> we decided to reinstall this cloud with version ussuri and import the >> >> databases. I had a virtual machine in which I upgraded the database >> >> dump from production to the latest versions at that time. That all >> >> worked quite well, I only didn't notice that something was missing. >> >> Now that I finished the U --> V upgrade I want to fix this >> >> inconsistency, I just have no idea how to do it. As I'm not sure how >> >> all the neutron-db-manage commands work exactly I'd like to ask for >> >> some guidance. For example, could the "stamp" command possibly help? >> >> Or how else can I get rid of the train head and/or how to get the >> >> train revision to "contract" so I can finish the upgrade and contract >> >> the victoria revision? I can paste the whole neutron-db history if >> >> necessary (neutron-db-manage history), please let me know what >> >> information would be required to get to the bottom of this. >> >> Any help is greatly appreciated! >> >> >> >> Thanks! >> >> Eugen >> >> >> >> >> >> ---snip--- >> >> controller01:~ # neutron-db-manage upgrade --expand >> >> [...] >> >> alembic.script.revision.MultipleHeads: Multiple heads are present for >> >> given argument 'expand at head'; 633d74ebbc4b, I38991de2b4 >> >> >> >> controller01:~ # neutron-db-manage current --verbose >> >> Running current for neutron ... >> >> INFO [alembic.runtime.migration] Context impl MySQLImpl. >> >> INFO [alembic.runtime.migration] Will assume non-transactional DDL. >> >> Current revision(s) for mysql+pymysql://neutron:XXXXX at controller.fqdn >> >> /neutron: >> >> Rev: bebe95aae4d4 (head) >> >> Parent: b5344a66e818 >> >> Branch names: contract >> >> Path: >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract/bebe95aae4d4_.py >> >> >> >> Rev: 633d74ebbc4b (head) >> >> Parent: 6c9eb0469914 >> >> Branch names: expand >> >> Path: >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py >> >> >> >> Rev: I38991de2b4 (head) >> >> Parent: 49d8622c5221 >> >> Branch names: expand >> >> Path: >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/victoria/expand/I38991de2b4_source_and_destination_ip_prefix_neutron_metering_rule.py >> >> >> >> OK >> >> ---snip--- >> >> >> >> >> >> >> >> >> >> From elod.illes at est.tech Fri Nov 25 20:55:11 2022 From: elod.illes at est.tech (=?utf-8?B?RWzDtWQgSWxsw6lz?=) Date: Fri, 25 Nov 2022 20:55:11 +0000 Subject: [all][stable][ptl] Propose to EOL Queens series In-Reply-To: References: Message-ID: Hi, Since long time passed without any mail in this thread, I've generated the queens-eol patches [1] for all open projects on stable/queens. Release liaisons / PTLs please review and +1 them if we can proceed with the transition of the given deliverables. 
[1] https://review.opendev.org/q/topic:queens-eol Thanks in advance, El?d Ill?s irc: elodilles @ #openstack-stable / #openstack-release ________________________________ From: El?d Ill?s Sent: Friday, October 28, 2022 8:20 PM To: openstack-discuss at lists.openstack.org Subject: [all][stable][ptl] Propose to EOL Queens series Hi, As more and more teams decide about moving their Queens branches to End of Life, it looks like the time has come to transition the complete Queens stable release for every project. The reasons behind this are the following things: - gates are mostly broken - minimal number of bugfix backports are pushed to these branches - gate job definitions are still using the old, legacy zuul syntax - gate jobs are based on Ubuntu Xenial, which is also beyond its public maintenance window date and hard to maintain - lack of reviews / reviewers on this branch Based on the above, if no objection comes from teams, then I'll start the process of EOL'ing Queens stable series. Please let the community know what you think, or indicate if any of the projects' stable/queens branch should be kept open in Extended Maintenance. Thanks, El?d Ill?s irc: elodilles @ #openstack-stable / #openstack-release -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Nov 25 21:12:25 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 25 Nov 2022 13:12:25 -0800 Subject: [all][tc] What's happening in Technical Committee: summary 2022 Nov 25: Reading: 5 min Message-ID: <184b0a26c44.11f81a98b96165.5739984606283340669@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ *This week's meeting was cancelled. * Next TC weekly meeting will be on Nov 30 Wed at 16:00 UTC. Feel free to add the topic to the agenda[1] by Nov 29. 2. What we completed this week: ========================= * Nothing specific this week. 3. Activities In progress: ================== TC Tracker for 2023.1 cycle --------------------------------- * Current cycle working items and their progress are present in 2023.1 tracker etherpad[2]. Open Reviews ----------------- * Five open reviews for ongoing activities[3]. Technical Election changes (Extending Nomination and voting period) ---------------------------------------------------------------------------------- As discussed in TC PTG sessions, TC is extending the technical election (PTL as well as the TC election) nomination and voting period from 1 week to 2 weeks. The TC charter change[4] is up for review and the changes will be effective from the 2023.2 cycle technical elections. Renovate translation SIG i18 ---------------------------------- * As the next step, Brian is working on the weblate funding proposal and it will be discussed with the Foundation staff and Board members in Dec 6th board meeting. Adjutant situation (project is not active) ----------------------------------------------- The adjutant project is not active. The last changes merged was on Oct 26, 2021 (more than 1 year back), Gate is broken also. During July this year, TC was marking this project as inactive[5] but Dale smith volunteered as PTL to maintain it. But there is no improvement in the situation yet. TC will be discussing it in the next TC meeting and decide the next step. 
Project updates ------------------- * Add Skyline repository for OpenStack-Ansible[6] * Add the cinder-infinidat charm to Openstack charms[7] * Add the infinidat-tools subordinate charm to OpenStack charms[8] * Add the manila-infinidat charm to Openstack charms[9] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[10]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15:00 UTC [11] 3. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031240.html [2] https://etherpad.opendev.org/p/tc-2023.1-tracker [3] https://review.opendev.org/q/projects:openstack/governance+status:open [4] https://review.opendev.org/c/openstack/governance/+/865367 [5] https://review.opendev.org/c/openstack/governance/+/849153 [6] https://review.opendev.org/c/openstack/governance/+/863166 [7] https://review.opendev.org/c/openstack/governance/+/863958 [8] https://review.opendev.org/c/openstack/governance/+/864067 [9] https://review.opendev.org/c/openstack/governance/+/864068 [10] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [11] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From james.denton at rackspace.com Sat Nov 26 15:05:33 2022 From: james.denton at rackspace.com (James Denton) Date: Sat, 26 Nov 2022 15:05:33 +0000 Subject: [neutron] Switching the ML2 driver in-place from linuxbridge to OVN for an existing Cloud In-Reply-To: <45707ec7-4279-a691-ced5-1d6dd302a163@inovex.de> References: <2446920.D5JjJbiaP6@p1> <4318fbe5-f0f7-34eb-f852-15a6fb6810a6@inovex.de> <45707ec7-4279-a691-ced5-1d6dd302a163@inovex.de> Message-ID: Hi Christian, I documented this a few months ago here: https://www.jimmdenton.com/migrating-lxb-to-ovn/. It?s heavily geared towards OpenStack-Ansible, but you can probably extrapolate the steps for a vanilla deployment or other deployment tool. The details will vary. Highly recommend testing this in a lab environment that mirrors production, if possible. -- James Denton Principal Architect Rackspace Private Cloud - OpenStack james.denton at rackspace.com From: Christian Rohmann Date: Wednesday, November 23, 2022 at 4:27 AM To: James Denton , Slawek Kaplonski , openstack-discuss at lists.openstack.org Subject: Re: [neutron] Switching the ML2 driver in-place from linuxbridge to OVN for an existing Cloud CAUTION: This message originated externally, please use caution when clicking on links or opening attachments! Hey James, I am really sorry I just get back to you now. On 29/08/2022 19:54, James Denton wrote: In my experience, it is possible to perform in-place migration from ML2/LXB -> ML2/OVN, albeit with a shutdown or hard reboot of the instance(s) to complete the VIF plugging and some other needed operations. I have a very rough outline of required steps if you?re interested, but they?re geared towards an openstack-ansible based deployment. I?ll try to put a writeup together in the next week or two demonstrating the process in a multi-node environment; the only one I have done recently was an all-in-one. James Denton Rackspace Private Cloud Thanks for replying, I'd really love to see your outline / list of steps. BTW, we are actively working on switching to openstack-ansible - so that would suit us well. 
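A small hedged aside for anyone planning the same switch (paths and commands are illustrative and assume a stock deployment): it is worth recording which mechanism driver and agents are active before and after the migration, e.g.

    grep -E 'mechanism_drivers|type_drivers' /etc/neutron/plugins/ml2/ml2_conf.ini
    openstack network agent list

With ML2/linuxbridge the agent list shows "Linux bridge agent" rows (plus DHCP/L3/metadata agents); after a successful move to ML2/OVN those are replaced by OVN controller and OVN metadata agent entries.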
We also came to the conclusion that a shutdown of all instances might be required. Question is, if that has to happen instantly or if one could do that on a project by project base. Our cloud is small enough to still make this feasible, but I suppose this topic is or will become more important to other, larger clouds as well. Regards Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Mon Nov 28 10:03:20 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 28 Nov 2022 11:03:20 +0100 Subject: [neutron] neutron-db-manage multiple heads In-Reply-To: <20221125195656.Horde.HUgznZWx9ug640CD7yJQveQ@webmail.nde.ag> References: <20221125125752.Horde.zo4M3KjkRUtEQ5cGx0sKDjv@webmail.nde.ag> <20221125145145.Horde.XfDn1LvyK2AIN76ZlNGTjqZ@webmail.nde.ag> <20221125195656.Horde.HUgznZWx9ug640CD7yJQveQ@webmail.nde.ag> Message-ID: Hi Eugen: Please check the code you have. Those revisions (633d74ebbc4b, bebe95aae4d4) do not exist in the Neutron repository. File [1] (or something similar with the same prefix) does not exist. Are you using a customized Neutron repository? Regards. [1]/usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py On Fri, Nov 25, 2022 at 8:57 PM Eugen Block wrote: > Hi, > > I believe they are neutron revisions, here's the output from > yesterday's neutron-db-manage attempt: > > ---snip--- > Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > [alembic.runtime.migration] Context impl MySQLImpl. > Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > [alembic.runtime.migration] Will assume non-transactional DDL. > Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > [alembic.runtime.migration] Context impl MySQLImpl. > Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > [alembic.runtime.migration] Will assume non-transactional DDL. > Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > [alembic.runtime.migration] Running upgrade 5c85685d616d -> c43a0ddb6a03 > Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > [alembic.runtime.migration] Running upgrade c43a0ddb6a03 -> b5344a66e818 > Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > [alembic.runtime.migration] Running upgrade b5344a66e818 -> bebe95aae4d4 > Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > [alembic.runtime.migration] Running upgrade c613d0b82681 -> 6c9eb0469914 > Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > [alembic.runtime.migration] Running upgrade 6c9eb0469914 -> 633d74ebbc4b > Nov 23 12:51:52 controller01 neutron-db-manage[25913]: Running upgrade > for neutron ... > Nov 23 12:51:52 controller01 neutron-db-manage[25913]: OK > ---snip--- > > And here's where they are located, apparently from train version: > > ---snip--- > controller01:~ # grep -r 633d74ebbc4b /usr/lib/python3.6/site-packages/ > ?bereinstimmungen in Bin?rdatei > > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/__pycache__/633d74ebbc4b_.cpython-36.pyc > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:Revision > ID: > 633d74ebbc4b > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:revision > = > '633d74ebbc4b' > ---snip--- > > Next week we'll try it with merging fresh victoria schema with our > production data, then run the upgrade command again. 
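As an aside, a minimal sketch of the schema-merge idea quoted above, with made-up file and scratch-database names and the usual back-everything-up caveat: dump only the schema from a freshly deployed Victoria control plane and only the data from production, then load both into a scratch database before pointing the migration tool at it.

    mysqldump --no-data neutron > victoria_schema.sql         # on the fresh Victoria deployment
    mysqldump --no-create-info neutron > production_data.sql  # on the production cloud
    mysql neutron_test < victoria_schema.sql
    mysql neutron_test < production_data.sql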
> > Thanks, > Eugen > Zitat von Rodolfo Alonso Hernandez : > > > Hi Eugen: > > > > I don't know how it is possible that you have 3 registers in this table. > > And the first two are not IDs of any Neutron revision. I would suggest > you > > to (1) check the DB schema deployed against a fresh deployed system (in > > Victoria version) and (2) fix this table to point to the correct revision > > numbers. > > > > Regards. > > > > > > On Fri, Nov 25, 2022 at 3:51 PM Eugen Block wrote: > > > >> Hi, > >> > >> thanks for your quick response. > >> > >> > In Neutron we don't support contract operations since Newton. > >> > > >> > If you are in Victoria and you correctly finished the DB migration, > your > >> > HEADs should be: > >> > * contract: 5c85685d616d (from Newton) > >> > * expand: I38991de2b4 (from the last DB change in Victoria, > >> > source_and_destination_ip_prefix_neutron_metering_rule) > >> > >> That explains why I saw the newton revision in a new victoria cluster > :-) > >> > >> > Please check what you have in the DB table neutron.alembic_version. > The > >> > first register should be the expand number, the second the contract > one. > >> If > >> > not, update them with the ones I've provided. > >> > >> The table alembic_versions contains the three versions I provided at > >> the end of my email: > >> > >> MariaDB [neutron]> select * from alembic_version; > >> +--------------+ > >> | version_num | > >> +--------------+ > >> | 633d74ebbc4b | > >> | bebe95aae4d4 | > >> | I38991de2b4 | > >> +--------------+ > >> > >> I already tried to manipulate the table so I would only have those two > >> versions you already mentioned, but then the upgrade --expand command > >> alternates the database again with the mentioned error message > >> ("Multiple heads are present"). > >> > >> > Before executing the > >> > migration tool again, be sure the DB schema matches the latest > migration > >> > patch for your version. You can deploy a VM with devstack and run this > >> > version. > >> > >> That's what I wanted to try next, export only the db schema (no data) > >> from a working victoria neutron database, then export only data from > >> our production db and merge those, then import that into the > >> production and try to run upgrade --expand and --contract again. But I > >> didn't want to fiddle around too much in the production, that's why I > >> wanted to ask for your guidance first. > >> But IIUC even if I changed the table alembic_versions again and import > >> the merged db, wouldn't upgrade --expand somehow try to alternate the > >> table again? I don't see where the train revision comes from exactly, > >> could you clarify, please? It seems like I always get back to square > >> one when running the --expand command. > >> > >> Thanks! > >> Eugen > >> > >> Zitat von Rodolfo Alonso Hernandez : > >> > >> > Hi Eugen: > >> > > >> > In Neutron we don't support contract operations since Newton. > >> > > >> > If you are in Victoria and you correctly finished the DB migration, > your > >> > HEADs should be: > >> > * contract: 5c85685d616d (from Newton) > >> > * expand: I38991de2b4 (from the last DB change in Victoria, > >> > source_and_destination_ip_prefix_neutron_metering_rule) > >> > > >> > Please check what you have in the DB table neutron.alembic_version. > The > >> > first register should be the expand number, the second the contract > one. > >> If > >> > not, update them with the ones I've provided. 
Before executing the > >> > migration tool again, be sure the DB schema matches the latest > migration > >> > patch for your version. You can deploy a VM with devstack and run this > >> > version. > >> > > >> > Regards. > >> > > >> > > >> > On Fri, Nov 25, 2022 at 1:58 PM Eugen Block wrote: > >> > > >> >> Hi *, > >> >> > >> >> I'd like to ask you for advice on how to clean up my neutron db. At > >> >> some point (which I don't know exactly, probably train) my neutron > >> >> database got inconsistent, apparently one of the upgrades did not go > >> >> as planned. The interesting thing is that the database still works, I > >> >> just upgraded from ussuri to victoria where that issue popped up > again > >> >> during 'neutron-db-manage upgrade --expand', I'll add the information > >> >> at the end of this email. Apparently, I have multiple heads, and one > >> >> of them is from train, it seems as if I never ran --contract (or it > >> >> failed and I didn't notice). > >> >> Just some additional information what I did with this database: this > >> >> cloud started out as a test environment with a single control node > and > >> >> then became a production environment. About two and a half years ago > >> >> we decided to reinstall this cloud with version ussuri and import the > >> >> databases. I had a virtual machine in which I upgraded the database > >> >> dump from production to the latest versions at that time. That all > >> >> worked quite well, I only didn't notice that something was missing. > >> >> Now that I finished the U --> V upgrade I want to fix this > >> >> inconsistency, I just have no idea how to do it. As I'm not sure how > >> >> all the neutron-db-manage commands work exactly I'd like to ask for > >> >> some guidance. For example, could the "stamp" command possibly help? > >> >> Or how else can I get rid of the train head and/or how to get the > >> >> train revision to "contract" so I can finish the upgrade and contract > >> >> the victoria revision? I can paste the whole neutron-db history if > >> >> necessary (neutron-db-manage history), please let me know what > >> >> information would be required to get to the bottom of this. > >> >> Any help is greatly appreciated! > >> >> > >> >> Thanks! > >> >> Eugen > >> >> > >> >> > >> >> ---snip--- > >> >> controller01:~ # neutron-db-manage upgrade --expand > >> >> [...] > >> >> alembic.script.revision.MultipleHeads: Multiple heads are present for > >> >> given argument 'expand at head'; 633d74ebbc4b, I38991de2b4 > >> >> > >> >> controller01:~ # neutron-db-manage current --verbose > >> >> Running current for neutron ... > >> >> INFO [alembic.runtime.migration] Context impl MySQLImpl. > >> >> INFO [alembic.runtime.migration] Will assume non-transactional DDL. 
> >> >> Current revision(s) for mysql+pymysql://neutron:XXXXX at controller.fqdn > >> >> /neutron: > >> >> Rev: bebe95aae4d4 (head) > >> >> Parent: b5344a66e818 > >> >> Branch names: contract > >> >> Path: > >> >> > >> >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract/bebe95aae4d4_.py > >> >> > >> >> Rev: 633d74ebbc4b (head) > >> >> Parent: 6c9eb0469914 > >> >> Branch names: expand > >> >> Path: > >> >> > >> >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py > >> >> > >> >> Rev: I38991de2b4 (head) > >> >> Parent: 49d8622c5221 > >> >> Branch names: expand > >> >> Path: > >> >> > >> >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/victoria/expand/I38991de2b4_source_and_destination_ip_prefix_neutron_metering_rule.py > >> >> > >> >> OK > >> >> ---snip--- > >> >> > >> >> > >> >> > >> > >> > >> > >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Mon Nov 28 10:31:15 2022 From: eblock at nde.ag (Eugen Block) Date: Mon, 28 Nov 2022 10:31:15 +0000 Subject: [neutron] neutron-db-manage multiple heads In-Reply-To: References: <20221125125752.Horde.zo4M3KjkRUtEQ5cGx0sKDjv@webmail.nde.ag> <20221125145145.Horde.XfDn1LvyK2AIN76ZlNGTjqZ@webmail.nde.ag> <20221125195656.Horde.HUgznZWx9ug640CD7yJQveQ@webmail.nde.ag> Message-ID: <20221128103115.Horde.sH0mgYCjfPt8QmSbs_7vzNm@webmail.nde.ag> Hi, not really, no. I have no explanation how those files got there, to be honest. We're using openSUSE Leap (currently 15.2) and the respective repos from openSUSE. By the way, I only see those files on one of the control nodes, that's irritating me even more. But if those files are not known, maybe I should just delete them and the contract directories as well? Because during the next upgrade I'll probably have the same issue again. So if I see it correctly the two "contract" directories should be removed /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/contract /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract as well as this revision file: /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py Comparing with the "native" V installation (and the other control node) I should only keep two of these files: controller01:~ # ll /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/ insgesamt 16 -rw-r--r-- 1 root root 900 30. M?r 2021 633d74ebbc4b_.py <-- delete -rw-r--r-- 1 root root 1694 14. Nov 16:07 63fd95af7dcd_conntrack_helper.py -rw-r--r-- 1 root root 900 30. M?r 2021 6c9eb0469914_.py <-- delete -rw-r--r-- 1 root root 1134 14. Nov 16:07 c613d0b82681_subnet_force_network_id.py drwxr-xr-x 2 root root 312 23. Nov 11:09 __pycache__ I believe that should clean it up. Then I'll import the merged neutron database and run the upgrade commands again. Does that make sense? Thanks! Eugen Zitat von Rodolfo Alonso Hernandez : > Hi Eugen: > > Please check the code you have. Those revisions (633d74ebbc4b, > bebe95aae4d4) do not exist in the Neutron repository. File [1] (or > something similar with the same prefix) does not exist. Are you using a > customized Neutron repository? > > Regards. 
> > [1]/usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py > > On Fri, Nov 25, 2022 at 8:57 PM Eugen Block wrote: > >> Hi, >> >> I believe they are neutron revisions, here's the output from >> yesterday's neutron-db-manage attempt: >> >> ---snip--- >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> [alembic.runtime.migration] Context impl MySQLImpl. >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> [alembic.runtime.migration] Will assume non-transactional DDL. >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> [alembic.runtime.migration] Context impl MySQLImpl. >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> [alembic.runtime.migration] Will assume non-transactional DDL. >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> [alembic.runtime.migration] Running upgrade 5c85685d616d -> c43a0ddb6a03 >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> [alembic.runtime.migration] Running upgrade c43a0ddb6a03 -> b5344a66e818 >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> [alembic.runtime.migration] Running upgrade b5344a66e818 -> bebe95aae4d4 >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> [alembic.runtime.migration] Running upgrade c613d0b82681 -> 6c9eb0469914 >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> [alembic.runtime.migration] Running upgrade 6c9eb0469914 -> 633d74ebbc4b >> Nov 23 12:51:52 controller01 neutron-db-manage[25913]: Running upgrade >> for neutron ... >> Nov 23 12:51:52 controller01 neutron-db-manage[25913]: OK >> ---snip--- >> >> And here's where they are located, apparently from train version: >> >> ---snip--- >> controller01:~ # grep -r 633d74ebbc4b /usr/lib/python3.6/site-packages/ >> ?bereinstimmungen in Bin?rdatei >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/__pycache__/633d74ebbc4b_.cpython-36.pyc >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:Revision >> ID: >> 633d74ebbc4b >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:revision >> = >> '633d74ebbc4b' >> ---snip--- >> >> Next week we'll try it with merging fresh victoria schema with our >> production data, then run the upgrade command again. >> >> Thanks, >> Eugen >> Zitat von Rodolfo Alonso Hernandez : >> >> > Hi Eugen: >> > >> > I don't know how it is possible that you have 3 registers in this table. >> > And the first two are not IDs of any Neutron revision. I would suggest >> you >> > to (1) check the DB schema deployed against a fresh deployed system (in >> > Victoria version) and (2) fix this table to point to the correct revision >> > numbers. >> > >> > Regards. >> > >> > >> > On Fri, Nov 25, 2022 at 3:51 PM Eugen Block wrote: >> > >> >> Hi, >> >> >> >> thanks for your quick response. >> >> >> >> > In Neutron we don't support contract operations since Newton. >> >> > >> >> > If you are in Victoria and you correctly finished the DB migration, >> your >> >> > HEADs should be: >> >> > * contract: 5c85685d616d (from Newton) >> >> > * expand: I38991de2b4 (from the last DB change in Victoria, >> >> > source_and_destination_ip_prefix_neutron_metering_rule) >> >> >> >> That explains why I saw the newton revision in a new victoria cluster >> :-) >> >> >> >> > Please check what you have in the DB table neutron.alembic_version. 
>> The >> >> > first register should be the expand number, the second the contract >> one. >> >> If >> >> > not, update them with the ones I've provided. >> >> >> >> The table alembic_versions contains the three versions I provided at >> >> the end of my email: >> >> >> >> MariaDB [neutron]> select * from alembic_version; >> >> +--------------+ >> >> | version_num | >> >> +--------------+ >> >> | 633d74ebbc4b | >> >> | bebe95aae4d4 | >> >> | I38991de2b4 | >> >> +--------------+ >> >> >> >> I already tried to manipulate the table so I would only have those two >> >> versions you already mentioned, but then the upgrade --expand command >> >> alternates the database again with the mentioned error message >> >> ("Multiple heads are present"). >> >> >> >> > Before executing the >> >> > migration tool again, be sure the DB schema matches the latest >> migration >> >> > patch for your version. You can deploy a VM with devstack and run this >> >> > version. >> >> >> >> That's what I wanted to try next, export only the db schema (no data) >> >> from a working victoria neutron database, then export only data from >> >> our production db and merge those, then import that into the >> >> production and try to run upgrade --expand and --contract again. But I >> >> didn't want to fiddle around too much in the production, that's why I >> >> wanted to ask for your guidance first. >> >> But IIUC even if I changed the table alembic_versions again and import >> >> the merged db, wouldn't upgrade --expand somehow try to alternate the >> >> table again? I don't see where the train revision comes from exactly, >> >> could you clarify, please? It seems like I always get back to square >> >> one when running the --expand command. >> >> >> >> Thanks! >> >> Eugen >> >> >> >> Zitat von Rodolfo Alonso Hernandez : >> >> >> >> > Hi Eugen: >> >> > >> >> > In Neutron we don't support contract operations since Newton. >> >> > >> >> > If you are in Victoria and you correctly finished the DB migration, >> your >> >> > HEADs should be: >> >> > * contract: 5c85685d616d (from Newton) >> >> > * expand: I38991de2b4 (from the last DB change in Victoria, >> >> > source_and_destination_ip_prefix_neutron_metering_rule) >> >> > >> >> > Please check what you have in the DB table neutron.alembic_version. >> The >> >> > first register should be the expand number, the second the contract >> one. >> >> If >> >> > not, update them with the ones I've provided. Before executing the >> >> > migration tool again, be sure the DB schema matches the latest >> migration >> >> > patch for your version. You can deploy a VM with devstack and run this >> >> > version. >> >> > >> >> > Regards. >> >> > >> >> > >> >> > On Fri, Nov 25, 2022 at 1:58 PM Eugen Block wrote: >> >> > >> >> >> Hi *, >> >> >> >> >> >> I'd like to ask you for advice on how to clean up my neutron db. At >> >> >> some point (which I don't know exactly, probably train) my neutron >> >> >> database got inconsistent, apparently one of the upgrades did not go >> >> >> as planned. The interesting thing is that the database still works, I >> >> >> just upgraded from ussuri to victoria where that issue popped up >> again >> >> >> during 'neutron-db-manage upgrade --expand', I'll add the information >> >> >> at the end of this email. Apparently, I have multiple heads, and one >> >> >> of them is from train, it seems as if I never ran --contract (or it >> >> >> failed and I didn't notice). 
>> >> >> Just some additional information what I did with this database: this >> >> >> cloud started out as a test environment with a single control node >> and >> >> >> then became a production environment. About two and a half years ago >> >> >> we decided to reinstall this cloud with version ussuri and import the >> >> >> databases. I had a virtual machine in which I upgraded the database >> >> >> dump from production to the latest versions at that time. That all >> >> >> worked quite well, I only didn't notice that something was missing. >> >> >> Now that I finished the U --> V upgrade I want to fix this >> >> >> inconsistency, I just have no idea how to do it. As I'm not sure how >> >> >> all the neutron-db-manage commands work exactly I'd like to ask for >> >> >> some guidance. For example, could the "stamp" command possibly help? >> >> >> Or how else can I get rid of the train head and/or how to get the >> >> >> train revision to "contract" so I can finish the upgrade and contract >> >> >> the victoria revision? I can paste the whole neutron-db history if >> >> >> necessary (neutron-db-manage history), please let me know what >> >> >> information would be required to get to the bottom of this. >> >> >> Any help is greatly appreciated! >> >> >> >> >> >> Thanks! >> >> >> Eugen >> >> >> >> >> >> >> >> >> ---snip--- >> >> >> controller01:~ # neutron-db-manage upgrade --expand >> >> >> [...] >> >> >> alembic.script.revision.MultipleHeads: Multiple heads are present for >> >> >> given argument 'expand at head'; 633d74ebbc4b, I38991de2b4 >> >> >> >> >> >> controller01:~ # neutron-db-manage current --verbose >> >> >> Running current for neutron ... >> >> >> INFO [alembic.runtime.migration] Context impl MySQLImpl. >> >> >> INFO [alembic.runtime.migration] Will assume non-transactional DDL. >> >> >> Current revision(s) for mysql+pymysql://neutron:XXXXX at controller.fqdn >> >> >> /neutron: >> >> >> Rev: bebe95aae4d4 (head) >> >> >> Parent: b5344a66e818 >> >> >> Branch names: contract >> >> >> Path: >> >> >> >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract/bebe95aae4d4_.py >> >> >> >> >> >> Rev: 633d74ebbc4b (head) >> >> >> Parent: 6c9eb0469914 >> >> >> Branch names: expand >> >> >> Path: >> >> >> >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py >> >> >> >> >> >> Rev: I38991de2b4 (head) >> >> >> Parent: 49d8622c5221 >> >> >> Branch names: expand >> >> >> Path: >> >> >> >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/victoria/expand/I38991de2b4_source_and_destination_ip_prefix_neutron_metering_rule.py >> >> >> >> >> >> OK >> >> >> ---snip--- >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> From ralonsoh at redhat.com Mon Nov 28 10:36:04 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 28 Nov 2022 11:36:04 +0100 Subject: [neutron] neutron-db-manage multiple heads In-Reply-To: <20221128103115.Horde.sH0mgYCjfPt8QmSbs_7vzNm@webmail.nde.ag> References: <20221125125752.Horde.zo4M3KjkRUtEQ5cGx0sKDjv@webmail.nde.ag> <20221125145145.Horde.XfDn1LvyK2AIN76ZlNGTjqZ@webmail.nde.ag> <20221125195656.Horde.HUgznZWx9ug640CD7yJQveQ@webmail.nde.ag> <20221128103115.Horde.sH0mgYCjfPt8QmSbs_7vzNm@webmail.nde.ag> Message-ID: Yes, but you should also be sure what is the status of the DB schema. 
That means to check what is the latest migration file applied and set that revision ID on the "neutron.alembic_version" table. On Mon, Nov 28, 2022 at 11:31 AM Eugen Block wrote: > Hi, > > not really, no. I have no explanation how those files got there, to be > honest. We're using openSUSE Leap (currently 15.2) and the respective > repos from openSUSE. By the way, I only see those files on one of the > control nodes, that's irritating me even more. > But if those files are not known, maybe I should just delete them and > the contract directories as well? Because during the next upgrade I'll > probably have the same issue again. So if I see it correctly the two > "contract" directories should be removed > > > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/contract > > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract > > as well as this revision file: > > > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py > > Comparing with the "native" V installation (and the other control > node) I should only keep two of these files: > > controller01:~ # ll > > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/ > insgesamt 16 > -rw-r--r-- 1 root root 900 30. M?r 2021 633d74ebbc4b_.py <-- delete > -rw-r--r-- 1 root root 1694 14. Nov 16:07 63fd95af7dcd_conntrack_helper.py > -rw-r--r-- 1 root root 900 30. M?r 2021 6c9eb0469914_.py <-- delete > -rw-r--r-- 1 root root 1134 14. Nov 16:07 > c613d0b82681_subnet_force_network_id.py > drwxr-xr-x 2 root root 312 23. Nov 11:09 __pycache__ > > I believe that should clean it up. Then I'll import the merged neutron > database and run the upgrade commands again. Does that make sense? > > Thanks! > Eugen > > Zitat von Rodolfo Alonso Hernandez : > > > Hi Eugen: > > > > Please check the code you have. Those revisions (633d74ebbc4b, > > bebe95aae4d4) do not exist in the Neutron repository. File [1] (or > > something similar with the same prefix) does not exist. Are you using a > > customized Neutron repository? > > > > Regards. > > > > > [1]/usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py > > > > On Fri, Nov 25, 2022 at 8:57 PM Eugen Block wrote: > > > >> Hi, > >> > >> I believe they are neutron revisions, here's the output from > >> yesterday's neutron-db-manage attempt: > >> > >> ---snip--- > >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> [alembic.runtime.migration] Context impl MySQLImpl. > >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> [alembic.runtime.migration] Will assume non-transactional DDL. > >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> [alembic.runtime.migration] Context impl MySQLImpl. > >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> [alembic.runtime.migration] Will assume non-transactional DDL. 
> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> [alembic.runtime.migration] Running upgrade 5c85685d616d -> c43a0ddb6a03 > >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> [alembic.runtime.migration] Running upgrade c43a0ddb6a03 -> b5344a66e818 > >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> [alembic.runtime.migration] Running upgrade b5344a66e818 -> bebe95aae4d4 > >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> [alembic.runtime.migration] Running upgrade c613d0b82681 -> 6c9eb0469914 > >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> [alembic.runtime.migration] Running upgrade 6c9eb0469914 -> 633d74ebbc4b > >> Nov 23 12:51:52 controller01 neutron-db-manage[25913]: Running upgrade > >> for neutron ... > >> Nov 23 12:51:52 controller01 neutron-db-manage[25913]: OK > >> ---snip--- > >> > >> And here's where they are located, apparently from train version: > >> > >> ---snip--- > >> controller01:~ # grep -r 633d74ebbc4b /usr/lib/python3.6/site-packages/ > >> ?bereinstimmungen in Bin?rdatei > >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/__pycache__/633d74ebbc4b_.cpython-36.pyc > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:Revision > >> ID: > >> 633d74ebbc4b > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:revision > >> = > >> '633d74ebbc4b' > >> ---snip--- > >> > >> Next week we'll try it with merging fresh victoria schema with our > >> production data, then run the upgrade command again. > >> > >> Thanks, > >> Eugen > >> Zitat von Rodolfo Alonso Hernandez : > >> > >> > Hi Eugen: > >> > > >> > I don't know how it is possible that you have 3 registers in this > table. > >> > And the first two are not IDs of any Neutron revision. I would suggest > >> you > >> > to (1) check the DB schema deployed against a fresh deployed system > (in > >> > Victoria version) and (2) fix this table to point to the correct > revision > >> > numbers. > >> > > >> > Regards. > >> > > >> > > >> > On Fri, Nov 25, 2022 at 3:51 PM Eugen Block wrote: > >> > > >> >> Hi, > >> >> > >> >> thanks for your quick response. > >> >> > >> >> > In Neutron we don't support contract operations since Newton. > >> >> > > >> >> > If you are in Victoria and you correctly finished the DB migration, > >> your > >> >> > HEADs should be: > >> >> > * contract: 5c85685d616d (from Newton) > >> >> > * expand: I38991de2b4 (from the last DB change in Victoria, > >> >> > source_and_destination_ip_prefix_neutron_metering_rule) > >> >> > >> >> That explains why I saw the newton revision in a new victoria cluster > >> :-) > >> >> > >> >> > Please check what you have in the DB table neutron.alembic_version. > >> The > >> >> > first register should be the expand number, the second the contract > >> one. > >> >> If > >> >> > not, update them with the ones I've provided. 
> >> >> > >> >> The table alembic_versions contains the three versions I provided at > >> >> the end of my email: > >> >> > >> >> MariaDB [neutron]> select * from alembic_version; > >> >> +--------------+ > >> >> | version_num | > >> >> +--------------+ > >> >> | 633d74ebbc4b | > >> >> | bebe95aae4d4 | > >> >> | I38991de2b4 | > >> >> +--------------+ > >> >> > >> >> I already tried to manipulate the table so I would only have those > two > >> >> versions you already mentioned, but then the upgrade --expand command > >> >> alternates the database again with the mentioned error message > >> >> ("Multiple heads are present"). > >> >> > >> >> > Before executing the > >> >> > migration tool again, be sure the DB schema matches the latest > >> migration > >> >> > patch for your version. You can deploy a VM with devstack and run > this > >> >> > version. > >> >> > >> >> That's what I wanted to try next, export only the db schema (no data) > >> >> from a working victoria neutron database, then export only data from > >> >> our production db and merge those, then import that into the > >> >> production and try to run upgrade --expand and --contract again. But > I > >> >> didn't want to fiddle around too much in the production, that's why I > >> >> wanted to ask for your guidance first. > >> >> But IIUC even if I changed the table alembic_versions again and > import > >> >> the merged db, wouldn't upgrade --expand somehow try to alternate the > >> >> table again? I don't see where the train revision comes from exactly, > >> >> could you clarify, please? It seems like I always get back to square > >> >> one when running the --expand command. > >> >> > >> >> Thanks! > >> >> Eugen > >> >> > >> >> Zitat von Rodolfo Alonso Hernandez : > >> >> > >> >> > Hi Eugen: > >> >> > > >> >> > In Neutron we don't support contract operations since Newton. > >> >> > > >> >> > If you are in Victoria and you correctly finished the DB migration, > >> your > >> >> > HEADs should be: > >> >> > * contract: 5c85685d616d (from Newton) > >> >> > * expand: I38991de2b4 (from the last DB change in Victoria, > >> >> > source_and_destination_ip_prefix_neutron_metering_rule) > >> >> > > >> >> > Please check what you have in the DB table neutron.alembic_version. > >> The > >> >> > first register should be the expand number, the second the contract > >> one. > >> >> If > >> >> > not, update them with the ones I've provided. Before executing the > >> >> > migration tool again, be sure the DB schema matches the latest > >> migration > >> >> > patch for your version. You can deploy a VM with devstack and run > this > >> >> > version. > >> >> > > >> >> > Regards. > >> >> > > >> >> > > >> >> > On Fri, Nov 25, 2022 at 1:58 PM Eugen Block wrote: > >> >> > > >> >> >> Hi *, > >> >> >> > >> >> >> I'd like to ask you for advice on how to clean up my neutron db. > At > >> >> >> some point (which I don't know exactly, probably train) my neutron > >> >> >> database got inconsistent, apparently one of the upgrades did not > go > >> >> >> as planned. The interesting thing is that the database still > works, I > >> >> >> just upgraded from ussuri to victoria where that issue popped up > >> again > >> >> >> during 'neutron-db-manage upgrade --expand', I'll add the > information > >> >> >> at the end of this email. Apparently, I have multiple heads, and > one > >> >> >> of them is from train, it seems as if I never ran --contract (or > it > >> >> >> failed and I didn't notice). 
> >> >> >> Just some additional information what I did with this database: > this > >> >> >> cloud started out as a test environment with a single control node > >> and > >> >> >> then became a production environment. About two and a half years > ago > >> >> >> we decided to reinstall this cloud with version ussuri and import > the > >> >> >> databases. I had a virtual machine in which I upgraded the > database > >> >> >> dump from production to the latest versions at that time. That all > >> >> >> worked quite well, I only didn't notice that something was > missing. > >> >> >> Now that I finished the U --> V upgrade I want to fix this > >> >> >> inconsistency, I just have no idea how to do it. As I'm not sure > how > >> >> >> all the neutron-db-manage commands work exactly I'd like to ask > for > >> >> >> some guidance. For example, could the "stamp" command possibly > help? > >> >> >> Or how else can I get rid of the train head and/or how to get the > >> >> >> train revision to "contract" so I can finish the upgrade and > contract > >> >> >> the victoria revision? I can paste the whole neutron-db history if > >> >> >> necessary (neutron-db-manage history), please let me know what > >> >> >> information would be required to get to the bottom of this. > >> >> >> Any help is greatly appreciated! > >> >> >> > >> >> >> Thanks! > >> >> >> Eugen > >> >> >> > >> >> >> > >> >> >> ---snip--- > >> >> >> controller01:~ # neutron-db-manage upgrade --expand > >> >> >> [...] > >> >> >> alembic.script.revision.MultipleHeads: Multiple heads are present > for > >> >> >> given argument 'expand at head'; 633d74ebbc4b, I38991de2b4 > >> >> >> > >> >> >> controller01:~ # neutron-db-manage current --verbose > >> >> >> Running current for neutron ... > >> >> >> INFO [alembic.runtime.migration] Context impl MySQLImpl. > >> >> >> INFO [alembic.runtime.migration] Will assume non-transactional > DDL. > >> >> >> Current revision(s) for > mysql+pymysql://neutron:XXXXX at controller.fqdn > >> >> >> /neutron: > >> >> >> Rev: bebe95aae4d4 (head) > >> >> >> Parent: b5344a66e818 > >> >> >> Branch names: contract > >> >> >> Path: > >> >> >> > >> >> >> > >> >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract/bebe95aae4d4_.py > >> >> >> > >> >> >> Rev: 633d74ebbc4b (head) > >> >> >> Parent: 6c9eb0469914 > >> >> >> Branch names: expand > >> >> >> Path: > >> >> >> > >> >> >> > >> >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py > >> >> >> > >> >> >> Rev: I38991de2b4 (head) > >> >> >> Parent: 49d8622c5221 > >> >> >> Branch names: expand > >> >> >> Path: > >> >> >> > >> >> >> > >> >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/victoria/expand/I38991de2b4_source_and_destination_ip_prefix_neutron_metering_rule.py > >> >> >> > >> >> >> OK > >> >> >> ---snip--- > >> >> >> > >> >> >> > >> >> >> > >> >> > >> >> > >> >> > >> >> > >> > >> > >> > >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hanguangyu2 at gmail.com Mon Nov 28 10:45:40 2022 From: hanguangyu2 at gmail.com (=?UTF-8?B?6Z+p5YWJ5a6H?=) Date: Mon, 28 Nov 2022 18:45:40 +0800 Subject: Does openstack have an official automatic evacuation instance scheme Message-ID: Hi all, I would like to ask whether the openstack community has an official automatic evacuation instance scheme? I have a openstack cluster, and all of glance, nova and cinder has used ceph. 
So I have a basic of instance HA. I want to imply automatic evacuation instance, but I'm a newer in this. Does the community have any related projects or recommended open source solutions? Any help is greatly appreciated! Thanks! Han From yipikai7 at gmail.com Mon Nov 28 11:01:48 2022 From: yipikai7 at gmail.com (Cedric) Date: Mon, 28 Nov 2022 12:01:48 +0100 Subject: Does openstack have an official automatic evacuation instance scheme In-Reply-To: References: Message-ID: Hello, Maybe the Masakari project is what you are looking for: https://docs.openstack.org/masakari/latest/install/overview.html C?dric On Mon, Nov 28, 2022, 11:48 ??? wrote: > Hi all, > > I would like to ask whether the openstack community has an official > automatic evacuation instance scheme? > > I have a openstack cluster, and all of glance, nova and cinder has > used ceph. So I have a basic of instance HA. I want to imply automatic > evacuation instance, but I'm a newer in this. > > Does the community have any related projects or recommended open > source solutions? > > Any help is greatly appreciated! > > Thanks! > Han > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Mon Nov 28 11:03:05 2022 From: eblock at nde.ag (Eugen Block) Date: Mon, 28 Nov 2022 11:03:05 +0000 Subject: [neutron] neutron-db-manage multiple heads In-Reply-To: References: <20221125125752.Horde.zo4M3KjkRUtEQ5cGx0sKDjv@webmail.nde.ag> <20221125145145.Horde.XfDn1LvyK2AIN76ZlNGTjqZ@webmail.nde.ag> <20221125195656.Horde.HUgznZWx9ug640CD7yJQveQ@webmail.nde.ag> <20221128103115.Horde.sH0mgYCjfPt8QmSbs_7vzNm@webmail.nde.ag> Message-ID: <20221128110305.Horde.Tdxes4QBYWiWeWzOGCowoNe@webmail.nde.ag> How do I check what the latest applied migration file was? Zitat von Rodolfo Alonso Hernandez : > Yes, but you should also be sure what is the status of the DB schema. That > means to check what is the latest migration file applied and set that > revision ID on the "neutron.alembic_version" table. > > On Mon, Nov 28, 2022 at 11:31 AM Eugen Block wrote: > >> Hi, >> >> not really, no. I have no explanation how those files got there, to be >> honest. We're using openSUSE Leap (currently 15.2) and the respective >> repos from openSUSE. By the way, I only see those files on one of the >> control nodes, that's irritating me even more. >> But if those files are not known, maybe I should just delete them and >> the contract directories as well? Because during the next upgrade I'll >> probably have the same issue again. So if I see it correctly the two >> "contract" directories should be removed >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/contract >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract >> >> as well as this revision file: >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py >> >> Comparing with the "native" V installation (and the other control >> node) I should only keep two of these files: >> >> controller01:~ # ll >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/ >> insgesamt 16 >> -rw-r--r-- 1 root root 900 30. M?r 2021 633d74ebbc4b_.py <-- delete >> -rw-r--r-- 1 root root 1694 14. Nov 16:07 63fd95af7dcd_conntrack_helper.py >> -rw-r--r-- 1 root root 900 30. M?r 2021 6c9eb0469914_.py <-- delete >> -rw-r--r-- 1 root root 1134 14. 
Nov 16:07 >> c613d0b82681_subnet_force_network_id.py >> drwxr-xr-x 2 root root 312 23. Nov 11:09 __pycache__ >> >> I believe that should clean it up. Then I'll import the merged neutron >> database and run the upgrade commands again. Does that make sense? >> >> Thanks! >> Eugen >> >> Zitat von Rodolfo Alonso Hernandez : >> >> > Hi Eugen: >> > >> > Please check the code you have. Those revisions (633d74ebbc4b, >> > bebe95aae4d4) do not exist in the Neutron repository. File [1] (or >> > something similar with the same prefix) does not exist. Are you using a >> > customized Neutron repository? >> > >> > Regards. >> > >> > >> [1]/usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py >> > >> > On Fri, Nov 25, 2022 at 8:57 PM Eugen Block wrote: >> > >> >> Hi, >> >> >> >> I believe they are neutron revisions, here's the output from >> >> yesterday's neutron-db-manage attempt: >> >> >> >> ---snip--- >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> [alembic.runtime.migration] Context impl MySQLImpl. >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> [alembic.runtime.migration] Will assume non-transactional DDL. >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> [alembic.runtime.migration] Context impl MySQLImpl. >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> [alembic.runtime.migration] Will assume non-transactional DDL. >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> [alembic.runtime.migration] Running upgrade 5c85685d616d -> c43a0ddb6a03 >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> [alembic.runtime.migration] Running upgrade c43a0ddb6a03 -> b5344a66e818 >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> [alembic.runtime.migration] Running upgrade b5344a66e818 -> bebe95aae4d4 >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> [alembic.runtime.migration] Running upgrade c613d0b82681 -> 6c9eb0469914 >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> [alembic.runtime.migration] Running upgrade 6c9eb0469914 -> 633d74ebbc4b >> >> Nov 23 12:51:52 controller01 neutron-db-manage[25913]: Running upgrade >> >> for neutron ... >> >> Nov 23 12:51:52 controller01 neutron-db-manage[25913]: OK >> >> ---snip--- >> >> >> >> And here's where they are located, apparently from train version: >> >> >> >> ---snip--- >> >> controller01:~ # grep -r 633d74ebbc4b /usr/lib/python3.6/site-packages/ >> >> ?bereinstimmungen in Bin?rdatei >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/__pycache__/633d74ebbc4b_.cpython-36.pyc >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:Revision >> >> ID: >> >> 633d74ebbc4b >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:revision >> >> = >> >> '633d74ebbc4b' >> >> ---snip--- >> >> >> >> Next week we'll try it with merging fresh victoria schema with our >> >> production data, then run the upgrade command again. >> >> >> >> Thanks, >> >> Eugen >> >> Zitat von Rodolfo Alonso Hernandez : >> >> >> >> > Hi Eugen: >> >> > >> >> > I don't know how it is possible that you have 3 registers in this >> table. >> >> > And the first two are not IDs of any Neutron revision. 
I would suggest >> >> you >> >> > to (1) check the DB schema deployed against a fresh deployed system >> (in >> >> > Victoria version) and (2) fix this table to point to the correct >> revision >> >> > numbers. >> >> > >> >> > Regards. >> >> > >> >> > >> >> > On Fri, Nov 25, 2022 at 3:51 PM Eugen Block wrote: >> >> > >> >> >> Hi, >> >> >> >> >> >> thanks for your quick response. >> >> >> >> >> >> > In Neutron we don't support contract operations since Newton. >> >> >> > >> >> >> > If you are in Victoria and you correctly finished the DB migration, >> >> your >> >> >> > HEADs should be: >> >> >> > * contract: 5c85685d616d (from Newton) >> >> >> > * expand: I38991de2b4 (from the last DB change in Victoria, >> >> >> > source_and_destination_ip_prefix_neutron_metering_rule) >> >> >> >> >> >> That explains why I saw the newton revision in a new victoria cluster >> >> :-) >> >> >> >> >> >> > Please check what you have in the DB table neutron.alembic_version. >> >> The >> >> >> > first register should be the expand number, the second the contract >> >> one. >> >> >> If >> >> >> > not, update them with the ones I've provided. >> >> >> >> >> >> The table alembic_versions contains the three versions I provided at >> >> >> the end of my email: >> >> >> >> >> >> MariaDB [neutron]> select * from alembic_version; >> >> >> +--------------+ >> >> >> | version_num | >> >> >> +--------------+ >> >> >> | 633d74ebbc4b | >> >> >> | bebe95aae4d4 | >> >> >> | I38991de2b4 | >> >> >> +--------------+ >> >> >> >> >> >> I already tried to manipulate the table so I would only have those >> two >> >> >> versions you already mentioned, but then the upgrade --expand command >> >> >> alternates the database again with the mentioned error message >> >> >> ("Multiple heads are present"). >> >> >> >> >> >> > Before executing the >> >> >> > migration tool again, be sure the DB schema matches the latest >> >> migration >> >> >> > patch for your version. You can deploy a VM with devstack and run >> this >> >> >> > version. >> >> >> >> >> >> That's what I wanted to try next, export only the db schema (no data) >> >> >> from a working victoria neutron database, then export only data from >> >> >> our production db and merge those, then import that into the >> >> >> production and try to run upgrade --expand and --contract again. But >> I >> >> >> didn't want to fiddle around too much in the production, that's why I >> >> >> wanted to ask for your guidance first. >> >> >> But IIUC even if I changed the table alembic_versions again and >> import >> >> >> the merged db, wouldn't upgrade --expand somehow try to alternate the >> >> >> table again? I don't see where the train revision comes from exactly, >> >> >> could you clarify, please? It seems like I always get back to square >> >> >> one when running the --expand command. >> >> >> >> >> >> Thanks! >> >> >> Eugen >> >> >> >> >> >> Zitat von Rodolfo Alonso Hernandez : >> >> >> >> >> >> > Hi Eugen: >> >> >> > >> >> >> > In Neutron we don't support contract operations since Newton. >> >> >> > >> >> >> > If you are in Victoria and you correctly finished the DB migration, >> >> your >> >> >> > HEADs should be: >> >> >> > * contract: 5c85685d616d (from Newton) >> >> >> > * expand: I38991de2b4 (from the last DB change in Victoria, >> >> >> > source_and_destination_ip_prefix_neutron_metering_rule) >> >> >> > >> >> >> > Please check what you have in the DB table neutron.alembic_version. >> >> The >> >> >> > first register should be the expand number, the second the contract >> >> one. 
>> >> >> If >> >> >> > not, update them with the ones I've provided. Before executing the >> >> >> > migration tool again, be sure the DB schema matches the latest >> >> migration >> >> >> > patch for your version. You can deploy a VM with devstack and run >> this >> >> >> > version. >> >> >> > >> >> >> > Regards. >> >> >> > >> >> >> > >> >> >> > On Fri, Nov 25, 2022 at 1:58 PM Eugen Block wrote: >> >> >> > >> >> >> >> Hi *, >> >> >> >> >> >> >> >> I'd like to ask you for advice on how to clean up my neutron db. >> At >> >> >> >> some point (which I don't know exactly, probably train) my neutron >> >> >> >> database got inconsistent, apparently one of the upgrades did not >> go >> >> >> >> as planned. The interesting thing is that the database still >> works, I >> >> >> >> just upgraded from ussuri to victoria where that issue popped up >> >> again >> >> >> >> during 'neutron-db-manage upgrade --expand', I'll add the >> information >> >> >> >> at the end of this email. Apparently, I have multiple heads, and >> one >> >> >> >> of them is from train, it seems as if I never ran --contract (or >> it >> >> >> >> failed and I didn't notice). >> >> >> >> Just some additional information what I did with this database: >> this >> >> >> >> cloud started out as a test environment with a single control node >> >> and >> >> >> >> then became a production environment. About two and a half years >> ago >> >> >> >> we decided to reinstall this cloud with version ussuri and import >> the >> >> >> >> databases. I had a virtual machine in which I upgraded the >> database >> >> >> >> dump from production to the latest versions at that time. That all >> >> >> >> worked quite well, I only didn't notice that something was >> missing. >> >> >> >> Now that I finished the U --> V upgrade I want to fix this >> >> >> >> inconsistency, I just have no idea how to do it. As I'm not sure >> how >> >> >> >> all the neutron-db-manage commands work exactly I'd like to ask >> for >> >> >> >> some guidance. For example, could the "stamp" command possibly >> help? >> >> >> >> Or how else can I get rid of the train head and/or how to get the >> >> >> >> train revision to "contract" so I can finish the upgrade and >> contract >> >> >> >> the victoria revision? I can paste the whole neutron-db history if >> >> >> >> necessary (neutron-db-manage history), please let me know what >> >> >> >> information would be required to get to the bottom of this. >> >> >> >> Any help is greatly appreciated! >> >> >> >> >> >> >> >> Thanks! >> >> >> >> Eugen >> >> >> >> >> >> >> >> >> >> >> >> ---snip--- >> >> >> >> controller01:~ # neutron-db-manage upgrade --expand >> >> >> >> [...] >> >> >> >> alembic.script.revision.MultipleHeads: Multiple heads are present >> for >> >> >> >> given argument 'expand at head'; 633d74ebbc4b, I38991de2b4 >> >> >> >> >> >> >> >> controller01:~ # neutron-db-manage current --verbose >> >> >> >> Running current for neutron ... >> >> >> >> INFO [alembic.runtime.migration] Context impl MySQLImpl. >> >> >> >> INFO [alembic.runtime.migration] Will assume non-transactional >> DDL. 
>> >> >> >> Current revision(s) for >> mysql+pymysql://neutron:XXXXX at controller.fqdn >> >> >> >> /neutron: >> >> >> >> Rev: bebe95aae4d4 (head) >> >> >> >> Parent: b5344a66e818 >> >> >> >> Branch names: contract >> >> >> >> Path: >> >> >> >> >> >> >> >> >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract/bebe95aae4d4_.py >> >> >> >> >> >> >> >> Rev: 633d74ebbc4b (head) >> >> >> >> Parent: 6c9eb0469914 >> >> >> >> Branch names: expand >> >> >> >> Path: >> >> >> >> >> >> >> >> >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py >> >> >> >> >> >> >> >> Rev: I38991de2b4 (head) >> >> >> >> Parent: 49d8622c5221 >> >> >> >> Branch names: expand >> >> >> >> Path: >> >> >> >> >> >> >> >> >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/victoria/expand/I38991de2b4_source_and_destination_ip_prefix_neutron_metering_rule.py >> >> >> >> >> >> >> >> OK >> >> >> >> ---snip--- >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> From katonalala at gmail.com Mon Nov 28 11:12:22 2022 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 28 Nov 2022 12:12:22 +0100 Subject: [neutron] Bug deputy report, Nov. 21. - Nov. 27. Message-ID: Hi Neutron Team I was the bug deputy in neutron last week, please check my summary. Needs attention ================= * networking-ovn-dsvm-functional-py27 job killed on timeout in stable/train branch (https://bugs.launchpad.net/neutron/+bug/1997262) *Stable only* * [ovn-octavia-provider] Octavia LB stuck in PENDING_UPDATE after creation (https://bugs.launchpad.net/neutron/+bug/1997567 ) *HIGH* * neutron_lib.exceptions.InvalidInput: Invalid input for operation: Segmentation ID should be lower or equal to 4095 ( https://bugs.launchpad.net/neutron/+bug/1997955 ) *Incomplete* * related doc bug: Manual install & Configuration in Neutron incorrect vni_ranges leads to error (https://bugs.launchpad.net/neutron/+bug/1998085 ) *LOW* * Prevent initializing "ovn-router" service if OVN mech driver is not called (https://bugs.launchpad.net/neutron/+bug/1997970 ) *MEDIUM* * [fullstack] Error in "test_logging" ( https://bugs.launchpad.net/neutron/+bug/1997965 ) *High* In Progress ================= * [ovn-octavia-provider] HM not working for FIPs ( https://bugs.launchpad.net/neutron/+bug/1997418 ) * Neutron server doesn't wait for port DHCP provisioning while VM creation ( https://bugs.launchpad.net/neutron/+bug/1997492 ): https://review.opendev.org/c/openstack/neutron/+/865470 * "convert_to_sanitized_mac_address" shoudl accept netaddr.EUI type values ( https://bugs.launchpad.net/neutron/+bug/1997680 ): https://review.opendev.org/c/openstack/neutron-lib/+/865517 * after restart of a ovn-controller the agent is still down ( https://bugs.launchpad.net/neutron/+bug/1997982 ): https://review.opendev.org/c/openstack/neutron/+/865697 Already merged ================ * [DHCP] Error in "call_driver" method ( https://bugs.launchpad.net/neutron/+bug/1997964): https://review.opendev.org/c/openstack/neutron/+/840421 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Mon Nov 28 11:23:53 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 28 Nov 2022 12:23:53 +0100 Subject: [neutron][devstack][qa] Dropping lib/neutron module Message-ID: <18352719.yBdmKBht2i@p1> Hi, As You maybe know (or not as this was very long time ago) there are 2 modules to deploy Neutron in devstack: * old one called lib/neutron-legacy * new one called lib/neutron The problem is that new module lib/neutron was really never finished and used widely and still everyone is using (and should use) old one lib/neutron-legacy. We discussed that few times during PTGs and we finally decided to drop "new" module lib/neutron and have then rename old "lib/neutron-legacy" to be "lib/neutron" again. Decision was made because old module works fine and do its job, and as there is nobody who would like to finish really new module. Also having 2 modules from which "legacy" one is the only one which really works can be confusing and we want to avoid that confusion. So I proposed patches [1] and [2] to drop this unfinished module. I also proposed DNM patches for Neutron [3] and Tempest [4] to test that all jobs will work fine. But if You are maybe relaying on the Neutron modules from Devstack, please propose some test patch in Your project too and check if everything works fine for You. In patch [2] I didn't really removed "lib/neutron-legacy" yet because there are some projects which are sourcing that file in their devstack module. As [1] and [2] will be merged I will be proposing patches to change that for those projects but any help with that is welcome :) [1] https://review.opendev.org/c/openstack/devstack/+/865014 [2] https://review.opendev.org/c/openstack/devstack/+/865015 [3] https://review.opendev.org/c/openstack/neutron/+/865822 [4] https://review.opendev.org/c/openstack/tempest/+/865821 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From ralonsoh at redhat.com Mon Nov 28 12:15:40 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 28 Nov 2022 13:15:40 +0100 Subject: [neutron] neutron-db-manage multiple heads In-Reply-To: <20221128110305.Horde.Tdxes4QBYWiWeWzOGCowoNe@webmail.nde.ag> References: <20221125125752.Horde.zo4M3KjkRUtEQ5cGx0sKDjv@webmail.nde.ag> <20221125145145.Horde.XfDn1LvyK2AIN76ZlNGTjqZ@webmail.nde.ag> <20221125195656.Horde.HUgznZWx9ug640CD7yJQveQ@webmail.nde.ag> <20221128103115.Horde.sH0mgYCjfPt8QmSbs_7vzNm@webmail.nde.ag> <20221128110305.Horde.Tdxes4QBYWiWeWzOGCowoNe@webmail.nde.ag> Message-ID: Use "neutron-db-manage history" to check what your alembic migration current status is. On Mon, Nov 28, 2022 at 12:03 PM Eugen Block wrote: > How do I check what the latest applied migration file was? > > Zitat von Rodolfo Alonso Hernandez : > > > Yes, but you should also be sure what is the status of the DB schema. > That > > means to check what is the latest migration file applied and set that > > revision ID on the "neutron.alembic_version" table. > > > > On Mon, Nov 28, 2022 at 11:31 AM Eugen Block wrote: > > > >> Hi, > >> > >> not really, no. I have no explanation how those files got there, to be > >> honest. We're using openSUSE Leap (currently 15.2) and the respective > >> repos from openSUSE. 
By the way, I only see those files on one of the > >> control nodes, that's irritating me even more. > >> But if those files are not known, maybe I should just delete them and > >> the contract directories as well? Because during the next upgrade I'll > >> probably have the same issue again. So if I see it correctly the two > >> "contract" directories should be removed > >> > >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/contract > >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract > >> > >> as well as this revision file: > >> > >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py > >> > >> Comparing with the "native" V installation (and the other control > >> node) I should only keep two of these files: > >> > >> controller01:~ # ll > >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/ > >> insgesamt 16 > >> -rw-r--r-- 1 root root 900 30. M?r 2021 633d74ebbc4b_.py <-- delete > >> -rw-r--r-- 1 root root 1694 14. Nov 16:07 > 63fd95af7dcd_conntrack_helper.py > >> -rw-r--r-- 1 root root 900 30. M?r 2021 6c9eb0469914_.py <-- delete > >> -rw-r--r-- 1 root root 1134 14. Nov 16:07 > >> c613d0b82681_subnet_force_network_id.py > >> drwxr-xr-x 2 root root 312 23. Nov 11:09 __pycache__ > >> > >> I believe that should clean it up. Then I'll import the merged neutron > >> database and run the upgrade commands again. Does that make sense? > >> > >> Thanks! > >> Eugen > >> > >> Zitat von Rodolfo Alonso Hernandez : > >> > >> > Hi Eugen: > >> > > >> > Please check the code you have. Those revisions (633d74ebbc4b, > >> > bebe95aae4d4) do not exist in the Neutron repository. File [1] (or > >> > something similar with the same prefix) does not exist. Are you using > a > >> > customized Neutron repository? > >> > > >> > Regards. > >> > > >> > > >> > [1]/usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py > >> > > >> > On Fri, Nov 25, 2022 at 8:57 PM Eugen Block wrote: > >> > > >> >> Hi, > >> >> > >> >> I believe they are neutron revisions, here's the output from > >> >> yesterday's neutron-db-manage attempt: > >> >> > >> >> ---snip--- > >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> >> [alembic.runtime.migration] Context impl MySQLImpl. > >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> >> [alembic.runtime.migration] Will assume non-transactional DDL. > >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> >> [alembic.runtime.migration] Context impl MySQLImpl. > >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> >> [alembic.runtime.migration] Will assume non-transactional DDL. 
> >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> >> [alembic.runtime.migration] Running upgrade 5c85685d616d -> > c43a0ddb6a03 > >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> >> [alembic.runtime.migration] Running upgrade c43a0ddb6a03 -> > b5344a66e818 > >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> >> [alembic.runtime.migration] Running upgrade b5344a66e818 -> > bebe95aae4d4 > >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> >> [alembic.runtime.migration] Running upgrade c613d0b82681 -> > 6c9eb0469914 > >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO > >> >> [alembic.runtime.migration] Running upgrade 6c9eb0469914 -> > 633d74ebbc4b > >> >> Nov 23 12:51:52 controller01 neutron-db-manage[25913]: Running > upgrade > >> >> for neutron ... > >> >> Nov 23 12:51:52 controller01 neutron-db-manage[25913]: OK > >> >> ---snip--- > >> >> > >> >> And here's where they are located, apparently from train version: > >> >> > >> >> ---snip--- > >> >> controller01:~ # grep -r 633d74ebbc4b > /usr/lib/python3.6/site-packages/ > >> >> ?bereinstimmungen in Bin?rdatei > >> >> > >> >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/__pycache__/633d74ebbc4b_.cpython-36.pyc > >> >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:Revision > >> >> ID: > >> >> 633d74ebbc4b > >> >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:revision > >> >> = > >> >> '633d74ebbc4b' > >> >> ---snip--- > >> >> > >> >> Next week we'll try it with merging fresh victoria schema with our > >> >> production data, then run the upgrade command again. > >> >> > >> >> Thanks, > >> >> Eugen > >> >> Zitat von Rodolfo Alonso Hernandez : > >> >> > >> >> > Hi Eugen: > >> >> > > >> >> > I don't know how it is possible that you have 3 registers in this > >> table. > >> >> > And the first two are not IDs of any Neutron revision. I would > suggest > >> >> you > >> >> > to (1) check the DB schema deployed against a fresh deployed system > >> (in > >> >> > Victoria version) and (2) fix this table to point to the correct > >> revision > >> >> > numbers. > >> >> > > >> >> > Regards. > >> >> > > >> >> > > >> >> > On Fri, Nov 25, 2022 at 3:51 PM Eugen Block wrote: > >> >> > > >> >> >> Hi, > >> >> >> > >> >> >> thanks for your quick response. > >> >> >> > >> >> >> > In Neutron we don't support contract operations since Newton. > >> >> >> > > >> >> >> > If you are in Victoria and you correctly finished the DB > migration, > >> >> your > >> >> >> > HEADs should be: > >> >> >> > * contract: 5c85685d616d (from Newton) > >> >> >> > * expand: I38991de2b4 (from the last DB change in Victoria, > >> >> >> > source_and_destination_ip_prefix_neutron_metering_rule) > >> >> >> > >> >> >> That explains why I saw the newton revision in a new victoria > cluster > >> >> :-) > >> >> >> > >> >> >> > Please check what you have in the DB table > neutron.alembic_version. > >> >> The > >> >> >> > first register should be the expand number, the second the > contract > >> >> one. > >> >> >> If > >> >> >> > not, update them with the ones I've provided. 
> >> >> >> > >> >> >> The table alembic_versions contains the three versions I provided > at > >> >> >> the end of my email: > >> >> >> > >> >> >> MariaDB [neutron]> select * from alembic_version; > >> >> >> +--------------+ > >> >> >> | version_num | > >> >> >> +--------------+ > >> >> >> | 633d74ebbc4b | > >> >> >> | bebe95aae4d4 | > >> >> >> | I38991de2b4 | > >> >> >> +--------------+ > >> >> >> > >> >> >> I already tried to manipulate the table so I would only have those > >> two > >> >> >> versions you already mentioned, but then the upgrade --expand > command > >> >> >> alternates the database again with the mentioned error message > >> >> >> ("Multiple heads are present"). > >> >> >> > >> >> >> > Before executing the > >> >> >> > migration tool again, be sure the DB schema matches the latest > >> >> migration > >> >> >> > patch for your version. You can deploy a VM with devstack and > run > >> this > >> >> >> > version. > >> >> >> > >> >> >> That's what I wanted to try next, export only the db schema (no > data) > >> >> >> from a working victoria neutron database, then export only data > from > >> >> >> our production db and merge those, then import that into the > >> >> >> production and try to run upgrade --expand and --contract again. > But > >> I > >> >> >> didn't want to fiddle around too much in the production, that's > why I > >> >> >> wanted to ask for your guidance first. > >> >> >> But IIUC even if I changed the table alembic_versions again and > >> import > >> >> >> the merged db, wouldn't upgrade --expand somehow try to alternate > the > >> >> >> table again? I don't see where the train revision comes from > exactly, > >> >> >> could you clarify, please? It seems like I always get back to > square > >> >> >> one when running the --expand command. > >> >> >> > >> >> >> Thanks! > >> >> >> Eugen > >> >> >> > >> >> >> Zitat von Rodolfo Alonso Hernandez : > >> >> >> > >> >> >> > Hi Eugen: > >> >> >> > > >> >> >> > In Neutron we don't support contract operations since Newton. > >> >> >> > > >> >> >> > If you are in Victoria and you correctly finished the DB > migration, > >> >> your > >> >> >> > HEADs should be: > >> >> >> > * contract: 5c85685d616d (from Newton) > >> >> >> > * expand: I38991de2b4 (from the last DB change in Victoria, > >> >> >> > source_and_destination_ip_prefix_neutron_metering_rule) > >> >> >> > > >> >> >> > Please check what you have in the DB table > neutron.alembic_version. > >> >> The > >> >> >> > first register should be the expand number, the second the > contract > >> >> one. > >> >> >> If > >> >> >> > not, update them with the ones I've provided. Before executing > the > >> >> >> > migration tool again, be sure the DB schema matches the latest > >> >> migration > >> >> >> > patch for your version. You can deploy a VM with devstack and > run > >> this > >> >> >> > version. > >> >> >> > > >> >> >> > Regards. > >> >> >> > > >> >> >> > > >> >> >> > On Fri, Nov 25, 2022 at 1:58 PM Eugen Block > wrote: > >> >> >> > > >> >> >> >> Hi *, > >> >> >> >> > >> >> >> >> I'd like to ask you for advice on how to clean up my neutron > db. > >> At > >> >> >> >> some point (which I don't know exactly, probably train) my > neutron > >> >> >> >> database got inconsistent, apparently one of the upgrades did > not > >> go > >> >> >> >> as planned. 
The interesting thing is that the database still > >> works, I > >> >> >> >> just upgraded from ussuri to victoria where that issue popped > up > >> >> again > >> >> >> >> during 'neutron-db-manage upgrade --expand', I'll add the > >> information > >> >> >> >> at the end of this email. Apparently, I have multiple heads, > and > >> one > >> >> >> >> of them is from train, it seems as if I never ran --contract > (or > >> it > >> >> >> >> failed and I didn't notice). > >> >> >> >> Just some additional information what I did with this database: > >> this > >> >> >> >> cloud started out as a test environment with a single control > node > >> >> and > >> >> >> >> then became a production environment. About two and a half > years > >> ago > >> >> >> >> we decided to reinstall this cloud with version ussuri and > import > >> the > >> >> >> >> databases. I had a virtual machine in which I upgraded the > >> database > >> >> >> >> dump from production to the latest versions at that time. That > all > >> >> >> >> worked quite well, I only didn't notice that something was > >> missing. > >> >> >> >> Now that I finished the U --> V upgrade I want to fix this > >> >> >> >> inconsistency, I just have no idea how to do it. As I'm not > sure > >> how > >> >> >> >> all the neutron-db-manage commands work exactly I'd like to ask > >> for > >> >> >> >> some guidance. For example, could the "stamp" command possibly > >> help? > >> >> >> >> Or how else can I get rid of the train head and/or how to get > the > >> >> >> >> train revision to "contract" so I can finish the upgrade and > >> contract > >> >> >> >> the victoria revision? I can paste the whole neutron-db > history if > >> >> >> >> necessary (neutron-db-manage history), please let me know what > >> >> >> >> information would be required to get to the bottom of this. > >> >> >> >> Any help is greatly appreciated! > >> >> >> >> > >> >> >> >> Thanks! > >> >> >> >> Eugen > >> >> >> >> > >> >> >> >> > >> >> >> >> ---snip--- > >> >> >> >> controller01:~ # neutron-db-manage upgrade --expand > >> >> >> >> [...] > >> >> >> >> alembic.script.revision.MultipleHeads: Multiple heads are > present > >> for > >> >> >> >> given argument 'expand at head'; 633d74ebbc4b, I38991de2b4 > >> >> >> >> > >> >> >> >> controller01:~ # neutron-db-manage current --verbose > >> >> >> >> Running current for neutron ... > >> >> >> >> INFO [alembic.runtime.migration] Context impl MySQLImpl. > >> >> >> >> INFO [alembic.runtime.migration] Will assume non-transactional > >> DDL. 
> >> >> >> >> Current revision(s) for > >> mysql+pymysql://neutron:XXXXX at controller.fqdn > >> >> >> >> /neutron: > >> >> >> >> Rev: bebe95aae4d4 (head) > >> >> >> >> Parent: b5344a66e818 > >> >> >> >> Branch names: contract > >> >> >> >> Path: > >> >> >> >> > >> >> >> >> > >> >> >> > >> >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract/bebe95aae4d4_.py > >> >> >> >> > >> >> >> >> Rev: 633d74ebbc4b (head) > >> >> >> >> Parent: 6c9eb0469914 > >> >> >> >> Branch names: expand > >> >> >> >> Path: > >> >> >> >> > >> >> >> >> > >> >> >> > >> >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py > >> >> >> >> > >> >> >> >> Rev: I38991de2b4 (head) > >> >> >> >> Parent: 49d8622c5221 > >> >> >> >> Branch names: expand > >> >> >> >> Path: > >> >> >> >> > >> >> >> >> > >> >> >> > >> >> > >> > /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/victoria/expand/I38991de2b4_source_and_destination_ip_prefix_neutron_metering_rule.py > >> >> >> >> > >> >> >> >> OK > >> >> >> >> ---snip--- > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> >> > >> >> > >> >> > >> >> > >> >> > >> > >> > >> > >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Mon Nov 28 13:28:01 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 28 Nov 2022 14:28:01 +0100 Subject: [nova][placement][tempest] Hold your rechecks Message-ID: Sorry folks, that's kind of an email I hate writing but let's be honest : our gate is busted. Until we figure out a correct path for resolution, I hereby ask you to *NOT* recheck in order to not spill our precious CI resources for tests that are certain to fail. Long story story, there are currently two problems : #1 https://launchpad.net/bugs/1940425 nova-ovs-hybrid-plug and nova-next jobs 100% fail due to a port remaining in down state. #2 https://bugs.launchpad.net/nova/+bug/1960346 nova-lvm job 100% fails due to a volume detach failure probably due to QEMU #1 is currently investigated by the Neutron team meanwhile a patch [1] has been proposed against Zuul to skip the failing tests. Unfortunately, this patch [1] is unable to merge due to #2. #2 has a Tempest patch that's being worked on [2] but the current state of this patch is WIP. We somehow need to have an agreement on the way forward during this afternoon (UTC) to identify whether we can reasonably progress on [2] or skip the failing tests on nova-lvm. Again, sorry about the bad news and I'll keep you informed. -Sylvain [1] https://review.opendev.org/c/openstack/nova/+/865658/ [2] https://review.opendev.org/c/openstack/tempest/+/842240 -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Mon Nov 28 15:09:29 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Mon, 28 Nov 2022 16:09:29 +0100 Subject: [kolla-ansible][Yoga][Magnum] How to delete a cluster containing errors Message-ID: Hi, I have a magnum cluster stack which contains errors in its constituents, some of the VMs (minions) that belong to that cluster do longer exist. When I try to delete the stack it fails, and I get DELETE aborted (Task delete from ResourceGroup "kube_minions" [fddb3056-9b00-4665-b0d6-c3d3f176814b] Stack "testcluter01-puf45b6dxmrn" [d10af7f2-6ecd-442b-b1f9-140b79e58d13] Timed out) Is there a way to force the deletion to proceed even with those errors? 
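For reference, the kind of commands in play here (stack name taken from the error
above, used purely as an example, not necessarily the exact invocation):

  openstack stack resource list --nested-depth 5 testcluter01-puf45b6dxmrn   # find the stuck nested resource
  openstack stack delete --yes --wait testcluter01-puf45b6dxmrn              # retry the delete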
Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Mon Nov 28 15:18:42 2022 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Mon, 28 Nov 2022 22:18:42 +0700 Subject: [kolla-ansible][Yoga][Magnum] How to delete a cluster containing errors In-Reply-To: References: Message-ID: Do u use multi master. If yes,u need delete octavia. On Mon, Nov 28, 2022, 10:16 PM wodel youchi wrote: > Hi, > > I have a magnum cluster stack which contains errors in its constituents, > some of the VMs (minions) that belong to that cluster do longer exist. > When I try to delete the stack it fails, and I get > > DELETE aborted (Task delete from ResourceGroup "kube_minions" > [fddb3056-9b00-4665-b0d6-c3d3f176814b] Stack "testcluter01-puf45b6dxmrn" > [d10af7f2-6ecd-442b-b1f9-140b79e58d13] Timed out) > > Is there a way to force the deletion to proceed even with those errors? > > Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Mon Nov 28 15:33:09 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Mon, 28 Nov 2022 16:33:09 +0100 Subject: [kolla-ansible][Yoga][Magnum] How to delete a cluster containing errors In-Reply-To: References: Message-ID: Hi, I do have both, simple and multi-master, and I can't get rid of them. What do you mean by delete octavia? delete the LB VMs manually? Regards. Le lun. 28 nov. 2022 ? 16:18, Nguy?n H?u Kh?i a ?crit : > Do u use multi master. If yes,u need delete octavia. > > On Mon, Nov 28, 2022, 10:16 PM wodel youchi > wrote: > >> Hi, >> >> I have a magnum cluster stack which contains errors in its constituents, >> some of the VMs (minions) that belong to that cluster do longer exist. >> When I try to delete the stack it fails, and I get >> >> DELETE aborted (Task delete from ResourceGroup "kube_minions" >> [fddb3056-9b00-4665-b0d6-c3d3f176814b] Stack "testcluter01-puf45b6dxmrn" >> [d10af7f2-6ecd-442b-b1f9-140b79e58d13] Timed out) >> >> Is there a way to force the deletion to proceed even with those errors? >> >> Regards. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Mon Nov 28 17:11:49 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Mon, 28 Nov 2022 17:11:49 +0000 Subject: [cinder] volume_attachement entries are not getting deleted from DB In-Reply-To: References: Message-ID: Hi Hemant, Thanks for reporting this issue on the bug tracker https://bugs.launchpad.net/cinder/+bug/1998083 I did a quick search and no problems with shelving operations have been reported for at least the last two years.I'll bring this bug to the cinder bug meeting this week. Thanks Sofia On Fri, Nov 25, 2022 at 1:15 PM Hemant Sonawane wrote: > Hi Rajat, > It's not about deleting attachments entries but the normal operations from > horizon or via cli does not work because of that. So it really needs to be > fixed to perform resize, shelve unshelve operations. > > Here are the detailed attachment entries you can see for the shelved > instance. 
> > > > +--------------------------------------+--------------------------------------+--------------------------+---------------------------------- > *----+---------------+-----------------------------------------**??* > > *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > ??* > > *| id | volume_id > | attached_host | instance_uuid | > attach_status | connector ??* > > * > > | ??* > > > *+--------------------------------------+--------------------------------------+--------------------------+--------------------------------------+---------------+-----------------------------------------??* > > *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > ??* > > *| 8daddacc-8fc8-4d2b-a738-d05deb20049f | > 67ea3a39-78b8-4d04-a280-166acdc90b8a | nfv1compute43.nfv1.o2.cz > | 9266a2d7-9721-4994-a6b5-6b3290862dc6 | > attached | {"platform": "x86_64", "os_type": "linux??* > > *", "ip": "10.42.168.87", "host": "nfv1compute43.nfv1.o2.cz > ", "multipath": false, "do_local_attach": > false, "system uuid": "65917e4f-c8c4-a2af-ec11-fe353e13f4dd", "mountpoint": > "/dev/vda"} | ??* > > *| d3278543-4920-42b7-b217-0858e986fcce | > 67ea3a39-78b8-4d04-a280-166acdc90b8a | NULL | > 9266a2d7-9721-4994-a6b5-6b3290862dc6 | reserved | NULL > ??* > > * > > | ??* > > > *+--------------------------------------+--------------------------------------+--------------------------+--------------------------------------+---------------+-----------------------------------------??* > > *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ > ??* > > *2 rows in set (0.00 sec) * > > > for e.g if I would like to unshelve this instance it wont work as it has a > duplicate entry in cinder db for the attachment. So i have to delete it > manually from db or via cli > > *root at master01:/home/hemant# cinder --os-volume-api-version 3.27 > attachment-list --all | grep 67ea3a39-78b8-4d04-a280-166acdc90b8a > ??* > > *| 8daddacc-8fc8-4d2b-a738-d05deb20049f | > 67ea3a39-78b8-4d04-a280-166acdc90b8a | attached | > 9266a2d7-9721-4994-a6b5-6b3290862dc6 | > ??* > > *| d3278543-4920-42b7-b217-0858e986fcce | > 67ea3a39-78b8-4d04-a280-166acdc90b8a** | reserved | > 9266a2d7-9721-4994-a6b5-6b3290862dc6 |* > > *cinder --os-volume-api-version 3.27 > attachment-delete 8daddacc-8fc8-4d2b-a738-d05deb20049f* > > this is the only choice I have if I would like to unshelve vm. But this is > not a good approach for production envs. I hope you understand me. Please > feel free to ask me anything if you don't understand. > > > > On Fri, 25 Nov 2022 at 13:20, Rajat Dhasmana wrote: > >> Hi Hemant, >> >> If your final goal is to delete the attachment entries in the cinder DB, >> we have attachment APIs to perform these tasks. The command useful for you >> is attachment list[1] and attachment delete[2]. >> Make sure you pass the right microversion i.e. 3.27 to be able to execute >> these operations. 
>> >> Eg: >> cinder --os-volume-api-version 3.27 attachment-list >> >> [1] >> https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-attachment-list >> [2] >> https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-attachment-delete >> >> On Fri, Nov 25, 2022 at 5:44 PM Hemant Sonawane >> wrote: >> >>> Hello >>> I am using wallaby release openstack and having issues with cinder >>> volumes as once I try to delete, resize or unshelve the shelved vms the >>> volume_attachement entries do not get deleted in cinder db and therefore >>> the above mentioned operations fail every time. I have to delete these >>> volume_attachement entries manually then it works. Is there any way to fix >>> this issue ? >>> >>> nova-compute logs: >>> >>> cinderclient.exceptions.ClientException: Unable to update >>> attachment.(Invalid volume: duplicate connectors detected on volume >>> >>> Help will be really appreciated Thanks ! >>> -- >>> Thanks and Regards, >>> >>> Hemant Sonawane >>> >>> > > -- > Thanks and Regards, > > Hemant Sonawane > > -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Nov 28 20:11:22 2022 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 28 Nov 2022 14:11:22 -0600 Subject: [OVB - openstack-virtual-baremetal] - Douglas Viroel and Chandan Kumar as core In-Reply-To: References: Message-ID: <482ece1c-c6bc-6e41-02c2-fe2f6dfab3aa@nemebean.com> Although I'm not sure my vote should count at this point since I haven't been keeping up with reviews myself, +1. On 11/24/22 03:31, Harald Jensas wrote: > Hi, > > After discussions with Douglas, Chandan and Ronelle Landy I would like > to suggest adding Douglas and Chandan to the OVB core team. The > repository have very little activity, i.e there is not a lot of review > history to base the decision on. I did work with both individuals when > onboarding new clouds to run TripleO CI jobs utilizing OVB, they have a > good understanding of how the thing works. > > If there are no objections, I will add them to them as core reviewers > next week. > > > Regards, > Harald > > From openstack at nemebean.com Mon Nov 28 20:17:27 2022 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 28 Nov 2022 14:17:27 -0600 Subject: [oslo] New driver for oslo.messaging In-Reply-To: References: <4eddcca5.3347.18432712271.Coremail.wangkuntian1994@163.com> Message-ID: <2209be3c-7fc1-5d44-ad2e-09b790bc5bbe@nemebean.com> I'm not really active in Oslo these days, but I can provide a couple of general answers below. On 11/8/22 03:38, ??? wrote: > Hi, > > Wang and me are colleagues in a team. > > I would like to ask, put aside the working process of NATS and look at > Rocketmq independently, if we want to do the work of adding > Rocketmq[1] drivers, is the community welcome? > > We have already seen the oslo driver policy[2] in the documentation. I > also want to ask, if the community is willing to accept the Rocketmq > driver, whether we need to do other efforts besides the development > task itself. For example, I see that "Must have at least two > individuals from the community committed to triaging and fixing bugs, > and responding to test failures in a timely manner". > > I want to ask: > ?1?Is the current policy still If it's still in the docs then it is. If policy changes are desired then a patch should be proposed to the docs. 
> ?2?Are there community members willing to take responsibility for > this, or is it okay if we commit to triaging and fixing bugs, and > responding to test failures in a timely manner by ourselves The policy was primarily intended to ensure there was enough support behind a driver that it wouldn't bitrot and become a maintenance burden on the wider Oslo team. If you're willing to support the driver I think that satisfies the requirements. > > Cheers, > Han > > [1] https://github.com/apache/rocketmq > [2] https://docs.openstack.org/oslo.messaging/latest/contributor/supported-messaging-drivers.html > > > Christian Rohmann ?2022?11?2??? 18:20??? >> >> On 01/11/2022 10:06, ??? wrote: >> >> I want to develop a new driver for oslo.messaging to use rocketmq in openstack environment. I wonder if the community need this new driver? >> >> >> There is a larger discussion around adding a driver for NATS (https://lists.openstack.org/pipermail/openstack-discuss/2022-August/030179.html). >> Maybe the reasoning, discussion and also the PoC there is helpful to answer your question. I suppose you are also "not happy" with using RabbitMQ? >> >> >> >> Regards >> >> >> Christian > From ces.eduardo98 at gmail.com Mon Nov 28 21:29:09 2022 From: ces.eduardo98 at gmail.com (Carlos Silva) Date: Mon, 28 Nov 2022 18:29:09 -0300 Subject: [manila][release] Proposing to EOL Stein Message-ID: Hello! Recently in a weekly meeting we chatted about EOLing stable/stein and all the attendants were found to be in favor of this [0]. As the procedures go [1], we need to make it formal through a post in this mailing list, and see if there are objections. In case of concerns or objections, please reach out through email or the #openstack-manila IRC channel. This will impact all branched manila repositories (manila, python-manilaclient and manila-ui). If there aren't objections or strong concerns, I will be proposing the patches to EOL stable/stein within one week. [0] https://meetings.opendev.org/meetings/manila/2022/manila.2022-11-03-15.00.log.html#l-20 [1] https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life Thanks, carloss -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon Nov 28 23:08:41 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 28 Nov 2022 15:08:41 -0800 Subject: [all] Cargo Culted Tox install_command overrides Message-ID: A subset of tempest jobs recently began to exhibit some interesting behavior. The tox installation was finding the global pip installation instead of pip in the tox virtualenv which led to an attempt to install packages globally which failed. Eventually, I discovered the reason this happened was that virtualenv created a broken venv installation for tox, but this was made far more confusing by two tox settings in tempest's tox.ini: install_command and allowlist_externals. Tempest had overridden install_command to ` pip install {opts} {packages}` and set allowlist_externals to `*`. This allowed tox to find and use global pip without complaint. Tempest has since cleaned these up with this change [0], which should result in nicer error messages in the future if we have similar problems. I have noticed that many projects have cargo culted this configuration, particularly for install_command [1]. The default install_command for tox is `python -m pip install {opts} {packages}` [2]. This is almost equivalent to our overrides except that it uses pip as a module. 
This is important in this case because we don't install python2 regularly which means there is no `python` command except for in the created virtualenv where `python` == `python3`. The resulting behavior difference here would have been helpful to have when debugging the underlying issue in tempest. All that to say, I would suggest that those who have install_command set similarly to tempest remove this unnecessary configuration. Separately, setting allowlist_externals to `*` allows all global commands to run without complaint or an indication they are being used. Far fewer projects have this problem though [3]. Those that do should consider updating their config to explicitly list the commands they actually need from outside the virtualenv. Neither of these changes is critical, but I wanted people to be aware of this as we seem to have copied it all over the place. [0] https://review.opendev.org/c/openstack/tempest/+/865314 [1] https://codesearch.opendev.org/?q=install_command&i=nope&literal=nope&files=tox.ini&excludeFiles=&repos= [2] https://tox.wiki/en/latest/config.html#conf-install_command [3] https://codesearch.opendev.org/?q=allowlist_externals%20%3D%20%5C*&i=nope&literal=nope&files=tox.ini&excludeFiles=&repos= From gmann at ghanshyammann.com Mon Nov 28 23:22:16 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 28 Nov 2022 15:22:16 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2022 Nov 30 at 1600 UTC Message-ID: <184c08c60bf.c93f9178124766.3104235029583117277@ghanshyammann.com> Hello Everyone, The technical Committee's next weekly meeting is scheduled for 2022 Nov 30, at 1600 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Tuesday, Nov 29 at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From sbauza at redhat.com Tue Nov 29 08:32:15 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 29 Nov 2022 09:32:15 +0100 Subject: [nova][placement][tempest] Hold your rechecks In-Reply-To: References: Message-ID: Le lun. 28 nov. 2022 ? 14:28, Sylvain Bauza a ?crit : > Sorry folks, that's kind of an email I hate writing but let's be honest : > our gate is busted. > Until we figure out a correct path for resolution, I hereby ask you to > *NOT* recheck in order to not spill our precious CI resources for tests > that are certain to fail. > > Long story story, there are currently two problems : > #1 https://launchpad.net/bugs/1940425 nova-ovs-hybrid-plug and nova-next > jobs 100% fail due to a port remaining in down state. > #2 https://bugs.launchpad.net/nova/+bug/1960346 nova-lvm job 100% fails > due to a volume detach failure probably due to QEMU > > > Today's update : > #1 is currently investigated by the Neutron team meanwhile a patch [1] has > been proposed against Zuul to skip the failing tests. > Unfortunately, this patch [1] is unable to merge due to #2. > > Good news, kudos to the Neutron team which delivered a bugfix against the rootcause, which is always better than just skipping tests (and lacking then coverage). https://review.opendev.org/c/openstack/neutron/+/837780/18 Accordingly, [1] is no longer necessary and has been abandoned after a recheck to verify the job runs. > #2 has a Tempest patch that's being worked on [2] but the current state of > this patch is WIP. > We somehow need to have an agreement on the way forward during this > afternoon (UTC) to identify whether we can reasonably progress on [2] or > skip the failing tests on nova-lvm. 
> > Given [2] is hard to write, gmann proposed a patch [3] for skipping some nova-lvm tests. Reviews of [3] ongoing, should be hopefully merged today around noon UTC. Once [3] is merged, the gate should be unblocked. Again, an email will be sent once we progress on [3]. -S > Again, sorry about the bad news and I'll keep you informed. > -Sylvain > > [1] https://review.opendev.org/c/openstack/nova/+/865658/ > [2] https://review.opendev.org/c/openstack/tempest/+/842240 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue Nov 29 08:33:03 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 29 Nov 2022 09:33:03 +0100 Subject: [nova][placement][tempest] Hold your rechecks In-Reply-To: References: Message-ID: (early morning, needing a coffee apparently) Le mar. 29 nov. 2022 ? 09:32, Sylvain Bauza a ?crit : > > > Le lun. 28 nov. 2022 ? 14:28, Sylvain Bauza a ?crit : > >> Sorry folks, that's kind of an email I hate writing but let's be honest : >> our gate is busted. >> Until we figure out a correct path for resolution, I hereby ask you to >> *NOT* recheck in order to not spill our precious CI resources for tests >> that are certain to fail. >> >> Long story story, there are currently two problems : >> #1 https://launchpad.net/bugs/1940425 nova-ovs-hybrid-plug and nova-next >> jobs 100% fail due to a port remaining in down state. >> #2 https://bugs.launchpad.net/nova/+bug/1960346 nova-lvm job 100% fails >> due to a volume detach failure probably due to QEMU >> >> >> > Today's update : > > >> #1 is currently investigated by the Neutron team meanwhile a patch [1] >> has been proposed against Zuul to skip the failing tests. >> Unfortunately, this patch [1] is unable to merge due to #2. >> >> > Good news, kudos to the Neutron team which delivered a bugfix against the > rootcause, which is always better than just skipping tests (and lacking > then coverage). > https://review.opendev.org/c/openstack/neutron/+/837780/18 > > Accordingly, [1] is no longer necessary and has been abandoned after a > recheck to verify the job runs. > > > >> #2 has a Tempest patch that's being worked on [2] but the current state >> of this patch is WIP. >> We somehow need to have an agreement on the way forward during this >> afternoon (UTC) to identify whether we can reasonably progress on [2] or >> skip the failing tests on nova-lvm. >> >> > Given [2] is hard to write, gmann proposed a patch [3] for skipping some > nova-lvm tests. Reviews of [3] ongoing, should be hopefully merged today > around noon UTC. > > Once [3] is merged, the gate should be unblocked. > Again, an email will be sent once we progress on [3]. > -S > > >> Again, sorry about the bad news and I'll keep you informed. >> -Sylvain >> >> [1] https://review.opendev.org/c/openstack/nova/+/865658/ >> [2] https://review.opendev.org/c/openstack/tempest/+/842240 >> > [3] https://review.opendev.org/c/openstack/nova/+/865922 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jean-francois.taltavull at elca.ch Tue Nov 29 08:35:32 2022 From: jean-francois.taltavull at elca.ch (=?utf-8?B?VGFsdGF2dWxsIEplYW4tRnJhbsOnb2lz?=) Date: Tue, 29 Nov 2022 08:35:32 +0000 Subject: [openstack-ansible] Designate: role seems trying to update DNS server pools before syncing database In-Reply-To: References: <27b50913162d497192325a9d65b1bed0@elca.ch> Message-ID: Hi Dmitriy, I applied your patch manually to my OSA 23.2.0 environment and I deployed designate from scratch without any error on my Wallaby staging platform. I run the playbook a second time without any config change and it worked, and a third time with a small change in pools definition and it worked to. So, your patch looks good to me ? Thanks a lot ! JF > -----Original Message----- > From: Dmitriy Rabotyagov > Sent: vendredi, 25 novembre 2022 18:22 > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [openstack-ansible] Designate: role seems trying to update DNS > server pools before syncing database > > > > EXTERNAL MESSAGE - This email comes from outside ELCA companies. > > Hey, > > That looks like a totally valid bug and regression has been introduced in Wallaby. > I've just placed a patch that should cover this issue [1] and it would be awesome > if you could test it. > > [1] https://review.opendev.org/c/openstack/openstack-ansible- > os_designate/+/865701 > > ??, 25 ????. 2022 ?. ? 12:46, Taltavull Jean-Fran?ois > : > > > > Hello, > > > > During the first run, the playbook 'os-designate-install.yml' fails and the > 'designate-manage pool update' command produces the log line below: > > > > 'Nov 25 11:50:06 pp3controller1a-designate-container-53d945bb designate- > manage[2287]: 2022-11-25 11:50:06.518 2287 CRITICAL designate [designate- > manage - - - - -] Unhandled error: oslo_messaging.rpc.client.RemoteError: > Remote error: ProgrammingError (pymysql.err.ProgrammingError) (1146, "Table > 'designate.pools' doesn't exist")' > > > > Looking at the 'os_designate' role code shows that the handler ` Perform > Designate pools update` is flushed before tables are created in the 'designate' > database. > > > > O.S.: Ubuntu 20.04 > > OpenStack release: Wallaby > > OSA tag: 23.2.0 > > > > Regards, > > > > Jean-Francois > > From hemant.sonawane at itera.io Tue Nov 29 08:37:36 2022 From: hemant.sonawane at itera.io (Hemant Sonawane) Date: Tue, 29 Nov 2022 09:37:36 +0100 Subject: [cinder] volume_attachement entries are not getting deleted from DB In-Reply-To: References: Message-ID: Hello Sofia, Thank you for taking it into consideration. Do let me know if you have any questions and updates on the same. On Mon, 28 Nov 2022 at 18:12, Sofia Enriquez wrote: > Hi Hemant, > > Thanks for reporting this issue on the bug tracker > https://bugs.launchpad.net/cinder/+bug/1998083 > > I did a quick search and no problems with shelving operations have been > reported for at least the last two years.I'll bring this bug to the cinder > bug meeting this week. > > Thanks > Sofia > > On Fri, Nov 25, 2022 at 1:15 PM Hemant Sonawane > wrote: > >> Hi Rajat, >> It's not about deleting attachments entries but the normal operations >> from horizon or via cli does not work because of that. So it really needs >> to be fixed to perform resize, shelve unshelve operations. >> >> Here are the detailed attachment entries you can see for the shelved >> instance. 
>> >> >> >> +--------------------------------------+--------------------------------------+--------------------------+---------------------------------- >> *----+---------------+-----------------------------------------**??* >> >> *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> ??* >> >> *| id | volume_id >> | attached_host | instance_uuid | >> attach_status | connector ??* >> >> * >> >> | ??* >> >> >> *+--------------------------------------+--------------------------------------+--------------------------+--------------------------------------+---------------+-----------------------------------------??* >> >> *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> ??* >> >> *| 8daddacc-8fc8-4d2b-a738-d05deb20049f | >> 67ea3a39-78b8-4d04-a280-166acdc90b8a | nfv1compute43.nfv1.o2.cz >> | 9266a2d7-9721-4994-a6b5-6b3290862dc6 | >> attached | {"platform": "x86_64", "os_type": "linux??* >> >> *", "ip": "10.42.168.87", "host": "nfv1compute43.nfv1.o2.cz >> ", "multipath": false, "do_local_attach": >> false, "system uuid": "65917e4f-c8c4-a2af-ec11-fe353e13f4dd", "mountpoint": >> "/dev/vda"} | ??* >> >> *| d3278543-4920-42b7-b217-0858e986fcce | >> 67ea3a39-78b8-4d04-a280-166acdc90b8a | NULL | >> 9266a2d7-9721-4994-a6b5-6b3290862dc6 | reserved | NULL >> ??* >> >> * >> >> | ??* >> >> >> *+--------------------------------------+--------------------------------------+--------------------------+--------------------------------------+---------------+-----------------------------------------??* >> >> *--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ >> ??* >> >> *2 rows in set (0.00 sec) * >> >> >> for e.g if I would like to unshelve this instance it wont work as it has >> a duplicate entry in cinder db for the attachment. So i have to delete it >> manually from db or via cli >> >> *root at master01:/home/hemant# cinder --os-volume-api-version 3.27 >> attachment-list --all | grep 67ea3a39-78b8-4d04-a280-166acdc90b8a >> ??* >> >> *| 8daddacc-8fc8-4d2b-a738-d05deb20049f | >> 67ea3a39-78b8-4d04-a280-166acdc90b8a | attached | >> 9266a2d7-9721-4994-a6b5-6b3290862dc6 | >> ??* >> >> *| d3278543-4920-42b7-b217-0858e986fcce | >> 67ea3a39-78b8-4d04-a280-166acdc90b8a** | reserved | >> 9266a2d7-9721-4994-a6b5-6b3290862dc6 |* >> >> *cinder --os-volume-api-version 3.27 >> attachment-delete 8daddacc-8fc8-4d2b-a738-d05deb20049f* >> >> this is the only choice I have if I would like to unshelve vm. But this >> is not a good approach for production envs. I hope you understand me. >> Please feel free to ask me anything if you don't understand. >> >> >> >> On Fri, 25 Nov 2022 at 13:20, Rajat Dhasmana wrote: >> >>> Hi Hemant, >>> >>> If your final goal is to delete the attachment entries in the cinder DB, >>> we have attachment APIs to perform these tasks. The command useful for you >>> is attachment list[1] and attachment delete[2]. >>> Make sure you pass the right microversion i.e. 3.27 to be able to >>> execute these operations. 
>>> >>> Eg: >>> cinder --os-volume-api-version 3.27 attachment-list >>> >>> [1] >>> https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-attachment-list >>> [2] >>> https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-attachment-delete >>> >>> On Fri, Nov 25, 2022 at 5:44 PM Hemant Sonawane < >>> hemant.sonawane at itera.io> wrote: >>> >>>> Hello >>>> I am using wallaby release openstack and having issues with cinder >>>> volumes as once I try to delete, resize or unshelve the shelved vms the >>>> volume_attachement entries do not get deleted in cinder db and therefore >>>> the above mentioned operations fail every time. I have to delete these >>>> volume_attachement entries manually then it works. Is there any way to fix >>>> this issue ? >>>> >>>> nova-compute logs: >>>> >>>> cinderclient.exceptions.ClientException: Unable to update >>>> attachment.(Invalid volume: duplicate connectors detected on volume >>>> >>>> Help will be really appreciated Thanks ! >>>> -- >>>> Thanks and Regards, >>>> >>>> Hemant Sonawane >>>> >>>> >> >> -- >> Thanks and Regards, >> >> Hemant Sonawane >> >> > > -- > > Sof?a Enriquez > > she/her > > Software Engineer > > Red Hat PnT > > IRC: @enriquetaso > @RedHat Red Hat > Red Hat > > > > -- Thanks and Regards, Hemant Sonawane -------------- next part -------------- An HTML attachment was scrubbed... URL: From jake.yip at ardc.edu.au Tue Nov 29 09:31:50 2022 From: jake.yip at ardc.edu.au (Jake Yip) Date: Tue, 29 Nov 2022 20:31:50 +1100 Subject: [Magnum] ls /etc/cni/net.d/ is emty In-Reply-To: References: Message-ID: Hi, Is it possible to get to Yoga, and try FCOS35 and k8s v1.23? We are running that in Prod and it works well. If you have to use Xena, maybe try without containerd? Regards, Jake On 26/11/2022 1:56 am, Nguy?n H?u Kh?i wrote: > Hello guys. > I use Magnum on Xena and I custom k8s cluster by labels. But My cluster > is not ready and there is nothing in /etc/cni/net.d/ and my cluster said: > > container runtime network not ready: NetworkReady=false > reason:NetworkPluginNotReady message:Network plugin returns error: cni > plugin not initialized > > And this is my labels > > kube_tag=v1.21.8-rancher1,container_runtime=containerd,containerd_version=1.6.10,containerd_tarball_sha256=507f47716d7b932e58aa1dc7e2b3f2b8779ee9a2988aa46ad58e09e2e47063d8,calico_tag=v3.21.2,hyperkube_prefix=docker.io/rancher/ > > Note: I use Fedora Core OS 31 for images. > > Thank you. > > > Nguyen Huu Khoi From jake.yip at ardc.edu.au Tue Nov 29 09:36:33 2022 From: jake.yip at ardc.edu.au (Jake Yip) Date: Tue, 29 Nov 2022 20:36:33 +1100 Subject: [kolla-ansible][Yoga][Magnum] How to delete a cluster containing errors In-Reply-To: References: Message-ID: <963f2f34-baba-4ebc-74a9-f4dacef64f5c@ardc.edu.au> Hi, Can you see what resource it is failing at with `openstack stack resource list -n5 `? You can also abandon the stack with `openstack stack abandon`. That will leave stray resources lying around though. Regards, Jake On 29/11/2022 2:09 am, wodel youchi wrote: > Hi, > > I have a magnum cluster stack which contains errors in its constituents, > some of the VMs (minions) that belong to that cluster do longer exist. > When I try to delete the stack it fails, and I get > > DELETE aborted (Task delete from ResourceGroup "kube_minions" > [fddb3056-9b00-4665-b0d6-c3d3f176814b] Stack "testcluter01-puf45b6dxmrn" > [d10af7f2-6ecd-442b-b1f9-140b79e58d13] Timed out) > > Is there a way to force the deletion to proceed even with those errors? 
> > Regards. From eblock at nde.ag Tue Nov 29 10:10:21 2022 From: eblock at nde.ag (Eugen Block) Date: Tue, 29 Nov 2022 10:10:21 +0000 Subject: [neutron] neutron-db-manage multiple heads In-Reply-To: References: <20221125125752.Horde.zo4M3KjkRUtEQ5cGx0sKDjv@webmail.nde.ag> <20221125145145.Horde.XfDn1LvyK2AIN76ZlNGTjqZ@webmail.nde.ag> <20221125195656.Horde.HUgznZWx9ug640CD7yJQveQ@webmail.nde.ag> <20221128103115.Horde.sH0mgYCjfPt8QmSbs_7vzNm@webmail.nde.ag> <20221128110305.Horde.Tdxes4QBYWiWeWzOGCowoNe@webmail.nde.ag> Message-ID: <20221129101021.Horde.Z-jgboX2Cpra2WMNK87EzoD@webmail.nde.ag> Hi Rodolfo, thanks again for your assistance, I appreciate it! I managed to recreate the neutron database in our production cloud by merging the schema from a "native" victoria cloud and our data dump. The services came up successfully and some test networks + instances started successfully, dhcp is working, so all seems good now. Hopefully, this won't be such an issue during the next upgrade. Thanks! Eugen Zitat von Rodolfo Alonso Hernandez : > Use "neutron-db-manage history" to check what your alembic migration > current status is. > > On Mon, Nov 28, 2022 at 12:03 PM Eugen Block wrote: > >> How do I check what the latest applied migration file was? >> >> Zitat von Rodolfo Alonso Hernandez : >> >> > Yes, but you should also be sure what is the status of the DB schema. >> That >> > means to check what is the latest migration file applied and set that >> > revision ID on the "neutron.alembic_version" table. >> > >> > On Mon, Nov 28, 2022 at 11:31 AM Eugen Block wrote: >> > >> >> Hi, >> >> >> >> not really, no. I have no explanation how those files got there, to be >> >> honest. We're using openSUSE Leap (currently 15.2) and the respective >> >> repos from openSUSE. By the way, I only see those files on one of the >> >> control nodes, that's irritating me even more. >> >> But if those files are not known, maybe I should just delete them and >> >> the contract directories as well? Because during the next upgrade I'll >> >> probably have the same issue again. So if I see it correctly the two >> >> "contract" directories should be removed >> >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/contract >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract >> >> >> >> as well as this revision file: >> >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py >> >> >> >> Comparing with the "native" V installation (and the other control >> >> node) I should only keep two of these files: >> >> >> >> controller01:~ # ll >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/ >> >> insgesamt 16 >> >> -rw-r--r-- 1 root root 900 30. M?r 2021 633d74ebbc4b_.py <-- delete >> >> -rw-r--r-- 1 root root 1694 14. Nov 16:07 >> 63fd95af7dcd_conntrack_helper.py >> >> -rw-r--r-- 1 root root 900 30. M?r 2021 6c9eb0469914_.py <-- delete >> >> -rw-r--r-- 1 root root 1134 14. Nov 16:07 >> >> c613d0b82681_subnet_force_network_id.py >> >> drwxr-xr-x 2 root root 312 23. Nov 11:09 __pycache__ >> >> >> >> I believe that should clean it up. Then I'll import the merged neutron >> >> database and run the upgrade commands again. Does that make sense? >> >> >> >> Thanks! >> >> Eugen >> >> >> >> Zitat von Rodolfo Alonso Hernandez : >> >> >> >> > Hi Eugen: >> >> > >> >> > Please check the code you have. 
Those revisions (633d74ebbc4b, >> >> > bebe95aae4d4) do not exist in the Neutron repository. File [1] (or >> >> > something similar with the same prefix) does not exist. Are you using >> a >> >> > customized Neutron repository? >> >> > >> >> > Regards. >> >> > >> >> > >> >> >> [1]/usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py >> >> > >> >> > On Fri, Nov 25, 2022 at 8:57 PM Eugen Block wrote: >> >> > >> >> >> Hi, >> >> >> >> >> >> I believe they are neutron revisions, here's the output from >> >> >> yesterday's neutron-db-manage attempt: >> >> >> >> >> >> ---snip--- >> >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> >> [alembic.runtime.migration] Context impl MySQLImpl. >> >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> >> [alembic.runtime.migration] Will assume non-transactional DDL. >> >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> >> [alembic.runtime.migration] Context impl MySQLImpl. >> >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> >> [alembic.runtime.migration] Will assume non-transactional DDL. >> >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> >> [alembic.runtime.migration] Running upgrade 5c85685d616d -> >> c43a0ddb6a03 >> >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> >> [alembic.runtime.migration] Running upgrade c43a0ddb6a03 -> >> b5344a66e818 >> >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> >> [alembic.runtime.migration] Running upgrade b5344a66e818 -> >> bebe95aae4d4 >> >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> >> [alembic.runtime.migration] Running upgrade c613d0b82681 -> >> 6c9eb0469914 >> >> >> Nov 23 12:51:51 controller01 neutron-db-manage[25913]: INFO >> >> >> [alembic.runtime.migration] Running upgrade 6c9eb0469914 -> >> 633d74ebbc4b >> >> >> Nov 23 12:51:52 controller01 neutron-db-manage[25913]: Running >> upgrade >> >> >> for neutron ... >> >> >> Nov 23 12:51:52 controller01 neutron-db-manage[25913]: OK >> >> >> ---snip--- >> >> >> >> >> >> And here's where they are located, apparently from train version: >> >> >> >> >> >> ---snip--- >> >> >> controller01:~ # grep -r 633d74ebbc4b >> /usr/lib/python3.6/site-packages/ >> >> >> ?bereinstimmungen in Bin?rdatei >> >> >> >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/__pycache__/633d74ebbc4b_.cpython-36.pyc >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:Revision >> >> >> ID: >> >> >> 633d74ebbc4b >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py:revision >> >> >> = >> >> >> '633d74ebbc4b' >> >> >> ---snip--- >> >> >> >> >> >> Next week we'll try it with merging fresh victoria schema with our >> >> >> production data, then run the upgrade command again. >> >> >> >> >> >> Thanks, >> >> >> Eugen >> >> >> Zitat von Rodolfo Alonso Hernandez : >> >> >> >> >> >> > Hi Eugen: >> >> >> > >> >> >> > I don't know how it is possible that you have 3 registers in this >> >> table. >> >> >> > And the first two are not IDs of any Neutron revision. 
I would >> suggest >> >> >> you >> >> >> > to (1) check the DB schema deployed against a fresh deployed system >> >> (in >> >> >> > Victoria version) and (2) fix this table to point to the correct >> >> revision >> >> >> > numbers. >> >> >> > >> >> >> > Regards. >> >> >> > >> >> >> > >> >> >> > On Fri, Nov 25, 2022 at 3:51 PM Eugen Block wrote: >> >> >> > >> >> >> >> Hi, >> >> >> >> >> >> >> >> thanks for your quick response. >> >> >> >> >> >> >> >> > In Neutron we don't support contract operations since Newton. >> >> >> >> > >> >> >> >> > If you are in Victoria and you correctly finished the DB >> migration, >> >> >> your >> >> >> >> > HEADs should be: >> >> >> >> > * contract: 5c85685d616d (from Newton) >> >> >> >> > * expand: I38991de2b4 (from the last DB change in Victoria, >> >> >> >> > source_and_destination_ip_prefix_neutron_metering_rule) >> >> >> >> >> >> >> >> That explains why I saw the newton revision in a new victoria >> cluster >> >> >> :-) >> >> >> >> >> >> >> >> > Please check what you have in the DB table >> neutron.alembic_version. >> >> >> The >> >> >> >> > first register should be the expand number, the second the >> contract >> >> >> one. >> >> >> >> If >> >> >> >> > not, update them with the ones I've provided. >> >> >> >> >> >> >> >> The table alembic_versions contains the three versions I provided >> at >> >> >> >> the end of my email: >> >> >> >> >> >> >> >> MariaDB [neutron]> select * from alembic_version; >> >> >> >> +--------------+ >> >> >> >> | version_num | >> >> >> >> +--------------+ >> >> >> >> | 633d74ebbc4b | >> >> >> >> | bebe95aae4d4 | >> >> >> >> | I38991de2b4 | >> >> >> >> +--------------+ >> >> >> >> >> >> >> >> I already tried to manipulate the table so I would only have those >> >> two >> >> >> >> versions you already mentioned, but then the upgrade --expand >> command >> >> >> >> alternates the database again with the mentioned error message >> >> >> >> ("Multiple heads are present"). >> >> >> >> >> >> >> >> > Before executing the >> >> >> >> > migration tool again, be sure the DB schema matches the latest >> >> >> migration >> >> >> >> > patch for your version. You can deploy a VM with devstack and >> run >> >> this >> >> >> >> > version. >> >> >> >> >> >> >> >> That's what I wanted to try next, export only the db schema (no >> data) >> >> >> >> from a working victoria neutron database, then export only data >> from >> >> >> >> our production db and merge those, then import that into the >> >> >> >> production and try to run upgrade --expand and --contract again. >> But >> >> I >> >> >> >> didn't want to fiddle around too much in the production, that's >> why I >> >> >> >> wanted to ask for your guidance first. >> >> >> >> But IIUC even if I changed the table alembic_versions again and >> >> import >> >> >> >> the merged db, wouldn't upgrade --expand somehow try to alternate >> the >> >> >> >> table again? I don't see where the train revision comes from >> exactly, >> >> >> >> could you clarify, please? It seems like I always get back to >> square >> >> >> >> one when running the --expand command. >> >> >> >> >> >> >> >> Thanks! >> >> >> >> Eugen >> >> >> >> >> >> >> >> Zitat von Rodolfo Alonso Hernandez : >> >> >> >> >> >> >> >> > Hi Eugen: >> >> >> >> > >> >> >> >> > In Neutron we don't support contract operations since Newton. 
>> >> >> >> > >> >> >> >> > If you are in Victoria and you correctly finished the DB >> migration, >> >> >> your >> >> >> >> > HEADs should be: >> >> >> >> > * contract: 5c85685d616d (from Newton) >> >> >> >> > * expand: I38991de2b4 (from the last DB change in Victoria, >> >> >> >> > source_and_destination_ip_prefix_neutron_metering_rule) >> >> >> >> > >> >> >> >> > Please check what you have in the DB table >> neutron.alembic_version. >> >> >> The >> >> >> >> > first register should be the expand number, the second the >> contract >> >> >> one. >> >> >> >> If >> >> >> >> > not, update them with the ones I've provided. Before executing >> the >> >> >> >> > migration tool again, be sure the DB schema matches the latest >> >> >> migration >> >> >> >> > patch for your version. You can deploy a VM with devstack and >> run >> >> this >> >> >> >> > version. >> >> >> >> > >> >> >> >> > Regards. >> >> >> >> > >> >> >> >> > >> >> >> >> > On Fri, Nov 25, 2022 at 1:58 PM Eugen Block >> wrote: >> >> >> >> > >> >> >> >> >> Hi *, >> >> >> >> >> >> >> >> >> >> I'd like to ask you for advice on how to clean up my neutron >> db. >> >> At >> >> >> >> >> some point (which I don't know exactly, probably train) my >> neutron >> >> >> >> >> database got inconsistent, apparently one of the upgrades did >> not >> >> go >> >> >> >> >> as planned. The interesting thing is that the database still >> >> works, I >> >> >> >> >> just upgraded from ussuri to victoria where that issue popped >> up >> >> >> again >> >> >> >> >> during 'neutron-db-manage upgrade --expand', I'll add the >> >> information >> >> >> >> >> at the end of this email. Apparently, I have multiple heads, >> and >> >> one >> >> >> >> >> of them is from train, it seems as if I never ran --contract >> (or >> >> it >> >> >> >> >> failed and I didn't notice). >> >> >> >> >> Just some additional information what I did with this database: >> >> this >> >> >> >> >> cloud started out as a test environment with a single control >> node >> >> >> and >> >> >> >> >> then became a production environment. About two and a half >> years >> >> ago >> >> >> >> >> we decided to reinstall this cloud with version ussuri and >> import >> >> the >> >> >> >> >> databases. I had a virtual machine in which I upgraded the >> >> database >> >> >> >> >> dump from production to the latest versions at that time. That >> all >> >> >> >> >> worked quite well, I only didn't notice that something was >> >> missing. >> >> >> >> >> Now that I finished the U --> V upgrade I want to fix this >> >> >> >> >> inconsistency, I just have no idea how to do it. As I'm not >> sure >> >> how >> >> >> >> >> all the neutron-db-manage commands work exactly I'd like to ask >> >> for >> >> >> >> >> some guidance. For example, could the "stamp" command possibly >> >> help? >> >> >> >> >> Or how else can I get rid of the train head and/or how to get >> the >> >> >> >> >> train revision to "contract" so I can finish the upgrade and >> >> contract >> >> >> >> >> the victoria revision? I can paste the whole neutron-db >> history if >> >> >> >> >> necessary (neutron-db-manage history), please let me know what >> >> >> >> >> information would be required to get to the bottom of this. >> >> >> >> >> Any help is greatly appreciated! >> >> >> >> >> >> >> >> >> >> Thanks! >> >> >> >> >> Eugen >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> ---snip--- >> >> >> >> >> controller01:~ # neutron-db-manage upgrade --expand >> >> >> >> >> [...] 
>> >> >> >> >> alembic.script.revision.MultipleHeads: Multiple heads are >> present >> >> for >> >> >> >> >> given argument 'expand at head'; 633d74ebbc4b, I38991de2b4 >> >> >> >> >> >> >> >> >> >> controller01:~ # neutron-db-manage current --verbose >> >> >> >> >> Running current for neutron ... >> >> >> >> >> INFO [alembic.runtime.migration] Context impl MySQLImpl. >> >> >> >> >> INFO [alembic.runtime.migration] Will assume non-transactional >> >> DDL. >> >> >> >> >> Current revision(s) for >> >> mysql+pymysql://neutron:XXXXX at controller.fqdn >> >> >> >> >> /neutron: >> >> >> >> >> Rev: bebe95aae4d4 (head) >> >> >> >> >> Parent: b5344a66e818 >> >> >> >> >> Branch names: contract >> >> >> >> >> Path: >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/ussuri/contract/bebe95aae4d4_.py >> >> >> >> >> >> >> >> >> >> Rev: 633d74ebbc4b (head) >> >> >> >> >> Parent: 6c9eb0469914 >> >> >> >> >> Branch names: expand >> >> >> >> >> Path: >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/train/expand/633d74ebbc4b_.py >> >> >> >> >> >> >> >> >> >> Rev: I38991de2b4 (head) >> >> >> >> >> Parent: 49d8622c5221 >> >> >> >> >> Branch names: expand >> >> >> >> >> Path: >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> /usr/lib/python3.6/site-packages/neutron/db/migration/alembic_migrations/versions/victoria/expand/I38991de2b4_source_and_destination_ip_prefix_neutron_metering_rule.py >> >> >> >> >> >> >> >> >> >> OK >> >> >> >> >> ---snip--- >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> From noonedeadpunk at gmail.com Tue Nov 29 10:11:04 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Tue, 29 Nov 2022 11:11:04 +0100 Subject: [openstack-ansible] Designate: role seems trying to update DNS server pools before syncing database In-Reply-To: References: <27b50913162d497192325a9d65b1bed0@elca.ch> Message-ID: Awesome! Thanks for taking time and effort to test it out, much appreciated! ??, 29 ????. 2022 ?., 09:35 Taltavull Jean-Fran?ois < jean-francois.taltavull at elca.ch>: > Hi Dmitriy, > > I applied your patch manually to my OSA 23.2.0 environment and I deployed > designate from scratch without any error on my Wallaby staging platform. > I run the playbook a second time without any config change and it worked, > and a third time with a small change in pools definition and it worked to. > > So, your patch looks good to me ? > > Thanks a lot ! > > JF > > > -----Original Message----- > > From: Dmitriy Rabotyagov > > Sent: vendredi, 25 novembre 2022 18:22 > > Cc: openstack-discuss at lists.openstack.org > > Subject: Re: [openstack-ansible] Designate: role seems trying to update > DNS > > server pools before syncing database > > > > > > > > EXTERNAL MESSAGE - This email comes from outside ELCA companies. > > > > Hey, > > > > That looks like a totally valid bug and regression has been introduced > in Wallaby. > > I've just placed a patch that should cover this issue [1] and it would > be awesome > > if you could test it. > > > > [1] https://review.opendev.org/c/openstack/openstack-ansible- > > os_designate/+/865701 > > > > ??, 25 ????. 2022 ?. ? 
12:46, Taltavull Jean-Fran?ois > > : > > > > > > Hello, > > > > > > During the first run, the playbook 'os-designate-install.yml' fails > and the > > 'designate-manage pool update' command produces the log line below: > > > > > > 'Nov 25 11:50:06 pp3controller1a-designate-container-53d945bb > designate- > > manage[2287]: 2022-11-25 11:50:06.518 2287 CRITICAL designate [designate- > > manage - - - - -] Unhandled error: oslo_messaging.rpc.client.RemoteError: > > Remote error: ProgrammingError (pymysql.err.ProgrammingError) (1146, > "Table > > 'designate.pools' doesn't exist")' > > > > > > Looking at the 'os_designate' role code shows that the handler ` > Perform > > Designate pools update` is flushed before tables are created in the > 'designate' > > database. > > > > > > O.S.: Ubuntu 20.04 > > > OpenStack release: Wallaby > > > OSA tag: 23.2.0 > > > > > > Regards, > > > > > > Jean-Francois > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wodel.youchi at gmail.com Tue Nov 29 10:32:14 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 29 Nov 2022 11:32:14 +0100 Subject: [Kolla-ansible][Neutron] VMs not getting public IPs if attached directly to public subnet Message-ID: Hi, We have an HCI deployment with 3 controllers and 9 compute/storage nodes. Two of the controllers have the role of neutron server. The platform uses two bonded interfaces : bond1 : is used for : *neutron_external_interface* bond0 : with many vlans on top of it to segregate the rest of the networks : - bond0 : vlan natif used for nodes deployment (dhcp, tftp, pxeboot) - bond0.10 : vlan 10 ceph public - bond0.20 : vlan 20 ceph cluster - bond0.30 : vlan 30 API - bond0.40 : vlan 40 tunnel * - bond0.50 : vlan 50 Public network, here are the public IPs of the 03 controllers, the public horizon VIP interface is created here.* In our configuration we have *"enable_neutron_provider_networks = yes"*, which means that an instance can have a public IP directly without using a virtual-router + NAT. But it does not work. If we create and instance with a private network, then we attach to it a floating IP, the VM is reachable from the Internet, but if we attach the VM directly to the public network, it does not get an IP address from the public pool, we think it's a dhcp problem but we could not find the source, we think it's the *Vlan part.* The controllers are in Vlan 50, if we create a virtual-router it gets its public IP without any problem. But if we are not mistaken, if an instance is plugged directly into the public network, it uses bond1 to send its dhcp requests, but since this interface is not in vlan 50, the requests don't get to the controllers, is this right? If yes, is there a solution? can we use bond1.50 as an interface for kolla's *neutron_external_interface * instead? Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue Nov 29 10:58:17 2022 From: eblock at nde.ag (Eugen Block) Date: Tue, 29 Nov 2022 10:58:17 +0000 Subject: [Kolla-ansible][Neutron] VMs not getting public IPs if attached directly to public subnet In-Reply-To: Message-ID: <20221129105817.Horde.4NBFCNBCVxFNmwRDcHzcr1h@webmail.nde.ag> Hi, this question has been asked multiple times, you should be able to find a couple of threads. We use config-drive for provider networks to inject the metadata (ip, gateway, etc.) into the instances. 
Regards, Eugen Zitat von wodel youchi : > Hi, > > We have an HCI deployment with 3 controllers and 9 compute/storage nodes. > Two of the controllers have the role of neutron server. > The platform uses two bonded interfaces : > bond1 : is used for : *neutron_external_interface* > > bond0 : with many vlans on top of it to segregate the rest of the networks : > - bond0 : vlan natif used for nodes deployment (dhcp, tftp, pxeboot) > - bond0.10 : vlan 10 ceph public > - bond0.20 : vlan 20 ceph cluster > - bond0.30 : vlan 30 API > - bond0.40 : vlan 40 tunnel > * - bond0.50 : vlan 50 Public network, here are the public IPs of the > 03 controllers, the public horizon VIP interface is created here.* > > In our configuration we have *"enable_neutron_provider_networks = yes"*, > which means that an instance can have a public IP directly without using a > virtual-router + NAT. But it does not work. > > If we create and instance with a private network, then we attach to it a > floating IP, the VM is reachable from the Internet, but if we attach the VM > directly to the public network, it does not get an IP address from the > public pool, we think it's a dhcp problem but we could not find the source, > we think it's the *Vlan part.* > > The controllers are in Vlan 50, if we create a virtual-router it gets its > public IP without any problem. But if we are not mistaken, if an instance > is plugged directly into the public network, it uses bond1 to send its dhcp > requests, but since this interface is not in vlan 50, the requests don't > get to the controllers, is this right? If yes, is there a solution? can we > use bond1.50 as an interface for kolla's *neutron_external_interface * > instead? > > > > Regards. From nguyenhuukhoinw at gmail.com Tue Nov 29 11:48:27 2022 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Tue, 29 Nov 2022 18:48:27 +0700 Subject: [Magnum] ls /etc/cni/net.d/ is emty In-Reply-To: References: Message-ID: I run with the default config from magnum then it is ok. I just want to new containerd version. I will tell you if I resolved it. Nguyen Huu Khoi On Tue, Nov 29, 2022 at 4:31 PM Jake Yip wrote: > Hi, > > Is it possible to get to Yoga, and try FCOS35 and k8s v1.23? > > We are running that in Prod and it works well. > > If you have to use Xena, maybe try without containerd? > > Regards, > Jake > > On 26/11/2022 1:56 am, Nguy?n H?u Kh?i wrote: > > Hello guys. > > I use Magnum on Xena and I custom k8s cluster by labels. But My cluster > > is not ready and there is nothing in /etc/cni/net.d/ and my cluster said: > > > > container runtime network not ready: NetworkReady=false > > reason:NetworkPluginNotReady message:Network plugin returns error: cni > > plugin not initialized > > > > And this is my labels > > > > > kube_tag=v1.21.8-rancher1,container_runtime=containerd,containerd_version=1.6.10,containerd_tarball_sha256=507f47716d7b932e58aa1dc7e2b3f2b8779ee9a2988aa46ad58e09e2e47063d8,calico_tag=v3.21.2,hyperkube_prefix= > docker.io/rancher/ > > > > Note: I use Fedora Core OS 31 for images. > > > > Thank you. > > > > > > Nguyen Huu Khoi > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From wodel.youchi at gmail.com Tue Nov 29 12:07:28 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 29 Nov 2022 13:07:28 +0100 Subject: [Kolla-ansible][Neutron] VMs not getting public IPs if attached directly to public subnet In-Reply-To: <20221129105817.Horde.4NBFCNBCVxFNmwRDcHzcr1h@webmail.nde.ag> References: <20221129105817.Horde.4NBFCNBCVxFNmwRDcHzcr1h@webmail.nde.ag> Message-ID: Hi, Thanks for the reply. Is the analysis of the problem correct? We tried this, we created an instance with an interface in the public network, the interface did not get initialized, then we did : 1 - fix a public IP address on the interface : the instance did not connect to the internet. 2 - create a vlan interface (vlan 50) with a public ip : the instance did not connect to the internet. it seems that the analysis is wrong or we are missing something!!!? Regards. Le mar. 29 nov. 2022 ? 12:02, Eugen Block a ?crit : > Hi, > > this question has been asked multiple times, you should be able to > find a couple of threads. We use config-drive for provider networks to > inject the metadata (ip, gateway, etc.) into the instances. > > Regards, > Eugen > > Zitat von wodel youchi : > > > Hi, > > > > We have an HCI deployment with 3 controllers and 9 compute/storage nodes. > > Two of the controllers have the role of neutron server. > > The platform uses two bonded interfaces : > > bond1 : is used for : *neutron_external_interface* > > > > bond0 : with many vlans on top of it to segregate the rest of the > networks : > > - bond0 : vlan natif used for nodes deployment (dhcp, tftp, pxeboot) > > - bond0.10 : vlan 10 ceph public > > - bond0.20 : vlan 20 ceph cluster > > - bond0.30 : vlan 30 API > > - bond0.40 : vlan 40 tunnel > > * - bond0.50 : vlan 50 Public network, here are the public IPs of the > > 03 controllers, the public horizon VIP interface is created here.* > > > > In our configuration we have *"enable_neutron_provider_networks = yes"*, > > which means that an instance can have a public IP directly without using > a > > virtual-router + NAT. But it does not work. > > > > If we create and instance with a private network, then we attach to it a > > floating IP, the VM is reachable from the Internet, but if we attach the > VM > > directly to the public network, it does not get an IP address from the > > public pool, we think it's a dhcp problem but we could not find the > source, > > we think it's the *Vlan part.* > > > > The controllers are in Vlan 50, if we create a virtual-router it gets its > > public IP without any problem. But if we are not mistaken, if an instance > > is plugged directly into the public network, it uses bond1 to send its > dhcp > > requests, but since this interface is not in vlan 50, the requests don't > > get to the controllers, is this right? If yes, is there a solution? can > we > > use bond1.50 as an interface for kolla's *neutron_external_interface * > > instead? > > > > > > > > Regards. > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue Nov 29 15:16:42 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 29 Nov 2022 16:16:42 +0100 Subject: [nova][placement][tempest] Hold your rechecks In-Reply-To: References: Message-ID: Le mar. 29 nov. 2022 ? 09:33, Sylvain Bauza a ?crit : > (early morning, needing a coffee apparently) > > Le mar. 29 nov. 2022 ? 09:32, Sylvain Bauza a ?crit : > >> >> >> Le lun. 28 nov. 2022 ? 
14:28, Sylvain Bauza a ?crit : >> >>> Sorry folks, that's kind of an email I hate writing but let's be honest >>> : our gate is busted. >>> Until we figure out a correct path for resolution, I hereby ask you to >>> *NOT* recheck in order to not spill our precious CI resources for tests >>> that are certain to fail. >>> >>> Long story story, there are currently two problems : >>> #1 https://launchpad.net/bugs/1940425 nova-ovs-hybrid-plug and >>> nova-next jobs 100% fail due to a port remaining in down state. >>> #2 https://bugs.launchpad.net/nova/+bug/1960346 nova-lvm job 100% fails >>> due to a volume detach failure probably due to QEMU >>> >>> >>> >> Today's update : >> >> >>> #1 is currently investigated by the Neutron team meanwhile a patch [1] >>> has been proposed against Zuul to skip the failing tests. >>> Unfortunately, this patch [1] is unable to merge due to #2. >>> >>> >> Good news, kudos to the Neutron team which delivered a bugfix against the >> rootcause, which is always better than just skipping tests (and lacking >> then coverage). >> https://review.opendev.org/c/openstack/neutron/+/837780/18 >> >> Accordingly, [1] is no longer necessary and has been abandoned after a >> recheck to verify the job runs. >> >> >> >>> #2 has a Tempest patch that's being worked on [2] but the current state >>> of this patch is WIP. >>> We somehow need to have an agreement on the way forward during this >>> afternoon (UTC) to identify whether we can reasonably progress on [2] or >>> skip the failing tests on nova-lvm. >>> >>> >> Given [2] is hard to write, gmann proposed a patch [3] for skipping some >> nova-lvm tests. Reviews of [3] ongoing, should be hopefully merged today >> around noon UTC. >> >> Once [3] is merged, the gate should be unblocked. >> Again, an email will be sent once we progress on [3]. >> > [3] is merged, so now the gate is back \o/ Thanks all folks who helped on those issues ! > -S >> >> >>> Again, sorry about the bad news and I'll keep you informed. >>> -Sylvain >>> >>> [1] https://review.opendev.org/c/openstack/nova/+/865658/ >>> [2] https://review.opendev.org/c/openstack/tempest/+/842240 >>> >> [3] https://review.opendev.org/c/openstack/nova/+/865922 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Tue Nov 29 21:03:37 2022 From: hjensas at redhat.com (Harald Jensas) Date: Tue, 29 Nov 2022 22:03:37 +0100 Subject: [OVB - openstack-virtual-baremetal] - Douglas Viroel and Chandan Kumar as core In-Reply-To: <482ece1c-c6bc-6e41-02c2-fe2f6dfab3aa@nemebean.com> References: <482ece1c-c6bc-6e41-02c2-fe2f6dfab3aa@nemebean.com> Message-ID: 2x +1 and no objections. I have now added Chandan and Douglas as core reviewers. On 11/28/22 21:11, Ben Nemec wrote: > Although I'm not sure my vote should count at this point since I haven't > been keeping up with reviews myself, +1. > > On 11/24/22 03:31, Harald Jensas wrote: >> Hi, >> >> After discussions with Douglas, Chandan and Ronelle Landy I would like >> to suggest adding Douglas and Chandan to the OVB core team. The >> repository have very little activity, i.e there is not a lot of review >> history to base the decision on. I did work with both individuals when >> onboarding new clouds to run TripleO CI jobs utilizing OVB, they have >> a good understanding of how the thing works. >> >> If there are no objections, I will add them to them as core reviewers >> next week. 
>> >> >> Regards, >> Harald >> >> > From tobias at caktusgroup.com Wed Nov 30 00:54:36 2022 From: tobias at caktusgroup.com (Tobias McNulty) Date: Tue, 29 Nov 2022 19:54:36 -0500 Subject: [Kolla-ansible][Neutron] VMs not getting public IPs if attached directly to public subnet In-Reply-To: References: <20221129105817.Horde.4NBFCNBCVxFNmwRDcHzcr1h@webmail.nde.ag> Message-ID: I asked a similar question recently and included a summary of my conclusions in the last post: https://lists.openstack.org/pipermail/openstack-discuss/2022-November/031230.html If you must create an instance in the public subnet for some reason (rather than assign a floating IP), the workaround I found was to disable port security entirely (I do not recommend this). Tobias McNulty Chief Executive Officer www.caktusgroup.com On Tue, Nov 29, 2022 at 7:08 AM wodel youchi wrote: > > Hi, > > Thanks for the reply. > > Is the analysis of the problem correct? > > We tried this, we created an instance with an interface in the public network, the interface did not get initialized, then we did : > 1 - fix a public IP address on the interface : the instance did not connect to the internet. > 2 - create a vlan interface (vlan 50) with a public ip : the instance did not connect to the internet. > > it seems that the analysis is wrong or we are missing something!!!? > > Regards. > > Le mar. 29 nov. 2022 ? 12:02, Eugen Block a ?crit : >> >> Hi, >> >> this question has been asked multiple times, you should be able to >> find a couple of threads. We use config-drive for provider networks to >> inject the metadata (ip, gateway, etc.) into the instances. >> >> Regards, >> Eugen >> >> Zitat von wodel youchi : >> >> > Hi, >> > >> > We have an HCI deployment with 3 controllers and 9 compute/storage nodes. >> > Two of the controllers have the role of neutron server. >> > The platform uses two bonded interfaces : >> > bond1 : is used for : *neutron_external_interface* >> > >> > bond0 : with many vlans on top of it to segregate the rest of the networks : >> > - bond0 : vlan natif used for nodes deployment (dhcp, tftp, pxeboot) >> > - bond0.10 : vlan 10 ceph public >> > - bond0.20 : vlan 20 ceph cluster >> > - bond0.30 : vlan 30 API >> > - bond0.40 : vlan 40 tunnel >> > * - bond0.50 : vlan 50 Public network, here are the public IPs of the >> > 03 controllers, the public horizon VIP interface is created here.* >> > >> > In our configuration we have *"enable_neutron_provider_networks = yes"*, >> > which means that an instance can have a public IP directly without using a >> > virtual-router + NAT. But it does not work. >> > >> > If we create and instance with a private network, then we attach to it a >> > floating IP, the VM is reachable from the Internet, but if we attach the VM >> > directly to the public network, it does not get an IP address from the >> > public pool, we think it's a dhcp problem but we could not find the source, >> > we think it's the *Vlan part.* >> > >> > The controllers are in Vlan 50, if we create a virtual-router it gets its >> > public IP without any problem. But if we are not mistaken, if an instance >> > is plugged directly into the public network, it uses bond1 to send its dhcp >> > requests, but since this interface is not in vlan 50, the requests don't >> > get to the controllers, is this right? If yes, is there a solution? can we >> > use bond1.50 as an interface for kolla's *neutron_external_interface * >> > instead? >> > >> > >> > >> > Regards. 
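For anyone who still needs the port-security workaround mentioned above, a minimal sketch (instance and port IDs are placeholders) would be along these lines; be aware that dropping the security groups and the anti-spoofing rules leaves the port completely unfiltered on the provider VLAN:

# find the port attached to the instance
openstack port list --server <instance-id>

# remove its security groups and disable port security on it
openstack port set --no-security-group --disable-port-security <port-id>

It can be reverted later with openstack port set --enable-port-security on the same port, plus re-adding a security group.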
>> >> >> >> From gmann at ghanshyammann.com Wed Nov 30 02:40:29 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 29 Nov 2022 18:40:29 -0800 Subject: [all][tc] Technical Committee next weekly meeting on 2022 Nov 30 at 1600 UTC In-Reply-To: <184c08c60bf.c93f9178124766.3104235029583117277@ghanshyammann.com> References: <184c08c60bf.c93f9178124766.3104235029583117277@ghanshyammann.com> Message-ID: <184c6683770.c347f43a222736.1600172522953557690@ghanshyammann.com> Hello Everyone, Below is the agenda for the TC meeting scheduled on Nov 30 at 1600 UTC. Location:' IRC #openstack-tc Details: https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting * Roll call * Follow up on past action items ** (tonyb) During the PTG there was discussions about altering the election timing and clarifying the charter. *** Please move forward on updating the charter. For reference 865353 and 862387 show the current timing. *** Please move forward on updating the election team/SIG/sub-commitee. NOTE: I (tonyb) volunteer to be the "head official" *** (gmann) TC Charter change: https://review.opendev.org/c/openstack/governance/+/865367 * Gate health check * 2023.1 TC tracker checks: ** https://etherpad.opendev.org/p/tc-2023.1-tracker *FIPS testing on ubuntu paid subscription ** https://review.opendev.org/c/openstack/project-config/+/861457 *Adjutant situation (not active) ** Last change merged Oct 26, 2021 (more than 1 year back) ** Gate is broken ** https://review.opendev.org/c/openstack/governance/+/849153 * Recurring tasks check ** Bare 'recheck' state *** https://etherpad.opendev.org/p/recheck-weekly-summary * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 28 Nov 2022 15:22:16 -0800 Ghanshyam Mann wrote --- > Hello Everyone, > > The technical Committee's next weekly meeting is scheduled for 2022 Nov 30, at 1600 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Tuesday, Nov 29 at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From eblock at nde.ag Wed Nov 30 08:27:12 2022 From: eblock at nde.ag (Eugen Block) Date: Wed, 30 Nov 2022 08:27:12 +0000 Subject: Unable to access Internet from an instance and accessing instance using floating-point IPs from external network In-Reply-To: References: <20221122094959.Horde._DNW37_4CRcsBFAHQUP7ZG_@webmail.nde.ag> Message-ID: <20221130082712.Horde.kAKVcEDnoqnje6ZTI7PfsEB@webmail.nde.ag> So you did modify the default security-group, ports 8000 and 8080 are not open by default. Anyway, can you please clarify what doesn't work exactly? Does the instance have an IP in the public network but the router is not pingeable (but that is not necessarily an issue) and you can't access it via which protocols? Does SSH work? Is the access blocked by a http_proxy? Zitat von vincent lee : > To add on to my previous email, I have attached an image of my security > group as shown below. > > Best regards, > Vincent > > On Tue, Nov 22, 2022 at 3:58 AM Eugen Block wrote: > >> Just one more thing to check, did you edit the security-group rules to >> allow access to the outside world? >> >> Zitat von Adivya Singh : >> >> > it should be missing a default route most of the time. 
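A quick way to run that check from the guest, and then from the router namespace (the router ID and addresses below are placeholders, and the qrouter namespace only exists on the node running the L3 agent for that router):

# inside the guest: is there a default route at all?
ip route
# expect something like: default via 10.0.0.1 dev eth0

# on the network node: look at the router namespace and its NAT rules
ip netns list
ip netns exec qrouter-<router-id> ip route
ip netns exec qrouter-<router-id> iptables -t nat -S | grep -E 'SNAT|DNAT'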
>> > or check IP tables on router namespace the DNAT and SNAT are working >> > properly >> > >> > >> > >> > On Tue, Nov 22, 2022 at 9:40 AM Tobias McNulty >> > wrote: >> > >> >> On Mon, Nov 21, 2022 at 7:39 PM vincent lee >> >> wrote: >> >> >> >>> After reviewing the post you shared, I believe that we have the correct >> >>> subnet. Besides, we did not modify anything related to the cloud-init >> for >> >>> openstack. >> >>> >> >> >> >> I didn't either. But I found it's a good test of the network! If you are >> >> using an image that doesn't rely on it you might not notice (but I >> >> would not recommend that). >> >> >> >> >> >>> After launching the instances, we are able to ping between the >> instances >> >>> of the same subnet. However, we are not able to receive any internet >> >>> connection within those instances. From the instance, we are able to >> ping >> >>> the router IP addresses 10.42.0.56 and 10.0.0.1. >> >>> >> >> >> >> To make sure I understand: >> >> - 10.42.0.56 is the IP of the router external to OpenStack that provides >> >> internet access >> >> - This router is tested and working for devices outside of OpenStack >> >> - OpenStack compute instances can ping this router >> >> - OpenStack compute instances cannot reach the internet >> >> >> >> If that is correct, it does not sound like an OpenStack issue >> necessarily, >> >> but perhaps a missing default route on your compute instances. I would >> >> check that DHCP is enabled on the internal subnet and that it's >> providing >> >> everything necessary for an internet connection to the instances. >> >> >> >> Tobias >> >> >> >> >> >> >> >> >> >> >> > > -- > thanks you. > vincentleezihong > 2garnet > form2 From ralonsoh at redhat.com Wed Nov 30 10:33:19 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Wed, 30 Nov 2022 11:33:19 +0100 Subject: [neutron][releases] Proposing to EOL Train networking-ovn Message-ID: Hello: The transition of this branch to EM was done 1,5 years ago [1]. During this time the maintenance of Train networking-ovn has continued. However, due to the lack of time and personal resources, we can't properly maintain it. That leads to problems like [1]. The ML2/OVN driver was successfully merged in the Neutron repository in Ussury. We continue the development and improvement of this ML2 mechanism driver. A patch to mark this branch as EOL will be pushed in two weeks. If you have any inconvenience, please let me know in this mail chain or in IRC (ralonsoh, #openstack-neutron channel). You can also contact any Neutron core reviewer in the IRC channel. Regards. [1]https://review.opendev.org/c/openstack/releases/+/790760 [2]https://bugs.launchpad.net/neutron/+bug/1997262 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Wed Nov 30 11:04:13 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Wed, 30 Nov 2022 16:34:13 +0530 Subject: [cinder] 2023.1 R-16 virtual mid cycle on 30th November (today) Message-ID: Hello Argonauts, The first 2023.1 (Antelope) mid cycle R-16 will be held on 30th November (today) with the following details: Date: 30th November 2022 Time: 1400-1600 UTC Meeting link: https://bluejeans.com/556681290 Etherpad: https://etherpad.opendev.org/p/cinder-antelope-midcycles Thanks and regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From senrique at redhat.com Wed Nov 30 11:09:57 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 30 Nov 2022 11:09:57 +0000 Subject: [cinder] Bug report from 11-23-2022 to 11-30-2022 Message-ID: No meeting today due to Antelope Midcycle-1 (R-18). This is a bug report from 11-23-2022 to 11-30-2022. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Medium - https://bugs.launchpad.net/cinder/+bug/1998083 "volume_attachement entries are not getting deleted from DB." Unassigned. - https://bugs.launchpad.net/cinder/+bug/1997980 "rbd: 'error updating features for image' when enabling multi-attach." Fix proposed to master. Low - https://bugs.launchpad.net/cinder/+bug/1997876 "Ceph backup: Improve error message when creating an incremental backup of a non-RBD volume." Unassigned. Cheers, Sofia -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From bshephar at redhat.com Wed Nov 30 11:20:45 2022 From: bshephar at redhat.com (Brendan Shephard) Date: Wed, 30 Nov 2022 21:20:45 +1000 Subject: [heat][release] Proposing to EOL Rocky and Stein Message-ID: <0DAB2EDB-BCE1-401E-BCAD-00BDDB7DF76D@redhat.com> Hi, We were discussing some of the older branches we have and thought it was about time we start moving some of them to EOL. Initially, I would like to move Rocky and Stein to EOL and have done so here: https://review.opendev.org/c/openstack/releases/+/866135? We would also like to move Train to EOL as well, pending a few changes being merged. I wanted to reach out and ensure there is no objections here to any of these branches being EOL?d. Feel free to voice any concerns, otherwise I will move forward with that next week. Cheers, Brendan Shephard Senior Software Engineer Red Hat Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: favicon.ico Type: image/vnd.microsoft.icon Size: 5430 bytes Desc: not available URL: From bshephar at redhat.com Wed Nov 30 11:20:45 2022 From: bshephar at redhat.com (Brendan Shephard) Date: Wed, 30 Nov 2022 21:20:45 +1000 Subject: [heat][release] Proposing to EOL Rocky and Stein Message-ID: <0DAB2EDB-BCE1-401E-BCAD-00BDDB7DF76D@redhat.com> Hi, We were discussing some of the older branches we have and thought it was about time we start moving some of them to EOL. Initially, I would like to move Rocky and Stein to EOL and have done so here: https://review.opendev.org/c/openstack/releases/+/866135? We would also like to move Train to EOL as well, pending a few changes being merged. I wanted to reach out and ensure there is no objections here to any of these branches being EOL?d. Feel free to voice any concerns, otherwise I will move forward with that next week. Cheers, Brendan Shephard Senior Software Engineer Red Hat Australia -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: favicon.ico Type: image/vnd.microsoft.icon Size: 5430 bytes Desc: not available URL: From wodel.youchi at gmail.com Tue Nov 29 10:04:27 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 29 Nov 2022 11:04:27 +0100 Subject: [kolla-ansible][Yoga][Magnum] How to delete a cluster containing errors In-Reply-To: <963f2f34-baba-4ebc-74a9-f4dacef64f5c@ardc.edu.au> References: <963f2f34-baba-4ebc-74a9-f4dacef64f5c@ardc.edu.au> Message-ID: Hi, Here are two examples of stack which fail to be deleted Stack 1: (yogavenv) [deployer at rcdndeployer2 ~]$ openstack stack resource list -n5 c1801318-c0b8-4438-9c85-41cf5c7812f1 +-------------------------------+-------------------------------------------------------------------------------------+-------------------------------------- --------------------------------------------------------------------------------+-----------------+----------------------+----------------------------------- ------------------------------+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | stack_name | +-------------------------------+-------------------------------------------------------------------------------------+-------------------------------------- --------------------------------------------------------------------------------+-----------------+----------------------+----------------------------------- ------------------------------+ *| kube_minions | ffc19ead-94f0-4ff3-b496-b454a156f1f7 | OS::Heat::ResourceGroup | DELETE_FAILED | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z | * | api_address_lb_switch | d0b7f19c-7b66-4c07-a60a-cb6e62c48876 | Magnum::ApiGatewaySwitcher | CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z | | etcd_address_lb_switch | 8c2a7952-e2aa-42d5-9da9-3a9881e69211 | Magnum::ApiGatewaySwitcher | CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z | | worker_nodes_server_group | cf3601c0-3876-4d91-8fa0-0f87e6b43eca | OS::Nova::ServerGroup | CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z | | kube_masters | ebbccf6a-40bf-4cbc-badd-fbafbb255337 | OS::Heat::ResourceGroup | CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z | | etcd_lb | 65fd17e5-5e54-406d-bbba-7d56a8d4eb93 | file:///var/lib/kolla/venv/lib/python 3.6/site-packages/magnum/drivers/common/templates/lb_etcd.yaml | CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z | | master_nodes_server_group | b0c9c49a-9160-4b33-af90-c2c16392b22e | OS::Nova::ServerGroup | CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z | | api_lb | b48e0ade-52ec-409f-9212-ce231318d4df | file:///var/lib/kolla/venv/lib/python 3.6/site-packages/magnum/drivers/common/templates/lb_api.yaml | CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z | | network | 29549cda-d294-4031-90c0-a8e50cdf31dc | file:///var/lib/kolla/venv/lib/python 3.6/site-packages/magnum/drivers/common/templates/network.yaml | CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z | | secgroup_kube_minion | 1123a7cd-080c-49a9-a7e8-81e162f07fa0 | OS::Neutron::SecurityGroup | CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z | | secgroup_kube_master | 5e2286d4-eadc-46ee-91db-ac79485d92b2 | OS::Neutron::SecurityGroup | CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z | *| 1 | 713fd534-f3ac-4fc4-a3e9-9a8a67bfe549 | file:///var/lib/kolla/venv/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml | DELETE_FAILED | 2022-05-08T12:04:22Z | 
mymultik8-ub6qbaarl74z-kube_minions-wlu4j5as6u4v | | 0 | 009bf512-7d67-4db0-9e63-264a320d23ab | file:///var/lib/kolla/venv/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml | DELETE_FAILED | 2022-05-08T12:04:22Z | mymultik8-ub6qbaarl74z-kube_minions-wlu4j5as6u4v | * | docker_volume_attach | 07e2e175-bf0b-498c-8f0d-531f49ec183c | Magnum::Optional::Cinder::VolumeAttac hment | DELETE_FAILED | 2022-05-08T12:04:23Z | mymultik8-ub6qbaarl74z-kube_minion s-wlu4j5as6u4v-1-oq7v4edmmj4e | | kube-minion | eca4b475-12f7-4a67-8192-ac9d99db1a65 | OS::Nova::Server | CREATE_COMPLETE | 2022-05-08T12:04:23Z | mymultik8-ub6qbaarl74z-kube_minion s-wlu4j5as6u4v-1-oq7v4edmmj4e | | kube_minion_eth0 | 17c7474d-8a63-43f7-a9cd-30ae22b4b153 | OS::Neutron::Port | CREATE_COMPLETE | 2022-05-08T12:04:23Z | mymultik8-ub6qbaarl74z-kube_minion s-wlu4j5as6u4v-1-oq7v4edmmj4e | | docker_volume | 07e2e175-bf0b-498c-8f0d-531f49ec183c | Magnum::Optional::Cinder::Volume | CREATE_COMPLETE | 2022-05-08T12:04:23Z | mymultik8-ub6qbaarl74z-kube_minion s-wlu4j5as6u4v-1-oq7v4edmmj4e | | agent_config | 09d6d11f-0623-49ab-a12e-653241d54d0b | OS::Heat::SoftwareConfig | CREATE_COMPLETE | 2022-05-08T12:04:23Z | mymultik8-ub6qbaarl74z-kube_minion s-wlu4j5as6u4v-1-oq7v4edmmj4e | | docker_volume_attach | c9efd934-679c-4e7d-8258-e7c1c65b2b95 | Magnum::Optional::Cinder::VolumeAttac hment | DELETE_FAILED | 2022-05-08T12:04:22Z | mymultik8-ub6qbaarl74z-kube_minion s-wlu4j5as6u4v-0-3zue4xpnyzep | | kube-minion | 37e7921a-03c8-40a2-ad80-2b006fb79c22 | OS::Nova::Server | CREATE_COMPLETE | 2022-05-08T12:04:22Z | mymultik8-ub6qbaarl74z-kube_minion s-wlu4j5as6u4v-0-3zue4xpnyzep | | agent_config | 05fcc72e-0561-497d-98e2-6b92c109db0f | OS::Heat::SoftwareConfig | CREATE_COMPLETE | 2022-05-08T12:04:22Z | mymultik8-ub6qbaarl74z-kube_minion s-wlu4j5as6u4v-0-3zue4xpnyzep | | kube_minion_eth0 | beb8bf56-e5e1-4c7a-94ce-fe650e5902c6 | OS::Neutron::Port | CREATE_COMPLETE | 2022-05-08T12:04:22Z | mymultik8-ub6qbaarl74z-kube_minion s-wlu4j5as6u4v-0-3zue4xpnyzep | | docker_volume | c9efd934-679c-4e7d-8258-e7c1c65b2b95 | Magnum::Optional::Cinder::Volume | CREATE_COMPLETE | 2022-05-08T12:04:22Z | mymultik8-ub6qbaarl74z-kube_minion s-wlu4j5as6u4v-0-3zue4xpnyzep | | 0 | ff2d9839-9584-4f25-b4dd-869456635201 | file:///var/lib/kolla/venv/lib/python 3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml | CREATE_COMPLETE | 2022-05-08T12:00:11Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz | | 1 | e57c70ea-c165-4f0a-b680-a4344b97779d | file:///var/lib/kolla/venv/lib/python 3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml | CREATE_COMPLETE | 2022-05-08T12:00:11Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz | | etcd_pool_member | cc587c78-a756-4720-9683-8f2564a5031d | Magnum::Optional::Neutron::LBaaS::Poo lMember | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | etcd_volume_attach | ce25bf02-e20f-458e-b8e5-f3e3eb94064b | Magnum::Optional::Etcd::VolumeAttachm ent | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | api_pool_member | 8b406579-f31c-4eed-8d88-4fa829db62af | Magnum::Optional::Neutron::LBaaS::Poo lMember | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | upgrade_kubernetes_deployment | | OS::Heat::SoftwareDeployment | CREATE_COMPLETE | 2022-05-08T12:00:13Z | 
mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | upgrade_kubernetes | c66e6546-1cb5-4d13-a617-c2e3fd502f97 | OS::Heat::SoftwareConfig | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | docker_volume_attach | 440b7e09-b45a-4abb-b814-2bf52a262d06 | Magnum::Optional::Cinder::VolumeAttac hment | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | master_config_deployment | 40f798e8-e505-431e-af92-fe04d3a60c17 | OS::Heat::SoftwareDeployment | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | master_config | 706cead9-113a-40f5-b76c-6c24ad825720 | OS::Heat::SoftwareConfig | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | docker_volume | 440b7e09-b45a-4abb-b814-2bf52a262d06 | Magnum::Optional::Cinder::Volume | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | etcd_volume | b36dcb9c-6960-42c9-86d9-c56f41e45f50 | Magnum::Optional::Etcd::Volume | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | api_address_switch | 9d76cd95-9c17-418b-9eeb-6ab5b6824995 | Magnum::ApiGatewaySwitcher | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | kube_master_floating | db987622-fa06-4ee4-ab4a-70c710552afb | Magnum::Optional::KubeMaster::Neutron ::FloatingIP | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | kube-master | 2baede5b-335b-4509-a165-45e742316fe4 | OS::Nova::Server | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | kube_master_eth0 | 84ce8ff0-e0de-4b6c-885d-28dc7f2becc1 | OS::Neutron::Port | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | agent_config | 0456f77d-f9c5-4db2-82bd-db72dd49c023 | OS::Heat::SoftwareConfig | CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-0-r4srfjnthzxr | | etcd_pool_member | c5f9cc21-c951-43c4-b35c-53dc1b579697 | Magnum::Optional::Neutron::LBaaS::Poo lMember | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | etcd_volume_attach | 92d65782-a3ef-4938-9d9b-7ca8f965dc76 | Magnum::Optional::Etcd::VolumeAttachm ent | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | api_pool_member | c86a539d-863c-4f73-a8fb-976593f78110 | Magnum::Optional::Neutron::LBaaS::Poo lMember | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | upgrade_kubernetes_deployment | | OS::Heat::SoftwareDeployment | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | upgrade_kubernetes | 0ee8cb5b-deb1-48c2-9323-1558c31c7dfc | OS::Heat::SoftwareConfig | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | docker_volume_attach | 02430b0e-c9d0-455d-9f3f-084b808f9ddf | Magnum::Optional::Cinder::VolumeAttac hment | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | master_config_deployment | 
0c17ecc1-b676-4af5-a86c-1f86971080cf | OS::Heat::SoftwareDeployment | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | master_config | 1eb9d15d-ed09-417a-8fac-1c4e1424ed6a | OS::Heat::SoftwareConfig | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | docker_volume | 02430b0e-c9d0-455d-9f3f-084b808f9ddf | Magnum::Optional::Cinder::Volume | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | etcd_volume | 5fe87023-8ff4-4381-b543-4f15edadc61b | Magnum::Optional::Etcd::Volume | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | api_address_switch | 5bbab773-1c9e-4104-ad2f-39f52401ab05 | Magnum::ApiGatewaySwitcher | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | kube_master_floating | 6fd94160-d49a-41b7-b936-8f682466c614 | Magnum::Optional::KubeMaster::Neutron ::FloatingIP | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | kube-master | 41c198bb-114a-4baa-9a56-322f8f345fcd | OS::Nova::Server | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | kube_master_eth0 | 7be6a71a-4448-4f8f-97fb-56e739a457db | OS::Neutron::Port | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | agent_config | ad2d8d64-3919-4b61-9a51-71cc81345220 | OS::Heat::SoftwareConfig | CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master s-vfy2f6qbw2lz-1-phcfygifbg3o | | monitor | 73a7e0b6-4d3d-497c-9ab1-cd9fcaee398a | Magnum::Optional::Neutron::LBaaS::Hea lthMonitor | CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-etcd_lb-ogp rw4iculpj | | pool | 061c047c-8990-45b1-8ef3-86f1f8207fa0 | Magnum::Optional::Neutron::LBaaS::Poo l | CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-etcd_lb-ogp rw4iculpj | | listener | 112c04c4-a0a7-4770-84fb-df3ef630bb51 | Magnum::Optional::Neutron::LBaaS::Lis tener | CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-etcd_lb-ogp rw4iculpj | | loadbalancer | 6ce77c54-d7a5-41ac-af14-ee9993c255d6 | Magnum::Optional::Neutron::LBaaS::Loa dBalancer | CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-etcd_lb-ogp rw4iculpj | | floating | 06ab733d-45d4-42e8-9aa2-e9ecc00e3d44 | Magnum::Optional::Neutron::LBaaS::Flo atingIP | CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-api_lb-lce7 aup7jtig | | monitor | 51cdc4dd-d0f0-4361-9c86-1f519951c957 | Magnum::Optional::Neutron::LBaaS::Hea lthMonitor | CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-api_lb-lce7 aup7jtig | | pool | 67f7372d-8d60-4693-8809-56075a6ad326 | Magnum::Optional::Neutron::LBaaS::Poo l | CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-api_lb-lce7 aup7jtig | | listener | 7193aa59-43f9-45a8-aa7e-c5a5f86beced | Magnum::Optional::Neutron::LBaaS::Lis tener | CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-api_lb-lce7 aup7jtig | | loadbalancer | 04bc48ad-0884-4eae-b6ee-48b4a1b9cab3 | Magnum::Optional::Neutron::LBaaS::Loa dBalancer | CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-api_lb-lce7 aup7jtig | | extrouter_inside | 6a4cbb2b-6fdb-4edb-9f3d-075ee6f934fd:subnet_id=a704ec6f-1f29-431f-88c4-aae3c328ce2e | 
Magnum::Optional::Neutron::RouterInte rface | CREATE_COMPLETE | 2022-05-08T11:59:11Z | mymultik8-ub6qbaarl74z-network-lt4 tldcepeoi | | extrouter | 6a4cbb2b-6fdb-4edb-9f3d-075ee6f934fd | Magnum::Optional::Neutron::Router | CREATE_COMPLETE | 2022-05-08T11:59:11Z | mymultik8-ub6qbaarl74z-network-lt4 tldcepeoi | | network_switch | bc89bc1b-cf84-4760-83e9-e73c1cb1d786 | Magnum::NetworkSwitcher | CREATE_COMPLETE | 2022-05-08T11:59:11Z | mymultik8-ub6qbaarl74z-network-lt4 tldcepeoi | | private_subnet | a704ec6f-1f29-431f-88c4-aae3c328ce2e | Magnum::Optional::Neutron::Subnet | CREATE_COMPLETE | 2022-05-08T11:59:11Z | mymultik8-ub6qbaarl74z-network-lt4 tldcepeoi | | private_network | 8cea85b5-9fb0-4578-9295-8d362526da2e | Magnum::Optional::Neutron::Net | CREATE_COMPLETE | 2022-05-08T11:59:11Z | mymultik8-ub6qbaarl74z-network-lt4 tldcepeoi | +-------------------------------+-------------------------------------------------------------------------------------+-------------------------------------- --------------------------------------------------------------------------------+-----------------+----------------------+----------------------------------- ------------------------------+ Stack2: (yogavenv) [deployer at rcdndeployer2 ~]$ openstack stack resource list -n5 85b6ce8b-5c4c-4cab-b69f-aed69a96018f +-------------------------------+-------------------------------------------------------------------------------------+-------------------------------------- --------------------------------------------------------------------------------+------------------+----------------------+---------------------------------- -------------------------------+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | stack_name | +-------------------------------+-------------------------------------------------------------------------------------+-------------------------------------- --------------------------------------------------------------------------------+------------------+----------------------+---------------------------------- -------------------------------+ *| kube_minions | 225fb4e7-8ae0-42a8-95a5-9dbe63f54650 | OS::Heat::ResourceGroup | DELETE_FAILED | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq | * | etcd_address_lb_switch | 762b3454-629f-4f21-b8fb-9ae896a3a010 | Magnum::ApiGatewaySwitcher | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq | | api_address_lb_switch | 8a8935d4-8762-4fbc-bf25-4866ce0a53a7 | Magnum::ApiGatewaySwitcher | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq | | kube_masters | d724aa3d-780b-414e-a080-3a501becdaae | OS::Heat::ResourceGroup | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq | | etcd_lb | 2281b6f9-46e0-411b-833e-853a4994ad96 | file:///var/lib/kolla/venv/lib/python 3.6/site-packages/magnum/drivers/common/templates/lb_etcd.yaml | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq | | api_lb | 586477b3-8a25-4740-865c-5d682f3fb4f3 | file:///var/lib/kolla/venv/lib/python 3.6/site-packages/magnum/drivers/common/templates/lb_api.yaml | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq | | network | b747ad8b-b592-4a8f-afcd-b76017a1f68c | file:///var/lib/kolla/venv/lib/python 3.6/site-packages/magnum/drivers/common/templates/network.yaml | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq | | master_nodes_server_group | d0e3eea2-7776-4e57-9651-2799751a3dbc | OS::Nova::ServerGroup | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq | | 
secgroup_kube_minion | 320ace66-73af-4518-8852-abcac7233e0c | OS::Neutron::SecurityGroup | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq | | secgroup_kube_master | 3c0778ef-2fb0-4de6-97a2-603e4e753c58 | OS::Neutron::SecurityGroup | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq | | worker_nodes_server_group | 75d9263d-3187-4256-9506-732cee98799a | OS::Nova::ServerGroup | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq | *| 1 | 6d31d728-61f7-4383-95e5-24800de91162 | file:///var/lib/kolla/venv/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml | DELETE_FAILED | 2022-05-11T10:40:16Z | testmulti-r37u5recibdq-kube_minions-uakw7w4mnepn * | | docker_volume_attach | 63cd097c-9877-40c0-968f-52b58bdaedf9 | Magnum::Optional::Cinder::VolumeAttac hment | DELETE_FAILED | 2022-05-11T10:40:18Z | testmulti-r37u5recibdq-kube_minio ns-uakw7w4mnepn-1-pkoi6mf32khj | | docker_volume | 63cd097c-9877-40c0-968f-52b58bdaedf9 | Magnum::Optional::Cinder::Volume | SUSPEND_COMPLETE | 2022-05-11T10:40:18Z | testmulti-r37u5recibdq-kube_minio ns-uakw7w4mnepn-1-pkoi6mf32khj | | kube-minion | 1a605e7b-aafc-4113-90a7-98d8d8fe5f96 | OS::Nova::Server | SUSPEND_COMPLETE | 2022-05-11T10:40:18Z | testmulti-r37u5recibdq-kube_minio ns-uakw7w4mnepn-1-pkoi6mf32khj | | kube_minion_eth0 | 05bf8eb4-f2ca-4234-8497-c5b13c977a32 | OS::Neutron::Port | SUSPEND_COMPLETE | 2022-05-11T10:40:18Z | testmulti-r37u5recibdq-kube_minio ns-uakw7w4mnepn-1-pkoi6mf32khj | | agent_config | ad560f63-94b2-4287-8a4c-6ac66042cc35 | OS::Heat::SoftwareConfig | SUSPEND_COMPLETE | 2022-05-11T10:40:18Z | testmulti-r37u5recibdq-kube_minio ns-uakw7w4mnepn-1-pkoi6mf32khj | | 1 | 3f4cff1f-28f4-4276-b676-e707a42465b8 | file:///var/lib/kolla/venv/lib/python 3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml | CHECK_COMPLETE | 2022-05-11T10:36:12Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g | | 0 | 412844d4-a864-47c2-b9cf-6eecee186338 | file:///var/lib/kolla/venv/lib/python 3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml | CHECK_COMPLETE | 2022-05-11T10:36:12Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g | | api_pool_member | dd1c1021-67fd-4fe9-b2a0-e481f906c79d | Magnum::Optional::Neutron::LBaaS::Poo lMember | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | etcd_pool_member | b11fae15-1869-4d49-b939-c63d683d1a9f | Magnum::Optional::Neutron::LBaaS::Poo lMember | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | upgrade_kubernetes_deployment | | OS::Heat::SoftwareDeployment | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | upgrade_kubernetes | 5ef86e19-94c5-48fe-88ab-5962ef03f63a | OS::Heat::SoftwareConfig | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | docker_volume_attach | 460a5a20-cbd1-4e4d-827b-ecf8876d1f75 | Magnum::Optional::Cinder::VolumeAttac hment | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | etcd_volume_attach | 4be01f1a-b386-421e-963a-f75425bff4b3 | Magnum::Optional::Etcd::VolumeAttachm ent | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | master_config_deployment | d4b740e2-1cfc-4569-a1d5-eabbf1285229 | OS::Heat::SoftwareDeployment | 
CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | master_config | 574d9330-5ac0-4f7b-8bea-8b46b4966e14 | OS::Heat::SoftwareConfig | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | docker_volume | 460a5a20-cbd1-4e4d-827b-ecf8876d1f75 | Magnum::Optional::Cinder::Volume | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | etcd_volume | a47316ff-16ba-4fc9-b2d0-840d691400aa | Magnum::Optional::Etcd::Volume | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | api_address_switch | dbeb0fff-b974-4fe7-9451-d0c2eabffc03 | Magnum::ApiGatewaySwitcher | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | kube_master_floating | 99047ae9-39dd-4196-a507-fbc82dc4375a | Magnum::Optional::KubeMaster::Neutron ::FloatingIP | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | kube-master | 4fde1552-ce13-4546-9d8f-1e9ea25352a5 | OS::Nova::Server | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | kube_master_eth0 | d442fcef-43e6-4fe2-9390-991bf827c221 | OS::Neutron::Port | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | agent_config | 802300e7-8f19-4694-b2fc-2cb5ad55eb25 | OS::Heat::SoftwareConfig | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-1-gfoenmdqijvt | | api_pool_member | 98264ea6-ea88-4132-8cca-7f0a049d8cee | Magnum::Optional::Neutron::LBaaS::Poo lMember | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | etcd_pool_member | 458d8d42-6994-4dc6-8517-d3486ad42d67 | Magnum::Optional::Neutron::LBaaS::Poo lMember | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | docker_volume_attach | 37f3721a-81b7-41b6-beb1-9860e9265f6c | Magnum::Optional::Cinder::VolumeAttac hment | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | upgrade_kubernetes_deployment | | OS::Heat::SoftwareDeployment | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | upgrade_kubernetes | 725193f9-c222-4ad9-8097-df88656e5d9f | OS::Heat::SoftwareConfig | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | master_config_deployment | 3994e5a7-c7df-48e7-a562-70d690310d06 | OS::Heat::SoftwareDeployment | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | master_config | 94978b0e-4d00-4ced-9212-eb19da22875e | OS::Heat::SoftwareConfig | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | api_address_switch | 17692180-d5cc-4f1b-80a5-7a64c61d512a | Magnum::ApiGatewaySwitcher | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | docker_volume | 37f3721a-81b7-41b6-beb1-9860e9265f6c | Magnum::Optional::Cinder::Volume | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | kube_master_floating | 6ffa46aa-4cc9-43a8-9dbe-4246d4d167d0 | 
Magnum::Optional::KubeMaster::Neutron ::FloatingIP | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | etcd_volume_attach | 50d479d5-ea54-40b1-927a-6287fbef277d | Magnum::Optional::Etcd::VolumeAttachm ent | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | etcd_volume | 6e32c307-8645-4a93-bd68-f6a5172a54b8 | Magnum::Optional::Etcd::Volume | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | kube-master | 6515ad5d-cc35-46d9-b554-31836e41d59e | OS::Nova::Server | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | kube_master_eth0 | 3c9390d2-5d07-4a02-9110-6551e743ee32 | OS::Neutron::Port | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | agent_config | 25cdf836-a317-45ff-aa18-262b933c3e4b | OS::Heat::SoftwareConfig | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_maste rs-2rwfudiqoi5g-0-omoqeuv5oreq | | monitor | eed43fb4-b6fa-41c8-b612-e15aae185488 | Magnum::Optional::Neutron::LBaaS::Hea lthMonitor | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-etcd_lb-2k tdtic4o56a | | pool | cdf18e7c-b823-4c45-9f79-400c0f5f15bd | Magnum::Optional::Neutron::LBaaS::Poo l | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-etcd_lb-2k tdtic4o56a | | listener | 4437d207-03fb-4c75-8a6c-eb6a4946f6cd | Magnum::Optional::Neutron::LBaaS::Lis tener | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-etcd_lb-2k tdtic4o56a | | loadbalancer | 075c3e23-79be-4a95-8875-7235db051d24 | Magnum::Optional::Neutron::LBaaS::Loa dBalancer | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-etcd_lb-2k tdtic4o56a | | floating | 3ffc5f32-a1fe-4948-857b-d304a56ee48c | Magnum::Optional::Neutron::LBaaS::Flo atingIP | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-api_lb-mk5 ckf7fomri | | monitor | 6fde29bd-a607-464c-a54b-7c38132efb2d | Magnum::Optional::Neutron::LBaaS::Hea lthMonitor | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-api_lb-mk5 ckf7fomri | | pool | 62f92f0e-51dd-4a34-b9b8-f267b745ebd1 | Magnum::Optional::Neutron::LBaaS::Poo l | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-api_lb-mk5 ckf7fomri | | listener | 7d805d71-e811-422d-b81a-8941ece42bff | Magnum::Optional::Neutron::LBaaS::Lis tener | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-api_lb-mk5 ckf7fomri | | loadbalancer | c7599c6f-b9c0-4ed5-9b20-ed4cd3ee1326 | Magnum::Optional::Neutron::LBaaS::Loa dBalancer | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-api_lb-mk5 ckf7fomri | | network_switch | 0dda301f-9d35-432b-a010-618b8ca62f3b | Magnum::NetworkSwitcher | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq-network-oh l3pe5z4pqq | | extrouter_inside | e2f8b157-60b5-40ab-aa0a-9e7888900cd9:subnet_id=3f85b8a4-43fc-43a3-abc2-58982006a1c4 | Magnum::Optional::Neutron::RouterInte rface | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq-network-oh l3pe5z4pqq | | private_subnet | 3f85b8a4-43fc-43a3-abc2-58982006a1c4 | Magnum::Optional::Neutron::Subnet | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq-network-oh l3pe5z4pqq | | private_network | eaedb3a2-1619-42af-9fd7-81e02e6537df | Magnum::Optional::Neutron::Net | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq-network-oh 
l3pe5z4pqq | | extrouter | e2f8b157-60b5-40ab-aa0a-9e7888900cd9 | Magnum::Optional::Neutron::Router | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq-network-oh l3pe5z4pqq | +-------------------------------+-------------------------------------------------------------------------------------+-------------------------------------- --------------------------------------------------------------------------------+------------------+----------------------+---------------------------------- -------------------------------+ We had a problem on the platform, and some VMs have gone to error state, in reality they have disappeared, we tried to delete the stack and resources that no longer exists are making these errors. We are searching for a way to delete the stack even if the corresponding resource does not exist anymore. Regards. Le mar. 29 nov. 2022 ? 10:36, Jake Yip a ?crit : > Hi, > > Can you see what resource it is failing at with `openstack stack > resource list -n5 `? > > You can also abandon the stack with `openstack stack abandon`. That will > leave stray resources lying around though. > > Regards, > Jake > > On 29/11/2022 2:09 am, wodel youchi wrote: > > Hi, > > > > I have a magnum cluster stack which contains errors in its constituents, > > some of the VMs (minions) that belong to that cluster do longer exist. > > When I try to delete the stack it fails, and I get > > > > DELETE aborted (Task delete from ResourceGroup "kube_minions" > > [fddb3056-9b00-4665-b0d6-c3d3f176814b] Stack "testcluter01-puf45b6dxmrn" > > [d10af7f2-6ecd-442b-b1f9-140b79e58d13] Timed out) > > > > Is there a way to force the deletion to proceed even with those errors? > > > > Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincentlee676 at gmail.com Tue Nov 29 06:23:33 2022 From: vincentlee676 at gmail.com (vincent lee) Date: Tue, 29 Nov 2022 00:23:33 -0600 Subject: Unable to access Internet from an instance and accessing instance using floating-point IPs from external network In-Reply-To: <20221122094959.Horde._DNW37_4CRcsBFAHQUP7ZG_@webmail.nde.ag> References: <20221122094959.Horde._DNW37_4CRcsBFAHQUP7ZG_@webmail.nde.ag> Message-ID: Hi guys, sorry for the late reply. I noticed that the gateway I gave was not pingable and I will try it tomorrow and let you all know if it works out. Before I posted this discussion, I did a fresh installation of openstack. I have only modified the external network in init-runonce script. Will that cause a problem? I have attached an image of the modification I made before running the script as shown below. Other than that I have not made any changes to the configuration. Besides, I have not make any changes to the security-group rules regardless of the internet access. [image: image.png] Best regards, Vincent On Tue, Nov 22, 2022 at 3:58 AM Eugen Block wrote: > Just one more thing to check, did you edit the security-group rules to > allow access to the outside world? > > Zitat von Adivya Singh : > > > it should be missing a default route most of the time. > > or check IP tables on router namespace the DNAT and SNAT are working > > properly > > > > > > > > On Tue, Nov 22, 2022 at 9:40 AM Tobias McNulty > > wrote: > > > >> On Mon, Nov 21, 2022 at 7:39 PM vincent lee > >> wrote: > >> > >>> After reviewing the post you shared, I believe that we have the correct > >>> subnet. Besides, we did not modify anything related to the cloud-init > for > >>> openstack. > >>> > >> > >> I didn't either. 
But I found it's a good test of the network! If you are > >> using an image that doesn't rely on it you might not notice (but I > >> would not recommend that). > >> > >> > >>> After launching the instances, we are able to ping between the > instances > >>> of the same subnet. However, we are not able to receive any internet > >>> connection within those instances. From the instance, we are able to > ping > >>> the router IP addresses 10.42.0.56 and 10.0.0.1. > >>> > >> > >> To make sure I understand: > >> - 10.42.0.56 is the IP of the router external to OpenStack that provides > >> internet access > >> - This router is tested and working for devices outside of OpenStack > >> - OpenStack compute instances can ping this router > >> - OpenStack compute instances cannot reach the internet > >> > >> If that is correct, it does not sound like an OpenStack issue > necessarily, > >> but perhaps a missing default route on your compute instances. I would > >> check that DHCP is enabled on the internal subnet and that it's > providing > >> everything necessary for an internet connection to the instances. > >> > >> Tobias > >> > >> > >> > > > > > -- thanks you. vincentleezihong 2garnet form2 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 76883 bytes Desc: not available URL: From wodel.youchi at gmail.com Tue Nov 29 13:39:39 2022 From: wodel.youchi at gmail.com (wodel youchi) Date: Tue, 29 Nov 2022 14:39:39 +0100 Subject: [kolla-ansible][Yoga][Magnum] How to delete a cluster containing errors In-Reply-To: References: <963f2f34-baba-4ebc-74a9-f4dacef64f5c@ardc.edu.au> Message-ID: Hi, Here are some other details : (yogavenv) [deployer at rcdndeployer2 ~]$* openstack stack failures list 85b6ce8b-5c4c-4cab-b69f-aed69a96018f* testmulti-r37u5recibdq.kube_minions.1.docker_volume_attach: resource_type: Magnum::Optional::Cinder::VolumeAttachment physical_resource_id: 63cd097c-9877-40c0-968f-52b58bdaedf9 status: DELETE_FAILED status_reason: | DELETE aborted (Task delete from CinderVolumeAttachment "docker_volume_attach" [63cd097c-9877-40c0-968f-52b58bdaedf9] Stack "testmulti-r37u5recibdq-kube_minions-uakw7w4mnepn-1-pkoi6mf32khj" [6d31d728-61f7-4383-95e5-24800de91162] Timed out) (yogavenv) [deployer at rcdndeployer2 ~]$* openstack stack failures list c1801318-c0b8-4438-9c85-41cf5c7812f1* mymultik8-ub6qbaarl74z.kube_minions.1.docker_volume_attach: resource_type: Magnum::Optional::Cinder::VolumeAttachment physical_resource_id: 07e2e175-bf0b-498c-8f0d-531f49ec183c status: DELETE_FAILED status_reason: | DELETE aborted (Task delete from CinderVolumeAttachment "docker_volume_attach" [07e2e175-bf0b-498c-8f0d-531f49ec183c] Stack "mymultik8-ub6qbaarl74z-kube_minions-wlu4j5as6u4v-1-oq7v4edmmj4e" [713fd534-f3ac-4fc4-a3e9-9a8a67bfe549] Timed out) mymultik8-ub6qbaarl74z.kube_minions.0.docker_volume_attach: resource_type: Magnum::Optional::Cinder::VolumeAttachment physical_resource_id: c9efd934-679c-4e7d-8258-e7c1c65b2b95 status: DELETE_FAILED status_reason: | DELETE aborted (Task delete from CinderVolumeAttachment "docker_volume_attach" [c9efd934-679c-4e7d-8258-e7c1c65b2b95] Stack "mymultik8-ub6qbaarl74z-kube_minions-wlu4j5as6u4v-0-3zue4xpnyzep" [009bf512-7d67-4db0-9e63-264a320d23ab] Timed out) PS : (yogavenv) [deployer at rcdndeployer2 ~]$ openstack stack abandon 85b6ce8b-5c4c-4cab-b69f-aed69a96018f *ERROR: Stack Abandon is not supported.* Regards. Le mar. 29 nov. 2022 ? 
11:04, wodel youchi a écrit :

> Hi,
>
> Here are two examples of stack which fail to be deleted
> [full `openstack stack resource list -n5` output for stacks
> c1801318-c0b8-4438-9c85-41cf5c7812f1 and 85b6ce8b-5c4c-4cab-b69f-aed69a96018f
> quoted in full; trimmed here, see the original listings above]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nguyenhuukhoinw at gmail.com Wed Nov 30 14:34:04 2022
From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=)
Date: Wed, 30 Nov 2022 21:34:04 +0700
Subject: [kolla-ansible][Yoga][Magnum] How to delete a cluster containing errors
In-Reply-To: 
References: 
Message-ID: 

Hope it helps:
https://platform9.com/kb/openstack/unable-to-delete-stacks-stuck-in-delete_failed-state

If you still cannot remove the k8s cluster, delete the k8s instances and volumes manually. In my case, that worked.

Nguyen Huu Khoi

On Wed, Nov 30, 2022 at 8:00 PM wodel youchi wrote:

> Hi,
>
> Here are some other details :
> (yogavenv) [deployer at rcdndeployer2 ~]$ *openstack stack failures list
> 85b6ce8b-5c4c-4cab-b69f-aed69a96018f*
> testmulti-r37u5recibdq.kube_minions.1.docker_volume_attach:
>   resource_type: Magnum::Optional::Cinder::VolumeAttachment
>   physical_resource_id: 63cd097c-9877-40c0-968f-52b58bdaedf9
>   status: DELETE_FAILED
>   status_reason: |
>     DELETE aborted (Task delete from CinderVolumeAttachment
> "docker_volume_attach" [63cd097c-9877-40c0-968f-52b58bdaedf9] Stack
> "testmulti-r37u5recibdq-kube_minions-uakw7w4mnepn-1-pkoi6mf32khj"
> [6d31d728-61f7-4383-95e5-24800de91162] Timed out)
>
> (yogavenv) [deployer at rcdndeployer2 ~]$ *openstack stack failures list
> c1801318-c0b8-4438-9c85-41cf5c7812f1*
> mymultik8-ub6qbaarl74z.kube_minions.1.docker_volume_attach:
>   resource_type: Magnum::Optional::Cinder::VolumeAttachment
>   physical_resource_id: 07e2e175-bf0b-498c-8f0d-531f49ec183c
>   status: DELETE_FAILED
>   status_reason: |
>     DELETE aborted (Task delete from CinderVolumeAttachment
> "docker_volume_attach" [07e2e175-bf0b-498c-8f0d-531f49ec183c] Stack
> "mymultik8-ub6qbaarl74z-kube_minions-wlu4j5as6u4v-1-oq7v4edmmj4e"
> [713fd534-f3ac-4fc4-a3e9-9a8a67bfe549] Timed out)
> mymultik8-ub6qbaarl74z.kube_minions.0.docker_volume_attach:
>   resource_type: Magnum::Optional::Cinder::VolumeAttachment
>   physical_resource_id: c9efd934-679c-4e7d-8258-e7c1c65b2b95
>   status: DELETE_FAILED
>   status_reason: |
>     DELETE aborted (Task delete from CinderVolumeAttachment
> "docker_volume_attach" [c9efd934-679c-4e7d-8258-e7c1c65b2b95] Stack
> "mymultik8-ub6qbaarl74z-kube_minions-wlu4j5as6u4v-0-3zue4xpnyzep"
> [009bf512-7d67-4db0-9e63-264a320d23ab] Timed out)
>
> PS :
> (yogavenv) [deployer at rcdndeployer2 ~]$ openstack stack abandon
> 85b6ce8b-5c4c-4cab-b69f-aed69a96018f
>
> *ERROR: Stack Abandon is not supported.*
>
> Regards.
>
> Le mar. 29 nov. 2022 à
11:04, wodel youchi a > ?crit : > >> Hi, >> >> Here are two examples of stack which fail to be deleted >> Stack 1: >> (yogavenv) [deployer at rcdndeployer2 ~]$ openstack stack resource list -n5 >> c1801318-c0b8-4438-9c85-41cf5c7812f1 >> >> +-------------------------------+-------------------------------------------------------------------------------------+-------------------------------------- >> >> --------------------------------------------------------------------------------+-----------------+----------------------+----------------------------------- >> ------------------------------+ >> | resource_name | physical_resource_id >> | >> resource_type >> | >> resource_status | updated_time | stack_name >> | >> >> +-------------------------------+-------------------------------------------------------------------------------------+-------------------------------------- >> >> --------------------------------------------------------------------------------+-----------------+----------------------+----------------------------------- >> ------------------------------+ >> >> >> *| kube_minions | ffc19ead-94f0-4ff3-b496-b454a156f1f7 >> | OS::Heat::ResourceGroup >> >> | >> DELETE_FAILED | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z >> | * >> | api_address_lb_switch | d0b7f19c-7b66-4c07-a60a-cb6e62c48876 >> | Magnum::ApiGatewaySwitcher >> >> | >> CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z >> | >> | etcd_address_lb_switch | 8c2a7952-e2aa-42d5-9da9-3a9881e69211 >> | Magnum::ApiGatewaySwitcher >> >> | >> CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z >> | >> | worker_nodes_server_group | cf3601c0-3876-4d91-8fa0-0f87e6b43eca >> | OS::Nova::ServerGroup >> >> | >> CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z >> | >> | kube_masters | ebbccf6a-40bf-4cbc-badd-fbafbb255337 >> | OS::Heat::ResourceGroup >> >> | >> CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z >> | >> | etcd_lb | 65fd17e5-5e54-406d-bbba-7d56a8d4eb93 >> | >> file:///var/lib/kolla/venv/lib/python >> 3.6/site-packages/magnum/drivers/common/templates/lb_etcd.yaml >> | CREATE_COMPLETE | 2022-05-08T11:59:09Z | >> mymultik8-ub6qbaarl74z >> | >> | master_nodes_server_group | b0c9c49a-9160-4b33-af90-c2c16392b22e >> | OS::Nova::ServerGroup >> >> | >> CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z >> | >> | api_lb | b48e0ade-52ec-409f-9212-ce231318d4df >> | >> file:///var/lib/kolla/venv/lib/python >> 3.6/site-packages/magnum/drivers/common/templates/lb_api.yaml >> | CREATE_COMPLETE | 2022-05-08T11:59:09Z | >> mymultik8-ub6qbaarl74z >> | >> | network | 29549cda-d294-4031-90c0-a8e50cdf31dc >> | >> file:///var/lib/kolla/venv/lib/python >> 3.6/site-packages/magnum/drivers/common/templates/network.yaml >> | CREATE_COMPLETE | 2022-05-08T11:59:09Z | >> mymultik8-ub6qbaarl74z >> | >> | secgroup_kube_minion | 1123a7cd-080c-49a9-a7e8-81e162f07fa0 >> | OS::Neutron::SecurityGroup >> >> | >> CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z >> | >> | secgroup_kube_master | 5e2286d4-eadc-46ee-91db-ac79485d92b2 >> | OS::Neutron::SecurityGroup >> >> | >> CREATE_COMPLETE | 2022-05-08T11:59:09Z | mymultik8-ub6qbaarl74z >> | >> >> >> >> >> >> *| 1 | 713fd534-f3ac-4fc4-a3e9-9a8a67bfe549 >> | >> file:///var/lib/kolla/venv/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml >> | DELETE_FAILED | 2022-05-08T12:04:22Z | >> mymultik8-ub6qbaarl74z-kube_minions-wlu4j5as6u4v | | 0 >> | 009bf512-7d67-4db0-9e63-264a320d23ab >> | >> 
file:///var/lib/kolla/venv/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml >> | DELETE_FAILED | 2022-05-08T12:04:22Z | >> mymultik8-ub6qbaarl74z-kube_minions-wlu4j5as6u4v | * >> | docker_volume_attach | 07e2e175-bf0b-498c-8f0d-531f49ec183c >> | >> Magnum::Optional::Cinder::VolumeAttac >> hment >> | >> DELETE_FAILED | 2022-05-08T12:04:23Z | mymultik8-ub6qbaarl74z-kube_minion >> s-wlu4j5as6u4v-1-oq7v4edmmj4e | >> | kube-minion | eca4b475-12f7-4a67-8192-ac9d99db1a65 >> | OS::Nova::Server >> >> | >> CREATE_COMPLETE | 2022-05-08T12:04:23Z | mymultik8-ub6qbaarl74z-kube_minion >> s-wlu4j5as6u4v-1-oq7v4edmmj4e | >> | kube_minion_eth0 | 17c7474d-8a63-43f7-a9cd-30ae22b4b153 >> | OS::Neutron::Port >> >> | >> CREATE_COMPLETE | 2022-05-08T12:04:23Z | mymultik8-ub6qbaarl74z-kube_minion >> s-wlu4j5as6u4v-1-oq7v4edmmj4e | >> | docker_volume | 07e2e175-bf0b-498c-8f0d-531f49ec183c >> | >> Magnum::Optional::Cinder::Volume >> | >> CREATE_COMPLETE | 2022-05-08T12:04:23Z | mymultik8-ub6qbaarl74z-kube_minion >> s-wlu4j5as6u4v-1-oq7v4edmmj4e | >> | agent_config | 09d6d11f-0623-49ab-a12e-653241d54d0b >> | OS::Heat::SoftwareConfig >> >> | >> CREATE_COMPLETE | 2022-05-08T12:04:23Z | mymultik8-ub6qbaarl74z-kube_minion >> s-wlu4j5as6u4v-1-oq7v4edmmj4e | >> | docker_volume_attach | c9efd934-679c-4e7d-8258-e7c1c65b2b95 >> | >> Magnum::Optional::Cinder::VolumeAttac >> hment >> | >> DELETE_FAILED | 2022-05-08T12:04:22Z | mymultik8-ub6qbaarl74z-kube_minion >> s-wlu4j5as6u4v-0-3zue4xpnyzep | >> | kube-minion | 37e7921a-03c8-40a2-ad80-2b006fb79c22 >> | OS::Nova::Server >> >> | >> CREATE_COMPLETE | 2022-05-08T12:04:22Z | mymultik8-ub6qbaarl74z-kube_minion >> s-wlu4j5as6u4v-0-3zue4xpnyzep | >> | agent_config | 05fcc72e-0561-497d-98e2-6b92c109db0f >> | OS::Heat::SoftwareConfig >> >> | >> CREATE_COMPLETE | 2022-05-08T12:04:22Z | mymultik8-ub6qbaarl74z-kube_minion >> s-wlu4j5as6u4v-0-3zue4xpnyzep | >> | kube_minion_eth0 | beb8bf56-e5e1-4c7a-94ce-fe650e5902c6 >> | OS::Neutron::Port >> >> | >> CREATE_COMPLETE | 2022-05-08T12:04:22Z | mymultik8-ub6qbaarl74z-kube_minion >> s-wlu4j5as6u4v-0-3zue4xpnyzep | >> | docker_volume | c9efd934-679c-4e7d-8258-e7c1c65b2b95 >> | >> Magnum::Optional::Cinder::Volume >> | >> CREATE_COMPLETE | 2022-05-08T12:04:22Z | mymultik8-ub6qbaarl74z-kube_minion >> s-wlu4j5as6u4v-0-3zue4xpnyzep | >> | 0 | ff2d9839-9584-4f25-b4dd-869456635201 >> | >> file:///var/lib/kolla/venv/lib/python >> 3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml >> | CREATE_COMPLETE | 2022-05-08T12:00:11Z | >> mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz | >> | 1 | e57c70ea-c165-4f0a-b680-a4344b97779d >> | >> file:///var/lib/kolla/venv/lib/python >> 3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml >> | CREATE_COMPLETE | 2022-05-08T12:00:11Z | >> mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz | >> | etcd_pool_member | cc587c78-a756-4720-9683-8f2564a5031d >> | >> Magnum::Optional::Neutron::LBaaS::Poo >> lMember >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | etcd_volume_attach | ce25bf02-e20f-458e-b8e5-f3e3eb94064b >> | >> Magnum::Optional::Etcd::VolumeAttachm >> ent >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | api_pool_member | 8b406579-f31c-4eed-8d88-4fa829db62af >> | >> Magnum::Optional::Neutron::LBaaS::Poo >> lMember >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | 
mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | upgrade_kubernetes_deployment | >> | >> OS::Heat::SoftwareDeployment >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | upgrade_kubernetes | c66e6546-1cb5-4d13-a617-c2e3fd502f97 >> | OS::Heat::SoftwareConfig >> >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | docker_volume_attach | 440b7e09-b45a-4abb-b814-2bf52a262d06 >> | >> Magnum::Optional::Cinder::VolumeAttac >> hment >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | master_config_deployment | 40f798e8-e505-431e-af92-fe04d3a60c17 >> | >> OS::Heat::SoftwareDeployment >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | master_config | 706cead9-113a-40f5-b76c-6c24ad825720 >> | OS::Heat::SoftwareConfig >> >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | docker_volume | 440b7e09-b45a-4abb-b814-2bf52a262d06 >> | >> Magnum::Optional::Cinder::Volume >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | etcd_volume | b36dcb9c-6960-42c9-86d9-c56f41e45f50 >> | >> Magnum::Optional::Etcd::Volume >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | api_address_switch | 9d76cd95-9c17-418b-9eeb-6ab5b6824995 >> | Magnum::ApiGatewaySwitcher >> >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | kube_master_floating | db987622-fa06-4ee4-ab4a-70c710552afb >> | >> Magnum::Optional::KubeMaster::Neutron >> ::FloatingIP >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | kube-master | 2baede5b-335b-4509-a165-45e742316fe4 >> | OS::Nova::Server >> >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | kube_master_eth0 | 84ce8ff0-e0de-4b6c-885d-28dc7f2becc1 >> | OS::Neutron::Port >> >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | agent_config | 0456f77d-f9c5-4db2-82bd-db72dd49c023 >> | OS::Heat::SoftwareConfig >> >> | >> CREATE_COMPLETE | 2022-05-08T12:00:13Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-0-r4srfjnthzxr | >> | etcd_pool_member | c5f9cc21-c951-43c4-b35c-53dc1b579697 >> | >> Magnum::Optional::Neutron::LBaaS::Poo >> lMember >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | etcd_volume_attach | 92d65782-a3ef-4938-9d9b-7ca8f965dc76 >> | >> Magnum::Optional::Etcd::VolumeAttachm >> ent >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | api_pool_member | c86a539d-863c-4f73-a8fb-976593f78110 >> | >> Magnum::Optional::Neutron::LBaaS::Poo >> lMember >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | upgrade_kubernetes_deployment | >> | >> OS::Heat::SoftwareDeployment >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> 
s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | upgrade_kubernetes | 0ee8cb5b-deb1-48c2-9323-1558c31c7dfc >> | OS::Heat::SoftwareConfig >> >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | docker_volume_attach | 02430b0e-c9d0-455d-9f3f-084b808f9ddf >> | >> Magnum::Optional::Cinder::VolumeAttac >> hment >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | master_config_deployment | 0c17ecc1-b676-4af5-a86c-1f86971080cf >> | >> OS::Heat::SoftwareDeployment >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | master_config | 1eb9d15d-ed09-417a-8fac-1c4e1424ed6a >> | OS::Heat::SoftwareConfig >> >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | docker_volume | 02430b0e-c9d0-455d-9f3f-084b808f9ddf >> | >> Magnum::Optional::Cinder::Volume >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | etcd_volume | 5fe87023-8ff4-4381-b543-4f15edadc61b >> | >> Magnum::Optional::Etcd::Volume >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | api_address_switch | 5bbab773-1c9e-4104-ad2f-39f52401ab05 >> | Magnum::ApiGatewaySwitcher >> >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | kube_master_floating | 6fd94160-d49a-41b7-b936-8f682466c614 >> | >> Magnum::Optional::KubeMaster::Neutron >> ::FloatingIP >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | kube-master | 41c198bb-114a-4baa-9a56-322f8f345fcd >> | OS::Nova::Server >> >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | kube_master_eth0 | 7be6a71a-4448-4f8f-97fb-56e739a457db >> | OS::Neutron::Port >> >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | agent_config | ad2d8d64-3919-4b61-9a51-71cc81345220 >> | OS::Heat::SoftwareConfig >> >> | >> CREATE_COMPLETE | 2022-05-08T12:00:12Z | mymultik8-ub6qbaarl74z-kube_master >> s-vfy2f6qbw2lz-1-phcfygifbg3o | >> | monitor | 73a7e0b6-4d3d-497c-9ab1-cd9fcaee398a >> | >> Magnum::Optional::Neutron::LBaaS::Hea >> lthMonitor >> | >> CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-etcd_lb-ogp >> rw4iculpj | >> | pool | 061c047c-8990-45b1-8ef3-86f1f8207fa0 >> | >> Magnum::Optional::Neutron::LBaaS::Poo >> l >> | >> CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-etcd_lb-ogp >> rw4iculpj | >> | listener | 112c04c4-a0a7-4770-84fb-df3ef630bb51 >> | >> Magnum::Optional::Neutron::LBaaS::Lis >> tener >> | >> CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-etcd_lb-ogp >> rw4iculpj | >> | loadbalancer | 6ce77c54-d7a5-41ac-af14-ee9993c255d6 >> | >> Magnum::Optional::Neutron::LBaaS::Loa >> dBalancer >> | >> CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-etcd_lb-ogp >> rw4iculpj | >> | floating | 06ab733d-45d4-42e8-9aa2-e9ecc00e3d44 >> | >> Magnum::Optional::Neutron::LBaaS::Flo >> atingIP >> | >> CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-api_lb-lce7 >> aup7jtig | >> | monitor | 51cdc4dd-d0f0-4361-9c86-1f519951c957 >> | >> 
Magnum::Optional::Neutron::LBaaS::Hea >> lthMonitor >> | >> CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-api_lb-lce7 >> aup7jtig | >> | pool | 67f7372d-8d60-4693-8809-56075a6ad326 >> | >> Magnum::Optional::Neutron::LBaaS::Poo >> l >> | >> CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-api_lb-lce7 >> aup7jtig | >> | listener | 7193aa59-43f9-45a8-aa7e-c5a5f86beced >> | >> Magnum::Optional::Neutron::LBaaS::Lis >> tener >> | >> CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-api_lb-lce7 >> aup7jtig | >> | loadbalancer | 04bc48ad-0884-4eae-b6ee-48b4a1b9cab3 >> | >> Magnum::Optional::Neutron::LBaaS::Loa >> dBalancer >> | >> CREATE_COMPLETE | 2022-05-08T11:59:15Z | mymultik8-ub6qbaarl74z-api_lb-lce7 >> aup7jtig | >> | extrouter_inside | >> 6a4cbb2b-6fdb-4edb-9f3d-075ee6f934fd:subnet_id=a704ec6f-1f29-431f-88c4-aae3c328ce2e >> | Magnum::Optional::Neutron::RouterInte >> rface >> | >> CREATE_COMPLETE | 2022-05-08T11:59:11Z | mymultik8-ub6qbaarl74z-network-lt4 >> tldcepeoi | >> | extrouter | 6a4cbb2b-6fdb-4edb-9f3d-075ee6f934fd >> | >> Magnum::Optional::Neutron::Router >> | >> CREATE_COMPLETE | 2022-05-08T11:59:11Z | mymultik8-ub6qbaarl74z-network-lt4 >> tldcepeoi | >> | network_switch | bc89bc1b-cf84-4760-83e9-e73c1cb1d786 >> | Magnum::NetworkSwitcher >> >> | >> CREATE_COMPLETE | 2022-05-08T11:59:11Z | mymultik8-ub6qbaarl74z-network-lt4 >> tldcepeoi | >> | private_subnet | a704ec6f-1f29-431f-88c4-aae3c328ce2e >> | >> Magnum::Optional::Neutron::Subnet >> | >> CREATE_COMPLETE | 2022-05-08T11:59:11Z | mymultik8-ub6qbaarl74z-network-lt4 >> tldcepeoi | >> | private_network | 8cea85b5-9fb0-4578-9295-8d362526da2e >> | >> Magnum::Optional::Neutron::Net >> | >> CREATE_COMPLETE | 2022-05-08T11:59:11Z | mymultik8-ub6qbaarl74z-network-lt4 >> tldcepeoi | >> >> +-------------------------------+-------------------------------------------------------------------------------------+-------------------------------------- >> >> --------------------------------------------------------------------------------+-----------------+----------------------+----------------------------------- >> ------------------------------+ >> >> Stack2: >> (yogavenv) [deployer at rcdndeployer2 ~]$ openstack stack resource list -n5 >> 85b6ce8b-5c4c-4cab-b69f-aed69a96018f >> >> +-------------------------------+-------------------------------------------------------------------------------------+-------------------------------------- >> >> --------------------------------------------------------------------------------+------------------+----------------------+---------------------------------- >> -------------------------------+ >> | resource_name | physical_resource_id >> | >> resource_type >> | >> resource_status | updated_time | stack_name >> | >> >> +-------------------------------+-------------------------------------------------------------------------------------+-------------------------------------- >> >> --------------------------------------------------------------------------------+------------------+----------------------+---------------------------------- >> -------------------------------+ >> >> >> *| kube_minions | 225fb4e7-8ae0-42a8-95a5-9dbe63f54650 >> | OS::Heat::ResourceGroup >> >> | >> DELETE_FAILED | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq >> | * >> | etcd_address_lb_switch | 762b3454-629f-4f21-b8fb-9ae896a3a010 >> | Magnum::ApiGatewaySwitcher >> >> | >> CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq >> | >> | api_address_lb_switch | 
8a8935d4-8762-4fbc-bf25-4866ce0a53a7 | Magnum::ApiGatewaySwitcher | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq |
>> | kube_masters | d724aa3d-780b-414e-a080-3a501becdaae | OS::Heat::ResourceGroup | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq |
>> | etcd_lb | 2281b6f9-46e0-411b-833e-853a4994ad96 | file:///var/lib/kolla/venv/lib/python3.6/site-packages/magnum/drivers/common/templates/lb_etcd.yaml | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq |
>> | api_lb | 586477b3-8a25-4740-865c-5d682f3fb4f3 | file:///var/lib/kolla/venv/lib/python3.6/site-packages/magnum/drivers/common/templates/lb_api.yaml | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq |
>> | network | b747ad8b-b592-4a8f-afcd-b76017a1f68c | file:///var/lib/kolla/venv/lib/python3.6/site-packages/magnum/drivers/common/templates/network.yaml | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq |
>> | master_nodes_server_group | d0e3eea2-7776-4e57-9651-2799751a3dbc | OS::Nova::ServerGroup | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq |
>> | secgroup_kube_minion | 320ace66-73af-4518-8852-abcac7233e0c | OS::Neutron::SecurityGroup | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq |
>> | secgroup_kube_master | 3c0778ef-2fb0-4de6-97a2-603e4e753c58 | OS::Neutron::SecurityGroup | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq |
>> | worker_nodes_server_group | 75d9263d-3187-4256-9506-732cee98799a | OS::Nova::ServerGroup | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq |
>> | 1 | 6d31d728-61f7-4383-95e5-24800de91162 | file:///var/lib/kolla/venv/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubeminion.yaml | DELETE_FAILED | 2022-05-11T10:40:16Z | testmulti-r37u5recibdq-kube_minions-uakw7w4mnepn |
>> | docker_volume_attach | 63cd097c-9877-40c0-968f-52b58bdaedf9 | Magnum::Optional::Cinder::VolumeAttachment | DELETE_FAILED | 2022-05-11T10:40:18Z | testmulti-r37u5recibdq-kube_minions-uakw7w4mnepn-1-pkoi6mf32khj |
>> | docker_volume | 63cd097c-9877-40c0-968f-52b58bdaedf9 | Magnum::Optional::Cinder::Volume | SUSPEND_COMPLETE | 2022-05-11T10:40:18Z | testmulti-r37u5recibdq-kube_minions-uakw7w4mnepn-1-pkoi6mf32khj |
>> | kube-minion | 1a605e7b-aafc-4113-90a7-98d8d8fe5f96 | OS::Nova::Server | SUSPEND_COMPLETE | 2022-05-11T10:40:18Z | testmulti-r37u5recibdq-kube_minions-uakw7w4mnepn-1-pkoi6mf32khj |
>> | kube_minion_eth0 | 05bf8eb4-f2ca-4234-8497-c5b13c977a32 | OS::Neutron::Port | SUSPEND_COMPLETE | 2022-05-11T10:40:18Z | testmulti-r37u5recibdq-kube_minions-uakw7w4mnepn-1-pkoi6mf32khj |
>> | agent_config | ad560f63-94b2-4287-8a4c-6ac66042cc35 | OS::Heat::SoftwareConfig | SUSPEND_COMPLETE | 2022-05-11T10:40:18Z | testmulti-r37u5recibdq-kube_minions-uakw7w4mnepn-1-pkoi6mf32khj |
>> | 1 | 3f4cff1f-28f4-4276-b676-e707a42465b8 | file:///var/lib/kolla/venv/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml | CHECK_COMPLETE | 2022-05-11T10:36:12Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g |
>> | 0 | 412844d4-a864-47c2-b9cf-6eecee186338 | file:///var/lib/kolla/venv/lib/python3.6/site-packages/magnum/drivers/k8s_fedora_coreos_v1/templates/kubemaster.yaml | CHECK_COMPLETE | 2022-05-11T10:36:12Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g |
>> | api_pool_member | dd1c1021-67fd-4fe9-b2a0-e481f906c79d | Magnum::Optional::Neutron::LBaaS::PoolMember | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | etcd_pool_member | b11fae15-1869-4d49-b939-c63d683d1a9f | Magnum::Optional::Neutron::LBaaS::PoolMember | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | upgrade_kubernetes_deployment | | OS::Heat::SoftwareDeployment | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | upgrade_kubernetes | 5ef86e19-94c5-48fe-88ab-5962ef03f63a | OS::Heat::SoftwareConfig | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | docker_volume_attach | 460a5a20-cbd1-4e4d-827b-ecf8876d1f75 | Magnum::Optional::Cinder::VolumeAttachment | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | etcd_volume_attach | 4be01f1a-b386-421e-963a-f75425bff4b3 | Magnum::Optional::Etcd::VolumeAttachment | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | master_config_deployment | d4b740e2-1cfc-4569-a1d5-eabbf1285229 | OS::Heat::SoftwareDeployment | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | master_config | 574d9330-5ac0-4f7b-8bea-8b46b4966e14 | OS::Heat::SoftwareConfig | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | docker_volume | 460a5a20-cbd1-4e4d-827b-ecf8876d1f75 | Magnum::Optional::Cinder::Volume | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | etcd_volume | a47316ff-16ba-4fc9-b2d0-840d691400aa | Magnum::Optional::Etcd::Volume | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | api_address_switch | dbeb0fff-b974-4fe7-9451-d0c2eabffc03 | Magnum::ApiGatewaySwitcher | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | kube_master_floating | 99047ae9-39dd-4196-a507-fbc82dc4375a | Magnum::Optional::KubeMaster::Neutron::FloatingIP | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | kube-master | 4fde1552-ce13-4546-9d8f-1e9ea25352a5 | OS::Nova::Server | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | kube_master_eth0 | d442fcef-43e6-4fe2-9390-991bf827c221 | OS::Neutron::Port | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | agent_config | 802300e7-8f19-4694-b2fc-2cb5ad55eb25 | OS::Heat::SoftwareConfig | CHECK_COMPLETE | 2022-05-11T10:36:14Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-1-gfoenmdqijvt |
>> | api_pool_member | 98264ea6-ea88-4132-8cca-7f0a049d8cee | Magnum::Optional::Neutron::LBaaS::PoolMember | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | etcd_pool_member | 458d8d42-6994-4dc6-8517-d3486ad42d67 | Magnum::Optional::Neutron::LBaaS::PoolMember | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | docker_volume_attach | 37f3721a-81b7-41b6-beb1-9860e9265f6c | Magnum::Optional::Cinder::VolumeAttachment | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | upgrade_kubernetes_deployment | | OS::Heat::SoftwareDeployment | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | upgrade_kubernetes | 725193f9-c222-4ad9-8097-df88656e5d9f | OS::Heat::SoftwareConfig | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | master_config_deployment | 3994e5a7-c7df-48e7-a562-70d690310d06 | OS::Heat::SoftwareDeployment | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | master_config | 94978b0e-4d00-4ced-9212-eb19da22875e | OS::Heat::SoftwareConfig | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | api_address_switch | 17692180-d5cc-4f1b-80a5-7a64c61d512a | Magnum::ApiGatewaySwitcher | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | docker_volume | 37f3721a-81b7-41b6-beb1-9860e9265f6c | Magnum::Optional::Cinder::Volume | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | kube_master_floating | 6ffa46aa-4cc9-43a8-9dbe-4246d4d167d0 | Magnum::Optional::KubeMaster::Neutron::FloatingIP | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | etcd_volume_attach | 50d479d5-ea54-40b1-927a-6287fbef277d | Magnum::Optional::Etcd::VolumeAttachment | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | etcd_volume | 6e32c307-8645-4a93-bd68-f6a5172a54b8 | Magnum::Optional::Etcd::Volume | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | kube-master | 6515ad5d-cc35-46d9-b554-31836e41d59e | OS::Nova::Server | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | kube_master_eth0 | 3c9390d2-5d07-4a02-9110-6551e743ee32 | OS::Neutron::Port | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | agent_config | 25cdf836-a317-45ff-aa18-262b933c3e4b | OS::Heat::SoftwareConfig | CHECK_COMPLETE | 2022-05-11T10:36:13Z | testmulti-r37u5recibdq-kube_masters-2rwfudiqoi5g-0-omoqeuv5oreq |
>> | monitor | eed43fb4-b6fa-41c8-b612-e15aae185488 | Magnum::Optional::Neutron::LBaaS::HealthMonitor | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-etcd_lb-2ktdtic4o56a |
>> | pool | cdf18e7c-b823-4c45-9f79-400c0f5f15bd | Magnum::Optional::Neutron::LBaaS::Pool | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-etcd_lb-2ktdtic4o56a |
>> | listener | 4437d207-03fb-4c75-8a6c-eb6a4946f6cd | Magnum::Optional::Neutron::LBaaS::Listener | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-etcd_lb-2ktdtic4o56a |
>> | loadbalancer | 075c3e23-79be-4a95-8875-7235db051d24 | Magnum::Optional::Neutron::LBaaS::LoadBalancer | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-etcd_lb-2ktdtic4o56a |
>> | floating | 3ffc5f32-a1fe-4948-857b-d304a56ee48c | Magnum::Optional::Neutron::LBaaS::FloatingIP | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-api_lb-mk5ckf7fomri |
>> | monitor | 6fde29bd-a607-464c-a54b-7c38132efb2d | Magnum::Optional::Neutron::LBaaS::HealthMonitor | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-api_lb-mk5ckf7fomri |
>> | pool | 62f92f0e-51dd-4a34-b9b8-f267b745ebd1 | Magnum::Optional::Neutron::LBaaS::Pool | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-api_lb-mk5ckf7fomri |
>> | listener | 7d805d71-e811-422d-b81a-8941ece42bff | Magnum::Optional::Neutron::LBaaS::Listener | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-api_lb-mk5ckf7fomri |
>> | loadbalancer | c7599c6f-b9c0-4ed5-9b20-ed4cd3ee1326 | Magnum::Optional::Neutron::LBaaS::LoadBalancer | CHECK_COMPLETE | 2022-05-11T10:35:16Z | testmulti-r37u5recibdq-api_lb-mk5ckf7fomri |
>> | network_switch | 0dda301f-9d35-432b-a010-618b8ca62f3b | Magnum::NetworkSwitcher | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq-network-ohl3pe5z4pqq |
>> | extrouter_inside | e2f8b157-60b5-40ab-aa0a-9e7888900cd9:subnet_id=3f85b8a4-43fc-43a3-abc2-58982006a1c4 | Magnum::Optional::Neutron::RouterInterface | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq-network-ohl3pe5z4pqq |
>> | private_subnet | 3f85b8a4-43fc-43a3-abc2-58982006a1c4 | Magnum::Optional::Neutron::Subnet | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq-network-ohl3pe5z4pqq |
>> | private_network | eaedb3a2-1619-42af-9fd7-81e02e6537df | Magnum::Optional::Neutron::Net | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq-network-ohl3pe5z4pqq |
>> | extrouter | e2f8b157-60b5-40ab-aa0a-9e7888900cd9 | Magnum::Optional::Neutron::Router | CHECK_COMPLETE | 2022-05-11T10:35:10Z | testmulti-r37u5recibdq-network-ohl3pe5z4pqq |
>>
>> We had a problem on the platform and some VMs went into an error state;
>> in reality they have disappeared. We tried to delete the stack, and the
>> resources that no longer exist are causing these errors. We are looking
>> for a way to delete the stack even if the corresponding resources no
>> longer exist.
>>
>> Regards.
>>
>> On Tue, 29 Nov 2022 at 10:36, Jake Yip wrote:
>>
>>> Hi,
>>>
>>> Can you see what resource it is failing at with `openstack stack
>>> resource list -n5 <stack>`?
>>>
>>> You can also abandon the stack with `openstack stack abandon`. That will
>>> leave stray resources lying around though.
>>>
>>> Regards,
>>> Jake
>>>
>>> On 29/11/2022 2:09 am, wodel youchi wrote:
>>> > Hi,
>>> >
>>> > I have a Magnum cluster stack which contains errors in its
>>> > constituents; some of the VMs (minions) that belong to that cluster no
>>> > longer exist. When I try to delete the stack it fails, and I get:
>>> >
>>> > DELETE aborted (Task delete from ResourceGroup "kube_minions"
>>> > [fddb3056-9b00-4665-b0d6-c3d3f176814b] Stack "testcluter01-puf45b6dxmrn"
>>> > [d10af7f2-6ecd-442b-b1f9-140b79e58d13] Timed out)
>>> >
>>> > Is there a way to force the deletion to proceed even with those errors?
>>> >
>>> > Regards.
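
For anyone hitting the same situation, here is a minimal sketch of the two
commands suggested above. The placeholder <stack-name-or-id> is an assumption
standing in for the stuck cluster stack (testmulti-r37u5recibdq in the listing
above); the nested depth of 5 follows Jake's suggestion:

    # Walk the nested stacks (Magnum clusters nest several levels deep)
    # and look for resources stuck in DELETE_FAILED.
    openstack stack resource list -n 5 <stack-name-or-id>

    # Drop the stack from Heat without touching the underlying resources.
    # This may require enable_stack_abandon = true in heat.conf, and any
    # resources that still exist (servers, volumes, ports) will have to
    # be cleaned up by hand afterwards.
    openstack stack abandon <stack-name-or-id>

After an abandon, leftover servers and volumes can be located with
`openstack server list` and `openstack volume list` and deleted individually.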