From skaplons at redhat.com Tue Aug 1 06:53:00 2023 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 01 Aug 2023 08:53:00 +0200 Subject: [ptls][tc] OpenStack User Survey Updates In-Reply-To: <8B38E573-BF3C-4816-B07F-48CEA3645256@openinfra.dev> References: <8B38E573-BF3C-4816-B07F-48CEA3645256@openinfra.dev> Message-ID: <4953562.GGS8mfJHr0@p1> Hi, Dnia poniedzia?ek, 31 lipca 2023 21:17:07 CEST Allison Price pisze: > Hi Everyone, > > Like Helena mentioned last week, we are closing the 2023 OpenStack User Survey in a few weeks and will then open the 2024 OpenStack User Survey. At this time, we want to offer the project teams and TC the opportunity to update your project specific questions that appear at the end of the survey. As a reminder, for the project-specific questions, these appear if a survey taker selects your project in their deployment and TC questions appear to all survey takers. > > If you and your team would like to update the question, please let me know by Friday, August 18. I know that this is a holiday time for many, so if any team needs some extra time, just let me know. I am also able to share the existing questions with any team that needs a refresher on what is currently in the survey. > > In the meantime, please continue to promote openstack.org/usersurvey to anyone (and everyone!) you know who is running OpenStack or planning to in the future. We want to get as much feedback as we possibly can. > > Cheers, > Allison > Thx Allison for the heads up. I have one question - where I can find existing questions from the survey? Do I need to do 2023 survey to find out what questions are actually there now? -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From knikolla at bu.edu Tue Aug 1 13:41:31 2023 From: knikolla at bu.edu (Nikolla, Kristi) Date: Tue, 1 Aug 2023 13:41:31 +0000 Subject: [tc] Technical Committee next weekly meeting today on August 1, 2023 Message-ID: Hi all, This is a reminder that the next weekly Technical Committee meeting is to be held today (Tuesday, Aug 1, 2023) at 1800 UTC on Zoom. Use the following link to connect https://us06web.zoom.us/j/87108541765?pwd=emlXVXg4QUxrUTlLNDZ2TTllWUM3Zz09 The agenda for the meeting is: ? Roll call ? Follow up on past action items ? tc-members to review https://review.opendev.org/c/openstack/project-team-guide/+/843457 ? Unmaintained status replaces Extended Maintenance ? https://review.opendev.org/c/openstack/governance/+/888771 ? Release notes guidelines for SLURP/NON-SLURP cadence ? Gate health check ? Open Discussion and Reviews ? https://review.opendev.org/q/projects:openstack/governance+is:open Thank you, Kristi Nikolla From satish.txt at gmail.com Tue Aug 1 16:11:23 2023 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 1 Aug 2023 12:11:23 -0400 Subject: [openvswitch][neutron] firewall_driver openvswitch in production Message-ID: Folks, Who is running the OVS firewall driver (firewall_driver = openvswitch) in production and are there any issues with running it which I may not be aware of? We are not yet ready for OVN deployments so have to stick with OVS. LinuxBridge is at the end of its life trying to get rid of any dependency. [securitygroup] firewall_driver = openvswitch -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tony at bakeyournoodle.com Tue Aug 1 18:04:17 2023 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 1 Aug 2023 13:04:17 -0500 Subject: [tc] Technical Committee next weekly meeting today on August 1, 2023 In-Reply-To: References: Message-ID: On Tue, 1 Aug 2023 at 08:47, Nikolla, Kristi wrote: > > Hi all, > > This is a reminder that the next weekly Technical Committee meeting is to be held today (Tuesday, Aug 1, 2023) at 1800 UTC on Zoom. Use the following link to connect https://us06web.zoom.us/j/87108541765?pwd=emlXVXg4QUxrUTlLNDZ2TTllWUM3Zz09 > > The agenda for the meeting is: > > ? Roll call > ? Follow up on past action items > ? tc-members to review https://review.opendev.org/c/openstack/project-team-guide/+/843457 > ? Unmaintained status replaces Extended Maintenance > ? https://review.opendev.org/c/openstack/governance/+/888771 I understand that I'm not on the TC but .... I can't participate in a zoom call, I request time to digest the discussion to date.a quick read of the resolution seems okay but also seems to contain some false assertions. Yours Tony. From knikolla at bu.edu Tue Aug 1 19:52:12 2023 From: knikolla at bu.edu (Nikolla, Kristi) Date: Tue, 1 Aug 2023 19:52:12 +0000 Subject: [tc] Technical Committee next weekly meeting today on August 1, 2023 In-Reply-To: References: Message-ID: <94D08BEF-986C-4B14-9773-0C62640F150C@bu.edu> Hi Tony, Next weeks TC meeting will be held on IRC. We only hold the meeting on Zoom the first Tuesday of every month, with the other Tuesday meetings being held on IRC. Please do comment your concerns on the proposal on Gerrit, and if you can't make it to next week's meeting, we're usually available on the #openstack-tc channel so please feel free to ping us anytime or during the meeting. Best, Kristi On Aug 1, 2023, at 2:04 PM, Tony Breeds wrote: On Tue, 1 Aug 2023 at 08:47, Nikolla, Kristi > wrote: Hi all, This is a reminder that the next weekly Technical Committee meeting is to be held today (Tuesday, Aug 1, 2023) at 1800 UTC on Zoom. Use the following link to connect https://us06web.zoom.us/j/87108541765?pwd=emlXVXg4QUxrUTlLNDZ2TTllWUM3Zz09 The agenda for the meeting is: ? Roll call ? Follow up on past action items ? tc-members to review https://review.opendev.org/c/openstack/project-team-guide/+/843457 ? Unmaintained status replaces Extended Maintenance ? https://review.opendev.org/c/openstack/governance/+/888771 I understand that I'm not on the TC but .... I can't participate in a zoom call, I request time to digest the discussion to date.a quick read of the resolution seems okay but also seems to contain some false assertions. Yours Tony. -------------- next part -------------- An HTML attachment was scrubbed... URL: From allison at openinfra.dev Tue Aug 1 20:02:23 2023 From: allison at openinfra.dev (Allison Price) Date: Tue, 1 Aug 2023 15:02:23 -0500 Subject: [ptls][tc] OpenStack User Survey Updates In-Reply-To: <4953562.GGS8mfJHr0@p1> References: <8B38E573-BF3C-4816-B07F-48CEA3645256@openinfra.dev> <4953562.GGS8mfJHr0@p1> Message-ID: Hi Slawek, I worked with our team to pull the questions from the survey into this Google Doc [1]. Most of the project specific questions are on the second tab, but there are a few on tab 3 as well. Please let me know if you have any questions. 
Cheers, Allison [1] https://docs.google.com/spreadsheets/d/1YZu7bJ4k_nog4ByrX1KNkjkFHJsIu6c9DM6gGO9R4QU/edit?usp=sharing > On Aug 1, 2023, at 1:53 AM, Slawek Kaplonski wrote: > > Hi, > > Dnia poniedzia?ek, 31 lipca 2023 21:17:07 CEST Allison Price pisze: >> Hi Everyone, >> >> Like Helena mentioned last week, we are closing the 2023 OpenStack User Survey in a few weeks and will then open the 2024 OpenStack User Survey. At this time, we want to offer the project teams and TC the opportunity to update your project specific questions that appear at the end of the survey. As a reminder, for the project-specific questions, these appear if a survey taker selects your project in their deployment and TC questions appear to all survey takers. >> >> If you and your team would like to update the question, please let me know by Friday, August 18. I know that this is a holiday time for many, so if any team needs some extra time, just let me know. I am also able to share the existing questions with any team that needs a refresher on what is currently in the survey. >> >> In the meantime, please continue to promote openstack.org/usersurvey to anyone (and everyone!) you know who is running OpenStack or planning to in the future. We want to get as much feedback as we possibly can. >> >> Cheers, >> Allison >> > > Thx Allison for the heads up. I have one question - where I can find existing questions from the survey? Do I need to do 2023 survey to find out what questions are actually there now? > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From satish.txt at gmail.com Tue Aug 1 21:20:46 2023 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 1 Aug 2023 17:20:46 -0400 Subject: [magnum][openstack-ansible][k8s] kube_masters CREATE_FAILED Message-ID: Folks, I am running the Xena release and fedora-coreos-31.X image. My cluster is always throwing an error kube_masters CREATE_FAILED. 
This is my template: openstack coe cluster template create --coe kubernetes --image "fedora-coreos-35.20220116" --flavor gen.medium --master-flavor gen.medium --docker-storage-driver overlay2 --keypair jmp1-key --external-network net_eng_vlan_39 --network-driver flannel --dns-nameserver 8.8.8.8 --labels="container_runtime=containerd,cinder_csi_enabled=false" --labels kube_tag=v1.21.11-rancher1,hyperkube_prefix=docker.io/rancher/ k8s-new-template-31 Command to create cluster: openstack coe cluster create --cluster-template k8s-new-template-31 --master-count 1 --node-count 2 --keypair jmp1-key mycluster31 Here is the output of heat stack [root at ostack-eng-osa images]# heat resource-list mycluster31-bw5yi3lzkw45 WARNING (shell) "heat resource-list" is deprecated, please use "openstack stack resource list" instead +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ | resource_name | physical_resource_id | resource_type | resource_status | updated_time | +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ | api_address_floating_switch | | Magnum::FloatingIPAddressSwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z | | api_address_lb_switch | | Magnum::ApiGatewaySwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z | | api_lb | 99e0f887-fbe2-4b2f-b3a1-b1834c9a21c2 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_api.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z | | etcd_address_lb_switch | | Magnum::ApiGatewaySwitcher | INIT_COMPLETE | 2023-08-01T20:55:49Z | | etcd_lb | d4ba15f3-8862-4f2b-a2cf-53eafd36d286 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_etcd.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z | | kube_cluster_config | | OS::Heat::SoftwareConfig | INIT_COMPLETE | 2023-08-01T20:55:49Z | | kube_cluster_deploy | | OS::Heat::SoftwareDeployment | INIT_COMPLETE | 2023-08-01T20:55:49Z | | kube_masters | 9ac8fc3e-a7d8-4eca-90c6-f66a8e0c43f0 | OS::Heat::ResourceGroup | CREATE_FAILED | 2023-08-01T20:55:49Z | | kube_minions | | OS::Heat::ResourceGroup | INIT_COMPLETE | 2023-08-01T20:55:49Z | | master_nodes_server_group | 19c9b300-f655-4db4-b03e-ea1479c541db | OS::Nova::ServerGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z | | network | a908f229-fe8f-4ab8-b245-e8cf90c1b233 | file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/network.yaml | CREATE_COMPLETE | 2023-08-01T20:55:49Z | | secgroup_kube_master | 79e6b233-1a18-48c4-8a4f-766819eb945f | OS::Neutron::SecurityGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z | | secgroup_kube_minion | 2a908ffb-15bf-45c5-adad-6930b0313e94 | OS::Neutron::SecurityGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z | | secgroup_rule_tcp_kube_minion | 95779e79-a8bc-4ed4-b035-fc21758bd241 | OS::Neutron::SecurityGroupRule | CREATE_COMPLETE | 2023-08-01T20:55:49Z | | secgroup_rule_udp_kube_minion | 2a630b3e-51ca-4504-9013-353cbe7c581b | OS::Neutron::SecurityGroupRule | CREATE_COMPLETE | 2023-08-01T20:55:49Z | | worker_nodes_server_group | d14b0630-95fa-46dc-81e3-2f90e62c7943 | OS::Nova::ServerGroup | CREATE_COMPLETE | 2023-08-01T20:55:49Z | 
+-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ I can ssh into an instance but am not sure what logs I should be chasing to find the proper issue. Any kind of help appreciated -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Tue Aug 1 21:27:32 2023 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 1 Aug 2023 17:27:32 -0400 Subject: [magnum][openstack-ansible][k8s] kube_masters CREATE_FAILED In-Reply-To: References: Message-ID: After some spelunking I found some error messages on instance in journalctl. Why error logs showing podman? https://paste.opendev.org/show/bp1iEBV2meihZmRtH2M1/ On Tue, Aug 1, 2023 at 5:20?PM Satish Patel wrote: > Folks, > > I am running the Xena release and fedora-coreos-31.X image. My cluster is > always throwing an error kube_masters CREATE_FAILED. > > This is my template: > > openstack coe cluster template create --coe kubernetes --image > "fedora-coreos-35.20220116" --flavor gen.medium --master-flavor > gen.medium --docker-storage-driver overlay2 --keypair jmp1-key > --external-network net_eng_vlan_39 --network-driver flannel > --dns-nameserver 8.8.8.8 > --labels="container_runtime=containerd,cinder_csi_enabled=false" --labels > kube_tag=v1.21.11-rancher1,hyperkube_prefix=docker.io/rancher/ > k8s-new-template-31 > > Command to create cluster: > > openstack coe cluster create --cluster-template k8s-new-template-31 > --master-count 1 --node-count 2 --keypair jmp1-key mycluster31 > > Here is the output of heat stack > > [root at ostack-eng-osa images]# heat resource-list mycluster31-bw5yi3lzkw45 > WARNING (shell) "heat resource-list" is deprecated, please use "openstack > stack resource list" instead > > +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ > | resource_name | physical_resource_id | > resource_type > | resource_status | updated_time > | > > +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ > | api_address_floating_switch | | > Magnum::FloatingIPAddressSwitcher > | INIT_COMPLETE | 2023-08-01T20:55:49Z > | > | api_address_lb_switch | | > Magnum::ApiGatewaySwitcher > | INIT_COMPLETE | > 2023-08-01T20:55:49Z | > | api_lb | 99e0f887-fbe2-4b2f-b3a1-b1834c9a21c2 | > file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_api.yaml > | CREATE_COMPLETE | 2023-08-01T20:55:49Z | > | etcd_address_lb_switch | | > Magnum::ApiGatewaySwitcher > | INIT_COMPLETE | > 2023-08-01T20:55:49Z | > | etcd_lb | d4ba15f3-8862-4f2b-a2cf-53eafd36d286 | > file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_etcd.yaml > | CREATE_COMPLETE | 2023-08-01T20:55:49Z | > | kube_cluster_config | | > OS::Heat::SoftwareConfig > | INIT_COMPLETE | > 2023-08-01T20:55:49Z | > | kube_cluster_deploy | | > OS::Heat::SoftwareDeployment > | INIT_COMPLETE | > 2023-08-01T20:55:49Z | > | kube_masters | 9ac8fc3e-a7d8-4eca-90c6-f66a8e0c43f0 | > OS::Heat::ResourceGroup > | CREATE_FAILED | 2023-08-01T20:55:49Z > | > | kube_minions 
| | > OS::Heat::ResourceGroup > | INIT_COMPLETE | 2023-08-01T20:55:49Z > | > | master_nodes_server_group | 19c9b300-f655-4db4-b03e-ea1479c541db | > OS::Nova::ServerGroup > | CREATE_COMPLETE | 2023-08-01T20:55:49Z > | > | network | a908f229-fe8f-4ab8-b245-e8cf90c1b233 | > file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/network.yaml > | CREATE_COMPLETE | 2023-08-01T20:55:49Z | > | secgroup_kube_master | 79e6b233-1a18-48c4-8a4f-766819eb945f | > OS::Neutron::SecurityGroup > | CREATE_COMPLETE | > 2023-08-01T20:55:49Z | > | secgroup_kube_minion | 2a908ffb-15bf-45c5-adad-6930b0313e94 | > OS::Neutron::SecurityGroup > | CREATE_COMPLETE | > 2023-08-01T20:55:49Z | > | secgroup_rule_tcp_kube_minion | 95779e79-a8bc-4ed4-b035-fc21758bd241 | > OS::Neutron::SecurityGroupRule > | CREATE_COMPLETE | > 2023-08-01T20:55:49Z | > | secgroup_rule_udp_kube_minion | 2a630b3e-51ca-4504-9013-353cbe7c581b | > OS::Neutron::SecurityGroupRule > | CREATE_COMPLETE | > 2023-08-01T20:55:49Z | > | worker_nodes_server_group | d14b0630-95fa-46dc-81e3-2f90e62c7943 | > OS::Nova::ServerGroup > | CREATE_COMPLETE | 2023-08-01T20:55:49Z > | > > +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ > > > I can ssh into an instance but am not sure what logs I should be chasing > to find the proper issue. Any kind of help appreciated > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Tue Aug 1 21:30:17 2023 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 1 Aug 2023 17:30:17 -0400 Subject: [magnum][openstack-ansible][k8s] kube_masters CREATE_FAILED In-Reply-To: References: Message-ID: Hmm, what the heck is going on here. Wallaby? (I am running openstack Xena, Am I using the wrong image?) [root at mycluster31-bw5yi3lzkw45-master-0 ~]# podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e8b9a439194e docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1 /usr/bin/start-he... 30 minutes ago Up 30 minutes ago heat-container-agent On Tue, Aug 1, 2023 at 5:27?PM Satish Patel wrote: > After some spelunking I found some error messages on instance in > journalctl. Why error logs showing podman? > > https://paste.opendev.org/show/bp1iEBV2meihZmRtH2M1/ > > On Tue, Aug 1, 2023 at 5:20?PM Satish Patel wrote: > >> Folks, >> >> I am running the Xena release and fedora-coreos-31.X image. My cluster is >> always throwing an error kube_masters CREATE_FAILED. 
>> >> This is my template: >> >> openstack coe cluster template create --coe kubernetes --image >> "fedora-coreos-35.20220116" --flavor gen.medium --master-flavor >> gen.medium --docker-storage-driver overlay2 --keypair jmp1-key >> --external-network net_eng_vlan_39 --network-driver flannel >> --dns-nameserver 8.8.8.8 >> --labels="container_runtime=containerd,cinder_csi_enabled=false" --labels >> kube_tag=v1.21.11-rancher1,hyperkube_prefix=docker.io/rancher/ >> k8s-new-template-31 >> >> Command to create cluster: >> >> openstack coe cluster create --cluster-template k8s-new-template-31 >> --master-count 1 --node-count 2 --keypair jmp1-key mycluster31 >> >> Here is the output of heat stack >> >> [root at ostack-eng-osa images]# heat resource-list mycluster31-bw5yi3lzkw45 >> WARNING (shell) "heat resource-list" is deprecated, please use "openstack >> stack resource list" instead >> >> +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ >> | resource_name | physical_resource_id | >> resource_type >> | resource_status | updated_time >> | >> >> +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ >> | api_address_floating_switch | | >> Magnum::FloatingIPAddressSwitcher >> | INIT_COMPLETE | 2023-08-01T20:55:49Z >> | >> | api_address_lb_switch | | >> Magnum::ApiGatewaySwitcher >> | INIT_COMPLETE | >> 2023-08-01T20:55:49Z | >> | api_lb | 99e0f887-fbe2-4b2f-b3a1-b1834c9a21c2 | >> file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_api.yaml >> | CREATE_COMPLETE | 2023-08-01T20:55:49Z | >> | etcd_address_lb_switch | | >> Magnum::ApiGatewaySwitcher >> | INIT_COMPLETE | >> 2023-08-01T20:55:49Z | >> | etcd_lb | d4ba15f3-8862-4f2b-a2cf-53eafd36d286 | >> file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_etcd.yaml >> | CREATE_COMPLETE | 2023-08-01T20:55:49Z | >> | kube_cluster_config | | >> OS::Heat::SoftwareConfig >> | INIT_COMPLETE | >> 2023-08-01T20:55:49Z | >> | kube_cluster_deploy | | >> OS::Heat::SoftwareDeployment >> | INIT_COMPLETE | >> 2023-08-01T20:55:49Z | >> | kube_masters | 9ac8fc3e-a7d8-4eca-90c6-f66a8e0c43f0 | >> OS::Heat::ResourceGroup >> | CREATE_FAILED | 2023-08-01T20:55:49Z >> | >> | kube_minions | | >> OS::Heat::ResourceGroup >> | INIT_COMPLETE | 2023-08-01T20:55:49Z >> | >> | master_nodes_server_group | 19c9b300-f655-4db4-b03e-ea1479c541db | >> OS::Nova::ServerGroup >> | CREATE_COMPLETE | 2023-08-01T20:55:49Z >> | >> | network | a908f229-fe8f-4ab8-b245-e8cf90c1b233 | >> file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/network.yaml >> | CREATE_COMPLETE | 2023-08-01T20:55:49Z | >> | secgroup_kube_master | 79e6b233-1a18-48c4-8a4f-766819eb945f | >> OS::Neutron::SecurityGroup >> | CREATE_COMPLETE | >> 2023-08-01T20:55:49Z | >> | secgroup_kube_minion | 2a908ffb-15bf-45c5-adad-6930b0313e94 | >> OS::Neutron::SecurityGroup >> | CREATE_COMPLETE | >> 2023-08-01T20:55:49Z | >> | secgroup_rule_tcp_kube_minion | 95779e79-a8bc-4ed4-b035-fc21758bd241 | >> OS::Neutron::SecurityGroupRule >> | CREATE_COMPLETE | >> 2023-08-01T20:55:49Z | >> | secgroup_rule_udp_kube_minion | 2a630b3e-51ca-4504-9013-353cbe7c581b | >> 
OS::Neutron::SecurityGroupRule >> | CREATE_COMPLETE | >> 2023-08-01T20:55:49Z | >> | worker_nodes_server_group | d14b0630-95fa-46dc-81e3-2f90e62c7943 | >> OS::Nova::ServerGroup >> | CREATE_COMPLETE | 2023-08-01T20:55:49Z >> | >> >> +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ >> >> >> I can ssh into an instance but am not sure what logs I should be chasing >> to find the proper issue. Any kind of help appreciated >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From Joern.Kaster at epg.com Wed Aug 2 07:06:01 2023 From: Joern.Kaster at epg.com (=?Windows-1252?Q?Kaster=2C_J=F6rn?=) Date: Wed, 2 Aug 2023 07:06:01 +0000 Subject: [cloudkitty] InfluxDB version 1.8.10 Message-ID: Hello together, we have deployed OpenStack with the standard kolla-ansible toolset. Within is included the rating component cloudkitty with prometheus and InfluxDB. The InfluxDB version that is deployed with kolla-ansible is 1.8.10 [2021-10-11]. The actual newest InfluxDB version is 2.7.1 [2023-04-28]. Are there any plans to migrate in the near future to the newest version? I couldn't find any informations on InfluxDB Sites if the 1.8 branch is supported anymore. It seems not like described in Point 2.4 of [1]. [1] https://www.influxdata.com/legal/support-policy/ [https://images.ctfassets.net/o7xu9whrs0u9/fn7Q8NJ8ctkA2FOf8DPjW/e2c782d7edb86ebcb4f077a3e5420c82/Its-About-Time.-Build-on-InfluxDB.-2.png] Support Policy | InfluxData InfluxData Support Program describes InfluxData?s current support offerings and support policies for the Software and Subscription Services. www.influxdata.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Aug 2 07:13:47 2023 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 02 Aug 2023 09:13:47 +0200 Subject: [ptls][tc] OpenStack User Survey Updates In-Reply-To: References: <8B38E573-BF3C-4816-B07F-48CEA3645256@openinfra.dev> <4953562.GGS8mfJHr0@p1> Message-ID: <2518333.kKket2U23f@p1> Hi, Dnia wtorek, 1 sierpnia 2023 22:02:23 CEST Allison Price pisze: > Hi Slawek, > > I worked with our team to pull the questions from the survey into this Google Doc [1]. Most of the project specific questions are on the second tab, but there are a few on tab 3 as well. > > Please let me know if you have any questions. Thank You very much. I will look at the neutron related questions, discuss that in our team and will get back to You if we will want to change something. > > Cheers, > Allison > > > [1] https://docs.google.com/spreadsheets/d/1YZu7bJ4k_nog4ByrX1KNkjkFHJsIu6c9DM6gGO9R4QU/edit?usp=sharing > > > On Aug 1, 2023, at 1:53 AM, Slawek Kaplonski wrote: > > > > Hi, > > > > Dnia poniedzia?ek, 31 lipca 2023 21:17:07 CEST Allison Price pisze: > >> Hi Everyone, > >> > >> Like Helena mentioned last week, we are closing the 2023 OpenStack User Survey in a few weeks and will then open the 2024 OpenStack User Survey. At this time, we want to offer the project teams and TC the opportunity to update your project specific questions that appear at the end of the survey. As a reminder, for the project-specific questions, these appear if a survey taker selects your project in their deployment and TC questions appear to all survey takers. 
> >> > >> If you and your team would like to update the question, please let me know by Friday, August 18. I know that this is a holiday time for many, so if any team needs some extra time, just let me know. I am also able to share the existing questions with any team that needs a refresher on what is currently in the survey. > >> > >> In the meantime, please continue to promote openstack.org/usersurvey to anyone (and everyone!) you know who is running OpenStack or planning to in the future. We want to get as much feedback as we possibly can. > >> > >> Cheers, > >> Allison > >> > > > > Thx Allison for the heads up. I have one question - where I can find existing questions from the survey? Do I need to do 2023 survey to find out what questions are actually there now? > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Wed Aug 2 07:15:43 2023 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 02 Aug 2023 09:15:43 +0200 Subject: [openstack][neutron[nova][kolla-ansible]instance cannot ping after live migrate In-Reply-To: References: <62631251.QqMBZyFNYO@p1> Message-ID: <2837183.3y46s6aIxz@p1> Hi, Dnia poniedzia?ek, 31 lipca 2023 15:12:43 CEST Satish Patel pisze: > Hi Slawek, > > You are suggesting not to use an OVS base native firewall? No. I didn't say anything like that. This is actually even default firewall driver in devstack with ML2/OVS backend. > > On Mon, Jul 31, 2023 at 3:12?AM Slawek Kaplonski > wrote: > > > Hi, > > > > Dnia niedziela, 30 lipca 2023 17:00:22 CEST Nguy?n H?u Kh?i pisze: > > > > > Hello. > > > > > Is it ok if we use ovs with native firewall driver which I mean don't use > > > > > ovn. How about migration from ovs to ovn. > > > > Regarding migration from ML2/OVS to ML2/OVN backend it's easier to do it > > when You are using ML2/OVS with openvswitch (native) firewall driver as in > > that case plugging of the VMs into br-int will be the same before and after > > migration. > > > > > > > > > > Nguyen Huu Khoi > > > > > > > > > > > > > > > On Sun, Jul 30, 2023 at 8:26?AM Satish Patel > > wrote: > > > > > > > > > > > iptables + linux bridge integration with OVS was very old and OVS ACL > > was > > > > > > not mature enough in earlier days. But nowadays OVN supports OVS base > > ACL > > > > > > and that means it's much more stable. > > > > I'm not sure but I think there are some mixed things here. Generally in > > Neutron we have "backends" like ML2/OVS (neutron-openvswitch-agent) or > > ML2/OVN (with ovn-controller running on compute nodes). There are more > > backends like ML2/Linuxbridge for example but lets not include them here > > and focus only on ML2/OVS and ML2/OVN as those were mentioned. > > > > Now, regarding firewall drivers, in ML2/OVS backend, > > neutron-openvswitch-agent can use one of the following firewall drivers: > > > > * iptables_hybrid - that's the one mentioned by Satish Patel as this "very > > old" solution. Indeed it is using linuxbridge between VM and br-int to > > implement iptables rule which will work on this linuxbridge for the > > instance, > > > > * openvswitch - this is newer firewall driver, where all SG rules are > > implemented on the host as OpenFlow rules in br-int. In this case VM is > > plugged directly to the br-int. 
But this isn't related to the OVN ACLs in > > any way. It's all implemented in the neutron-openvswitch-agent code. > > Details about it are in the: > > https://docs.openstack.org/neutron/latest/admin/config-ovsfwdriver.html > > > > In ML2/OVN backend there is only one implementation of the Security Groups > > and this is based on the OVN ACL mechanism. In this case of course there is > > also no need to use any Linuxbridge between VM and br-int so VM is plugged > > directly into br-int. > > > > > > > > > > > > On Sat, Jul 29, 2023 at 10:29?AM Nguy?n H?u Kh?i < > > > > > > nguyenhuukhoinw at gmail.com> wrote: > > > > > > > > > > > >> Hello. > > > > > >> I just known about ops firewall last week. I am going to compare > > > > > >> between them. > > > > > >> Could you share some experience about why ovs firewall driver over > > > > > >> iptables. > > > > > >> Thank you. > > > > > >> Nguyen Huu Khoi > > > > > >> > > > > > >> > > > > > >> On Sat, Jul 29, 2023 at 5:55?PM Satish Patel > > > > > >> wrote: > > > > > >> > > > > > >>> Why are you not using openvswitch flow based firewall instead of > > > > > >>> Linuxbridge which will add hops in packet path. > > > > > >>> > > > > > >>> Sent from my iPhone > > > > > >>> > > > > > >>> On Jul 27, 2023, at 12:25 PM, Nguy?n H?u Kh?i < > > nguyenhuukhoinw at gmail.com> > > > > > >>> wrote: > > > > > >>> > > > > > >>> ? > > > > > >>> Hello. > > > > > >>> I figured out that my rabbitmq queues are corrupt so neutron port > > cannot > > > > > >>> upgrade security rules. I need delete queues so I can migrate without > > > > > >>> problem. > > > > > >>> > > > > > >>> Thank you so much for replying to me. > > > > > >>> > > > > > >>> On Thu, Jul 27, 2023, 8:11 AM Nguy?n H?u Kh?i < > > nguyenhuukhoinw at gmail.com> > > > > > >>> wrote: > > > > > >>> > > > > > >>>> Hello. > > > > > >>>> > > > > > >>>> When my instances was migrated to other computes. I check on dest > > host > > > > > >>>> and I see that > > > > > >>>> > > > > > >>>> -A neutron-openvswi-i41ec1d15-e -d x.x.x.x(my instance ip)/32 -p > > udp -m > > > > > >>>> udp --sport 67 --dport 68 -j RETURN missing and my instance cannot > > get IP. > > > > > >>>> I must restart neutron_openvswitch_agent then this rule appears and > > I can > > > > > >>>> touch the instance via network. > > > > > >>>> > > > > > >>>> I use openswitch and provider networks. This problem has happened > > this > > > > > >>>> week. after the system was upgraded from xena to yoga and I enabled > > quorum > > > > > >>>> queue. > > > > > >>>> > > > > > >>>> > > > > > >>>> > > > > > >>>> Nguyen Huu Khoi > > > > > >>>> > > > > > >>>> > > > > > >>>> On Wed, Jul 26, 2023 at 5:28?PM Nguy?n H?u Kh?i < > > > > > >>>> nguyenhuukhoinw at gmail.com> wrote: > > > > > >>>> > > > > > >>>>> Because I dont see any error logs. Althought, i set debug log to > > on. > > > > > >>>>> > > > > > >>>>> Your advices are very helpful to me. I will try to dig deeply. I am > > > > > >>>>> lost so some suggests are the best way for me to continue. :) > > > > > >>>>> > > > > > >>>>> On Wed, Jul 26, 2023, 4:39 PM wrote: > > > > > >>>>> > > > > > >>>>>> On Wed, 2023-07-26 at 07:49 +0700, Nguy?n H?u Kh?i wrote: > > > > > >>>>>> > Hello guys. > > > > > >>>>>> > > > > > > >>>>>> > I am using openstack yoga with kolla ansible. > > > > > >>>>>> without logs of some kind i dont think anyoen will be able to hlep > > > > > >>>>>> you with this. > > > > > >>>>>> you have one issue with the config which i noted inline but that > > > > > >>>>>> should not break live migration. 
> > > > > >>>>>> but it would allow it to proceed when otherwise it would have > > failed. > > > > > >>>>>> and it woudl allow this issue to happen instead of the vm goign to > > > > > >>>>>> error ro the migration > > > > > >>>>>> being aborted in pre live migrate. > > > > > >>>>>> > > > > > > >>>>>> > When I migrate: > > > > > >>>>>> > > > > > > >>>>>> > instance1 from host A to host B after that I cannot ping this > > > > > >>>>>> > instance(telnet also). I must restart neutron_openvswitch_agent > > or > > > > > >>>>>> move > > > > > >>>>>> > this instance back to host B then this problem has gone. > > > > > >>>>>> > > > > > > >>>>>> > this is my settings: > > > > > >>>>>> > > > > > > >>>>>> > ----------------- neutron.conf ----------------- > > > > > >>>>>> > [nova] > > > > > >>>>>> > live_migration_events = True > > > > > >>>>>> > ------------------------------------------------ > > > > > >>>>>> > > > > > > >>>>>> > ----------------- nova.conf ----------------- > > > > > >>>>>> > [DEFAULT] > > > > > >>>>>> > vif_plugging_timeout = 600 > > > > > >>>>>> > vif_plugging_is_fatal = False > > > > > >>>>>> you should never run with this set to false in production. > > > > > >>>>>> it will break nova ability to detect if netroking is configured > > > > > >>>>>> when booting or migrating a vm. > > > > > >>>>>> we honestly should have remove this when we removed nova-networks > > > > > >>>>>> > debug = True > > > > > >>>>>> > > > > > > >>>>>> > [compute] > > > > > >>>>>> > live_migration_wait_for_vif_plug = True > > > > > >>>>>> > > > > > > >>>>>> > [workarounds] > > > > > >>>>>> > enable_qemu_monitor_announce_self = True > > > > > >>>>>> > > > > > > >>>>>> > ----------------- openvswitch_agent.ini----------------- > > > > > >>>>>> > [securitygroup] > > > > > >>>>>> > firewall_driver = openvswitch > > > > > >>>>>> > [ovs] > > > > > >>>>>> > openflow_processed_per_port = true > > > > > >>>>>> > > > > > > >>>>>> > I check nova, neutron, ops logs but they are ok. > > > > > >>>>>> > > > > > > >>>>>> > Thank you. > > > > > >>>>>> > > > > > > >>>>>> > > > > > > >>>>>> > Nguyen Huu Khoi > > > > > >>>>>> > > > > > >>>>>> > > > > > > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From james.page at canonical.com Wed Aug 2 12:56:45 2023 From: james.page at canonical.com (James Page) Date: Wed, 2 Aug 2023 13:56:45 +0100 Subject: [ptls][tc] OpenStack User Survey Updates In-Reply-To: <8B38E573-BF3C-4816-B07F-48CEA3645256@openinfra.dev> References: <8B38E573-BF3C-4816-B07F-48CEA3645256@openinfra.dev> Message-ID: Hi Allison On Mon, Jul 31, 2023 at 8:21?PM Allison Price wrote: > Hi Everyone, > > Like Helena mentioned last week, we are closing the 2023 OpenStack User > Survey in a few weeks and will then open the 2024 OpenStack User Survey. At > this time, we want to offer the project teams and TC the opportunity to > update your project specific questions that appear at the end of the > survey. As a reminder, for the project-specific questions, these appear if > a survey taker selects your project in their deployment and TC questions > appear to all survey takers. > > If you and your team would like to update the question, please let me know > by *Friday, August 18. 
*I know that this is a holiday time for many, so > if any team needs some extra time, just let me know. I am also able to > share the existing questions with any team that needs a refresher on what > is currently in the survey. > Please can I request the following updates to the survey questions - neither are project specific questions but I think they are gaps in the survey today. Page 1: Which projects does this deployment currently use, or are you interested in using in the future? (PoC/Testing) Request addition of Sunbeam and OpenStack Charms as projects. Page 2: Which tools are you using to deploy / manage this cluster? Request addition of Sunbeam Request update of Juju to OpenStack Charms Thanks James -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Wed Aug 2 13:26:07 2023 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Wed, 2 Aug 2023 18:56:07 +0530 Subject: Cinder Bug Report 2023-08-02 Message-ID: Hello Argonauts, Cinder Bug Meeting Etherpad Low - RBD migrate_volume code optimization - Status: Recently reported, no comments or fix proposed - NetApp ONTAP - Error on failover-host with REST API - Status: Fix proposed - [Pure Storage] failure to disconnect remote hosts in uniform sync rep cluster - Status: No comments or fix proposed Thanks Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Wed Aug 2 15:33:57 2023 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 02 Aug 2023 16:33:57 +0100 Subject: [SDK][OSC] Core team cleanup Message-ID: ?? The openstacksdk-core Gerrit team contains a number of individuals who have not contributed to SDK in at over a year. * Adrian Turjak * Dean Troyer * Doug Hellmann * Monty Taylor * Paul Belanger In addition, the python-openstackclient-core Gerrit group contains the following individuals who have not contributed to OSC in the same time period. * Dean Troyer * Doug Hellmann * Matt Riedemann * Monty Taylor So that I don't have to remember to follow-up on this in a week's time, I have gone ahead and preemptively removed these individuals from the respective groups. If you are on either list and wish to remain in the groups, please reach out to me and I'll re-add you promptly :) Alternatively, assuming these folks are indeed no longer working on SDK or OpenStack at large, thank you all for your historical contributions. As is customary with emeritus reviewers, if at a future date you wish to resume OpenStack contributions we'd be happy to re-add you to the openstacksdk-core group. Thanks, Stephen From kennelson11 at gmail.com Wed Aug 2 16:44:31 2023 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 2 Aug 2023 11:44:31 -0500 Subject: [SDK][OSC] Core team cleanup In-Reply-To: References: Message-ID: Thanks for cleaning things up Stephen! -Kendall On Wed, Aug 2, 2023 at 10:35?AM Stephen Finucane wrote: > ?? The openstacksdk-core Gerrit team contains a number of individuals who > have > not contributed to SDK in at over a year. > * Adrian Turjak > * Dean Troyer > * Doug Hellmann > * Monty Taylor > * Paul Belanger > > In addition, the python-openstackclient-core Gerrit group contains the > following > individuals who have not contributed to OSC in the same time period. 
> * Dean Troyer > * Doug Hellmann > * Matt Riedemann > * Monty Taylor > > So that I don't have to remember to follow-up on this in a week's time, I > have > gone ahead and preemptively removed these individuals from the respective > groups. If you are on either list and wish to remain in the groups, please > reach > out to me and I'll re-add you promptly :) > > Alternatively, assuming these folks are indeed no longer working on SDK or > OpenStack at large, thank you all for your historical contributions. As is > customary with emeritus reviewers, if at a future date you wish to resume > OpenStack contributions we'd be happy to re-add you to the > openstacksdk-core > group. > > Thanks, > Stephen > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Wed Aug 2 17:22:03 2023 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 02 Aug 2023 18:22:03 +0100 Subject: [placement][sdk] How to debug HTTP 502 errors with placement in DevStack? Message-ID: <5dc7c3427223e5c6debb8edc664c0b1b8e5676e0.camel@redhat.com> We recently merged support for placement traits in openstacksdk. Since then, we've seen an uptick in failures of various functional jobs [1]. The failure is always the same test: openstack.tests.functional.placement.v1.test_trait.TestTrait.test_resource_pr ovider_inventory That test simply creates a new, custom trait and then attempts to list all traits, show an individual trait, and finally delete the trait. The failure occurs during the first step, creation of the custom trait: openstack.exceptions.HttpException: HttpException: 502: Server Error for url: https://10.209.100.9/placement/traits/CUSTOM_A982E0BA1C2B4D08BFD6D2594C678313 , Bad Gateway: response from an upstream server.: The proxy server received an invalid: Apache/2.4.52 (Ubuntu) Server at 10.209.100.9 Port 80: Additionally, a 201 Created: 502 Bad Gateway: error was encountered while trying to use an ErrorDocument to handle the request. I've looked through the various job artefacts and haven't found any smoking guns. I can see placement receive and reply to the request so it would seem something is happening in between. *Fortunately*, this is also reproducible locally against a standard devstack deployment by running the following in the openstacksdk repo: OS_TEST_TIMEOUT=60 tox -e functional-py310 -- \ -n openstack/tests/functional/placement/v1/test_trait.py \ --until-failure Does anyone have any insight into what could be causing this issue and have suggestions for how we might go about debugging it? As things I haven't a clue ? Cheers, Stephen [1] https://zuul.opendev.org/t/openstack/builds?job_name=openstacksdk-functional-devstack&project=openstack%2Fdevstack&branch=master&skip=0 From rosmaita.fossdev at gmail.com Wed Aug 2 19:42:47 2023 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 2 Aug 2023 15:42:47 -0400 Subject: [ptl][osc][sdk] openstackclient/sdk "service cores" Message-ID: Hello Stackers, At the Vancouver 2023 Forum, Artem and Stephen proposed adding individual OpenStack project cores as "service cores" for the python-openstackclient and openstacksdk projects.[0] This will give -2..+2 review powers to subject matter experts for osc/sdk changes affecting particular openstack components, and hopefully speed up the reviewing process. 
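For anyone curious what such a change actually looks like: the review powers end up in the project's Gerrit ACL file in project-config. A rough sketch only (the group name below is illustrative, not a transcription of the example patch linked further down):

    [access "refs/heads/*"]
        label-Code-Review = -2..+2 group openstacksdk-service-core

Note that Workflow (+W) is deliberately not part of this, so approval stays with the regular osc/sdk core team.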
The idea is that service-cores will verify that the cli/sdk code is correct for their particular openstack component, and the "regular" osc/sdk core team (which will have sole +W power) will ensure that changes preserve consistency across the entire osc/sdk. You can give your core team permissions to act as osc/sdk "service cores" by proposing a patch to project-config. You can use the following patch as an example: https://review.opendev.org/c/openstack/project-config/+/890346 cheers, brian [0] https://etherpad.opendev.org/p/oscsdk-vancouver-forum-2023 From nguyenhuukhoinw at gmail.com Wed Aug 2 23:29:57 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Thu, 3 Aug 2023 06:29:57 +0700 Subject: [openvswitch][neutron] firewall_driver openvswitch in production In-Reply-To: References: Message-ID: Hi Satish, I just tested openvswitch firewall driver. It is looking good, I mean no error after changed, but we need config live migrate like that: ----------------- neutron.conf ----------------- [nova] live_migration_events = True ------------------------------------------------ ----------------- nova.conf ----------------- [DEFAULT] vif_plugging_timeout = 600 vif_plugging_is_fatal = true debug = True [compute] live_migration_wait_for_vif_plug = True [workarounds] enable_qemu_monitor_announce_self = True ----------------- openvswitch_agent.ini----------------- [securitygroup] firewall_driver = openvswitch [ovs] openflow_processed_per_port = true These configs from the openstack community. You can prefer from docs. With native firewall backend you must "live_migration_events = True", without it, some instances cannot ping (you need to log in via console to wake up these instances) after live migrate, you can test. I am planning to test like https://thesaitech.wordpress.com/2019/02/15/a-comparative-study-of-openstack-networking-architectures/ to see what benefit ovs with native backend will bring to us. Nguyen Huu Khoi On Tue, Aug 1, 2023 at 11:30?PM Satish Patel wrote: > Folks, > > Who is running the OVS firewall driver (firewall_driver = openvswitch) in > production and are there any issues with running it which I may not be > aware of? We are not yet ready for OVN deployments so have to stick with > OVS. > > LinuxBridge is at the end of its life trying to get rid of any dependency. > > [securitygroup] > firewall_driver = openvswitch > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Wed Aug 2 23:43:41 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Thu, 3 Aug 2023 06:43:41 +0700 Subject: [magnum][openstack-ansible][k8s] kube_masters CREATE_FAILED In-Reply-To: References: Message-ID: Hello Satish, You need install k8s from tar files by using labels below. I think our Magnum too old to use. Just my experience. containerd_tarball_url containerd_tarball_sha256 Nguyen Huu Khoi On Wed, Aug 2, 2023 at 5:23?AM Satish Patel wrote: > Hmm, what the heck is going on here. Wallaby? (I am running openstack > Xena, Am I using the wrong image?) > > [root at mycluster31-bw5yi3lzkw45-master-0 ~]# podman ps > CONTAINER ID IMAGE > COMMAND CREATED STATUS PORTS > NAMES > e8b9a439194e > docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1 > /usr/bin/start-he... 30 minutes ago Up 30 minutes ago > heat-container-agent > > > > On Tue, Aug 1, 2023 at 5:27?PM Satish Patel wrote: > >> After some spelunking I found some error messages on instance in >> journalctl. 
Why error logs showing podman? >> >> https://paste.opendev.org/show/bp1iEBV2meihZmRtH2M1/ >> >> On Tue, Aug 1, 2023 at 5:20?PM Satish Patel wrote: >> >>> Folks, >>> >>> I am running the Xena release and fedora-coreos-31.X image. My cluster >>> is always throwing an error kube_masters CREATE_FAILED. >>> >>> This is my template: >>> >>> openstack coe cluster template create --coe kubernetes --image >>> "fedora-coreos-35.20220116" --flavor gen.medium --master-flavor >>> gen.medium --docker-storage-driver overlay2 --keypair jmp1-key >>> --external-network net_eng_vlan_39 --network-driver flannel >>> --dns-nameserver 8.8.8.8 >>> --labels="container_runtime=containerd,cinder_csi_enabled=false" --labels >>> kube_tag=v1.21.11-rancher1,hyperkube_prefix=docker.io/rancher/ >>> k8s-new-template-31 >>> >>> Command to create cluster: >>> >>> openstack coe cluster create --cluster-template k8s-new-template-31 >>> --master-count 1 --node-count 2 --keypair jmp1-key mycluster31 >>> >>> Here is the output of heat stack >>> >>> [root at ostack-eng-osa images]# heat resource-list >>> mycluster31-bw5yi3lzkw45 >>> WARNING (shell) "heat resource-list" is deprecated, please use >>> "openstack stack resource list" instead >>> >>> +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ >>> | resource_name | physical_resource_id | >>> resource_type >>> | resource_status | updated_time >>> | >>> >>> +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ >>> | api_address_floating_switch | | >>> Magnum::FloatingIPAddressSwitcher >>> | INIT_COMPLETE | 2023-08-01T20:55:49Z >>> | >>> | api_address_lb_switch | | >>> Magnum::ApiGatewaySwitcher >>> | INIT_COMPLETE | >>> 2023-08-01T20:55:49Z | >>> | api_lb | 99e0f887-fbe2-4b2f-b3a1-b1834c9a21c2 | >>> file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_api.yaml >>> | CREATE_COMPLETE | 2023-08-01T20:55:49Z | >>> | etcd_address_lb_switch | | >>> Magnum::ApiGatewaySwitcher >>> | INIT_COMPLETE | >>> 2023-08-01T20:55:49Z | >>> | etcd_lb | d4ba15f3-8862-4f2b-a2cf-53eafd36d286 | >>> file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_etcd.yaml >>> | CREATE_COMPLETE | 2023-08-01T20:55:49Z | >>> | kube_cluster_config | | >>> OS::Heat::SoftwareConfig >>> | INIT_COMPLETE | >>> 2023-08-01T20:55:49Z | >>> | kube_cluster_deploy | | >>> OS::Heat::SoftwareDeployment >>> | INIT_COMPLETE | >>> 2023-08-01T20:55:49Z | >>> | kube_masters | 9ac8fc3e-a7d8-4eca-90c6-f66a8e0c43f0 | >>> OS::Heat::ResourceGroup >>> | CREATE_FAILED | 2023-08-01T20:55:49Z >>> | >>> | kube_minions | | >>> OS::Heat::ResourceGroup >>> | INIT_COMPLETE | 2023-08-01T20:55:49Z >>> | >>> | master_nodes_server_group | 19c9b300-f655-4db4-b03e-ea1479c541db | >>> OS::Nova::ServerGroup >>> | CREATE_COMPLETE | 2023-08-01T20:55:49Z >>> | >>> | network | a908f229-fe8f-4ab8-b245-e8cf90c1b233 | >>> file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/network.yaml >>> | CREATE_COMPLETE | 2023-08-01T20:55:49Z | >>> | secgroup_kube_master | 79e6b233-1a18-48c4-8a4f-766819eb945f | >>> OS::Neutron::SecurityGroup >>> | CREATE_COMPLETE | >>> 2023-08-01T20:55:49Z | >>> | 
secgroup_kube_minion | 2a908ffb-15bf-45c5-adad-6930b0313e94 | >>> OS::Neutron::SecurityGroup >>> | CREATE_COMPLETE | >>> 2023-08-01T20:55:49Z | >>> | secgroup_rule_tcp_kube_minion | 95779e79-a8bc-4ed4-b035-fc21758bd241 | >>> OS::Neutron::SecurityGroupRule >>> | CREATE_COMPLETE | >>> 2023-08-01T20:55:49Z | >>> | secgroup_rule_udp_kube_minion | 2a630b3e-51ca-4504-9013-353cbe7c581b | >>> OS::Neutron::SecurityGroupRule >>> | CREATE_COMPLETE | >>> 2023-08-01T20:55:49Z | >>> | worker_nodes_server_group | d14b0630-95fa-46dc-81e3-2f90e62c7943 | >>> OS::Nova::ServerGroup >>> | CREATE_COMPLETE | 2023-08-01T20:55:49Z >>> | >>> >>> +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ >>> >>> >>> I can ssh into an instance but am not sure what logs I should be chasing >>> to find the proper issue. Any kind of help appreciated >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Aug 2 23:50:16 2023 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 02 Aug 2023 16:50:16 -0700 Subject: [placement][sdk] How to debug HTTP 502 errors with placement in DevStack? In-Reply-To: <5dc7c3427223e5c6debb8edc664c0b1b8e5676e0.camel@redhat.com> References: <5dc7c3427223e5c6debb8edc664c0b1b8e5676e0.camel@redhat.com> Message-ID: On Wed, Aug 2, 2023, at 10:22 AM, Stephen Finucane wrote: > We recently merged support for placement traits in openstacksdk. Since then, > we've seen an uptick in failures of various functional jobs [1]. The failure is > always the same test: > > openstack.tests.functional.placement.v1.test_trait.TestTrait.test_resource_pr > ovider_inventory > > That test simply creates a new, custom trait and then attempts to list all > traits, show an individual trait, and finally delete the trait. The failure > occurs during the first step, creation of the custom trait: > > openstack.exceptions.HttpException: HttpException: 502: Server Error for url: > https://10.209.100.9/placement/traits/CUSTOM_A982E0BA1C2B4D08BFD6D2594C678313 > , Bad Gateway: response from an upstream server.: The proxy server received > an invalid: Apache/2.4.52 (Ubuntu) Server at 10.209.100.9 Port 80: > Additionally, a 201 Created: 502 Bad Gateway: error was encountered while > trying to use an ErrorDocument to handle the request. > > I've looked through the various job artefacts and haven't found any smoking > guns. I can see placement receive and reply to the request so it would seem > something is happening in between. Yes, this appears to be some problem in apache2 (possibly caused by the response but as far as the backend server is concerned everything is ok). I would increase the log level of the apache server. There are two places to do this 1) for the https frontend here [2] and 2) for the http wsgi backend here [3]. I think the first file comes from the apache2 package in Ubuntu so I'm not sure what the best way to modify that is. The https proxy file is configured by devstack/lib/tls in a heredoc which you can modify for that frontend. You mention this is locally reproducible so you may be able to simply edit those files on disk and restart apache2 without needing to modify devstack. Hopefully, extra logging will give a better indication of what is going on. 
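If it helps, a rough sketch of what I mean (the vhost file names are taken from the job logs referenced above, so double check where they actually live on your node -- normally under /etc/apache2/sites-enabled/ on Ubuntu):

    # inside the frontend vhost (000-default.conf) and/or the
    # devstack-generated proxy vhost (http-services-tls-proxy.conf),
    # bump the verbosity:
    LogLevel trace2

    # then restart apache and re-run the failing test:
    sudo systemctl restart apache2

With that in place the proxy should log why it considered the backend response invalid, in /var/log/apache2/error.log or whatever ErrorLog those vhosts point at.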
> > *Fortunately*, this is also reproducible locally against a standard devstack > deployment by running the following in the openstacksdk repo: > > OS_TEST_TIMEOUT=60 tox -e functional-py310 -- \ > -n openstack/tests/functional/placement/v1/test_trait.py \ > --until-failure > > Does anyone have any insight into what could be causing this issue and have > suggestions for how we might go about debugging it? As things I haven't a clue > ? > > Cheers, > Stephen > > [1] > https://zuul.opendev.org/t/openstack/builds?job_name=openstacksdk-functional-devstack&project=openstack%2Fdevstack&branch=master&skip=0 [2] https://zuul.opendev.org/t/openstack/build/b37d2aedd1514682b3672c4b732b2717/log/controller/logs/apache_config/000-default_conf.txt#14-18 [3] https://zuul.opendev.org/t/openstack/build/b37d2aedd1514682b3672c4b732b2717/log/controller/logs/apache_config/http-services-tls-proxy_conf.txt#27 From oliver.weinmann at me.com Thu Aug 3 07:51:33 2023 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Thu, 3 Aug 2023 09:51:33 +0200 Subject: [magnum][openstack-ansible][k8s] kube_masters CREATE_FAILED In-Reply-To: References: Message-ID: <8E9547D0-46AC-443A-853A-9A6097C07274@me.com> An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Thu Aug 3 13:11:15 2023 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 3 Aug 2023 15:11:15 +0200 Subject: [cloudkitty] InfluxDB version 1.8.10 In-Reply-To: References: Message-ID: Hello J?rn, CloudKitty PTL Rafael Weing?rtner has been working on support for InfluxDB 2, which I believe he is planning to submit upstream soon. Best regards, Pierre Riteau (priteau) On Wed, 2 Aug 2023 at 09:12, Kaster, J?rn wrote: > Hello together, > we have deployed OpenStack with the standard kolla-ansible toolset. Within > is included the rating component cloudkitty with prometheus and InfluxDB. > The InfluxDB version that is deployed with kolla-ansible is 1.8.10 > [2021-10-11]. > The actual newest InfluxDB version is 2.7.1 [2023-04-28]. > Are there any plans to migrate in the near future to the newest version? > I couldn't find any informations on InfluxDB Sites if the 1.8 branch is > supported anymore. It seems not like described in Point 2.4 of [1]. > > [1] https://www.influxdata.com/legal/support-policy/ > > > > Support Policy | InfluxData > > InfluxData Support Program describes InfluxData?s current support > offerings and support policies for the Software and Subscription Services. > www.influxdata.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Joern.Kaster at epg.com Thu Aug 3 13:16:03 2023 From: Joern.Kaster at epg.com (=?Windows-1252?Q?Kaster=2C_J=F6rn?=) Date: Thu, 3 Aug 2023 13:16:03 +0000 Subject: AW: [cloudkitty] InfluxDB version 1.8.10 In-Reply-To: References: Message-ID: Hello Pierre, thank you for this answer. Is there any issue open for this? Have not found anything on storyboard. ________________________________ Von: Pierre Riteau Gesendet: Donnerstag, 3. August 2023 15:11 An: Kaster, J?rn Cc: openstack-discuss at lists.openstack.org Betreff: Re: [cloudkitty] InfluxDB version 1.8.10 OUTSIDE-EPG! Hello J?rn, CloudKitty PTL Rafael Weing?rtner has been working on support for InfluxDB 2, which I believe he is planning to submit upstream soon. Best regards, Pierre Riteau (priteau) On Wed, 2 Aug 2023 at 09:12, Kaster, J?rn > wrote: Hello together, we have deployed OpenStack with the standard kolla-ansible toolset. Within is included the rating component cloudkitty with prometheus and InfluxDB. 
The InfluxDB version that is deployed with kolla-ansible is 1.8.10 [2021-10-11]. The actual newest InfluxDB version is 2.7.1 [2023-04-28]. Are there any plans to migrate in the near future to the newest version? I couldn't find any informations on InfluxDB Sites if the 1.8 branch is supported anymore. It seems not like described in Point 2.4 of [1]. [1] https://www.influxdata.com/legal/support-policy/ [https://images.ctfassets.net/o7xu9whrs0u9/fn7Q8NJ8ctkA2FOf8DPjW/e2c782d7edb86ebcb4f077a3e5420c82/Its-About-Time.-Build-on-InfluxDB.-2.png] Support Policy | InfluxData InfluxData Support Program describes InfluxData?s current support offerings and support policies for the Software and Subscription Services. www.influxdata.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Thu Aug 3 13:45:35 2023 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 3 Aug 2023 15:45:35 +0200 Subject: [cloudkitty] InfluxDB version 1.8.10 In-Reply-To: References: Message-ID: I created a new one here: https://storyboard.openstack.org/#!/story/2010863 On Thu, 3 Aug 2023 at 15:16, Kaster, J?rn wrote: > Hello Pierre, > thank you for this answer. Is there any issue open for this? > Have not found anything on storyboard. > ------------------------------ > *Von:* Pierre Riteau > *Gesendet:* Donnerstag, 3. August 2023 15:11 > *An:* Kaster, J?rn > *Cc:* openstack-discuss at lists.openstack.org < > openstack-discuss at lists.openstack.org> > *Betreff:* Re: [cloudkitty] InfluxDB version 1.8.10 > > > OUTSIDE-EPG! > > Hello J?rn, > > CloudKitty PTL Rafael Weing?rtner has been working on support for InfluxDB > 2, which I believe he is planning to submit upstream soon. > > Best regards, > Pierre Riteau (priteau) > > On Wed, 2 Aug 2023 at 09:12, Kaster, J?rn wrote: > > Hello together, > we have deployed OpenStack with the standard kolla-ansible toolset. Within > is included the rating component cloudkitty with prometheus and InfluxDB. > The InfluxDB version that is deployed with kolla-ansible is 1.8.10 > [2021-10-11]. > The actual newest InfluxDB version is 2.7.1 [2023-04-28]. > Are there any plans to migrate in the near future to the newest version? > I couldn't find any informations on InfluxDB Sites if the 1.8 branch is > supported anymore. It seems not like described in Point 2.4 of [1]. > > [1] https://www.influxdata.com/legal/support-policy/ > > > > Support Policy | InfluxData > > InfluxData Support Program describes InfluxData?s current support > offerings and support policies for the Software and Subscription Services. > www.influxdata.com > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Aug 3 14:03:08 2023 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 3 Aug 2023 10:03:08 -0400 Subject: [magnum][openstack-ansible][k8s] kube_masters CREATE_FAILED In-Reply-To: <8E9547D0-46AC-443A-853A-9A6097C07274@me.com> References: <8E9547D0-46AC-443A-853A-9A6097C07274@me.com> Message-ID: Thank you for reply folks, But after attempting so many images I settled on fedora-coreos-31.20200517.3.0-openstack.x86_64.qcow2 for Xena release. Now everything is working fine. Look like magnum/openstack/fedoracore all should be aligned on specific versions. 
On Thu, Aug 3, 2023 at 3:51?AM Oliver Weinmann wrote: > Hi satish, > > For me it is working fine when using the following template: > > openstack coe cluster template create k8s-flan-small-35-1.21.11 \ > --image Fedora-CoreOS-35 \ > --keypair mykey \ > --external-network ext-net \ > --dns-nameserver 8.8.8.8 \ > --flavor m1.small \ > --master-flavor m1.small \ > --volume-driver cinder \ > --docker-volume-size 10 \ > --network-driver flannel \ > --docker-storage-driver overlay2 \ > --coe kubernetes \ > --labels kube_tag=v1.21.11-rancher1,hyperkube_prefix=docker.io/rancher/ > > > I just recently deployed Antelope 2023.1 with kolla-Ansible and here > magnum works much better out of the box using the default settings. I have > never managed to get containerd working in Yoga or Zed. > > You can find more info on my blog: > > (( > https://www.roksblog.de/deploy-kubernetes-clusters-in-openstack-within-minutes-with-magnum/ > )) > > Cheers, > Oliver > > Von meinem iPhone gesendet > > Am 03.08.2023 um 02:04 schrieb Nguy?n H?u Kh?i >: > > ? > Hello Satish, > You need install k8s from tar files by using labels below. I think our > Magnum too old to use. Just my experience. > > containerd_tarball_url > containerd_tarball_sha256 > > > > Nguyen Huu Khoi > > > On Wed, Aug 2, 2023 at 5:23?AM Satish Patel wrote: > >> Hmm, what the heck is going on here. Wallaby? (I am running openstack >> Xena, Am I using the wrong image?) >> >> [root at mycluster31-bw5yi3lzkw45-master-0 ~]# podman ps >> CONTAINER ID IMAGE >> COMMAND CREATED STATUS PORTS >> NAMES >> e8b9a439194e >> docker.io/openstackmagnum/heat-container-agent:wallaby-stable-1 >> /usr/bin/start-he... 30 minutes ago Up 30 minutes ago >> heat-container-agent >> >> >> >> On Tue, Aug 1, 2023 at 5:27?PM Satish Patel wrote: >> >>> After some spelunking I found some error messages on instance in >>> journalctl. Why error logs showing podman? >>> >>> https://paste.opendev.org/show/bp1iEBV2meihZmRtH2M1/ >>> >>> On Tue, Aug 1, 2023 at 5:20?PM Satish Patel >>> wrote: >>> >>>> Folks, >>>> >>>> I am running the Xena release and fedora-coreos-31.X image. My cluster >>>> is always throwing an error kube_masters CREATE_FAILED. 
>>>> >>>> This is my template: >>>> >>>> openstack coe cluster template create --coe kubernetes --image >>>> "fedora-coreos-35.20220116" --flavor gen.medium --master-flavor >>>> gen.medium --docker-storage-driver overlay2 --keypair jmp1-key >>>> --external-network net_eng_vlan_39 --network-driver flannel >>>> --dns-nameserver 8.8.8.8 >>>> --labels="container_runtime=containerd,cinder_csi_enabled=false" --labels >>>> kube_tag=v1.21.11-rancher1,hyperkube_prefix=docker.io/rancher/ >>>> k8s-new-template-31 >>>> >>>> Command to create cluster: >>>> >>>> openstack coe cluster create --cluster-template k8s-new-template-31 >>>> --master-count 1 --node-count 2 --keypair jmp1-key mycluster31 >>>> >>>> Here is the output of heat stack >>>> >>>> [root at ostack-eng-osa images]# heat resource-list >>>> mycluster31-bw5yi3lzkw45 >>>> WARNING (shell) "heat resource-list" is deprecated, please use >>>> "openstack stack resource list" instead >>>> >>>> +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ >>>> | resource_name | physical_resource_id >>>> | resource_type >>>> | resource_status | updated_time >>>> | >>>> >>>> +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ >>>> | api_address_floating_switch | >>>> | Magnum::FloatingIPAddressSwitcher >>>> | INIT_COMPLETE | >>>> 2023-08-01T20:55:49Z | >>>> | api_address_lb_switch | >>>> | Magnum::ApiGatewaySwitcher >>>> | INIT_COMPLETE | >>>> 2023-08-01T20:55:49Z | >>>> | api_lb | 99e0f887-fbe2-4b2f-b3a1-b1834c9a21c2 >>>> | >>>> file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_api.yaml >>>> | CREATE_COMPLETE | 2023-08-01T20:55:49Z | >>>> | etcd_address_lb_switch | >>>> | Magnum::ApiGatewaySwitcher >>>> | INIT_COMPLETE | >>>> 2023-08-01T20:55:49Z | >>>> | etcd_lb | d4ba15f3-8862-4f2b-a2cf-53eafd36d286 >>>> | >>>> file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/lb_etcd.yaml >>>> | CREATE_COMPLETE | 2023-08-01T20:55:49Z | >>>> | kube_cluster_config | >>>> | OS::Heat::SoftwareConfig >>>> | INIT_COMPLETE | >>>> 2023-08-01T20:55:49Z | >>>> | kube_cluster_deploy | >>>> | OS::Heat::SoftwareDeployment >>>> | INIT_COMPLETE | >>>> 2023-08-01T20:55:49Z | >>>> | kube_masters | 9ac8fc3e-a7d8-4eca-90c6-f66a8e0c43f0 >>>> | OS::Heat::ResourceGroup >>>> | CREATE_FAILED | >>>> 2023-08-01T20:55:49Z | >>>> | kube_minions | >>>> | OS::Heat::ResourceGroup >>>> | INIT_COMPLETE | >>>> 2023-08-01T20:55:49Z | >>>> | master_nodes_server_group | 19c9b300-f655-4db4-b03e-ea1479c541db >>>> | OS::Nova::ServerGroup >>>> | CREATE_COMPLETE | >>>> 2023-08-01T20:55:49Z | >>>> | network | a908f229-fe8f-4ab8-b245-e8cf90c1b233 >>>> | >>>> file:///openstack/venvs/magnum-24.5.1/lib/python3.8/site-packages/magnum/drivers/common/templates/network.yaml >>>> | CREATE_COMPLETE | 2023-08-01T20:55:49Z | >>>> | secgroup_kube_master | 79e6b233-1a18-48c4-8a4f-766819eb945f >>>> | OS::Neutron::SecurityGroup >>>> | CREATE_COMPLETE | >>>> 2023-08-01T20:55:49Z | >>>> | secgroup_kube_minion | 2a908ffb-15bf-45c5-adad-6930b0313e94 >>>> | OS::Neutron::SecurityGroup >>>> | CREATE_COMPLETE | >>>> 2023-08-01T20:55:49Z | >>>> | secgroup_rule_tcp_kube_minion | 
95779e79-a8bc-4ed4-b035-fc21758bd241 >>>> | OS::Neutron::SecurityGroupRule >>>> | CREATE_COMPLETE | >>>> 2023-08-01T20:55:49Z | >>>> | secgroup_rule_udp_kube_minion | 2a630b3e-51ca-4504-9013-353cbe7c581b >>>> | OS::Neutron::SecurityGroupRule >>>> | CREATE_COMPLETE | >>>> 2023-08-01T20:55:49Z | >>>> | worker_nodes_server_group | d14b0630-95fa-46dc-81e3-2f90e62c7943 >>>> | OS::Nova::ServerGroup >>>> | CREATE_COMPLETE | >>>> 2023-08-01T20:55:49Z | >>>> >>>> +-------------------------------+--------------------------------------+----------------------------------------------------------------------------------------------------------------+-----------------+----------------------+ >>>> >>>> >>>> I can ssh into an instance but am not sure what logs I should be >>>> chasing to find the proper issue. Any kind of help appreciated >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Thu Aug 3 15:09:29 2023 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Thu, 3 Aug 2023 17:09:29 +0200 Subject: [swift][OpenStackSDK][keystone] Warnings about (my) swift endpoint URL by keystoneauth1 discovery Message-ID: Hey openstack-discuss, we use Ceph RadosGW (Quincy release) to provide Swift as part of an OpenStack cloud. The endpoints are configured as such: > # openstack endpoint list > +----------------------------------+--------+--------------+----------------+---------+-----------+----------------------------------------------------------------------+ > | ID?????????????????????????????? | Region | Service Name | Service > Type?? | Enabled | Interface | URL | > +----------------------------------+--------+--------------+----------------+---------+-----------+----------------------------------------------------------------------+ > | 1234567890 | region ?? | swift??????? | object-store?? | True??? | > public??? | > https://object-store.region.cloud.example.com/swift/v1/AUTH_%(tenant_id)s > | > | 2345678901 |?region??? | swift??????? | object-store?? | True??? | > internal? | > https://object-store.region.cloud.example.com/swift/v1/AUTH_%(tenant_id)s > | > | 3456789012 |?region??? | swift??????? | object-store?? | True??? | > admin???? | > https://object-store.region.cloud.example.com/swift/v1/AUTH_%(tenant_id)s > | > so according to https://docs.ceph.com/en/latest/radosgw/keystone/#cross-project-tenant-access, if I am not mistaken. I can also use the openstack client to access containers: > # openstack container list > +---------------+ > | Name????????? | > +---------------+ > | containera ? | > | containerb | > +---------------+ and there is no warnings or other apparent issues. But when using "project cleanup" via e.g. "openstack project cleanup --dry-run --project $PROJECT" I see multiple warnings like > Failed to contact the endpoint at > https://object-store.region.cloud.example.com/swift/v1/AUTH_f2bc4bd34567ddc341e197456789 > for discovery. Fallback to using that endpoint as the base url. > but only mentioning the swift endpoint. Following the warning string this originates from the discovery method of keystoneauth1, see https://github.com/openstack/keystoneauth/blob/28048af9593740315df8d9027c3bd6cae5e0a715/keystoneauth1/discover.py#L1243. Could anybody help me understand what is issue might be? Why does this warning not appear for every use of object storage / swift via openstackclient? Is there any way for me to "fix" this? Could this even be a bug? 
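For anyone wanting to poke at this, a minimal sketch that separates the plain data path from the capability lookup, with debug logging switched on so the keystoneauth discovery attempts become visible (the cloud name is only a placeholder for a clouds.yaml entry):

import openstack
from openstack import exceptions

# keystoneauth1 discovery messages become visible with debug logging
openstack.enable_logging(debug=True)

conn = openstack.connect(cloud='mycloud')  # placeholder clouds.yaml entry

# Plain data path: requests go to .../swift/v1/AUTH_<project>/... and work
for container in conn.object_store.containers():
    print(container.name)

# Capability lookup: this is the /info request the cleanup code path relies on
try:
    print(conn.object_store.get_info())
except exceptions.SDKException as exc:
    print('capability discovery failed: %s' % exc)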
Thanks and with kind regards Christian From fungi at yuggoth.org Thu Aug 3 15:32:38 2023 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 3 Aug 2023 15:32:38 +0000 Subject: [ceph][OpenStackSDK][keystone] Warnings about (my) swift endpoint URL by keystoneauth1 discovery In-Reply-To: References: Message-ID: <20230803153237.jizkndfw5kgopn47@yuggoth.org> On 2023-08-03 17:09:29 +0200 (+0200), Christian Rohmann wrote: > we use Ceph RadosGW (Quincy release) to provide Swift as part of > an OpenStack cloud. [...] While I don't have an answer for your question, I did want to clear up any potential misunderstanding. Please be aware that Ceph RadosGW is not Swift, it is a popular non-OpenStack alternative to using actual Swift, and supplies a partial replica of Swift's user-facing API (but with a substantially different backend implementation). Ceph RadosGW does not have feature-parity with Swift, lacking a number of Swift's features and sometimes having much different behaviors or performance even for the features it attempts to replicate. So just to reiterate, Ceph RadosGW isn't Swift any more than Swift's S3-API is S3. It's merely a compatibility shim, and you shouldn't expect it to work the same way Swift does. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jean-francois.taltavull at elca.ch Thu Aug 3 16:10:53 2023 From: jean-francois.taltavull at elca.ch (=?iso-8859-1?Q?Taltavull_Jean-Fran=E7ois?=) Date: Thu, 3 Aug 2023 16:10:53 +0000 Subject: [RALLY] Running Rally tasks and Tempest tests in multi-user context Message-ID: Hi openstack-discuss, I'm currently using Rally v3.4.0 to test our OpenStack Zed platform. Rally has been deployed on a dedicated virtual machine and Rally tasks and Tempest tests, launched on this machine by Rundeck, run pretty well. Now, I wish every OpenStack team member could launch whatever scenario or test he wants, when he wants, for example after having applied a service configuration change on the staging platform. And a question is arising: can several users launch different Rally scenarios or Tempest tests at the same time, from their own Linux account/environment, using the same Rally, the one which is deployed on the dedicated machine ? Thanks and best regards, Jean-Francois -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Thu Aug 3 18:30:11 2023 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 03 Aug 2023 19:30:11 +0100 Subject: [placement][sdk] How to debug HTTP 502 errors with placement in DevStack? In-Reply-To: References: <5dc7c3427223e5c6debb8edc664c0b1b8e5676e0.camel@redhat.com> Message-ID: On Wed, 2023-08-02 at 16:50 -0700, Clark Boylan wrote: > On Wed, Aug 2, 2023, at 10:22 AM, Stephen Finucane wrote: > > We recently merged support for placement traits in openstacksdk. Since then, > > we've seen an uptick in failures of various functional jobs [1]. The failure is > > always the same test: > > > > openstack.tests.functional.placement.v1.test_trait.TestTrait.test_resource_pr > > ovider_inventory > > > > That test simply creates a new, custom trait and then attempts to list all > > traits, show an individual trait, and finally delete the trait. 
The failure > > occurs during the first step, creation of the custom trait: > > > > openstack.exceptions.HttpException: HttpException: 502: Server Error for url: > > https://10.209.100.9/placement/traits/CUSTOM_A982E0BA1C2B4D08BFD6D2594C678313 > > , Bad Gateway: response from an upstream server.: The proxy server received > > an invalid: Apache/2.4.52 (Ubuntu) Server at 10.209.100.9 Port 80: > > Additionally, a 201 Created: 502 Bad Gateway: error was encountered while > > trying to use an ErrorDocument to handle the request. > > > > I've looked through the various job artefacts and haven't found any smoking > > guns. I can see placement receive and reply to the request so it would seem > > something is happening in between. > > Yes, this appears to be some problem in apache2 (possibly caused by the response but as far as the backend server is concerned everything is ok). I would increase the log level of the apache server. There are two places to do this 1) for the https frontend here [2] and 2) for the http wsgi backend here [3]. I think the first file comes from the apache2 package in Ubuntu so I'm not sure what the best way to modify that is. The https proxy file is configured by devstack/lib/tls in a heredoc which you can modify for that frontend. > > You mention this is locally reproducible so you may be able to simply edit those files on disk and restart apache2 without needing to modify devstack. Hopefully, extra logging will give a better indication of what is going on. I gave this a shot today but didn't get anywhere, unfortunately. I've posted full logs from two PUT requests here [1]. Diffing them, they're effectively identical right up until the response is sent. As in the CI, I can't see anything wrong in the Placement logs themselves either. I tried adding logging at multiple points and the only thing that seemed to "fix" things was adding large logs (dumping the entire request object) in 'placement.wsgi_wrapper', but I don't know if that was a fluke or what. Back to the drawing board it would seem. Stephen [1] https://paste.opendev.org/show/bd6TFQXwj0zmHF0EHpqA/ > > > > > *Fortunately*, this is also reproducible locally against a standard devstack > > deployment by running the following in the openstacksdk repo: > > > > OS_TEST_TIMEOUT=60 tox -e functional-py310 -- \ > > -n openstack/tests/functional/placement/v1/test_trait.py \ > > --until-failure > > > > Does anyone have any insight into what could be causing this issue and have > > suggestions for how we might go about debugging it? As things I haven't a clue > > ? > > > > Cheers, > > Stephen > > > > [1] > > https://zuul.opendev.org/t/openstack/builds?job_name=openstacksdk-functional-devstack&project=openstack%2Fdevstack&branch=master&skip=0 > > [2] https://zuul.opendev.org/t/openstack/build/b37d2aedd1514682b3672c4b732b2717/log/controller/logs/apache_config/000-default_conf.txt#14-18 > [3] https://zuul.opendev.org/t/openstack/build/b37d2aedd1514682b3672c4b732b2717/log/controller/logs/apache_config/http-services-tls-proxy_conf.txt#27 > From allison at openinfra.dev Thu Aug 3 20:15:22 2023 From: allison at openinfra.dev (Allison Price) Date: Thu, 3 Aug 2023 15:15:22 -0500 Subject: [ptls][tc] OpenStack User Survey Updates In-Reply-To: References: <8B38E573-BF3C-4816-B07F-48CEA3645256@openinfra.dev> Message-ID: <2BD67835-6F1D-4398-A328-D7836B56AC2F@openinfra.dev> Hi James, Yes, we can make both of those updates. 
Cheers, Allison > On Aug 2, 2023, at 7:56 AM, James Page wrote: > > Hi Allison > > On Mon, Jul 31, 2023 at 8:21?PM Allison Price > wrote: >> Hi Everyone, >> >> Like Helena mentioned last week, we are closing the 2023 OpenStack User Survey in a few weeks and will then open the 2024 OpenStack User Survey. At this time, we want to offer the project teams and TC the opportunity to update your project specific questions that appear at the end of the survey. As a reminder, for the project-specific questions, these appear if a survey taker selects your project in their deployment and TC questions appear to all survey takers. >> >> If you and your team would like to update the question, please let me know by Friday, August 18. I know that this is a holiday time for many, so if any team needs some extra time, just let me know. I am also able to share the existing questions with any team that needs a refresher on what is currently in the survey. > > Please can I request the following updates to the survey questions - neither are project specific questions but I think they are gaps in the survey today. > > Page 1: > > Which projects does this deployment currently use, or are you interested in using in the future? (PoC/Testing) > > Request addition of Sunbeam and OpenStack Charms as projects. > > Page 2: > > Which tools are you using to deploy / manage this cluster? > > Request addition of Sunbeam > Request update of Juju to OpenStack Charms > > Thanks > > James -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Fri Aug 4 04:35:11 2023 From: satish.txt at gmail.com (Satish Patel) Date: Fri, 4 Aug 2023 00:35:11 -0400 Subject: [openvswitch][neutron] firewall_driver openvswitch in production In-Reply-To: References: Message-ID: Thanks for the update. I am going to switch my firewall driver to openvswitch and will update here for any issues or gotchas!!! On Wed, Aug 2, 2023 at 7:30?PM Nguy?n H?u Kh?i wrote: > Hi Satish, > I just tested openvswitch firewall driver. > > It is looking good, I mean no error after changed, but we need config live > migrate like that: > > ----------------- neutron.conf ----------------- > [nova] > live_migration_events = True > ------------------------------------------------ > > ----------------- nova.conf ----------------- > [DEFAULT] > vif_plugging_timeout = 600 > vif_plugging_is_fatal = true > debug = True > > [compute] > live_migration_wait_for_vif_plug = True > > [workarounds] > enable_qemu_monitor_announce_self = True > > ----------------- openvswitch_agent.ini----------------- > > [securitygroup] > firewall_driver = openvswitch > [ovs] > openflow_processed_per_port = true > > These configs from the openstack community. You can prefer from docs. > > With native firewall backend you must "live_migration_events = True", > without it, some instances cannot ping (you need to log in via console to > wake up these instances) after live migrate, you can test. > > I am planning to test like > > > https://thesaitech.wordpress.com/2019/02/15/a-comparative-study-of-openstack-networking-architectures/ > > to see what benefit ovs with native backend will bring to us. > > Nguyen Huu Khoi > > > On Tue, Aug 1, 2023 at 11:30?PM Satish Patel wrote: > >> Folks, >> >> Who is running the OVS firewall driver (firewall_driver = openvswitch) >> in production and are there any issues with running it which I may not be >> aware of? We are not yet ready for OVN deployments so have to stick with >> OVS. 
>> >> LinuxBridge is at the end of its life trying to get rid of any >> dependency. >> >> [securitygroup] >> firewall_driver = openvswitch >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pureewat.k at bangmod.co.th Fri Aug 4 06:51:40 2023 From: pureewat.k at bangmod.co.th (Pureewat Kaewpoi) Date: Fri, 4 Aug 2023 06:51:40 +0000 Subject: How to add Audit middleware in placement-api Message-ID: Dear Community ! According to this document (https://docs.openstack.org/keystonemiddleware/yoga/audit.html) It say, I have to add "audit" middleware into api-paste.ini file. But placement api It not have api-paste.ini and It looks like placement calling middleware in this file (https://opendev.org/openstack/placement/src/branch/master/placement/deploy.py#L108) As I understand I have to change some code to use audit middleware ? Or Just create api-paste.ini for placement-api and config wsgi to use this paste ? Best Regards, Pureewat -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Fri Aug 4 07:49:07 2023 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Fri, 4 Aug 2023 09:49:07 +0200 Subject: [ceph][OpenStackSDK][keystone] Warnings about (my) swift endpoint URL by keystoneauth1 discovery In-Reply-To: <20230803153237.jizkndfw5kgopn47@yuggoth.org> References: <20230803153237.jizkndfw5kgopn47@yuggoth.org> Message-ID: <09b09a34-e183-9189-93af-5c79bcf54ec3@inovex.de> On 03/08/2023 17:32, Jeremy Stanley wrote: > On 2023-08-03 17:09:29 +0200 (+0200), Christian Rohmann wrote: >> we use Ceph RadosGW (Quincy release) to provide Swift as part of >> an OpenStack cloud. > [...] > > While I don't have an answer for your question, I did want to clear > up any potential misunderstanding. Please be aware that Ceph RadosGW > is not Swift, it is a popular non-OpenStack alternative to using > actual Swift, and supplies a partial replica of Swift's user-facing > API (but with a substantially different backend implementation). > Ceph RadosGW does not have feature-parity with Swift, lacking a > number of Swift's features and sometimes having much different > behaviors or performance even for the features it attempts to > replicate. > > So just to reiterate, Ceph RadosGW isn't Swift any more than Swift's > S3-API is S3. It's merely a compatibility shim, and you shouldn't > expect it to work the same way Swift does. Thanks Jeremy for clarifying this. Honestly I was pretty aware of that fact, but did not want to NOT mention it. Regarding the particular error I am seeing, this seems to then cause the OpenstackSDK to fail on discovering the "Swift caps" at https://opendev.org/openstack/openstacksdk/src/commit/88fc0c2cf6269dd2d3f8620e674851320316f887/openstack/object_store/v1/_proxy.py#L1147 with the exception being: "No Info found for None: Client Error for url: https://object-store.region.cloud.example.com/info, Not Found" (I added a line to log the exception, which otherwise is just silently ignored). 
When issuing a GET to "https://object-store.region.cloud.example.com/swift/info" the caps are returned: > {"bulk_delete":{},"container_quotas":{},"swift":{"max_file_size":5368709120,"container_listing_limit":10000,"version":"17.2.6","policies":[{"default":true,"name":"default-placement"}],"max_object_name_size":1024,"strict_cors_mode":true,"max_container_name_length":255},"tempurl":{"methods":["GET","HEAD","PUT","POST","DELETE"]},"slo":{"max_manifest_segments":1000},"account_quotas":{},"staticweb":{},"tempauth":{"account_acls":true}}% So to me it's not (yet) about Ceph vs. original Swift, but rather about the discovery logic in keystoneauth1 to find the correct base-URL from the endpoints, which is "/swift" instead of "/" in my case. But that could very well be also a possible setup for Swift, right? And isn't that what the endpoint list is all about? Telling clients the correct URLs for services? And why does this (using /swift) seem work without issue when doing other actions like "container list"? I violently added Artem to CC, since he wrote the wrote the project cleanup code for object storage https://review.opendev.org/c/openstack/openstacksdk/+/853015 Regards (and sorry Artem for dragging you into this thread), Christian From artem.goncharov at gmail.com Fri Aug 4 07:59:40 2023 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Fri, 4 Aug 2023 09:59:40 +0200 Subject: [ceph][OpenStackSDK][keystone] Warnings about (my) swift endpoint URL by keystoneauth1 discovery In-Reply-To: <09b09a34-e183-9189-93af-5c79bcf54ec3@inovex.de> References: <20230803153237.jizkndfw5kgopn47@yuggoth.org> <09b09a34-e183-9189-93af-5c79bcf54ec3@inovex.de> Message-ID: <357A641D-CAAC-4902-8939-D13EC06DBFB7@gmail.com> Hi Christian, It?s good that you pulled me explicitly. So, the __bug__ is in https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/object_store/v1/info.py#L68 Explanation (reduced version): - normally all the service endpoints are consumed as announced by the service catalog (https://object-store.region.cloud.example.com/swift/v1/AUTH_%(tenant_id)s in your case) - the URL above means that all of the API requests for swift must go to https://object-store.region.cloud.example.com/swift/v1/AUTH_f2bc4bd34567ddc341e197456789/XXX - now the /info endpoint of the swift is explicitly NOT including the project_id (AUTH_XXX suffix). Current code calculates the URL to query as mentioned above by reconstructing it from the base domain. There are few similar cases in the SDK when service exposes functions under __non_standard__ url by doing dirty hacks. - Since so far nobody from us was deploying or dealing with cloud having additional element in the path we never seen that as a bug. Solution: - either you remove /swift from your path when deploying swift - or we need to change the mentioned calculation logic by explicitly stripping last 2 elements of the path of the service catalog entry (what seems logical on one side is not guaranteed to be a proper solution either - I have seen wild examples) Regards, Artem > On 4. Aug 2023, at 09:49, Christian Rohmann wrote: > > On 03/08/2023 17:32, Jeremy Stanley wrote: >> On 2023-08-03 17:09:29 +0200 (+0200), Christian Rohmann wrote: >>> we use Ceph RadosGW (Quincy release) to provide Swift as part of >>> an OpenStack cloud. >> [...] >> >> While I don't have an answer for your question, I did want to clear >> up any potential misunderstanding. 
Please be aware that Ceph RadosGW >> is not Swift, it is a popular non-OpenStack alternative to using >> actual Swift, and supplies a partial replica of Swift's user-facing >> API (but with a substantially different backend implementation). >> Ceph RadosGW does not have feature-parity with Swift, lacking a >> number of Swift's features and sometimes having much different >> behaviors or performance even for the features it attempts to >> replicate. >> >> So just to reiterate, Ceph RadosGW isn't Swift any more than Swift's >> S3-API is S3. It's merely a compatibility shim, and you shouldn't >> expect it to work the same way Swift does. > > Thanks Jeremy for clarifying this. Honestly I was pretty aware of that fact, but did not want to NOT mention it. > > Regarding the particular error I am seeing, this seems to then cause the OpenstackSDK to fail on discovering the "Swift caps" at https://opendev.org/openstack/openstacksdk/src/commit/88fc0c2cf6269dd2d3f8620e674851320316f887/openstack/object_store/v1/_proxy.py#L1147 > > with the exception being: "No Info found for None: Client Error for url: https://object-store.region.cloud.example.com/info, Not Found" (I added a line to log the exception, which otherwise is just silently ignored). > When issuing a GET to "https://object-store.region.cloud.example.com/swift/info" the caps are returned: > >> {"bulk_delete":{},"container_quotas":{},"swift":{"max_file_size":5368709120,"container_listing_limit":10000,"version":"17.2.6","policies":[{"default":true,"name":"default-placement"}],"max_object_name_size":1024,"strict_cors_mode":true,"max_container_name_length":255},"tempurl":{"methods":["GET","HEAD","PUT","POST","DELETE"]},"slo":{"max_manifest_segments":1000},"account_quotas":{},"staticweb":{},"tempauth":{"account_acls":true}}% > > > > So to me it's not (yet) about Ceph vs. original Swift, but rather about the discovery logic in keystoneauth1 to find the correct base-URL from the endpoints, which is "/swift" instead of "/" in my case. > But that could very well be also a possible setup for Swift, right? And isn't that what the endpoint list is all about? Telling clients the correct URLs for services? > And why does this (using /swift) seem work without issue when doing other actions like "container list"? > > I violently added Artem to CC, since he wrote the wrote the project cleanup code for object storage https://review.opendev.org/c/openstack/openstacksdk/+/853015 > > > > Regards (and sorry Artem for dragging you into this thread), > > > Christian > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Fri Aug 4 09:42:36 2023 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Fri, 4 Aug 2023 11:42:36 +0200 Subject: [ceph][OpenStackSDK][keystone] Warnings about (my) swift endpoint URL by keystoneauth1 discovery In-Reply-To: <357A641D-CAAC-4902-8939-D13EC06DBFB7@gmail.com> References: <20230803153237.jizkndfw5kgopn47@yuggoth.org> <09b09a34-e183-9189-93af-5c79bcf54ec3@inovex.de> <357A641D-CAAC-4902-8939-D13EC06DBFB7@gmail.com> Message-ID: <51cfde8d-c3b3-4953-4ae7-0b3d2e659462@inovex.de> Hey Artem, thanks a bunch for giving into this so quickly ... On 04/08/2023 09:59, Artem Goncharov wrote: > > Solution: > - either you remove /swift from your path when deploying swift That's not possible, unfortunately, if you also want to support S3. 
See https://docs.ceph.com/en/latest/radosgw/config-ref/#confval-rgw_swift_url_prefix One option might be to use dedicated instances of RGW just for Swift and others for S3. But this requires to also use different endpoints / hostnames. But even if only Swift was used, with "/swift" being the default prefix on Ceph RGWs, I am highly confident that this prefixed path therefore exists for quite a few clouds using Ceph to provide the object storage via the Swift protocol. > - or we need to change the mentioned calculation logic by explicitly > stripping last 2 elements of the path of the service catalog entry > (what seems logical on one side is not guaranteed to be a proper > solution either - I have seen wild examples) Well statically stripping a certain number of elements does indeed not seem "proper. If you look at https://docs.ceph.com/en/latest/radosgw/keystone/#ocata-and-later vs https://docs.ceph.com/en/latest/radosgw/keystone/#cross-project-tenant-access the existence of "AUTH_$(project_id)s" cannot always be expected. But maybe a rule / regex can be found to strip optional version and AUTH element? Kinda like ... ? '(.*?)(\/v[0-9](\/AUTH_.+)?)?$' to get all the path elements until the (optional) "v"ersion and an (optional) "AUTH_" at the end. Honestly I don't know how the swift endpoint URLs looks like when deploying OpenStack Swift instead of Ceph RGW. But I suppose the endpoint has some potential variance in path there as well? Regards Christian From artem.goncharov at gmail.com Fri Aug 4 09:59:35 2023 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Fri, 4 Aug 2023 11:59:35 +0200 Subject: [ceph][OpenStackSDK][keystone] Warnings about (my) swift endpoint URL by keystoneauth1 discovery In-Reply-To: <51cfde8d-c3b3-4953-4ae7-0b3d2e659462@inovex.de> References: <20230803153237.jizkndfw5kgopn47@yuggoth.org> <09b09a34-e183-9189-93af-5c79bcf54ec3@inovex.de> <357A641D-CAAC-4902-8939-D13EC06DBFB7@gmail.com> <51cfde8d-c3b3-4953-4ae7-0b3d2e659462@inovex.de> Message-ID: > > Well statically stripping a certain number of elements does indeed not seem "proper. > > If you look at https://docs.ceph.com/en/latest/radosgw/keystone/#ocata-and-later vs https://docs.ceph.com/en/latest/radosgw/keystone/#cross-project-tenant-access the existence of "AUTH_$(project_id)s" cannot always be expected. > But maybe a rule / regex can be found to strip optional version and AUTH element? > > Kinda like ... > > '(.*?)(\/v[0-9](\/AUTH_.+)?)?$' > > to get all the path elements until the (optional) "v"ersion and an (optional) "AUTH_" at the end. > > Honestly I don't know how the swift endpoint URLs looks like when deploying OpenStack Swift instead of Ceph RGW. > But I suppose the endpoint has some potential variance in path there as well? Precisely. I always strongly disliked cases where REST methods are exposed under non service catalog provided base. This is an ugly grave full of corpses. Idea with regex sounds like a least bad for me. Eventually finding /v1 and getting one level above. 
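To make that concrete, a rough sketch of the kind of suffix stripping being discussed, explicitly not the SDK's current code; the helper name and the exact pattern are only illustrative:

import re
from urllib.parse import urlsplit, urlunsplit

# Drop an optional trailing /v<N> (and optional /AUTH_<project>) so the
# capabilities document can be requested relative to the real root, e.g.
#   https://host/swift/v1/AUTH_abc  ->  https://host/swift/info
#   https://host/v1/AUTH_abc        ->  https://host/info
_SUFFIX = re.compile(r'/v[0-9]+(?:/AUTH_[^/]+)?/?$')

def info_url(endpoint):
    parts = urlsplit(endpoint)
    path = _SUFFIX.sub('', parts.path)
    return urlunsplit((parts.scheme, parts.netloc, path + '/info', '', ''))

print(info_url('https://object-store.example.com/swift/v1/AUTH_abc'))
print(info_url('https://object-store.example.com/v1/AUTH_abc'))

An endpoint without any version element passes through unchanged, which sidesteps the "strip the last two elements" assumption.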
From christian.rohmann at inovex.de Fri Aug 4 11:52:14 2023 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Fri, 4 Aug 2023 13:52:14 +0200 Subject: [ceph][OpenStackSDK][keystone] Warnings about (my) swift endpoint URL by keystoneauth1 discovery In-Reply-To: References: <20230803153237.jizkndfw5kgopn47@yuggoth.org> <09b09a34-e183-9189-93af-5c79bcf54ec3@inovex.de> <357A641D-CAAC-4902-8939-D13EC06DBFB7@gmail.com> <51cfde8d-c3b3-4953-4ae7-0b3d2e659462@inovex.de> Message-ID: On 04/08/2023 11:59, Artem Goncharov wrote: > Precisely. I always strongly disliked cases where REST methods are exposed under non service catalog provided base. This is an ugly grave full of corpses. > Idea with regex sounds like a least bad for me. Eventually finding /v1 and getting one level above. You mean using the path up until "v1" or "v[1-9]+"? But is v1 (or any version) always part of the endpoint url so it's possible to use this as a definitive delimiter? In any case, could you then craft a patch for this one? Regards and thanks again, Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.com Fri Aug 4 12:13:31 2023 From: tobias.urdin at binero.com (Tobias Urdin) Date: Fri, 4 Aug 2023 12:13:31 +0000 Subject: [ceph][OpenStackSDK][keystone] Warnings about (my) swift endpoint URL by keystoneauth1 discovery In-Reply-To: References: <20230803153237.jizkndfw5kgopn47@yuggoth.org> <09b09a34-e183-9189-93af-5c79bcf54ec3@inovex.de> <357A641D-CAAC-4902-8939-D13EC06DBFB7@gmail.com> <51cfde8d-c3b3-4953-4ae7-0b3d2e659462@inovex.de> Message-ID: <009148BB-9263-4F0E-A89A-081ACF1D8EC3@binero.com> I remember something similar that was an issue when using Ceph RadosGW for Swift API when interaction through Horizon. This was solved directly in python-swiftclient which Horizon was using [1], perhaps not any help but I just recalled it when reading this. [1] https://review.opendev.org/c/openstack/python-swiftclient/+/722395 On 4 Aug 2023, at 13:52, Christian Rohmann wrote: On 04/08/2023 11:59, Artem Goncharov wrote: Precisely. I always strongly disliked cases where REST methods are exposed under non service catalog provided base. This is an ugly grave full of corpses. Idea with regex sounds like a least bad for me. Eventually finding /v1 and getting one level above. You mean using the path up until "v1" or "v[1-9]+"? But is v1 (or any version) always part of the endpoint url so it's possible to use this as a definitive delimiter? In any case, could you then craft a patch for this one? Regards and thanks again, Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From justin.lamp at netways.de Fri Aug 4 13:30:45 2023 From: justin.lamp at netways.de (Justin Lamp) Date: Fri, 4 Aug 2023 13:30:45 +0000 Subject: [ovn] VM in external network unable to arp Message-ID: Hey, we are using OVN 22.03 and face an issue where a VM that is directly connected to the provider network won't be accessible, because it cannot arp for the Gateway IP. OVN routers do reply to the arp request though. We know that this exact scenario works as we have it running in our staging environment. Oddly enough if the right MAC-IP Binding is manually defined within the VM and the Gateway, the traffic will begin to flow correctly according to the right SGs. I did an ovn-trace and were able to see that the traffic is supposed to be flooded to the right ports. The ovs-trace on the other hand did not show the same picture. 
It just did 4k recirculations and then dropped the packet. I already restarted the ovn-controller on the right hv, but that did not do anything. The LSP: $ ovn-nbctl list Logical_Switch_Port cfce175b-9d88-4c2e-a5cc-d76cd5c71deb _uuid : c5dfb248-941e-4d4e-af1a-9ccafc22db70 addresses : ["fa:16:3e:a2:d7:1a 2a02:ed80:0:3::341 91.198.2.33"] dhcpv4_options : 1922ee38-282f-4f5c-ade8-6cd157ee52e9 dhcpv6_options : [] dynamic_addresses : [] enabled : true external_ids : {"neutron:cidrs"="2a02:ed80:0:3::341/64 91.198.2.33/24", "neutron:device_id"="8062ec61-0c68-41dd-b77c-e8b72ad16a88", "neutron:device_owner"="compute:AZ1", "neutron:network_name"=neutron-210e26d7-942f-4e17-89b2-571eee87d7e4, "neutron:port_name"="", "neutron:project_id"="99fb21796a8f4cbda42ba5b9d1e307dd", "neutron:revision_number"="16", "neutron:security_group_ids"="3e41777f-7aa4-4368-9992-5ca7cc2a5372 873b3b62-0918-4b1e-be73-fdbed50d2ac2"} ha_chassis_group : [] name : "cfce175b-9d88-4c2e-a5cc-d76cd5c71deb" options : {mcast_flood_reports="true", requested-chassis=net-openstack-hv31} parent_name : [] port_security : ["fa:16:3e:a2:d7:1a 2a02:ed80:0:3::341 91.198.2.33"] tag : [] tag_request : [] type : "" up : true The PB: $ ovn-sbctl find Port_Binding logical_port=cfce175b-9d88-4c2e-a5cc-d76cd5c71deb _uuid : e9e5ce44-698f-4a29-acd1-2f24cc1d1950 chassis : c944c21a-3344-4fda-ab4e-a4cc07403125 datapath : 993b44d5-1629-4e9b-b44e-24096d8b3959 encap : [] external_ids : {"neutron:cidrs"="2a02:ed80:0:3::341/64 91.198.2.33/24", "neutron:device_id"="8062ec61-0c68-41dd-b77c-e8b72ad16a88", "neutron:device_owner"="compute:AZ1", "neutron:network_name"=neutron-210e26d7-942f-4e17-89b2-571eee87d7e4, "neutron:port_name"="", "neutron:project_id"="99fb21796a8f4cbda42ba5b9d1e307dd", "neutron:revision_number"="16", "neutron:security_group_ids"="3e41777f-7aa4-4368-9992-5ca7cc2a5372 873b3b62-0918-4b1e-be73-fdbed50d2ac2"} gateway_chassis : [] ha_chassis_group : [] logical_port : "cfce175b-9d88-4c2e-a5cc-d76cd5c71deb" mac : ["fa:16:3e:a2:d7:1a 2a02:ed80:0:3::341 91.198.2.33"] nat_addresses : [] options : {mcast_flood_reports="true", requested-chassis=net-openstack-hv31} parent_port : [] requested_chassis : c944c21a-3344-4fda-ab4e-a4cc07403125 tag : [] tunnel_key : 344 type : "" up : true virtual_parent : [] The LS: $ ovn-nbctl list Logical_Switch public-network _uuid : 56d8be55-462a-4b93-8710-3c79ca386213 acls : [] copp : [] dns_records : [] external_ids : {"neutron:mtu"="1500", "neutron:network_name"=public-network, "neutron:revision_number"="21"} forwarding_groups : [] load_balancer : [] load_balancer_group : [] name : neutron-210e26d7-942f-4e17-89b2-571eee87d7e4 other_config : {mcast_flood_unregistered="false", mcast_snoop="false"} ports : [00225774-8fbc-473f-ae5e-d486c54212c8, ..., c5dfb248-941e-4d4e-af1a-9ccafc22db70, ... qos_rules : [] The patchport: $ ovn-nbctl list Logical_Switch_Port provnet-aa35051c-6fc0-463a-8807-0cb28903be14 _uuid : f7259aeb-0e63-4d20-8a8e-54ebf454a524 addresses : [unknown] dhcpv4_options : [] dhcpv6_options : [] dynamic_addresses : [] enabled : [] external_ids : {} ha_chassis_group : [] name : provnet-aa35051c-6fc0-463a-8807-0cb28903be14 options : {mcast_flood="false", mcast_flood_reports="true", network_name=physnet1} parent_name : [] port_security : [] tag : [] tag_request : [] type : localnet up : false I hope I provided the needed context! Thanks in advance! Best regards, Justin Lamp --? Justin Lamp Systems Engineer NETWAYS Managed Services GmbH | Deutschherrnstr. 
15-19 | D-90429 Nuernberg Tel: +49 911 92885-0 | Fax: +49 911 92885-77 CEO: Julian Hein, Bernd Erk, Sebastian Saemann | AG Nuernberg HRB25207 https://www.netways.de | justin.lamp at netways.de ** stackconf 2023 - September - https://stackconf.eu ** ** OSMC 2023 - November - https://osmc.de ** ** NETWAYS Web Services - https://nws.netways.de ** ** NETWAYS Trainings - https://netways.de/trainings ** -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vm_ext_ovn_trace.log Type: text/x-log Size: 159460 bytes Desc: vm_ext_ovn_trace.log URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vm_ext_ovs_trace.log Type: text/x-log Size: 778222 bytes Desc: vm_ext_ovs_trace.log URL: From hanoi952022 at gmail.com Fri Aug 4 15:25:12 2023 From: hanoi952022 at gmail.com (Ha Noi) Date: Fri, 4 Aug 2023 22:25:12 +0700 Subject: [openstack][nova] Slow provision 1 instance . Message-ID: Hi everyone, We have a openstack with one region and more than 100 compute nodes. We are using ceph for block storage. I don't know why my instance was provisioned too slow: more than 1 minutes. Nova compute log: Took 56.47 seconds to build instance. So I would like to optimize and speed up instance build time. Thanks and Best Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Fri Aug 4 19:35:00 2023 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Fri, 4 Aug 2023 15:35:00 -0400 Subject: [ovn] VM in external network unable to arp In-Reply-To: References: Message-ID: Hi Justin, It's a shot in the dark, but one scenario that I experienced was related to the selected TC qdisc. The behavior (ARP not getting through despite what ovn-trace suggests) showed with `fq` TC qdisc used, and switching to `fq_codel` fixed the problem. You may want to try another TC discipline. Something like: tc qdisc replace dev eth0 root fq_codel Where eth0 is your NIC for the provider network. Ihar On Fri, Aug 4, 2023 at 2:38?PM Justin Lamp wrote: > Hey, > > we are using OVN 22.03 and face an issue where a VM that is directly > connected to the provider network won't be accessible, because it cannot > arp for the Gateway IP. OVN routers do reply to the arp request though. We > know that this exact scenario works as we have it running in our staging > environment. > > Oddly enough if the right MAC-IP Binding is manually defined within the > VM and the Gateway, the traffic will begin to flow correctly according to > the right SGs. > > I did an ovn-trace and were able to see that the traffic is supposed to be > flooded to the right ports. The ovs-trace on the other hand did not show > the same picture. It just did 4k recirculations and then dropped the > packet. I already restarted the ovn-controller on the right hv, but that > did not do anything. 
> > The LSP: > > $ ovn-nbctl list Logical_Switch_Port cfce175b-9d88-4c2e-a5cc-d76cd5c71deb > _uuid : c5dfb248-941e-4d4e-af1a-9ccafc22db70 > addresses : ["fa:16:3e:a2:d7:1a 2a02:ed80:0:3::341 91.198.2.33"] > dhcpv4_options : 1922ee38-282f-4f5c-ade8-6cd157ee52e9 > dhcpv6_options : [] > dynamic_addresses : [] > enabled : true > external_ids : {"neutron:cidrs"="2a02:ed80:0:3::341/64 91.198.2.33/24", "neutron:device_id"="8062ec61-0c68-41dd-b77c-e8b72ad16a88", "neutron:device_owner"="compute:AZ1", "neutron:network_name"=neutron-210e26d7-942f-4e17-89b2-571eee87d7e4, "neutron:port_name"="", "neutron:project_id"="99fb21796a8f4cbda42ba5b9d1e307dd", "neutron:revision_number"="16", "neutron:security_group_ids"="3e41777f-7aa4-4368-9992-5ca7cc2a5372 873b3b62-0918-4b1e-be73-fdbed50d2ac2"} > ha_chassis_group : [] > name : "cfce175b-9d88-4c2e-a5cc-d76cd5c71deb" > options : {mcast_flood_reports="true", requested-chassis=net-openstack-hv31} > parent_name : [] > port_security : ["fa:16:3e:a2:d7:1a 2a02:ed80:0:3::341 91.198.2.33"] > tag : [] > tag_request : [] > type : "" > up : true > > The PB: > > $ ovn-sbctl find Port_Binding logical_port=cfce175b-9d88-4c2e-a5cc-d76cd5c71deb > _uuid : e9e5ce44-698f-4a29-acd1-2f24cc1d1950 > chassis : c944c21a-3344-4fda-ab4e-a4cc07403125 > datapath : 993b44d5-1629-4e9b-b44e-24096d8b3959 > encap : [] > external_ids : {"neutron:cidrs"="2a02:ed80:0:3::341/64 91.198.2.33/24", "neutron:device_id"="8062ec61-0c68-41dd-b77c-e8b72ad16a88", "neutron:device_owner"="compute:AZ1", "neutron:network_name"=neutron-210e26d7-942f-4e17-89b2-571eee87d7e4, "neutron:port_name"="", "neutron:project_id"="99fb21796a8f4cbda42ba5b9d1e307dd", "neutron:revision_number"="16", "neutron:security_group_ids"="3e41777f-7aa4-4368-9992-5ca7cc2a5372 873b3b62-0918-4b1e-be73-fdbed50d2ac2"} > gateway_chassis : [] > ha_chassis_group : [] > logical_port : "cfce175b-9d88-4c2e-a5cc-d76cd5c71deb" > mac : ["fa:16:3e:a2:d7:1a 2a02:ed80:0:3::341 91.198.2.33"] > nat_addresses : [] > options : {mcast_flood_reports="true", requested-chassis=net-openstack-hv31} > parent_port : [] > requested_chassis : c944c21a-3344-4fda-ab4e-a4cc07403125 > tag : [] > tunnel_key : 344 > type : "" > up : true > virtual_parent : [] > > > The LS: > > $ ovn-nbctl list Logical_Switch public-network > _uuid : 56d8be55-462a-4b93-8710-3c79ca386213 > acls : [] > copp : [] > dns_records : [] > external_ids : {"neutron:mtu"="1500", "neutron:network_name"=public-network, "neutron:revision_number"="21"} > forwarding_groups : [] > load_balancer : [] > load_balancer_group : [] > name : neutron-210e26d7-942f-4e17-89b2-571eee87d7e4 > other_config : {mcast_flood_unregistered="false", mcast_snoop="false"} > ports : [00225774-8fbc-473f-ae5e-d486c54212c8, ..., c5dfb248-941e-4d4e-af1a-9ccafc22db70, ... > qos_rules : [] > > > The patchport: > > $ ovn-nbctl list Logical_Switch_Port provnet-aa35051c-6fc0-463a-8807-0cb28903be14 > _uuid : f7259aeb-0e63-4d20-8a8e-54ebf454a524 > addresses : [unknown] > dhcpv4_options : [] > dhcpv6_options : [] > dynamic_addresses : [] > enabled : [] > external_ids : {} > ha_chassis_group : [] > name : provnet-aa35051c-6fc0-463a-8807-0cb28903be14 > options : {mcast_flood="false", mcast_flood_reports="true", network_name=physnet1} > parent_name : [] > port_security : [] > tag : [] > tag_request : [] > type : localnet > up : false > > > I hope I provided the needed context! > Thanks in advance! > > Best regards, > Justin Lamp > > -- > Justin Lamp > Systems Engineer > > NETWAYS Managed Services GmbH | Deutschherrnstr. 
15-19 | D-90429 Nuernberg > Tel: +49 911 92885-0 | Fax: +49 911 92885-77 > CEO: Julian Hein, Bernd Erk, Sebastian Saemann | AG Nuernberg HRB25207 > https://www.netways.de | justin.lamp at netways.de > > ** stackconf 2023 - September - https://stackconf.eu ** > ** OSMC 2023 - November - https://osmc.de ** > ** NETWAYS Web Services - https://nws.netways.de ** > ** NETWAYS Trainings - https://netways.de/trainings ** > -------------- next part -------------- An HTML attachment was scrubbed... URL: From murilo at evocorp.com.br Fri Aug 4 22:39:33 2023 From: murilo at evocorp.com.br (Murilo Morais) Date: Fri, 4 Aug 2023 19:39:33 -0300 Subject: [OSA] CEPH libvirt secrets Message-ID: Good evening everyone! Guys, I'm having trouble adding a second CEPH cluster for Nova/Cincer to consume. I'm using the following configuration: cinder_backends: ceph1: volume_driver: cinder.volume.drivers.rbd.RBDDriver rbd_pool: ceph1_vol rbd_ceph_conf: /etc/ceph/ceph1.conf rbd_store_chunk_size: 8 volume_backend_name: ceph1 rbd_user: ceph1_vol rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}" report_discard_supported: true ceph2: volume_driver: cinder.volume.drivers.rbd.RBDDriver rbd_pool: ceph2_vol rbd_ceph_conf: /etc/ceph/ceph2.conf rbd_store_chunk_size: 8 volume_backend_name: ceph2 rbd_user: ceph2_vol rbd_secret_uuid: "{{ cinder_ceph_client_uuid2 }}" report_discard_supported: true ceph_extra_confs: - src: /etc/openstack_deploy/ceph/ceph1.conf dest: /etc/ceph/ceph1.conf client_name: ceph1_vol keyring_src: /etc/openstack_deploy/ceph/ceph1_vol.keyring keyring_dest: /etc/ceph/ceph1.client.ceph1_vol.keyring secret_uuid: '{{ cinder_ceph_client_uuid }}' - src: /etc/openstack_deploy/ceph/ceph2.conf dest: /etc/ceph/ceph2.conf client_name: ceph2_vol keyring_src: /etc/openstack_deploy/ceph/ceph2_vol.keyring keyring_dest: /etc/ceph/ceph2.client.ceph2_vol.keyring secret_uuid: '{{ cinder_ceph_client_uuid2 }}' But when executing the `virsh secret-list` command it only shows the UUID of "cinder_ceph_client_uuid". Both "cinder_ceph_client_uuid" and "cinder_ceph_client_uuid2" are defined in "user_secrets.yml". I have a slight impression that I didn't configure something, but I don't know what, because I didn't find anything else to be done, according to the documentation [1], or it went unnoticed by me. [1] https://docs.openstack.org/openstack-ansible-ceph_client/latest/configure-ceph.html#extra-client-configuration-files Thanks in advance! -------------- next part -------------- An HTML attachment was scrubbed... URL: From kkloppenborg at rwts.com.au Sat Aug 5 08:40:41 2023 From: kkloppenborg at rwts.com.au (Karl Kloppenborg) Date: Sat, 5 Aug 2023 18:40:41 +1000 Subject: [openstack][nova] Slow provision 1 instance . (Ha Noi) Message-ID: Hi Han, Can you give us some more information? Where is the delay? Is it in volume building, allocation selection? Spawning? Etc. The VM instantiation process has a number of us processes that can all slow down at scale for different reasons. I doubt it is the actual spawning or allocation stage, likely a ceph/cinder building stage, especially if it?s copying data via glance to a volume. Thanks, Karl. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From murilo at evocorp.com.br Sat Aug 5 13:42:49 2023 From: murilo at evocorp.com.br (Murilo Morais) Date: Sat, 5 Aug 2023 10:42:49 -0300 Subject: [OSA] CEPH libvirt secrets In-Reply-To: References: Message-ID: Apparently the "mon_host" parameter is mandatory to create secrets [1], but setting this parameter also makes it SSH into MON [2], which I would like to avoid. Would this statement be true? [1] https://opendev.org/openstack/openstack-ansible-ceph_client/src/branch/stable/zed/tasks/ceph_auth_extra_compute.yml#L92 [2] https://opendev.org/openstack/openstack-ansible-ceph_client/src/branch/stable/zed/tasks/ceph_config_extra.yml#L23 Em sex., 4 de ago. de 2023 ?s 19:39, Murilo Morais escreveu: > Good evening everyone! > > Guys, I'm having trouble adding a second CEPH cluster for Nova/Cincer to > consume. > > I'm using the following configuration: > > cinder_backends: > ceph1: > volume_driver: cinder.volume.drivers.rbd.RBDDriver > rbd_pool: ceph1_vol > rbd_ceph_conf: /etc/ceph/ceph1.conf > rbd_store_chunk_size: 8 > volume_backend_name: ceph1 > rbd_user: ceph1_vol > rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}" > report_discard_supported: true > > ceph2: > volume_driver: cinder.volume.drivers.rbd.RBDDriver > rbd_pool: ceph2_vol > rbd_ceph_conf: /etc/ceph/ceph2.conf > rbd_store_chunk_size: 8 > volume_backend_name: ceph2 > rbd_user: ceph2_vol > rbd_secret_uuid: "{{ cinder_ceph_client_uuid2 }}" > report_discard_supported: true > > ceph_extra_confs: > - src: /etc/openstack_deploy/ceph/ceph1.conf > dest: /etc/ceph/ceph1.conf > client_name: ceph1_vol > keyring_src: /etc/openstack_deploy/ceph/ceph1_vol.keyring > keyring_dest: /etc/ceph/ceph1.client.ceph1_vol.keyring > secret_uuid: '{{ cinder_ceph_client_uuid }}' > - src: /etc/openstack_deploy/ceph/ceph2.conf > dest: /etc/ceph/ceph2.conf > client_name: ceph2_vol > keyring_src: /etc/openstack_deploy/ceph/ceph2_vol.keyring > keyring_dest: /etc/ceph/ceph2.client.ceph2_vol.keyring > secret_uuid: '{{ cinder_ceph_client_uuid2 }}' > > But when executing the `virsh secret-list` command it only shows the UUID > of "cinder_ceph_client_uuid". > > Both "cinder_ceph_client_uuid" and "cinder_ceph_client_uuid2" are defined > in "user_secrets.yml". > > I have a slight impression that I didn't configure something, but I don't > know what, because I didn't find anything else to be done, according to the > documentation [1], or it went unnoticed by me. > > [1] > https://docs.openstack.org/openstack-ansible-ceph_client/latest/configure-ceph.html#extra-client-configuration-files > > Thanks in advance! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Sat Aug 5 14:10:09 2023 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sat, 5 Aug 2023 16:10:09 +0200 Subject: [OSA] CEPH libvirt secrets In-Reply-To: References: Message-ID: Hey Murilo, I'm not sure that ceph_cliebt role does support multiple secrets right now, I will be able to look deeper into this on Monday But there's yet another place where we set secrets [1], so it shouldn't be required to have mon_hosts defined. But yes, having mon_hosts would require ssh access to them to fetch ceph.conf and authx keys. 
[1] https://opendev.org/openstack/openstack-ansible-ceph_client/src/commit/05e3c0f18394e5f23d79bff08280e9c09af7b5ca/tasks/ceph_auth.yml#L67 On Sat, Aug 5, 2023, 15:46 Murilo Morais wrote: > Apparently the "mon_host" parameter is mandatory to create secrets [1], > but setting this parameter also makes it SSH into MON [2], which I would > like to avoid. Would this statement be true? > > [1] > https://opendev.org/openstack/openstack-ansible-ceph_client/src/branch/stable/zed/tasks/ceph_auth_extra_compute.yml#L92 > [2] > https://opendev.org/openstack/openstack-ansible-ceph_client/src/branch/stable/zed/tasks/ceph_config_extra.yml#L23 > > Em sex., 4 de ago. de 2023 ?s 19:39, Murilo Morais > escreveu: > >> Good evening everyone! >> >> Guys, I'm having trouble adding a second CEPH cluster for Nova/Cincer to >> consume. >> >> I'm using the following configuration: >> >> cinder_backends: >> ceph1: >> volume_driver: cinder.volume.drivers.rbd.RBDDriver >> rbd_pool: ceph1_vol >> rbd_ceph_conf: /etc/ceph/ceph1.conf >> rbd_store_chunk_size: 8 >> volume_backend_name: ceph1 >> rbd_user: ceph1_vol >> rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}" >> report_discard_supported: true >> >> ceph2: >> volume_driver: cinder.volume.drivers.rbd.RBDDriver >> rbd_pool: ceph2_vol >> rbd_ceph_conf: /etc/ceph/ceph2.conf >> rbd_store_chunk_size: 8 >> volume_backend_name: ceph2 >> rbd_user: ceph2_vol >> rbd_secret_uuid: "{{ cinder_ceph_client_uuid2 }}" >> report_discard_supported: true >> >> ceph_extra_confs: >> - src: /etc/openstack_deploy/ceph/ceph1.conf >> dest: /etc/ceph/ceph1.conf >> client_name: ceph1_vol >> keyring_src: /etc/openstack_deploy/ceph/ceph1_vol.keyring >> keyring_dest: /etc/ceph/ceph1.client.ceph1_vol.keyring >> secret_uuid: '{{ cinder_ceph_client_uuid }}' >> - src: /etc/openstack_deploy/ceph/ceph2.conf >> dest: /etc/ceph/ceph2.conf >> client_name: ceph2_vol >> keyring_src: /etc/openstack_deploy/ceph/ceph2_vol.keyring >> keyring_dest: /etc/ceph/ceph2.client.ceph2_vol.keyring >> secret_uuid: '{{ cinder_ceph_client_uuid2 }}' >> >> But when executing the `virsh secret-list` command it only shows the UUID >> of "cinder_ceph_client_uuid". >> >> Both "cinder_ceph_client_uuid" and "cinder_ceph_client_uuid2" are defined >> in "user_secrets.yml". >> >> I have a slight impression that I didn't configure something, but I don't >> know what, because I didn't find anything else to be done, according to the >> documentation [1], or it went unnoticed by me. >> >> [1] >> https://docs.openstack.org/openstack-ansible-ceph_client/latest/configure-ceph.html#extra-client-configuration-files >> >> Thanks in advance! >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamesleong123098 at gmail.com Sat Aug 5 23:20:31 2023 From: jamesleong123098 at gmail.com (James Leong) Date: Sat, 5 Aug 2023 18:20:31 -0500 Subject: [horizon][keystone][kolla-ansible][domain][policies][yoga] Allow user to access different domain Message-ID: Hi everyone, I am curious if a user from the default domain can access a different domain. For instance, the cloud admin created multiple domains and used the below command to add a user to a new domain other than the default one. "openstack role add --user test --domain test member" As a user with a member role, I am not able to see the list of the domains associated with the user account. I will only be able to see it on the admin side. Can member roles be allowed to view domains they are in? 
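One angle that is easy to miss here is keystone's policy file rather than the role assignment itself: listing and showing domains is admin-/system-scoped by default. A rough, untested sketch of an override (the rule names are keystone's own; the file location assumes kolla-ansible's per-service policy override directory):

# /etc/kolla/config/keystone/policy.yaml  (location is an assumption about
# the kolla-ansible override mechanism, adjust to your deployment)
# Let a user look up the domain their token is scoped to, while leaving
# "identity:list_domains" admin-only.
"identity:get_domain": "rule:admin_required or token.domain.id:%(target.domain.id)s or token.project.domain.id:%(target.domain.id)s"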
Best, James -------------- next part -------------- An HTML attachment was scrubbed... URL: From andr.kurilin at gmail.com Sun Aug 6 18:23:45 2023 From: andr.kurilin at gmail.com (Andriy Kurilin) Date: Sun, 6 Aug 2023 20:23:45 +0200 Subject: [Rally] In-Reply-To: References: Message-ID: hi! Try to run Rally in debug mode like `rally --debug task start`. It should show the full error and traceback ??, 27 ???. 2023??. ? 20:51, Aayushi Gautam : > Hello, > I am Aayushi , an intern with Redhat working on ESI ( Elastic secure > Infracstructure) group. We were trying to use Rally to test the performance > of our codebase. I have created a Plugin and task . And getting an error. > The same bug is also asked on the bug page of Rally, but it was not > answered. > > I have attached the code of the plugin and task and the error message. > > Looking forward to hearing from you. > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From andr.kurilin at gmail.com Sun Aug 6 18:37:49 2023 From: andr.kurilin at gmail.com (Andriy Kurilin) Date: Sun, 6 Aug 2023 20:37:49 +0200 Subject: [RALLY] Running Rally tasks and Tempest tests in multi-user context In-Reply-To: References: Message-ID: hi! ??, 3 ???. 2023??. ? 18:22, Taltavull Jean-Fran?ois < jean-francois.taltavull at elca.ch>: > Hi openstack-discuss, > > I?m currently using Rally v3.4.0 to test our OpenStack Zed platform. > > > > Rally has been deployed on a dedicated virtual machine and Rally tasks and > Tempest tests, launched on this machine by Rundeck, run pretty well. > > > > Now, I wish every OpenStack team member could launch whatever scenario or > test he wants, when he wants, for example after having applied a service > configuration change on the staging platform. > > And a question is arising: can several users launch different Rally > scenarios or Tempest tests at the same time, from their own Linux > account/environment, using the same Rally, the one which is deployed on the > dedicated machine ? > > > Rally Task framework itself does not have limitations for parallel executions. But it is worth considering a few nuances: - Database backend. Rally is configured to use SQLite by default. It does not support simultaneous write operations, which can be an issue for using a dedicated rally instance for running parallel tasks. Switching to MySQL/Postgres should not have such an issue. - File descriptions. If you run too many parallel tasks with a huge number of parallel iterations, you may face default linux limitation of open file descriptors. I never ran Tempest simultaneously for the same cloud, so I cannot guarantee that there are no bugs, but it should be ok in general. > > > Thanks and best regards, > > > > Jean-Francois > > > > > > > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jamesleong123098 at gmail.com Sun Aug 6 22:32:01 2023 From: jamesleong123098 at gmail.com (James Leong) Date: Sun, 6 Aug 2023 17:32:01 -0500 Subject: [horizon][yoga][kolla-ansible][domain] list project created by a specific user Message-ID: Hi all, Is it possible to list only projects that are created by the respective user. For example, user A created project 1, and project 2. I do not want user B to be able to view project 1, and project 2 other than the project created by user B. Apart from that, is there a way to allow users to switch between default domain to another domain via openstack dashboard (horizon)? 
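On the dashboard side, the usual knobs are Horizon's multi-domain settings, which add a domain field (or a dropdown) at login rather than an in-session switch. A sketch of what that could look like, assuming kolla-ansible's custom_local_settings override and example domain names:

# e.g. /etc/kolla/config/horizon/custom_local_settings  (path is an assumption)
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
# Optional: offer a dropdown of known domains instead of a free-text field
OPENSTACK_KEYSTONE_DOMAIN_DROPDOWN = True
OPENSTACK_KEYSTONE_DOMAIN_CHOICES = (
    ('default', 'Default'),
    ('test', 'Test'),
)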
Best, James -------------- next part -------------- An HTML attachment was scrubbed... URL: From kkloppenborg at rwts.com.au Mon Aug 7 01:58:02 2023 From: kkloppenborg at rwts.com.au (Karl Kloppenborg) Date: Mon, 7 Aug 2023 01:58:02 +0000 Subject: [openstack][nova] Slow provision 1 instance . In-Reply-To: References: Message-ID: Hi Ha Noi, From those logs it appears that the lag is in the libvirt section of the block device connecting and latching to the compute node. If you try and rbd mount manually a drive from your ceph cluster, do you see a long delay in this as well? Kind Regards, -- Karl Kloppenborg, (BCompSc, CNCF-[KCNA, CKA, CKAD], LFCE, CompTIA Linux+ XK0-004) Managing Director, Invention Labs. From: Ha Noi Date: Monday, 7 August 2023 at 11:38 am To: kkloppenborg at rwts.com.au Cc: openstack-discuss at lists.openstack.org Subject: Re: [openstack][nova] Slow provision 1 instance . Hi Karl, The volume creation time is 10 seconds. Below is the log in nova-compute. I would like to optimize nova-compute first . 3414:2023-08-07 08:33:08.436 8 INFO nova.compute.claims [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Claim successful on node compute-004 3415:2023-08-07 08:33:08.702 8 INFO nova.virt.libvirt.driver [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names 3416:2023-08-07 08:33:08.815 8 INFO nova.virt.block_device [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Booting with volume 37d1487e-9118-4231-9908-fb662b626977 at /dev/vda 3417:2023-08-07 08:33:34.269 8 INFO nova.virt.libvirt.driver [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Creating image 3418:2023-08-07 08:33:35.967 8 INFO os_vif [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:1e:06:ae,bridge_name='qbr2ea5e4e0-a9',has_traffic_filtering=True,id=2ea5e4e0-a934-404d-ac4c-a4deadf2aa73,network=Network(a4e1151a-9671-4fb6-b620-66b80d7dce8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2ea5e4e0-a9') 3419:2023-08-07 08:33:37.763 8 INFO os_vif [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:b0:e6:12,bridge_name='qbre636e020-ec',has_traffic_filtering=True,id=e636e020-ec99-47d4-bad3-9c87c65e080f,network=Network(5f7989ab-cb9d-493b-a8c5-edf688782f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape636e020-ec') 3425:2023-08-07 08:33:45.725 8 INFO nova.compute.manager [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 11.46 seconds to spawn the instance on the hypervisor. 
3459:2023-08-07 08:34:02.909 8 INFO nova.compute.manager [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 61.40 seconds to build instance. As you can see, the first log, instance was claimed on node compute 004 at 08:33:08, Then it takes 1 minute (04:34:02) to build instance. Thanks On Fri, Aug 4, 2023 at 10:25?PM Ha Noi > wrote: Hi everyone, We have a openstack with one region and more than 100 compute nodes. We are using ceph for block storage. I don't know why my instance was provisioned too slow: more than 1 minutes. Nova compute log: Took 56.47 seconds to build instance. So I would like to optimize and speed up instance build time. Thanks and Best Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From kkloppenborg at rwts.com.au Mon Aug 7 04:27:03 2023 From: kkloppenborg at rwts.com.au (Karl Kloppenborg) Date: Mon, 7 Aug 2023 04:27:03 +0000 Subject: [openstack][nova] Slow provision 1 instance . In-Reply-To: References: Message-ID: Hi Ha Noi, Correct, try and take the same volume and directly map it on the server, see if it hangs for any period of time. --Karl. From: Ha Noi Date: Monday, 7 August 2023 at 1:11 pm To: Karl Kloppenborg Cc: openstack-discuss at lists.openstack.org Subject: Re: [openstack][nova] Slow provision 1 instance . Hi Karl, Do you mean try rbd-nbd map a volume from ceph to compute node? Thanks, On Mon, Aug 7, 2023 at 8:58?AM Karl Kloppenborg > wrote: Hi Ha Noi, From those logs it appears that the lag is in the libvirt section of the block device connecting and latching to the compute node. If you try and rbd mount manually a drive from your ceph cluster, do you see a long delay in this as well? Kind Regards, -- Karl Kloppenborg, (BCompSc, CNCF-[KCNA, CKA, CKAD], LFCE, CompTIA Linux+ XK0-004) Managing Director, Invention Labs. From: Ha Noi > Date: Monday, 7 August 2023 at 11:38 am To: kkloppenborg at rwts.com.au > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [openstack][nova] Slow provision 1 instance . Hi Karl, The volume creation time is 10 seconds. Below is the log in nova-compute. I would like to optimize nova-compute first . 3414:2023-08-07 08:33:08.436 8 INFO nova.compute.claims [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Claim successful on node compute-004 3415:2023-08-07 08:33:08.702 8 INFO nova.virt.libvirt.driver [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Ignoring supplied device name: /dev/vda. 
Libvirt can't honour user-supplied dev names 3416:2023-08-07 08:33:08.815 8 INFO nova.virt.block_device [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Booting with volume 37d1487e-9118-4231-9908-fb662b626977 at /dev/vda 3417:2023-08-07 08:33:34.269 8 INFO nova.virt.libvirt.driver [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Creating image 3418:2023-08-07 08:33:35.967 8 INFO os_vif [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:1e:06:ae,bridge_name='qbr2ea5e4e0-a9',has_traffic_filtering=True,id=2ea5e4e0-a934-404d-ac4c-a4deadf2aa73,network=Network(a4e1151a-9671-4fb6-b620-66b80d7dce8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2ea5e4e0-a9') 3419:2023-08-07 08:33:37.763 8 INFO os_vif [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:b0:e6:12,bridge_name='qbre636e020-ec',has_traffic_filtering=True,id=e636e020-ec99-47d4-bad3-9c87c65e080f,network=Network(5f7989ab-cb9d-493b-a8c5-edf688782f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape636e020-ec') 3425:2023-08-07 08:33:45.725 8 INFO nova.compute.manager [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 11.46 seconds to spawn the instance on the hypervisor. 3459:2023-08-07 08:34:02.909 8 INFO nova.compute.manager [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 61.40 seconds to build instance. As you can see, the first log, instance was claimed on node compute 004 at 08:33:08, Then it takes 1 minute (04:34:02) to build instance. Thanks On Fri, Aug 4, 2023 at 10:25?PM Ha Noi > wrote: Hi everyone, We have a openstack with one region and more than 100 compute nodes. We are using ceph for block storage. I don't know why my instance was provisioned too slow: more than 1 minutes. Nova compute log: Took 56.47 seconds to build instance. So I would like to optimize and speed up instance build time. Thanks and Best Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Mon Aug 7 06:22:58 2023 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Mon, 7 Aug 2023 06:22:58 +0000 Subject: instance memory save and restore Message-ID: Hi, VMWare vCenter has such feature to snapshot/save instance memory along with disk. Then instance can be reverted to that snapshot later when needed. OpenStack doesn't seem to support it. Any plan to work on that? Actually, I'm looking for the motivation/requirement to for that feature. Typically, we snapshot/backup storage, then restore the storage and restart/reboot the host/instance. Would that be sufficient? I think there will be a chance of inconsistence between memory and storage. That's why we may need to save the state of both memory and storage. Does that make sense? 
Any opinions is welcome. Thanks! Tony From andr.kurilin at gmail.com Mon Aug 7 11:01:57 2023 From: andr.kurilin at gmail.com (Andriy Kurilin) Date: Mon, 7 Aug 2023 13:01:57 +0200 Subject: [Rally] In-Reply-To: References: Message-ID: Glad to hear that everything is resolved now. PS: Usually, such issue happens if one or several dependencies of the plugin is missing or conflicts with the environment. ??, 7 ???. 2023??. ? 06:50, Aayushi Gautam : > Thank You for your reply. But the issue is resolved now. > > On Sun, Aug 6, 2023 at 2:24?PM Andriy Kurilin > wrote: > >> hi! >> Try to run Rally in debug mode like `rally --debug task start`. It should >> show the full error and traceback >> >> ??, 27 ???. 2023??. ? 20:51, Aayushi Gautam : >> >>> Hello, >>> I am Aayushi , an intern with Redhat working on ESI ( Elastic secure >>> Infracstructure) group. We were trying to use Rally to test the performance >>> of our codebase. I have created a Plugin and task . And getting an error. >>> The same bug is also asked on the bug page of Rally, but it was not >>> answered. >>> >>> I have attached the code of the plugin and task and the error message. >>> >>> Looking forward to hearing from you. >>> >> >> >> -- >> Best regards, >> Andrey Kurilin. >> > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Mon Aug 7 11:08:26 2023 From: smooney at redhat.com (smooney at redhat.com) Date: Mon, 07 Aug 2023 12:08:26 +0100 Subject: instance memory save and restore In-Reply-To: References: Message-ID: On Mon, 2023-08-07 at 06:22 +0000, Tony Liu wrote: > Hi, > > VMWare vCenter has such feature to snapshot/save instance memory along with disk. > Then instance can be reverted to that snapshot later when needed. > > OpenStack doesn't seem to support it. Any plan to work on that? if you are asking in the context of the libvirt driver then no its not supproted with libvirt and not planeed to be added in the future. we do not have an api action the corresponds to a live snapshot with memeory capcture so that would likely need a new api action or at least a modifition to the exsiting snapshot api. snapshots in openstack are defiend ot only be of the root disk. libvirt has a managed save api which will save the guest ram to file but i do not bleive there is a version of that that works without interupting the guest exectuion. > > Actually, I'm looking for the motivation/requirement to for that feature. this has come up in the past but part of the issue is its not really in line with the cloud computing usage model. there are uscases for it but its not really a generic capability that is protable to diffent compute drivers. vmware is not a cloud plathform its an enterprise virtualization system so the intended use is quite different. i would be interested to know if aws, azure or gce supported memory snapshoting. > Typically, we snapshot/backup storage, then restore the storage and restart/reboot > the host/instance. Would that be sufficient? I think there will be a chance of inconsistence > between memory and storage. without knowing hte usecase its hard to say but generally snapshoting the disk and rebooting form it is less error prone then trying to also restore the memeory. snapshots are used for more then just backup and restore i.e. they can be used to create multipel new vms by first preparing a vm then snapshoting it and cloning it via booting new vms with the snapshot. we also use the snapshot for shleve/unshelve. 
> That's why we may need to save the state of both memory
> and storage. Does that make sense?
i understand the usecease but in general i don't know if this is a good fit for openstack.
this type of snapshoting is generally only requried for pet vms not generic cloud workloads.
libvirt has the ablity to create vm memory snapshots
https://libvirt.org/kbase/snapshots.html#overview-of-manual-snapshots
https://libvirt.org/formatsnapshot.html#snapshot-xml
however it would not be safe to use this type of snapshoting on say a database.
even if the database data was stored on cinder volumes restoring the ram will restructure
the dbms state on that node to the older state which could lead to data corruption.
generally this si only safe for things like virtual desktops, or other similar systems.
the other challenge is the storage or the ram image. it would need to be stored in glance
and you would likely want to consider encrypting it.
There are many edgecases that would need to be blocked such as it is not going to be possibel
to restore a snapshot of memory if you have resized the vm since it was taken.
its also likely not going to be possibel to take a memory snapshot of vms with sr-iov devices,
vgpus or other hardware features like amd SEV encyptetd memroy.
With all that in mind this would requrie a detailed nova spec to cover the usecase, proposed
changed and the upgrade/operator/security/testing aspect of the proposal.
> Any opinions is welcome.
>
> Thanks!
> Tony
>
>

From kennelson11 at gmail.com Mon Aug 7 16:59:19 2023
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Mon, 7 Aug 2023 11:59:19 -0500
Subject: vPTG October 2023 Team Signup Deadline Approaching!
Message-ID: 

Hello Everyone,

Sign up your team up for the next virtual Project Teams Gathering (PTG),
which will be held from Monday, October 23 to Friday, October 27 2023!

If you haven't already done so and your team is interested in
participating, you need to complete the survey[1] by August 20, 2023 at
7:00 UTC.

Then make sure to register[2] - it?s free :)

Thanks!

-Kendall (diablo_rojo)

[1] Team Survey:
https://openinfrafoundation.formstack.com/forms/oct2023_ptg_team_signup
[2] PTG Registration: http://ptg2023.openinfra.dev/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hanoi952022 at gmail.com Mon Aug 7 01:38:30 2023
From: hanoi952022 at gmail.com (Ha Noi)
Date: Mon, 7 Aug 2023 08:38:30 +0700
Subject: [openstack][nova] Slow provision 1 instance .
In-Reply-To: 
References: 
Message-ID: 

Hi Karl,

The volume creation time is 10 seconds. Below is the log in nova-compute.
I would like to optimize nova-compute first .

3414:2023-08-07 08:33:08.436 8 INFO nova.compute.claims [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Claim successful on node compute-004
3415:2023-08-07 08:33:08.702 8 INFO nova.virt.libvirt.driver [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Ignoring supplied device name: /dev/vda.
Libvirt can't honour user-supplied dev names 3416:2023-08-07 08:33:08.815 8 INFO nova.virt.block_device [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Booting with volume 37d1487e-9118-4231-9908-fb662b626977 at /dev/vda 3417:2023-08-07 08:33:34.269 8 INFO nova.virt.libvirt.driver [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Creating image 3418:2023-08-07 08:33:35.967 8 INFO os_vif [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:1e:06:ae,bridge_name='qbr2ea5e4e0-a9',has_traffic_filtering=True,id=2ea5e4e0-a934-404d-ac4c-a4deadf2aa73,network=Network(a4e1151a-9671-4fb6-b620-66b80d7dce8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2ea5e4e0-a9') 3419:2023-08-07 08:33:37.763 8 INFO os_vif [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:b0:e6:12,bridge_name='qbre636e020-ec',has_traffic_filtering=True,id=e636e020-ec99-47d4-bad3-9c87c65e080f,network=Network(5f7989ab-cb9d-493b-a8c5-edf688782f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape636e020-ec') 3425:2023-08-07 08:33:45.725 8 INFO nova.compute.manager [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 11.46 seconds to spawn the instance on the hypervisor. 3459:2023-08-07 08:34:02.909 8 INFO nova.compute.manager [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 61.40 seconds to build instance. As you can see, the first log, instance was claimed on node compute 004 at 08:33:08, Then it takes 1 minute (04:34:02) to build instance. Thanks On Fri, Aug 4, 2023 at 10:25?PM Ha Noi wrote: > Hi everyone, > > We have a openstack with one region and more than 100 compute nodes. We > are using ceph for block storage. > > I don't know why my instance was provisioned too slow: more than 1 > minutes. > > Nova compute log: Took 56.47 seconds to build instance. > > > So I would like to optimize and speed up instance build time. > > > Thanks and Best Regards > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hanoi952022 at gmail.com Mon Aug 7 03:11:02 2023 From: hanoi952022 at gmail.com (Ha Noi) Date: Mon, 7 Aug 2023 10:11:02 +0700 Subject: [openstack][nova] Slow provision 1 instance . In-Reply-To: References: Message-ID: Hi Karl, Do you mean try rbd-nbd map a volume from ceph to compute node? Thanks, On Mon, Aug 7, 2023 at 8:58?AM Karl Kloppenborg wrote: > Hi Ha Noi, > > > > From those logs it appears that the lag is in the libvirt section of the > block device connecting and latching to the compute node. > > > > If you try and rbd mount manually a drive from your ceph cluster, do you > see a long delay in this as well? 
> > > > Kind Regards, > > -- > > *Karl Kloppenborg, **(BCompSc, CNCF-[KCNA, CKA, CKAD], LFCE, CompTIA > Linux+ XK0-004)* > > Managing Director, Invention Labs. > > > > > > *From: *Ha Noi > *Date: *Monday, 7 August 2023 at 11:38 am > *To: *kkloppenborg at rwts.com.au > *Cc: *openstack-discuss at lists.openstack.org < > openstack-discuss at lists.openstack.org> > *Subject: *Re: [openstack][nova] Slow provision 1 instance . > > Hi Karl, > > > > The volume creation time is 10 seconds. Below is the log in nova-compute. > I would like to optimize nova-compute first . > > > > > > > > 3414:2023-08-07 08:33:08.436 8 INFO nova.compute.claims > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Claim successful on node compute-004 > 3415:2023-08-07 08:33:08.702 8 INFO nova.virt.libvirt.driver > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Ignoring supplied device name: > /dev/vda. Libvirt can't honour user-supplied dev names > 3416:2023-08-07 08:33:08.815 8 INFO nova.virt.block_device > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Booting with volume > 37d1487e-9118-4231-9908-fb662b626977 at /dev/vda > 3417:2023-08-07 08:33:34.269 8 INFO nova.virt.libvirt.driver > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Creating image > 3418:2023-08-07 08:33:35.967 8 INFO os_vif > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] Successfully plugged > vif > VIFBridge(active=False,address=fa:16:3e:1e:06:ae,bridge_name='qbr2ea5e4e0-a9',has_traffic_filtering=True,id=2ea5e4e0-a934-404d-ac4c-a4deadf2aa73,network=Network(a4e1151a-9671-4fb6-b620-66b80d7dce8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2ea5e4e0-a9') > 3419:2023-08-07 08:33:37.763 8 INFO os_vif > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] Successfully plugged > vif > VIFBridge(active=False,address=fa:16:3e:b0:e6:12,bridge_name='qbre636e020-ec',has_traffic_filtering=True,id=e636e020-ec99-47d4-bad3-9c87c65e080f,network=Network(5f7989ab-cb9d-493b-a8c5-edf688782f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape636e020-ec') > 3425:2023-08-07 08:33:45.725 8 INFO nova.compute.manager > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 11.46 seconds to spawn the > instance on the hypervisor. > 3459:2023-08-07 08:34:02.909 8 INFO nova.compute.manager > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 61.40 seconds to build instance. > > > > > > As you can see, the first log, instance was claimed on node compute 004 at > 08:33:08, Then it takes 1 minute (04:34:02) to build instance. 
> > > > > > Thanks > > > > On Fri, Aug 4, 2023 at 10:25?PM Ha Noi wrote: > > Hi everyone, > > > > We have a openstack with one region and more than 100 compute nodes. We > are using ceph for block storage. > > > > I don't know why my instance was provisioned too slow: more than 1 > minutes. > > > > Nova compute log: > > Took 56.47 seconds to build instance. > > > > > > > > So I would like to optimize and speed up instance build time. > > > > > > Thanks and Best Regards > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hanoi952022 at gmail.com Mon Aug 7 04:44:28 2023 From: hanoi952022 at gmail.com (Ha Noi) Date: Mon, 7 Aug 2023 11:44:28 +0700 Subject: [openstack][nova] Slow provision 1 instance . In-Reply-To: References: Message-ID: Hi Karl, time rbd-nbd --id cinder map SSD1/volume-f2ff322b-c9f4-4efe-9b6e-d80a9d0aa043_ERR /dev/nbd0 real 0m0.111s user 0m0.028s sys 0m0.005s The time mapping is quite fast. Thanks, On Mon, Aug 7, 2023 at 11:27?AM Karl Kloppenborg wrote: > Hi Ha Noi, > > > > Correct, try and take the same volume and directly map it on the server, > see if it hangs for any period of time. > > > > --Karl. > > > > *From: *Ha Noi > *Date: *Monday, 7 August 2023 at 1:11 pm > *To: *Karl Kloppenborg > *Cc: *openstack-discuss at lists.openstack.org < > openstack-discuss at lists.openstack.org> > *Subject: *Re: [openstack][nova] Slow provision 1 instance . > > Hi Karl, > > > > Do you mean try rbd-nbd map a volume from ceph to compute node? > > > > Thanks, > > > > On Mon, Aug 7, 2023 at 8:58?AM Karl Kloppenborg > wrote: > > Hi Ha Noi, > > > > From those logs it appears that the lag is in the libvirt section of the > block device connecting and latching to the compute node. > > > > If you try and rbd mount manually a drive from your ceph cluster, do you > see a long delay in this as well? > > > > Kind Regards, > > -- > > *Karl Kloppenborg, **(BCompSc, CNCF-[KCNA, CKA, CKAD], LFCE, CompTIA > Linux+ XK0-004)* > > Managing Director, Invention Labs. > > > > > > *From: *Ha Noi > *Date: *Monday, 7 August 2023 at 11:38 am > *To: *kkloppenborg at rwts.com.au > *Cc: *openstack-discuss at lists.openstack.org < > openstack-discuss at lists.openstack.org> > *Subject: *Re: [openstack][nova] Slow provision 1 instance . > > Hi Karl, > > > > The volume creation time is 10 seconds. Below is the log in nova-compute. > I would like to optimize nova-compute first . > > > > > > > > 3414:2023-08-07 08:33:08.436 8 INFO nova.compute.claims > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Claim successful on node compute-004 > 3415:2023-08-07 08:33:08.702 8 INFO nova.virt.libvirt.driver > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Ignoring supplied device name: > /dev/vda. 
Libvirt can't honour user-supplied dev names > 3416:2023-08-07 08:33:08.815 8 INFO nova.virt.block_device > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Booting with volume > 37d1487e-9118-4231-9908-fb662b626977 at /dev/vda > 3417:2023-08-07 08:33:34.269 8 INFO nova.virt.libvirt.driver > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Creating image > 3418:2023-08-07 08:33:35.967 8 INFO os_vif > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] Successfully plugged > vif > VIFBridge(active=False,address=fa:16:3e:1e:06:ae,bridge_name='qbr2ea5e4e0-a9',has_traffic_filtering=True,id=2ea5e4e0-a934-404d-ac4c-a4deadf2aa73,network=Network(a4e1151a-9671-4fb6-b620-66b80d7dce8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2ea5e4e0-a9') > 3419:2023-08-07 08:33:37.763 8 INFO os_vif > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] Successfully plugged > vif > VIFBridge(active=False,address=fa:16:3e:b0:e6:12,bridge_name='qbre636e020-ec',has_traffic_filtering=True,id=e636e020-ec99-47d4-bad3-9c87c65e080f,network=Network(5f7989ab-cb9d-493b-a8c5-edf688782f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape636e020-ec') > 3425:2023-08-07 08:33:45.725 8 INFO nova.compute.manager > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 11.46 seconds to spawn the > instance on the hypervisor. > 3459:2023-08-07 08:34:02.909 8 INFO nova.compute.manager > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 61.40 seconds to build instance. > > > > > > As you can see, the first log, instance was claimed on node compute 004 at > 08:33:08, Then it takes 1 minute (04:34:02) to build instance. > > > > > > Thanks > > > > On Fri, Aug 4, 2023 at 10:25?PM Ha Noi wrote: > > Hi everyone, > > > > We have a openstack with one region and more than 100 compute nodes. We > are using ceph for block storage. > > > > I don't know why my instance was provisioned too slow: more than 1 > minutes. > > > > Nova compute log: > > Took 56.47 seconds to build instance. > > > > > > > > So I would like to optimize and speed up instance build time. > > > > > > Thanks and Best Regards > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aagautam at redhat.com Mon Aug 7 04:50:01 2023 From: aagautam at redhat.com (Aayushi Gautam) Date: Mon, 7 Aug 2023 00:50:01 -0400 Subject: [Rally] In-Reply-To: References: Message-ID: Thank You for your reply. But the issue is resolved now. On Sun, Aug 6, 2023 at 2:24?PM Andriy Kurilin wrote: > hi! > Try to run Rally in debug mode like `rally --debug task start`. It should > show the full error and traceback > > ??, 27 ???. 2023??. ? 20:51, Aayushi Gautam : > >> Hello, >> I am Aayushi , an intern with Redhat working on ESI ( Elastic secure >> Infracstructure) group. 
We were trying to use Rally to test the performance >> of our codebase. I have created a Plugin and task . And getting an error. >> The same bug is also asked on the bug page of Rally, but it was not >> answered. >> >> I have attached the code of the plugin and task and the error message. >> >> Looking forward to hearing from you. >> > > > -- > Best regards, > Andrey Kurilin. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hanoi952022 at gmail.com Mon Aug 7 08:56:56 2023 From: hanoi952022 at gmail.com (Ha Noi) Date: Mon, 7 Aug 2023 15:56:56 +0700 Subject: [openstack][nova] Slow provision 1 instance . In-Reply-To: References: Message-ID: Hi Karl, time rbd-nbd --id cinder map SSD1/volume-f2ff322b-c9f4-4efe-9b6e-d80a9d0aa043_ERR /dev/nbd0 real 0m0.111s user 0m0.028s sys 0m0.005s The time mapping is quite fast. Thanks, On Mon, Aug 7, 2023 at 11:27?AM Karl Kloppenborg wrote: > Hi Ha Noi, > > > > Correct, try and take the same volume and directly map it on the server, > see if it hangs for any period of time. > > > > --Karl. > > > > *From: *Ha Noi > *Date: *Monday, 7 August 2023 at 1:11 pm > *To: *Karl Kloppenborg > *Cc: *openstack-discuss at lists.openstack.org < > openstack-discuss at lists.openstack.org> > *Subject: *Re: [openstack][nova] Slow provision 1 instance . > > Hi Karl, > > > > Do you mean try rbd-nbd map a volume from ceph to compute node? > > > > Thanks, > > > > On Mon, Aug 7, 2023 at 8:58?AM Karl Kloppenborg > wrote: > > Hi Ha Noi, > > > > From those logs it appears that the lag is in the libvirt section of the > block device connecting and latching to the compute node. > > > > If you try and rbd mount manually a drive from your ceph cluster, do you > see a long delay in this as well? > > > > Kind Regards, > > -- > > *Karl Kloppenborg, **(BCompSc, CNCF-[KCNA, CKA, CKAD], LFCE, CompTIA > Linux+ XK0-004)* > > Managing Director, Invention Labs. > > > > > > *From: *Ha Noi > *Date: *Monday, 7 August 2023 at 11:38 am > *To: *kkloppenborg at rwts.com.au > *Cc: *openstack-discuss at lists.openstack.org < > openstack-discuss at lists.openstack.org> > *Subject: *Re: [openstack][nova] Slow provision 1 instance . > > Hi Karl, > > > > The volume creation time is 10 seconds. Below is the log in nova-compute. > I would like to optimize nova-compute first . > > > > > > > > 3414:2023-08-07 08:33:08.436 8 INFO nova.compute.claims > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Claim successful on node compute-004 > 3415:2023-08-07 08:33:08.702 8 INFO nova.virt.libvirt.driver > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Ignoring supplied device name: > /dev/vda. 
Libvirt can't honour user-supplied dev names > 3416:2023-08-07 08:33:08.815 8 INFO nova.virt.block_device > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Booting with volume > 37d1487e-9118-4231-9908-fb662b626977 at /dev/vda > 3417:2023-08-07 08:33:34.269 8 INFO nova.virt.libvirt.driver > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Creating image > 3418:2023-08-07 08:33:35.967 8 INFO os_vif > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] Successfully plugged > vif > VIFBridge(active=False,address=fa:16:3e:1e:06:ae,bridge_name='qbr2ea5e4e0-a9',has_traffic_filtering=True,id=2ea5e4e0-a934-404d-ac4c-a4deadf2aa73,network=Network(a4e1151a-9671-4fb6-b620-66b80d7dce8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2ea5e4e0-a9') > 3419:2023-08-07 08:33:37.763 8 INFO os_vif > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] Successfully plugged > vif > VIFBridge(active=False,address=fa:16:3e:b0:e6:12,bridge_name='qbre636e020-ec',has_traffic_filtering=True,id=e636e020-ec99-47d4-bad3-9c87c65e080f,network=Network(5f7989ab-cb9d-493b-a8c5-edf688782f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape636e020-ec') > 3425:2023-08-07 08:33:45.725 8 INFO nova.compute.manager > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 11.46 seconds to spawn the > instance on the hypervisor. > 3459:2023-08-07 08:34:02.909 8 INFO nova.compute.manager > [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 > f64e241d000441d292b91e85138e325c - default default] [instance: > c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 61.40 seconds to build instance. > > > > > > As you can see, the first log, instance was claimed on node compute 004 at > 08:33:08, Then it takes 1 minute (04:34:02) to build instance. > > > > > > Thanks > > > > On Fri, Aug 4, 2023 at 10:25?PM Ha Noi wrote: > > Hi everyone, > > > > We have a openstack with one region and more than 100 compute nodes. We > are using ceph for block storage. > > > > I don't know why my instance was provisioned too slow: more than 1 > minutes. > > > > Nova compute log: > > Took 56.47 seconds to build instance. > > > > > > > > So I would like to optimize and speed up instance build time. > > > > > > Thanks and Best Regards > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yasufum.o at gmail.com Mon Aug 7 21:59:10 2023 From: yasufum.o at gmail.com (Yasufumi Ogawa) Date: Tue, 8 Aug 2023 06:59:10 +0900 Subject: [tacker] Cancelling IRC meetings Message-ID: <7c61a7a8-b629-9ccd-c929-6975aa9c8210@gmail.com> Hi team, I'd like to skip the next two weekly meetings becaseu I'm not available today, nothing new issues from the last week and next week is a week holiday for many of us. 
Thanks, Yasufumi From kkloppenborg at rwts.com.au Mon Aug 7 22:28:45 2023 From: kkloppenborg at rwts.com.au (Karl Kloppenborg) Date: Mon, 7 Aug 2023 22:28:45 +0000 Subject: [openstack][nova] Slow provision 1 instance . In-Reply-To: References: Message-ID: Good morning Ha Noi, Okay, based on this it seems there maybe some sort of delay in Libvirt. Have you got any logs from Libvirt? Thanks, Karl. From: Ha Noi Date: Monday, 7 August 2023 at 6:57 pm To: Karl Kloppenborg Cc: openstack-discuss at lists.openstack.org Subject: Re: [openstack][nova] Slow provision 1 instance . Hi Karl, time rbd-nbd --id cinder map SSD1/volume-f2ff322b-c9f4-4efe-9b6e-d80a9d0aa043_ERR /dev/nbd0 real 0m0.111s user 0m0.028s sys 0m0.005s The time mapping is quite fast. Thanks, On Mon, Aug 7, 2023 at 11:27?AM Karl Kloppenborg > wrote: Hi Ha Noi, Correct, try and take the same volume and directly map it on the server, see if it hangs for any period of time. --Karl. From: Ha Noi > Date: Monday, 7 August 2023 at 1:11 pm To: Karl Kloppenborg > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [openstack][nova] Slow provision 1 instance . Hi Karl, Do you mean try rbd-nbd map a volume from ceph to compute node? Thanks, On Mon, Aug 7, 2023 at 8:58?AM Karl Kloppenborg > wrote: Hi Ha Noi, From those logs it appears that the lag is in the libvirt section of the block device connecting and latching to the compute node. If you try and rbd mount manually a drive from your ceph cluster, do you see a long delay in this as well? Kind Regards, -- Karl Kloppenborg, (BCompSc, CNCF-[KCNA, CKA, CKAD], LFCE, CompTIA Linux+ XK0-004) Managing Director, Invention Labs. From: Ha Noi > Date: Monday, 7 August 2023 at 11:38 am To: kkloppenborg at rwts.com.au > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [openstack][nova] Slow provision 1 instance . Hi Karl, The volume creation time is 10 seconds. Below is the log in nova-compute. I would like to optimize nova-compute first . 3414:2023-08-07 08:33:08.436 8 INFO nova.compute.claims [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Claim successful on node compute-004 3415:2023-08-07 08:33:08.702 8 INFO nova.virt.libvirt.driver [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Ignoring supplied device name: /dev/vda. 
Libvirt can't honour user-supplied dev names 3416:2023-08-07 08:33:08.815 8 INFO nova.virt.block_device [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Booting with volume 37d1487e-9118-4231-9908-fb662b626977 at /dev/vda 3417:2023-08-07 08:33:34.269 8 INFO nova.virt.libvirt.driver [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Creating image 3418:2023-08-07 08:33:35.967 8 INFO os_vif [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:1e:06:ae,bridge_name='qbr2ea5e4e0-a9',has_traffic_filtering=True,id=2ea5e4e0-a934-404d-ac4c-a4deadf2aa73,network=Network(a4e1151a-9671-4fb6-b620-66b80d7dce8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2ea5e4e0-a9') 3419:2023-08-07 08:33:37.763 8 INFO os_vif [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] Successfully plugged vif VIFBridge(active=False,address=fa:16:3e:b0:e6:12,bridge_name='qbre636e020-ec',has_traffic_filtering=True,id=e636e020-ec99-47d4-bad3-9c87c65e080f,network=Network(5f7989ab-cb9d-493b-a8c5-edf688782f81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tape636e020-ec') 3425:2023-08-07 08:33:45.725 8 INFO nova.compute.manager [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 11.46 seconds to spawn the instance on the hypervisor. 3459:2023-08-07 08:34:02.909 8 INFO nova.compute.manager [req-33bf9cfd-38c5-40c3-aceb-2c4c0bfa986c 125406d260d24c77aca1b09206825437 f64e241d000441d292b91e85138e325c - default default] [instance: c54db6fe-7bf0-4ca3-8167-20ec2763b79f] Took 61.40 seconds to build instance. As you can see, the first log, instance was claimed on node compute 004 at 08:33:08, Then it takes 1 minute (04:34:02) to build instance. Thanks On Fri, Aug 4, 2023 at 10:25?PM Ha Noi > wrote: Hi everyone, We have a openstack with one region and more than 100 compute nodes. We are using ceph for block storage. I don't know why my instance was provisioned too slow: more than 1 minutes. Nova compute log: Took 56.47 seconds to build instance. So I would like to optimize and speed up instance build time. Thanks and Best Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Mon Aug 7 23:18:54 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Tue, 8 Aug 2023 06:18:54 +0700 Subject: [cinder] prevent change volume type from different AZ Message-ID: Hello guys. I have 2 AZ and 1 volume backend on each AZ. My goal is when 1 volume was created from the AZ then user cannot change it to another volume type on a different AZ. Thank you. Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From satish.txt at gmail.com Mon Aug 7 23:25:21 2023 From: satish.txt at gmail.com (Satish Patel) Date: Mon, 7 Aug 2023 19:25:21 -0400 Subject: [Skyline] error username or password incorrect Message-ID: Folks, Try to install skyline UI to replace horizon using doc: https://docs.openstack.org/skyline-apiserver/latest/install/docker-install-ubuntu.html Everything went well and I got a login page on http://x.x.x.x:9999 also it pulled Region/Domains. When I am trying to login with my account, I get an error: Username or Password is incorrect. I am using sqlite DB for skyline as per documents. No errors in logs command $ docker logs skyline When I use Chrome Developer Tools then it was indicating an error in these URLs. http://openstack.example.com:9999/api/openstack/skyline/api/v1/profile http://openstack.example.com:9999/api/openstack/skyline/api/v1/policies 401 Unauthorized ( {"detail":"no such table: revoked_token"} ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Tue Aug 8 03:51:29 2023 From: satish.txt at gmail.com (Satish Patel) Date: Mon, 7 Aug 2023 23:51:29 -0400 Subject: [Skyline] error username or password incorrect In-Reply-To: References: Message-ID: Skyline Team, I found similar issue in BUG Report but no solution yet https://bugs.launchpad.net/skyline-apiserver/+bug/2025755 On Mon, Aug 7, 2023 at 7:25?PM Satish Patel wrote: > Folks, > > Try to install skyline UI to replace horizon using doc: > https://docs.openstack.org/skyline-apiserver/latest/install/docker-install-ubuntu.html > > > Everything went well and I got a login page on http://x.x.x.x:9999 also > it pulled Region/Domains. When I am trying to login with my account, I get > an error: Username or Password is incorrect. > > I am using sqlite DB for skyline as per documents. > > No errors in logs command > $ docker logs skyline > > When I use Chrome Developer Tools then it was indicating an error in these > URLs. > > http://openstack.example.com:9999/api/openstack/skyline/api/v1/profile > http://openstack.example.com:9999/api/openstack/skyline/api/v1/policies > > 401 Unauthorized ( {"detail":"no such table: revoked_token"} ) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Tue Aug 8 05:11:01 2023 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 8 Aug 2023 01:11:01 -0400 Subject: [Skyline] error username or password incorrect In-Reply-To: References: Message-ID: Update: After switching DB from sqlite to mysql DB it works. Now admin account works but when I login with _member_ users or normal account and trying to create instance then pop up windows throwing error: { "message": "You don't have access to get instances.", "status": 401 } On Mon, Aug 7, 2023 at 11:51?PM Satish Patel wrote: > Skyline Team, > > I found similar issue in BUG Report but no solution yet > > https://bugs.launchpad.net/skyline-apiserver/+bug/2025755 > > On Mon, Aug 7, 2023 at 7:25?PM Satish Patel wrote: > >> Folks, >> >> Try to install skyline UI to replace horizon using doc: >> https://docs.openstack.org/skyline-apiserver/latest/install/docker-install-ubuntu.html >> >> >> Everything went well and I got a login page on http://x.x.x.x:9999 also >> it pulled Region/Domains. When I am trying to login with my account, I get >> an error: Username or Password is incorrect. >> >> I am using sqlite DB for skyline as per documents. 
>> >> No errors in logs command >> $ docker logs skyline >> >> When I use Chrome Developer Tools then it was indicating an error in >> these URLs. >> >> http://openstack.example.com:9999/api/openstack/skyline/api/v1/profile >> http://openstack.example.com:9999/api/openstack/skyline/api/v1/policies >> >> 401 Unauthorized ( {"detail":"no such table: revoked_token"} ) >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From felix.huettner at mail.schwarz Tue Aug 8 12:18:01 2023 From: felix.huettner at mail.schwarz (Felix Huettner) Date: Tue, 8 Aug 2023 14:18:01 +0200 Subject: [ovn] VM in external network unable to arp In-Reply-To: References: Message-ID: Hi Justin, i guess your "public-network" logical switch has a large number of ports connected to it (somewhere between 200 and 300 or more). In this case you might be interested in this fix [1]. It allows you to configure the "public-network" logical switch to not broadcast arp requests to all routers and instead only send it to non-router ports (Arp requests for LRPs will still be forwarded to that individual router). Note that this setting will break GARPs on this network. Not sure if that would be an issue for you. I have never tested this with a VM that is connected directly on a provider network but i would guess it should still work. Regards Felix [1]: https://github.com/ovn-org/ovn/commit/37d308a2074515834692d442475a8e05310a152d On Fri, Aug 04, 2023 at 01:30:45PM +0000, Justin Lamp wrote: > Hey, > > we are using OVN 22.03 and face an issue where a VM that is directly connected to the provider network won't be accessible, because it cannot arp for the Gateway IP. OVN routers do reply to the arp request though. We know that this exact scenario works as we have it running in our staging environment. > > Oddly enough if the right MAC-IP Binding is manually defined within the VM and the Gateway, the traffic will begin to flow correctly according to the right SGs. > > I did an ovn-trace and were able to see that the traffic is supposed to be flooded to the right ports. The ovs-trace on the other hand did not show the same picture. It just did 4k recirculations and then dropped the packet. I already restarted the ovn-controller on the right hv, but that did not do anything. 
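The fix referenced above is toggled per logical switch. A hedged sketch using the switch name from this thread -- the exact option key and the minimum OVN release should be checked against ovn-nb(5) for the version actually deployed (it is newer than 22.03):

ovn-nbctl set Logical_Switch neutron-210e26d7-942f-4e17-89b2-571eee87d7e4 other_config:broadcast-arps-to-all-routers=false

Since Neutron owns the NB database, a manual change like this can be overwritten later; if the Neutron version in use exposes a matching [ovn] configuration option for this behaviour, setting it there is the safer route.
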
> > The LSP: > > $ ovn-nbctl list Logical_Switch_Port cfce175b-9d88-4c2e-a5cc-d76cd5c71deb > _uuid : c5dfb248-941e-4d4e-af1a-9ccafc22db70 > addresses : ["fa:16:3e:a2:d7:1a 2a02:ed80:0:3::341 91.198.2.33"] > dhcpv4_options : 1922ee38-282f-4f5c-ade8-6cd157ee52e9 > dhcpv6_options : [] > dynamic_addresses : [] > enabled : true > external_ids : {"neutron:cidrs"="2a02:ed80:0:3::341/64 91.198.2.33/24", "neutron:device_id"="8062ec61-0c68-41dd-b77c-e8b72ad16a88", "neutron:device_owner"="compute:AZ1", "neutron:network_name"=neutron-210e26d7-942f-4e17-89b2-571eee87d7e4, "neutron:port_name"="", "neutron:project_id"="99fb21796a8f4cbda42ba5b9d1e307dd", "neutron:revision_number"="16", "neutron:security_group_ids"="3e41777f-7aa4-4368-9992-5ca7cc2a5372 873b3b62-0918-4b1e-be73-fdbed50d2ac2"} > ha_chassis_group : [] > name : "cfce175b-9d88-4c2e-a5cc-d76cd5c71deb" > options : {mcast_flood_reports="true", requested-chassis=net-openstack-hv31} > parent_name : [] > port_security : ["fa:16:3e:a2:d7:1a 2a02:ed80:0:3::341 91.198.2.33"] > tag : [] > tag_request : [] > type : "" > up : true > > > The PB: > > $ ovn-sbctl find Port_Binding logical_port=cfce175b-9d88-4c2e-a5cc-d76cd5c71deb > _uuid : e9e5ce44-698f-4a29-acd1-2f24cc1d1950 > chassis : c944c21a-3344-4fda-ab4e-a4cc07403125 > datapath : 993b44d5-1629-4e9b-b44e-24096d8b3959 > encap : [] > external_ids : {"neutron:cidrs"="2a02:ed80:0:3::341/64 91.198.2.33/24", "neutron:device_id"="8062ec61-0c68-41dd-b77c-e8b72ad16a88", "neutron:device_owner"="compute:AZ1", "neutron:network_name"=neutron-210e26d7-942f-4e17-89b2-571eee87d7e4, "neutron:port_name"="", "neutron:project_id"="99fb21796a8f4cbda42ba5b9d1e307dd", "neutron:revision_number"="16", "neutron:security_group_ids"="3e41777f-7aa4-4368-9992-5ca7cc2a5372 873b3b62-0918-4b1e-be73-fdbed50d2ac2"} > gateway_chassis : [] > ha_chassis_group : [] > logical_port : "cfce175b-9d88-4c2e-a5cc-d76cd5c71deb" > mac : ["fa:16:3e:a2:d7:1a 2a02:ed80:0:3::341 91.198.2.33"] > nat_addresses : [] > options : {mcast_flood_reports="true", requested-chassis=net-openstack-hv31} > parent_port : [] > requested_chassis : c944c21a-3344-4fda-ab4e-a4cc07403125 > tag : [] > tunnel_key : 344 > type : "" > up : true > virtual_parent : [] > > > > The LS: > > $ ovn-nbctl list Logical_Switch public-network > _uuid : 56d8be55-462a-4b93-8710-3c79ca386213 > acls : [] > copp : [] > dns_records : [] > external_ids : {"neutron:mtu"="1500", "neutron:network_name"=public-network, "neutron:revision_number"="21"} > forwarding_groups : [] > load_balancer : [] > load_balancer_group : [] > name : neutron-210e26d7-942f-4e17-89b2-571eee87d7e4 > other_config : {mcast_flood_unregistered="false", mcast_snoop="false"} > ports : [00225774-8fbc-473f-ae5e-d486c54212c8, ..., c5dfb248-941e-4d4e-af1a-9ccafc22db70, ... > qos_rules : [] > > > > The patchport: > > $ ovn-nbctl list Logical_Switch_Port provnet-aa35051c-6fc0-463a-8807-0cb28903be14 > _uuid : f7259aeb-0e63-4d20-8a8e-54ebf454a524 > addresses : [unknown] > dhcpv4_options : [] > dhcpv6_options : [] > dynamic_addresses : [] > enabled : [] > external_ids : {} > ha_chassis_group : [] > name : provnet-aa35051c-6fc0-463a-8807-0cb28903be14 > options : {mcast_flood="false", mcast_flood_reports="true", network_name=physnet1} > parent_name : [] > port_security : [] > tag : [] > tag_request : [] > type : localnet > up : false > > > > I hope I provided the needed context! > Thanks in advance! > > Best regards, > Justin Lamp > > --? > Justin Lamp > Systems Engineer > > NETWAYS Managed Services GmbH | Deutschherrnstr. 
15-19 | D-90429 Nuernberg > Tel: +49 911 92885-0 | Fax: +49 911 92885-77 > CEO: Julian Hein, Bernd Erk, Sebastian Saemann | AG Nuernberg HRB25207 > https://www.netways.de | justin.lamp at netways.de > > ** stackconf 2023 - September - https://stackconf.eu ** > ** OSMC 2023 - November - https://osmc.de ** > ** NETWAYS Web Services - https://nws.netways.de ** > ** NETWAYS Trainings - https://netways.de/trainings ** Diese E Mail enth?lt m?glicherweise vertrauliche Inhalte und ist nur f?r die Verwertung durch den vorgesehenen Empf?nger bestimmt. Sollten Sie nicht der vorgesehene Empf?nger sein, setzen Sie den Absender bitte unverz?glich in Kenntnis und l?schen diese E Mail. Hinweise zum Datenschutz finden Sie hier. This e-mail may contain confidential content and is intended only for the specified recipient/s. If you are not the intended recipient, please inform the sender immediately and delete this e-mail. Information on data protection can be found here. From satish.txt at gmail.com Mon Aug 7 23:21:45 2023 From: satish.txt at gmail.com (Satish Patel) Date: Mon, 7 Aug 2023 19:21:45 -0400 Subject: [skyline] Username or password is incorrect Message-ID: Folks, Try to install skyline UI to replace horizon using doc: https://docs.openstack.org/skyline-apiserver/latest/install/docker-install-ubuntu.html Everything went well and I got a login page on http://x.x.x.x:9999 also it pulled Region/Domains. When I am trying to login with my account, I get an error: Username or Password is incorrect. I am using sqlite DB for skyline as per documents. No errors in logs command $ docker logs skyline When I use Chrome Developer Tools then it was indicating an error in these URLs. http://openstack.example.com:9999/api/openstack/skyline/api/v1/profile http://openstack.example.com:9999/api/openstack/skyline/api/v1/policies 401 Unauthorized ( {"detail":"no such table: revoked_token"} ) Find attached screenshot -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot 2023-08-07 at 7.21.21 PM.png Type: image/png Size: 161469 bytes Desc: not available URL: From kushagrguptasps.mun at gmail.com Tue Aug 8 12:42:17 2023 From: kushagrguptasps.mun at gmail.com (Kushagr Gupta) Date: Tue, 8 Aug 2023 18:12:17 +0530 Subject: TripleO deployment over IPV6 In-Reply-To: References: Message-ID: Hi Julia,Team, Thank you for the response @Julia Kreger On Thu, Jul 27, 2023 at 6:59?PM Julia Kreger wrote: > > I guess what is weird in this entire thing is it sounds like you're > shifting over to what appears to be OPROM boot code in a network interface > card, which might not support v6. Then again a port mirrored packet capture > would be the needful item to troubleshoot further. > We have setup a local dnsmasq-dhcp server and TFTP server on a VM and tried PXE booting the same set of hardwares. The hardware are booting on IPV6 so I think the hardware supports IPV6 PXE booting. > Are you able to extract the exact command line which is being passed to > the dnsmasq process for that container launch? > > I guess I'm worried if somehow dnsmasq changed or if an old version is > somehow in the container image you're using. 
> > The command line which is getting executed is as follows: " "command": [ "/bin/bash", "-c", "BIND_HOST=ca:ca:ca:9900::171; /usr/sbin/dnsmasq --keep-in-foreground --log-facility=/var/log/ironic/dnsmasq.log --user=root --conf-file=/dev/null --listen-address=$BIND_HOST --port=0 --enable-tftp --tftp-root=/var/lib/ironic/tftpboot" ], " We found this command in the following: /var/lib/tripleo-config/container-startup-config/step_4/ironic_pxe_tftp.json Apart from this we also tried to install the openstack version zed. In this version, the container ironic_pxe_tftp is up and running but we were still getting the same error: [image: image.png] We tried to curl the file which the TFTP container provides from a remote machine(not the undercloud), but we are unable to curl it. [image: image.png] But when, we do the same thing from the undercloud, it is working fine: [image: image.png] We also set up an undercloud machine on ipv4 for comparison. When we tried to curl the image from a remote machine(not the undercloud) for this server, we were able to curl it. [image: image.png] On further digging, we found that in the zed release, the "ironic_pxe_tftp" is in *healthy *state while three containers namely: "ironic_api","ironic_conductor","ironic_pxe_http" are in *unhealthy *state but are up and running. We re-installed the undercloud on the fresh machine and re-tried node introspection after performing basic tasks like image upload, node registration. To our surprise, the Introspection was successful. and the nodes came in available state: [image: nodes_available_4.PNG] At this point we were also able to curl the file from a random machine: [image: image.png] But it all stopped once we restarted the undercloud node even though all the containers were up and running. We are further investigating this issue. Thanks and Regards Kushagra Gupta -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 19364 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 21209 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 20947 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 61733 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 37677 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nodes_available_4.PNG Type: image/png Size: 30101 bytes Desc: not available URL: From knikolla at bu.edu Tue Aug 8 13:46:02 2023 From: knikolla at bu.edu (Nikolla, Kristi) Date: Tue, 8 Aug 2023 13:46:02 +0000 Subject: [tc] Technical Committee next weekly meeting Today on August 8, 2023 Message-ID: <00440C04-97F1-478F-8F0E-7A1A57E4D285@bu.edu> Hi all, This is a reminder that the next weekly Technical Committee meeting is to be held Today, Tuesday, August 8, 2023 at 1800 UTC on #openstack-tc on OFTC IRC. Please find the agenda below: ? Roll call ? Follow up on past action items ? rosmaita to review guidelines patch and poke at automating it ? Gate health check ? Unmaintained status replaces Extended Maintenance ? 
https://review.opendev.org/c/openstack/governance/+/888771 ? User survey question updates by Aug 18 ? https://lists.openstack.org/pipermail/openstack-discuss/2023-July/034589.html ? Open Discussion and Reviews ? https://review.opendev.org/q/projects:openstack/governance+is:open Thank you, Kristi Nikolla From lucasagomes at gmail.com Tue Aug 8 14:56:56 2023 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Tue, 8 Aug 2023 15:56:56 +0100 Subject: [neutron] Bug Deputy Report July 31 - August 06 Message-ID: Hi, This is the Neutron bug report from July 31st to August 6th. *High:* * https://bugs.launchpad.net/neutron/+bug/2029722 - Routed subnets cannot use sna -Assigned to: Alban PRATS *Medium:* * https://bugs.launchpad.net/neutron/+bug/2029335 - [centos-9-stream] jobs fails as nova-compute stuck at libvirt connect since systemd-252-16.el9 - Assigned to: Yatin * https://bugs.launchpad.net/neutron/+bug/2029419 - Add Tempest scenario to validate NIC teaming - Unassigned *Needs further triage:* * https://bugs.launchpad.net/neutron/+bug/2029420 - ovdb connections fail. out-of-sync with ovs library - Unassigned * https://bugs.launchpad.net/neutron/+bug/2030294 - OVN: garbled DNS responses when edns is being used - Unassigned * https://bugs.launchpad.net/neutron/+bug/2030295 - OVN: no DNS responses are generated for TCP queries - Unassigned Cheers, Lucas -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-francois.taltavull at elca.ch Tue Aug 8 16:30:05 2023 From: jean-francois.taltavull at elca.ch (=?utf-8?B?VGFsdGF2dWxsIEplYW4tRnJhbsOnb2lz?=) Date: Tue, 8 Aug 2023 16:30:05 +0000 Subject: [RALLY] Running Rally tasks and Tempest tests in multi-user context In-Reply-To: References: Message-ID: Thanks a lot for this clear answer Andrey ! All the best, JF From: Andriy Kurilin Sent: dimanche, 6 ao?t 2023 20:38 To: Taltavull Jean-Fran?ois Cc: openstack-discuss at lists.openstack.org Subject: Re: [RALLY] Running Rally tasks and Tempest tests in multi-user context EXTERNAL MESSAGE - This email comes from outside ELCA companies. hi! ??, 3 ???. 2023??. ? 18:22, Taltavull Jean-Fran?ois >: Hi openstack-discuss, I?m currently using Rally v3.4.0 to test our OpenStack Zed platform. Rally has been deployed on a dedicated virtual machine and Rally tasks and Tempest tests, launched on this machine by Rundeck, run pretty well. Now, I wish every OpenStack team member could launch whatever scenario or test he wants, when he wants, for example after having applied a service configuration change on the staging platform. And a question is arising: can several users launch different Rally scenarios or Tempest tests at the same time, from their own Linux account/environment, using the same Rally, the one which is deployed on the dedicated machine ? Rally Task framework itself does not have limitations for parallel executions. But it is worth considering a few nuances: - Database backend. Rally is configured to use SQLite by default. It does not support simultaneous write operations, which can be an issue for using a dedicated rally instance for running parallel tasks. Switching to MySQL/Postgres should not have such an issue. - File descriptions. If you run too many parallel tasks with a huge number of parallel iterations, you may face default linux limitation of open file descriptors. I never ran Tempest simultaneously for the same cloud, so I cannot guarantee that there are no bugs, but it should be ok in general. 
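For reference, moving Rally off SQLite is mostly a connection-string change; a minimal sketch, assuming a pre-created empty "rally" database, a MySQL account with rights on it and the PyMySQL driver installed:

  # /etc/rally/rally.conf (or wherever rally.conf lives in your deployment)
  [database]
  connection = mysql+pymysql://rally:RALLY_PASS@db-host/rally

  # initialise the schema once afterwards
  rally db ensure    # or: rally db create / rally db upgrade, depending on the Rally release

For the file-descriptor limit, raising the nofile ulimit for the account that launches the tasks (limits.conf or the systemd unit, depending on how Rally is run) is usually enough.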
Thanks and best regards, Jean-Francois -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Aug 8 19:06:48 2023 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 8 Aug 2023 14:06:48 -0500 Subject: Call for Mentors & Projects - Boston University Students Message-ID: Hello Everyone! As some of you might know, we have had a working relationship with Boston University once a year where students actively participate in our OpenInfra project communities. The semester in which the Fundamentals of Cloud Computing (the course where we collaborate with them)( is approaching. September 5 is when the course begins and it runs until December 21. The course is a senior/graduate-level class that includes a semester-long capstone-like project, with the projects being proposed and supervised by an industry mentor. The projects are completed in teams of 5-7 students, who are expected to spend 5-8 hours per week on the project over the 13-week term, with presentations to the class every two weeks to evaluate their progress. *So! If you know of a project you would like to propose, please let me know ASAP! The professor would like to have the finalized list of projects to offer students by the first week in September. * Let me know if you have any other questions! -Kendall Nelson -------------- next part -------------- An HTML attachment was scrubbed... URL: From alsotoes at gmail.com Tue Aug 8 19:23:13 2023 From: alsotoes at gmail.com (Alvaro Soto) Date: Tue, 8 Aug 2023 13:23:13 -0600 Subject: Call for Mentors & Projects - Boston University Students In-Reply-To: References: Message-ID: Hi Kendal, Can I get the lecture list so I can understand all the things this project needs to include? Cheers! On Tue, Aug 8, 2023 at 1:15?PM Kendall Nelson wrote: > Hello Everyone! > > As some of you might know, we have had a working relationship with Boston > University once a year where students actively participate in our OpenInfra > project communities. The semester in which the Fundamentals of Cloud > Computing (the course where we collaborate with them)( is approaching. > September 5 is when the course begins and it runs until December 21. The > course is a senior/graduate-level class that includes a semester-long > capstone-like project, with the projects being proposed and supervised by > an industry mentor. The projects are completed in teams of 5-7 students, > who are expected to spend 5-8 hours per week on the project over the > 13-week term, with presentations to the class every two weeks to evaluate > their progress. > > > *So! If you know of a project you would like to propose, please let me > know ASAP! The professor would like to have the finalized list of projects > to offer students by the first week in September. * > > Let me know if you have any other questions! > > -Kendall Nelson > -- Alvaro Soto *Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you.* ---------------------------------------------------------- Great people talk about ideas, ordinary people talk about things, small people talk... about other people. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thremes172 at gmail.com Tue Aug 8 19:30:02 2023 From: thremes172 at gmail.com (kaqiu pi) Date: Wed, 9 Aug 2023 03:30:02 +0800 Subject: Why service use UTC in db and log use local timezone Message-ID: Hello, I have a openstack cluster. And the local time of server is different from UTC. I saw that the log date of openstack service is local time. But when I use the command line to query the openstack service status, time zone is UTC. Such as `opentack compute service list`, "Update At" is UTC. In the email archive[1], I saw that there was a discussion earlier: > i would guess it a deliberate design descisn to only store data information in utc format and leave it to the clinets to convert to local timezones if desired But at that time it was just a guess, I would like to ask if this is the meaning of the original design, why the service use UTC instead of local time? I would appreciate any kind of guidance or help. Best wishes. [1] https://lists.openstack.org/pipermail/openstack-discuss/2021-June/022857.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Aug 8 20:24:06 2023 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 8 Aug 2023 13:24:06 -0700 Subject: Why service use UTC in db and log use local timezone In-Reply-To: References: Message-ID: Clients can be anywhere, and you don't want them to have to be aware of the remote server local time zone to have to translate that to UTC or to your own local time zone. Everything being in UTC aids that because if you need to convert it to your local time, you only have the single offset to be aware of to put into your context. Mix in clients in different time zones, and standardizing on something such as UTC is critical. Logging, realistically should just be an artifact of the localtime set on the machine when the process launches, and that is typically set for humans to relate it to their own local time. Hope that makes sense, -Julia On Tue, Aug 8, 2023 at 12:34?PM kaqiu pi wrote: > Hello, > > I have a openstack cluster. And the local time of server is different from > UTC. > > I saw that the log date of openstack service is local time. But when I > use the command line to query the openstack service status, time zone is > UTC. Such as `opentack compute service list`, "Update At" is UTC. > > In the email archive[1], I saw that there was a discussion earlier: > > i would guess it a deliberate design descisn to only store data > information in utc format and leave it to the clinets to convert to local > timezones if desired > > But at that time it was just a guess, I would like to ask if this is the > meaning of the original design, why the service use UTC instead of local > time? > > I would appreciate any kind of guidance or help. > Best wishes. > > > [1] > https://lists.openstack.org/pipermail/openstack-discuss/2021-June/022857.html > -------------- next part -------------- An HTML attachment was scrubbed... URL: From murilo at evocorp.com.br Tue Aug 8 20:37:04 2023 From: murilo at evocorp.com.br (Murilo Morais) Date: Tue, 8 Aug 2023 17:37:04 -0300 Subject: [OSA] CEPH libvirt secrets In-Reply-To: References: Message-ID: Dmitry, hello! I have to be honest, can't understand properly how to apply. Do I just have to set the "nova_ceph_client_uuid"? Em s?b., 5 de ago. 
de 2023 ?s 11:11, Dmitriy Rabotyagov < noonedeadpunk at gmail.com> escreveu: > Hey Murilo, > > I'm not sure that ceph_cliebt role does support multiple secrets right > now, I will be able to look deeper into this on Monday > > But there's yet another place where we set secrets [1], so it shouldn't be > required to have mon_hosts defined. But yes, having mon_hosts would require > ssh access to them to fetch ceph.conf and authx keys. > > > [1] > https://opendev.org/openstack/openstack-ansible-ceph_client/src/commit/05e3c0f18394e5f23d79bff08280e9c09af7b5ca/tasks/ceph_auth.yml#L67 > > On Sat, Aug 5, 2023, 15:46 Murilo Morais wrote: > >> Apparently the "mon_host" parameter is mandatory to create secrets [1], >> but setting this parameter also makes it SSH into MON [2], which I would >> like to avoid. Would this statement be true? >> >> [1] >> https://opendev.org/openstack/openstack-ansible-ceph_client/src/branch/stable/zed/tasks/ceph_auth_extra_compute.yml#L92 >> [2] >> https://opendev.org/openstack/openstack-ansible-ceph_client/src/branch/stable/zed/tasks/ceph_config_extra.yml#L23 >> >> Em sex., 4 de ago. de 2023 ?s 19:39, Murilo Morais >> escreveu: >> >>> Good evening everyone! >>> >>> Guys, I'm having trouble adding a second CEPH cluster for Nova/Cincer to >>> consume. >>> >>> I'm using the following configuration: >>> >>> cinder_backends: >>> ceph1: >>> volume_driver: cinder.volume.drivers.rbd.RBDDriver >>> rbd_pool: ceph1_vol >>> rbd_ceph_conf: /etc/ceph/ceph1.conf >>> rbd_store_chunk_size: 8 >>> volume_backend_name: ceph1 >>> rbd_user: ceph1_vol >>> rbd_secret_uuid: "{{ cinder_ceph_client_uuid }}" >>> report_discard_supported: true >>> >>> ceph2: >>> volume_driver: cinder.volume.drivers.rbd.RBDDriver >>> rbd_pool: ceph2_vol >>> rbd_ceph_conf: /etc/ceph/ceph2.conf >>> rbd_store_chunk_size: 8 >>> volume_backend_name: ceph2 >>> rbd_user: ceph2_vol >>> rbd_secret_uuid: "{{ cinder_ceph_client_uuid2 }}" >>> report_discard_supported: true >>> >>> ceph_extra_confs: >>> - src: /etc/openstack_deploy/ceph/ceph1.conf >>> dest: /etc/ceph/ceph1.conf >>> client_name: ceph1_vol >>> keyring_src: /etc/openstack_deploy/ceph/ceph1_vol.keyring >>> keyring_dest: /etc/ceph/ceph1.client.ceph1_vol.keyring >>> secret_uuid: '{{ cinder_ceph_client_uuid }}' >>> - src: /etc/openstack_deploy/ceph/ceph2.conf >>> dest: /etc/ceph/ceph2.conf >>> client_name: ceph2_vol >>> keyring_src: /etc/openstack_deploy/ceph/ceph2_vol.keyring >>> keyring_dest: /etc/ceph/ceph2.client.ceph2_vol.keyring >>> secret_uuid: '{{ cinder_ceph_client_uuid2 }}' >>> >>> But when executing the `virsh secret-list` command it only shows the >>> UUID of "cinder_ceph_client_uuid". >>> >>> Both "cinder_ceph_client_uuid" and "cinder_ceph_client_uuid2" are >>> defined in "user_secrets.yml". >>> >>> I have a slight impression that I didn't configure something, but I >>> don't know what, because I didn't find anything else to be done, according >>> to the documentation [1], or it went unnoticed by me. >>> >>> [1] >>> https://docs.openstack.org/openstack-ansible-ceph_client/latest/configure-ceph.html#extra-client-configuration-files >>> >>> Thanks in advance! >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Wed Aug 9 02:21:51 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Wed, 9 Aug 2023 09:21:51 +0700 Subject: [openstack][neutron] set a fixed ip for router gateway Message-ID: Hello guys. 
I am curious why we can not set a fixed ip for router gateway. [image: image.png] Although I see these codes were merged many years ago. https://review.opendev.org/c/openstack/neutron/+/83664 Nguyen Huu Khoi -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 19683 bytes Desc: not available URL: From bxzhu_5355 at 163.com Wed Aug 9 02:58:24 2023 From: bxzhu_5355 at 163.com (=?utf-8?B?5pyx5Y2a56Wl?=) Date: Wed, 9 Aug 2023 10:58:24 +0800 Subject: [skyline] Username or password is incorrect In-Reply-To: References: Message-ID: <18D49D9B-BFF8-4B13-9605-1C2A46734027@163.com> Hi, For sqlite DB for skyline, I think you can followed by this step[1]. [1] https://opendev.org/openstack/skyline-apiserver#deployment-with-sqlite Thanks Boxiang > 2023?8?8? ??7:21?Satish Patel ??? > > Folks, > > Try to install skyline UI to replace horizon using doc: https://docs.openstack.org/skyline-apiserver/latest/install/docker-install-ubuntu.html > > Everything went well and I got a login page on http://x.x.x.x:9999 also it pulled Region/Domains. When I am trying to login with my account, I get an error: Username or Password is incorrect. > > I am using sqlite DB for skyline as per documents. > > No errors in logs command > $ docker logs skyline > > When I use Chrome Developer Tools then it was indicating an error in these URLs. > > http://openstack.example.com:9999/api/openstack/skyline/api/v1/profile > http://openstack.example.com:9999/api/openstack/skyline/api/v1/policies > > 401 Unauthorized ( {"detail":"no such table: revoked_token"} ) For this error message, I think you did not do bootstrap for skyline when you use sqlite db. > > Find attached screenshot > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Aug 9 06:31:57 2023 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 09 Aug 2023 08:31:57 +0200 Subject: [openstack][neutron] set a fixed ip for router gateway In-Reply-To: References: Message-ID: <5625670.L5hakYDufj@p1> Hi, Dnia ?roda, 9 sierpnia 2023 04:21:51 CEST Nguy?n H?u Kh?i pisze: > Hello guys. > > I am curious why we can not set a fixed ip for router gateway. > > [image: image.png] > Although I see these codes were merged many years ago. > > https://review.opendev.org/c/openstack/neutron/+/83664 > > Nguyen Huu Khoi > I can't say why it's like that in the Horizon dasboard but from the Neutron API PoV this is allowed by default only for admin users. See https://github.com/openstack/neutron/blob/176b144460570fcf2792a199d2b639b335822f14/neutron/conf/policies/router.py#L117 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From massimo.sgaravatto at gmail.com Wed Aug 9 06:43:22 2023 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Wed, 9 Aug 2023 08:43:22 +0200 Subject: [glance][ops] Problems snapshotting a VM: image gets deleted Message-ID: I have problems snapshotting a particular VM. This is what happens: [root at cld-ctrl-01 ~]# nova image-create --show --poll 8cfd9323-f9d8-41f8-b607-41597d9c79e9 ns3-new-snap-2023-08-09-2 Server snapshotting... 
0% completeERROR (NotFound): No image found with ID 133d4978-1de3-4cef-8804-8b0f07b12e4e

(HTTP 404) [root at cld-ctrl-01 ~]# According to the logs [*], it looks like the snapshot is for some reason deleted, but I don't understand what the problem is Any hints ? Thanks, Massimo [*] 2023-08-09 08:35:05.683 5507 INFO eventlet.wsgi.server [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.105\ ,192.168.60.232 - - [09/Aug/2023 08:35:05] "POST /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e/members HTTP/1.1" 200 424 0.037093 2023-08-09 08:35:05.689 5506 INFO eventlet.wsgi.server [-] 192.168.60.234 - - [09/Aug/2023 08:35:05] "OPTIONS /versions HTTP/1.0" 200 1781 0.001015 2023-08-09 08:35:05.690 5507 INFO eventlet.wsgi.server [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.105\ ,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET /v2/schemas/member HTTP/1.1" 200 872 0.003611 2023-08-09 08:35:05.690 5501 INFO eventlet.wsgi.server [-] 192.168.60.234 - - [09/Aug/2023 08:35:05] "OPTIONS /versions HTTP/1.0" 200 1781 0.000932 2023-08-09 08:35:05.770 5500 INFO eventlet.wsgi.server [req-4146429a-27a6-4ac9-a3f7-4534c32c6dd0 a1f915a7a36c471d87d6702255016df4 beaeede3841b47efb6b665a1a667e5b1 - default default] 193.206.210.24\ 0,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e HTTP/1.1" 200 1456 0.012723 2023-08-09 08:35:05.812 5500 INFO eventlet.wsgi.server [-] 192.168.60.234 - - [09/Aug/2023 08:35:05] "OPTIONS /versions HTTP/1.0" 200 1781 0.000906 2023-08-09 08:35:05.812 5506 INFO eventlet.wsgi.server [-] 192.168.60.234 - - [09/Aug/2023 08:35:05] "OPTIONS /versions HTTP/1.0" 200 1781 0.001003 2023-08-09 08:35:05.862 5503 INFO eventlet.wsgi.server [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.159\ ,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e HTTP/1.1" 200 1456 0.012832 2023-08-09 08:35:05.868 5506 INFO eventlet.wsgi.server [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.159\ ,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET /v2/schemas/image HTTP/1.1" 200 6278 0.002641 2023-08-09 08:35:06.014 5500 INFO eventlet.wsgi.server [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.159\ ,192.168.60.232 - - [09/Aug/2023 08:35:06] "DELETE /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e HTTP/1.1" 204 208 0.140910 -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Wed Aug 9 06:52:34 2023 From: akekane at redhat.com (Abhishek Kekane) Date: Wed, 9 Aug 2023 12:22:34 +0530 Subject: [glance][ops] Problems snapshotting a VM: image gets deleted In-Reply-To: References: Message-ID: Hi Massimo, Can you check nova-compute logs as well, the request to delete the image might be coming from nova only. Thanks & Best Regards, Abhishek Kekane On Wed, Aug 9, 2023 at 12:18?PM Massimo Sgaravatto < massimo.sgaravatto at gmail.com> wrote: > I have problems snapshotting a particular VM. This is what happens: > > [root at cld-ctrl-01 ~]# nova image-create --show --poll > 8cfd9323-f9d8-41f8-b607-41597d9c79e9 ns3-new-snap-2023-08-09-2 > > Server snapshotting... 0% completeERROR (NotFound): No image found with ID > 133d4978-1de3-4cef-8804-8b0f07b12e4e

> > > (HTTP 404) > [root at cld-ctrl-01 ~]# > > > According to the logs [*], it looks like the snapshot is for some reason > deleted, but I don't understand what the problem is > Any hints ? > Thanks, Massimo > > > [*] > 2023-08-09 08:35:05.683 5507 INFO eventlet.wsgi.server > [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 > beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.105\ > ,192.168.60.232 - - [09/Aug/2023 08:35:05] "POST > /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e/members HTTP/1.1" 200 424 > 0.037093 > 2023-08-09 08:35:05.689 5506 INFO eventlet.wsgi.server [-] 192.168.60.234 > - - [09/Aug/2023 08:35:05] "OPTIONS /versions HTTP/1.0" 200 1781 0.001015 > 2023-08-09 08:35:05.690 5507 INFO eventlet.wsgi.server > [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 > beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.105\ > ,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET /v2/schemas/member > HTTP/1.1" 200 872 0.003611 > 2023-08-09 08:35:05.690 5501 INFO eventlet.wsgi.server [-] 192.168.60.234 > - - [09/Aug/2023 08:35:05] "OPTIONS /versions HTTP/1.0" 200 1781 0.000932 > 2023-08-09 08:35:05.770 5500 INFO eventlet.wsgi.server > [req-4146429a-27a6-4ac9-a3f7-4534c32c6dd0 a1f915a7a36c471d87d6702255016df4 > beaeede3841b47efb6b665a1a667e5b1 - default default] 193.206.210.24\ > 0,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET > /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e HTTP/1.1" 200 1456 0.012723 > 2023-08-09 08:35:05.812 5500 INFO eventlet.wsgi.server [-] 192.168.60.234 > - - [09/Aug/2023 08:35:05] "OPTIONS /versions HTTP/1.0" 200 1781 0.000906 > 2023-08-09 08:35:05.812 5506 INFO eventlet.wsgi.server [-] 192.168.60.234 > - - [09/Aug/2023 08:35:05] "OPTIONS /versions HTTP/1.0" 200 1781 0.001003 > 2023-08-09 08:35:05.862 5503 INFO eventlet.wsgi.server > [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 > beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.159\ > ,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET > /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e HTTP/1.1" 200 1456 0.012832 > 2023-08-09 08:35:05.868 5506 INFO eventlet.wsgi.server > [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 > beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.159\ > ,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET /v2/schemas/image > HTTP/1.1" 200 6278 0.002641 > 2023-08-09 08:35:06.014 5500 INFO eventlet.wsgi.server > [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 > beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.159\ > ,192.168.60.232 - - [09/Aug/2023 08:35:06] "DELETE > /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e HTTP/1.1" 204 208 0.140910 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nguyenhuukhoinw at gmail.com Wed Aug 9 06:59:12 2023 From: nguyenhuukhoinw at gmail.com (=?UTF-8?B?Tmd1eeG7hW4gSOG7r3UgS2jDtGk=?=) Date: Wed, 9 Aug 2023 13:59:12 +0700 Subject: [openstack][neutron] set a fixed ip for router gateway In-Reply-To: <5625670.L5hakYDufj@p1> References: <5625670.L5hakYDufj@p1> Message-ID: Hi Slawek, Thank you much, I got it, I will ask Horizon team. Is there a Horizon member? If yes, Do I need some configuration to enable this function on Horizon? Thank you very much! Nguyen Huu Khoi On Wed, Aug 9, 2023 at 1:32?PM Slawek Kaplonski wrote: > Hi, > > Dnia ?roda, 9 sierpnia 2023 04:21:51 CEST Nguy?n H?u Kh?i pisze: > > > Hello guys. 
> > > > > > I am curious why we can not set a fixed ip for router gateway. > > > > > > [image: image.png] > > > Although I see these codes were merged many years ago. > > > > > > https://review.opendev.org/c/openstack/neutron/+/83664 > > > > > > Nguyen Huu Khoi > > > > > I can't say why it's like that in the Horizon dasboard but from the > Neutron API PoV this is allowed by default only for admin users. See > https://github.com/openstack/neutron/blob/176b144460570fcf2792a199d2b639b335822f14/neutron/conf/policies/router.py#L117 > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Wed Aug 9 07:11:04 2023 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Wed, 9 Aug 2023 09:11:04 +0200 Subject: [glance][ops] Problems snapshotting a VM: image gets deleted In-Reply-To: References: Message-ID: Hi Abhishek. If I grep wrt that reqid I only see these events: /var/log/nova/nova-api.log:2023-08-09 08:35:05.734 5626 INFO nova.osapi_compute.wsgi.server [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 beaeede3841b47efb6b665a1a667e5b1 - default default] 193.206.210.240,192.168.60.232 "POST /v2.1/servers/8cfd9323-f9d8-41f8-b607-41597d9c79e9/action HTTP/1.1" status: 202 len: 453 time: 0.2672498 /var/log/glance/glance-api.log:2023-08-09 08:35:05.506 5507 INFO eventlet.wsgi.server [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.105,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET /v2/schemas/image HTTP/1.1" 200 6278 0.003351 /var/log/glance/glance-api.log:2023-08-09 08:35:05.639 5507 INFO eventlet.wsgi.server [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.105,192.168.60.232 - - [09/Aug/2023 08:35:05] "POST /v2/images HTTP/1.1" 201 1648 0.072484 /var/log/glance/glance-api.log:2023-08-09 08:35:05.683 5507 INFO eventlet.wsgi.server [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.105,192.168.60.232 - - [09/Aug/2023 08:35:05] "POST /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e/members HTTP/1.1" 200 424 0.037093 /var/log/glance/glance-api.log:2023-08-09 08:35:05.690 5507 INFO eventlet.wsgi.server [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.105,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET /v2/schemas/member HTTP/1.1" 200 872 0.003611 /var/log/glance/glance-api.log:2023-08-09 08:35:05.862 5503 INFO eventlet.wsgi.server [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.159,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e HTTP/1.1" 200 1456 0.012832 /var/log/glance/glance-api.log:2023-08-09 08:35:05.868 5506 INFO eventlet.wsgi.server [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.159,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET /v2/schemas/image HTTP/1.1" 200 6278 0.002641 /var/log/glance/glance-api.log:2023-08-09 08:35:06.014 5500 INFO eventlet.wsgi.server [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 
beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.159,192.168.60.232 - - [09/Aug/2023 08:35:06] "DELETE /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e HTTP/1.1" 204 208 0.140910 On Wed, Aug 9, 2023 at 8:53?AM Abhishek Kekane wrote: > Hi Massimo, > > Can you check nova-compute logs as well, the request to delete the image > might be coming from nova only. > > Thanks & Best Regards, > > Abhishek Kekane > > > On Wed, Aug 9, 2023 at 12:18?PM Massimo Sgaravatto < > massimo.sgaravatto at gmail.com> wrote: > >> I have problems snapshotting a particular VM. This is what happens: >> >> [root at cld-ctrl-01 ~]# nova image-create --show --poll >> 8cfd9323-f9d8-41f8-b607-41597d9c79e9 ns3-new-snap-2023-08-09-2 >> >> Server snapshotting... 0% completeERROR (NotFound): No image found with >> ID 133d4978-1de3-4cef-8804-8b0f07b12e4e

>> >> >> (HTTP 404) >> [root at cld-ctrl-01 ~]# >> >> >> According to the logs [*], it looks like the snapshot is for some reason >> deleted, but I don't understand what the problem is >> Any hints ? >> Thanks, Massimo >> >> >> [*] >> 2023-08-09 08:35:05.683 5507 INFO eventlet.wsgi.server >> [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 >> beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.105\ >> ,192.168.60.232 - - [09/Aug/2023 08:35:05] "POST >> /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e/members HTTP/1.1" 200 424 >> 0.037093 >> 2023-08-09 08:35:05.689 5506 INFO eventlet.wsgi.server [-] 192.168.60.234 >> - - [09/Aug/2023 08:35:05] "OPTIONS /versions HTTP/1.0" 200 1781 0.001015 >> 2023-08-09 08:35:05.690 5507 INFO eventlet.wsgi.server >> [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 >> beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.105\ >> ,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET /v2/schemas/member >> HTTP/1.1" 200 872 0.003611 >> 2023-08-09 08:35:05.690 5501 INFO eventlet.wsgi.server [-] 192.168.60.234 >> - - [09/Aug/2023 08:35:05] "OPTIONS /versions HTTP/1.0" 200 1781 0.000932 >> 2023-08-09 08:35:05.770 5500 INFO eventlet.wsgi.server >> [req-4146429a-27a6-4ac9-a3f7-4534c32c6dd0 a1f915a7a36c471d87d6702255016df4 >> beaeede3841b47efb6b665a1a667e5b1 - default default] 193.206.210.24\ >> 0,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET >> /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e HTTP/1.1" 200 1456 0.012723 >> 2023-08-09 08:35:05.812 5500 INFO eventlet.wsgi.server [-] 192.168.60.234 >> - - [09/Aug/2023 08:35:05] "OPTIONS /versions HTTP/1.0" 200 1781 0.000906 >> 2023-08-09 08:35:05.812 5506 INFO eventlet.wsgi.server [-] 192.168.60.234 >> - - [09/Aug/2023 08:35:05] "OPTIONS /versions HTTP/1.0" 200 1781 0.001003 >> 2023-08-09 08:35:05.862 5503 INFO eventlet.wsgi.server >> [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 >> beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.159\ >> ,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET >> /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e HTTP/1.1" 200 1456 0.012832 >> 2023-08-09 08:35:05.868 5506 INFO eventlet.wsgi.server >> [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 >> beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.159\ >> ,192.168.60.232 - - [09/Aug/2023 08:35:05] "GET /v2/schemas/image >> HTTP/1.1" 200 6278 0.002641 >> 2023-08-09 08:35:06.014 5500 INFO eventlet.wsgi.server >> [req-b0908999-5e95-4520-ae5a-614c503e91e9 a1f915a7a36c471d87d6702255016df4 >> beaeede3841b47efb6b665a1a667e5b1 - default default] 192.168.60.159\ >> ,192.168.60.232 - - [09/Aug/2023 08:35:06] "DELETE >> /v2/images/133d4978-1de3-4cef-8804-8b0f07b12e4e HTTP/1.1" 204 208 0.140910 >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From finarffin at gmail.com Wed Aug 9 08:02:49 2023 From: finarffin at gmail.com (Jan Wasilewski) Date: Wed, 9 Aug 2023 10:02:49 +0200 Subject: [nova] Slow nvme performance for local storage instances Message-ID: Hi, I am reaching out to inquire about the performance of our local storage setup. Currently, I am conducting tests using NVMe disks; however, the results appear to be underwhelming. In terms of my setup, I have recently incorporated two NVMe disks into my compute node. These disks have been configured as RAID1 under md127 and subsequently mounted at /var/lib/nova/instances [1]. 
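(As a side note, a quick way to rule out the array itself, sketched here against the md127 device from [1]:

  cat /proc/mdstat                                    # members should show [UU], with no resync in progress
  mdadm --detail /dev/md127 | grep -iE 'state|bitmap'

since an in-progress resync or an internal write-intent bitmap both cost random-write IOPS on their own.)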
During benchmarking using the fio tool within this directory, I am achieving approximately 160,000 IOPS [2]. This figure serves as a satisfactory baseline and reference point for upcoming VM tests. As the next phase, I have established a flavor that employs a root disk for my virtual machine [3]. Regrettably, the resulting performance yields around 18,000 IOPS, which is nearly ten times poorer than the compute node results [4]. While I expected some degradation, a tenfold decrease seems excessive. Realistically, I anticipated no more than a twofold reduction compared to the compute node's performance. Hence, I am led to ask: what should be configured to enhance performance? I have already experimented with the settings recommended on the Ceph page for image properties [5]; however, these changes did not yield the desired improvements. In addition, I attempted to modify the CPU architecture within the nova.conf file, switching to Cascade Lake architecture, yet this endeavor also proved ineffective. For your convenience, I have included a link to my current dumpxml results [6]. Your insights and guidance would be greatly appreciated. I am confident that there is a solution to this performance disparity that I may have overlooked. Thank you in advance for your help. /Jan Wasilewski *References:* *[1] nvme allocation and raid configuration: https://paste.openstack.org/show/bMMgGqu5I6LWuoQWV7TV/ * *[2] fio performance inside compute node: https://paste.openstack.org/show/bcMi4zG7QZwuJZX8nyct/ * *[3] Flavor configuration: https://paste.openstack.org/show/b7o9hCKilmJI3qyXsP5u/ * *[4] fio performance inside VM: https://paste.openstack.org/show/bUjqxfU4nEtSFqTlU8oH/ * *[5] image properties: https://docs.ceph.com/en/pacific/rbd/rbd-openstack/#image-properties * *[6] dumpxml of vm: https://paste.openstack.org/show/bRECcaSMqa8TlrPp0xrT/ * -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Aug 9 10:19:47 2023 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 09 Aug 2023 12:19:47 +0200 Subject: [neutron] User survey Neutron related questions Message-ID: <4052694.6v7Htq3KUu@p1> Hi, As we talked on the Neutron team meeting yesterday [1] I prepared etherpad with Neutron related questions and possible answers from the User Survey [2]. I already added there some of my comments. Please check it and comment on it too. We can then discuss it during next week's Neutron team meeting and finally maybe send some proposals to change something there. [1] https://meetings.opendev.org/meetings/networking/2023/networking.2023-08-08-14.00.log.html#l-129 [2] https://etherpad.opendev.org/p/neutron-user-survey-questions-2023 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From smooney at redhat.com Wed Aug 9 11:55:11 2023 From: smooney at redhat.com (smooney at redhat.com) Date: Wed, 09 Aug 2023 12:55:11 +0100 Subject: [nova] Slow nvme performance for local storage instances In-Reply-To: References: Message-ID: <76b3f2af79f6ebd1752735cb8fa25934b9648ae4.camel@redhat.com> before digging into your setting have you tried using raw disk images instead of qcow just to understand what overhead qcow is adding. 
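A minimal sketch of the nova.conf knobs for such a test (option groups as documented for the libvirt driver, worth double-checking against the release in use):

  [libvirt]
  images_type = raw            # back instance disks with raw files instead of qcow2 overlays

  [DEFAULT]
  preallocate_images = space   # alternatively keep qcow2 but fallocate the full disk up front

either change needs a nova-compute restart and a freshly built test instance before rerunning the same fio job.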
my guess is part of the issue is not preallcoating the qcow space but if you could check the performance with raw images that would elimiate that as a factor. the next step would be to look athe time properites and disk cache mode. you mentioned followin the ceph recomendation which woudl use virtio-scsi isntead of virtio-blk which shoudl help but tweak the cache mode to none would also help. On Wed, 2023-08-09 at 10:02 +0200, Jan Wasilewski wrote: > Hi, > > I am reaching out to inquire about the performance of our local storage > setup. Currently, I am conducting tests using NVMe disks; however, the > results appear to be underwhelming. > > In terms of my setup, I have recently incorporated two NVMe disks into my > compute node. These disks have been configured as RAID1 under md127 and > subsequently mounted at /var/lib/nova/instances [1]. During benchmarking > using the fio tool within this directory, I am achieving approximately > 160,000 IOPS [2]. This figure serves as a satisfactory baseline and > reference point for upcoming VM tests. > > As the next phase, I have established a flavor that employs a root disk for > my virtual machine [3]. Regrettably, the resulting performance yields > around 18,000 IOPS, which is nearly ten times poorer than the compute node > results [4]. While I expected some degradation, a tenfold decrease seems > excessive. Realistically, I anticipated no more than a twofold reduction > compared to the compute node's performance. Hence, I am led to ask: what > should be configured to enhance performance? > > I have already experimented with the settings recommended on the Ceph page > for image properties [5]; however, these changes did not yield the desired > improvements. In addition, I attempted to modify the CPU architecture > within the nova.conf file, switching to Cascade Lake architecture, yet this > endeavor also proved ineffective. For your convenience, I have included a > link to my current dumpxml results [6]. > > Your insights and guidance would be greatly appreciated. I am confident > that there is a solution to this performance disparity that I may have > overlooked. Thank you in advance for your help. > /Jan Wasilewski > > *References:* > *[1] nvme allocation and raid configuration: > https://paste.openstack.org/show/bMMgGqu5I6LWuoQWV7TV/ > * > *[2] fio performance inside compute node: > https://paste.openstack.org/show/bcMi4zG7QZwuJZX8nyct/ > * > *[3] Flavor configuration: > https://paste.openstack.org/show/b7o9hCKilmJI3qyXsP5u/ > * > *[4] fio performance inside VM: > https://paste.openstack.org/show/bUjqxfU4nEtSFqTlU8oH/ > * > *[5] image properties: > https://docs.ceph.com/en/pacific/rbd/rbd-openstack/#image-properties > * > *[6] dumpxml of vm: https://paste.openstack.org/show/bRECcaSMqa8TlrPp0xrT/ > * From kieske at osism.tech Wed Aug 9 11:59:55 2023 From: kieske at osism.tech (Sven Kieske) Date: Wed, 09 Aug 2023 13:59:55 +0200 Subject: [nova] Slow nvme performance for local storage instances In-Reply-To: References: Message-ID: Hi, I can't cover everything here, because performance is a huge topic, but here are some questions which I didn't find the answer to: which nvme is this? is this a consumer device by chance? which openstack release are you running, which hypervisor os and which guest os and kernel versions? which deployment method do you use? > *[5] image properties: > https://docs.ceph.com/en/pacific/rbd/rbd-openstack/#image-properties >