From tkajinam at redhat.com Sat Jan 2 09:42:51 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Sat, 2 Jan 2021 18:42:51 +0900 Subject: [puppet][neutron] Status of networking-cisco and networking-bigswitch Message-ID: Hello, While cleaning up outdated implementations in puppet-neutron, I noticed that the following 2 plugins are supported by puppet-neutron but the plugins are no longer maintained actively these days. 1) networking-cisco https://opendev.org/x/networking-cisco This repo has not got any updates for these 2 year except fo opendev migration and supports not Python 3.6 but only Python 2.7 and Python 3.5. 2) networking-bigswitch https://opendev.org/x/networking-bigswitch This repo was updated 10 months ago, but doesn't have ussuri and victoria branch created. If I don't hear any plan to keep maintaining these 2 repositories I'll deprecate support for these 2 plugins in this release and remove the support completely in the next cycle. Thank you, Takashi Kajinami -------------- next part -------------- An HTML attachment was scrubbed... URL: From tempjohnsmith1377 at gmail.com Sat Jan 2 17:41:00 2021 From: tempjohnsmith1377 at gmail.com (john smith) Date: Sat, 2 Jan 2021 21:11:00 +0330 Subject: failed to launch openstack instance when integrate with odl Message-ID: hello I tried to integrate Openstack train with Opendaylight magnesium and my references were [1] and [2] but after doing all the steps, the instance failed to launch. I changed port_binding_controller in [ml2_odl] section in the ml2_conf.ini file from pseudo-agentdb-binding to legacy-port-binding and then the instance launched but the status of router interfaces was still down. I have a controller node and a compute node. and Opendaylight runs on the controller node. nova-compute.log: INFO nova.virt.libvirt.driver [-] [instance: 975fa79e-6567-4385-be87-9d12a8eb3e94] Instance destroyed successfully. 2021-01-02 12:33:23.383 25919 ERROR vif_plug_ovs.ovsdb.impl_vsctl [req-a0a7ebf0-7e63-4c60-a8d2-07c05f1aa4f4 04c7685a2166481a9ace54eb5e71f6e5 ca28ee1038254649ad133d5f09f7a186 - default default] Unable to execute ['ovs-vsctl', '--timeout=120', '--oneline', '--format=json', '--db=tcp: 127.0.0.1:6640', '--', '--if-exists', 'del-port', 'br-int', 'tap50eb0b68-a4']. Exception: Unexpected error while running command. Command: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp: 127.0.0.1:6640 -- --if-exists del-port br-int tap50eb0b68-a4 Exit code: 1 Stdout: '' Stderr: 'ovs-vsctl: tcp:127.0.0.1:6640: database connection failed (Connection refused)\n': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 
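A hedged troubleshooting note on the "connection refused" above: vif_plug_ovs is trying to reach an OVSDB server on tcp:127.0.0.1:6640, so the first things to confirm are whether anything is listening there at all and whether the only manager target configured is the remote OpenDaylight one (192.168.222.48, as in the ovs output that follows):

ovs-vsctl get-manager
ss -tlnp | grep 6640

If only the ODL target is set, one common workaround is to keep it and add a local passive listener that os-vif and ovs-vsctl can reach:

ovs-vsctl set-manager tcp:192.168.222.48:6640 ptcp:6640:127.0.0.1

Alternatively, if I remember the option name correctly, os-vif can be pointed at another socket through ovsdb_connection in the [os_vif_ovs] section of nova.conf on the compute node. Both are untested suggestions for this setup rather than a confirmed fix.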
controller ovs: Manager "tcp:192.168.222.48:6640" is_connected: true Bridge br-int Controller "tcp:192.168.222.48:6653" is_connected: true fail_mode: secure Port tune52c5c73a50 Interface tune52c5c73a50 type: vxlan options: {key=flow, local_ip="10.0.0.31", remote_ip="10.0.0.11"} Port br-int Interface br-int type: internal Bridge br-ex Port br-ex Interface br-ex type: internal Port ens160 Interface ens160 ovs_version: "2.13.1" compute ovs: Manager "tcp:192.168.222.48:6640" is_connected: true Bridge br-int Controller "tcp:192.168.222.48:6653" is_connected: true fail_mode: secure Port br-int Interface br-int type: internal Port tun34b3712d975 Interface tun34b3712d975 type: vxlan options: {key=flow, local_ip="10.0.0.11", remote_ip="10.0.0.11"} Port tund5123ce5b8a Interface tund5123ce5b8a type: vxlan options: {key=flow, local_ip="10.0.0.11", remote_ip="10.0.0.31"} ovs_version: "2.13.1" -------------- next part -------------- An HTML attachment was scrubbed... URL: From eandersson at blizzard.com Sun Jan 3 00:54:58 2021 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Sun, 3 Jan 2021 00:54:58 +0000 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> , Message-ID: Are you sure you aren't just looking at the connection pool expanding? Each worker has a max number of connections it can use. Maybe look at lowering rpc_conn_pool_size. By default I believe each worker might create a pool of up to 30 connections. Looking at the code it could also be have something to do with the k8s client. Since it creates a new instance each time it does an health check. What version of the k8s client do you have installed? ________________________________ From: Ionut Biru Sent: Tuesday, December 29, 2020 2:20 PM To: feilong Cc: openstack-discuss Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi, Not sure if my suspicion is true but I think for each update a new notifier is prepared and used without closing the connection but my understanding of oslo is nonexistent. https://opendev.org/openstack/magnum/src/branch/master/magnum/conductor/utils.py#L147 https://opendev.org/openstack/magnum/src/branch/master/magnum/common/rpc.py#L173 On Tue, Dec 29, 2020 at 11:52 PM Ionut Biru > wrote: Hi Feilong, I found out that each time the update_health_status periodic task is run, a new connection(for each uwsgi) is made to rabbitmq. 
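A quick way to break that growth down per local process instead of just totals (rough sketch; run as root so ss can show the owning processes):

ss -tnp state established '( dport = :5672 )' | grep -o 'pid=[0-9]*' | sort | uniq -c | sort -rn

If each uwsgi/conductor PID plateaus around the configured pool size it is probably just the pool filling up; counts that keep climbing point at a leak. The raw totals look like this: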
root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 229 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 234 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 238 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 241 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 244 Not sure Dec 29 21:51:22 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:22.024 262800 DEBUG magnum.service.periodic [req-3b495326-cf80-481e-b3c6-c741f05b7f0e - - - - -] Dec 29 21:51:22 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:22.024 262800 DEBUG oslo_service.periodic_task [-] Running periodic task MagnumPeriodicTasks.sync Dec 29 21:51:16 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262804]: 2020-12-29 21:51:16.462 262804 DEBUG magnum.conductor.handlers.cluster_conductor [req-284ac12b-d76a-4e50-8e74-5bfb Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.573 262800 DEBUG magnum.service.periodic [-] Status for cluster 118 updated to HEALTHY ({'api' Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262805]: 2020-12-29 21:51:15.572 262805 DEBUG magnum.conductor.handlers.cluster_conductor [req-3fc29ee9-4051-42e7-ae19-3a49 Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.572 262800 DEBUG magnum.service.periodic [-] Status for cluster 121 updated to HEALTHY ({'api' Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.572 262800 DEBUG magnum.service.periodic [-] Status for cluster 122 updated to HEALTHY ({'api' Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.553 262800 DEBUG magnum.service.periodic [-] Updating health status for cluster 122 update_hea Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.544 262800 DEBUG magnum.service.periodic [-] Updating health status for cluster 121 update_hea Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.535 262800 DEBUG magnum.service.periodic [-] Updating health status for cluster 118 update_hea Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.494 262800 DEBUG magnum.service.periodic [req-405b1fed-0b8a-4a60-b6ae-834f548b21d1 - - - 2020-12-29 21:51:14.082 [info] <0.953.1293> accepting AMQP connection <0.953.1293> (172.29.93.14:48474 -> 172.29.95.38:5672) 2020-12-29 21:51:14.083 [info] <0.953.1293> Connection <0.953.1293> (172.29.93.14:48474 -> 172.29.95.38:5672) has a client-provided name: uwsgi:262739:f86c0570-8739-4b74-8102-76b5357acd71 2020-12-29 21:51:14.084 [info] <0.953.1293> connection <0.953.1293> (172.29.93.14:48474 -> 172.29.95.38:5672 - uwsgi:262739:f86c0570-8739-4b74-8102-76b5357acd71): user 'magnum' authenticated and granted access to vhost '/magnum' 2020-12-29 21:51:15.560 [info] <0.1656.1283> accepting AMQP connection <0.1656.1283> (172.29.93.14:48548 -> 172.29.95.38:5672) 2020-12-29 21:51:15.561 [info] <0.1656.1283> Connection <0.1656.1283> (172.29.93.14:48548 -> 172.29.95.38:5672) has a client-provided name: uwsgi:262744:2c9792ab-9198-493a-970c-f6ccfd9947d3 2020-12-29 21:51:15.561 [info] <0.1656.1283> connection <0.1656.1283> (172.29.93.14:48548 -> 172.29.95.38:5672 - uwsgi:262744:2c9792ab-9198-493a-970c-f6ccfd9947d3): user 'magnum' 
authenticated and granted access to vhost '/magnum' On Tue, Dec 22, 2020 at 4:12 AM feilong > wrote: Hi Ionut, I didn't see this before on our production. Magnum auto healer just simply sends a POST request to Magnum api to update the health status. So I would suggest write a small script or even use curl to see if you can reproduce this firstly. On 19/12/20 2:27 am, Ionut Biru wrote: Hi again, I failed to mention that is stable/victoria with couples of patches from review. Ignore the fact that in logs it shows the 19.1.4 version in venv path. On Fri, Dec 18, 2020 at 3:22 PM Ionut Biru > wrote: Hi guys, I have an issue with magnum api returning an error after a while: Server-side error: "[('system library', 'fopen', 'Too many open files'), ('BIO routines', 'BIO_new_file', 'system lib'), ('x509 certificate routines', 'X509_load_cert_crl_file', 'system lib')]" Log file: https://paste.xinu.at/6djE/ This started to appear after I enabled the template auto_healing_controller = magnum-auto-healer, magnum_auto_healer_tag = v1.19.0. Currently, I only have 4 clusters. After that the API is in error state and doesn't work unless I restart it. -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From eandersson at blizzard.com Sun Jan 3 01:06:07 2021 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Sun, 3 Jan 2021 01:06:07 +0000 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> , , Message-ID: Maybe something similar to this? https://github.com/kubernetes-client/python/issues/1158 What does lsof say? ________________________________ From: Erik Olof Gunnar Andersson Sent: Saturday, January 2, 2021 4:54 PM To: Ionut Biru ; feilong Cc: openstack-discuss Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Are you sure you aren't just looking at the connection pool expanding? Each worker has a max number of connections it can use. Maybe look at lowering rpc_conn_pool_size. By default I believe each worker might create a pool of up to 30 connections. Looking at the code it could also be have something to do with the k8s client. Since it creates a new instance each time it does an health check. What version of the k8s client do you have installed? ________________________________ From: Ionut Biru Sent: Tuesday, December 29, 2020 2:20 PM To: feilong Cc: openstack-discuss Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi, Not sure if my suspicion is true but I think for each update a new notifier is prepared and used without closing the connection but my understanding of oslo is nonexistent. 
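(The two source links just below are the notifier/transport construction path being described.) Whatever the root cause turns out to be, Erik's earlier rpc_conn_pool_size suggestion would translate into something like this in magnum.conf -- a sketch only, and the option has historically been read from the [DEFAULT] group, so double-check where your oslo.messaging release expects it:

[DEFAULT]
rpc_conn_pool_size = 10
conn_pool_ttl = 600   # assumption: also worth lowering so idle pooled connections get dropped sooner

followed by a restart of the magnum API and conductor services to pick it up.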
https://opendev.org/openstack/magnum/src/branch/master/magnum/conductor/utils.py#L147 https://opendev.org/openstack/magnum/src/branch/master/magnum/common/rpc.py#L173 On Tue, Dec 29, 2020 at 11:52 PM Ionut Biru > wrote: Hi Feilong, I found out that each time the update_health_status periodic task is run, a new connection(for each uwsgi) is made to rabbitmq. root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 229 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 234 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 238 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 241 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 244 Not sure Dec 29 21:51:22 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:22.024 262800 DEBUG magnum.service.periodic [req-3b495326-cf80-481e-b3c6-c741f05b7f0e - - - - -] Dec 29 21:51:22 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:22.024 262800 DEBUG oslo_service.periodic_task [-] Running periodic task MagnumPeriodicTasks.sync Dec 29 21:51:16 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262804]: 2020-12-29 21:51:16.462 262804 DEBUG magnum.conductor.handlers.cluster_conductor [req-284ac12b-d76a-4e50-8e74-5bfb Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.573 262800 DEBUG magnum.service.periodic [-] Status for cluster 118 updated to HEALTHY ({'api' Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262805]: 2020-12-29 21:51:15.572 262805 DEBUG magnum.conductor.handlers.cluster_conductor [req-3fc29ee9-4051-42e7-ae19-3a49 Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.572 262800 DEBUG magnum.service.periodic [-] Status for cluster 121 updated to HEALTHY ({'api' Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.572 262800 DEBUG magnum.service.periodic [-] Status for cluster 122 updated to HEALTHY ({'api' Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.553 262800 DEBUG magnum.service.periodic [-] Updating health status for cluster 122 update_hea Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.544 262800 DEBUG magnum.service.periodic [-] Updating health status for cluster 121 update_hea Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.535 262800 DEBUG magnum.service.periodic [-] Updating health status for cluster 118 update_hea Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.494 262800 DEBUG magnum.service.periodic [req-405b1fed-0b8a-4a60-b6ae-834f548b21d1 - - - 2020-12-29 21:51:14.082 [info] <0.953.1293> accepting AMQP connection <0.953.1293> (172.29.93.14:48474 -> 172.29.95.38:5672) 2020-12-29 21:51:14.083 [info] <0.953.1293> Connection <0.953.1293> (172.29.93.14:48474 -> 172.29.95.38:5672) has a client-provided name: uwsgi:262739:f86c0570-8739-4b74-8102-76b5357acd71 2020-12-29 21:51:14.084 [info] <0.953.1293> connection <0.953.1293> (172.29.93.14:48474 -> 172.29.95.38:5672 - uwsgi:262739:f86c0570-8739-4b74-8102-76b5357acd71): user 'magnum' authenticated and granted access to vhost '/magnum' 2020-12-29 21:51:15.560 [info] <0.1656.1283> accepting AMQP connection <0.1656.1283> (172.29.93.14:48548 -> 172.29.95.38:5672) 2020-12-29 
21:51:15.561 [info] <0.1656.1283> Connection <0.1656.1283> (172.29.93.14:48548 -> 172.29.95.38:5672) has a client-provided name: uwsgi:262744:2c9792ab-9198-493a-970c-f6ccfd9947d3 2020-12-29 21:51:15.561 [info] <0.1656.1283> connection <0.1656.1283> (172.29.93.14:48548 -> 172.29.95.38:5672 - uwsgi:262744:2c9792ab-9198-493a-970c-f6ccfd9947d3): user 'magnum' authenticated and granted access to vhost '/magnum' On Tue, Dec 22, 2020 at 4:12 AM feilong > wrote: Hi Ionut, I didn't see this before on our production. Magnum auto healer just simply sends a POST request to Magnum api to update the health status. So I would suggest write a small script or even use curl to see if you can reproduce this firstly. On 19/12/20 2:27 am, Ionut Biru wrote: Hi again, I failed to mention that is stable/victoria with couples of patches from review. Ignore the fact that in logs it shows the 19.1.4 version in venv path. On Fri, Dec 18, 2020 at 3:22 PM Ionut Biru > wrote: Hi guys, I have an issue with magnum api returning an error after a while: Server-side error: "[('system library', 'fopen', 'Too many open files'), ('BIO routines', 'BIO_new_file', 'system lib'), ('x509 certificate routines', 'X509_load_cert_crl_file', 'system lib')]" Log file: https://paste.xinu.at/6djE/ This started to appear after I enabled the template auto_healing_controller = magnum-auto-healer, magnum_auto_healer_tag = v1.19.0. Currently, I only have 4 clusters. After that the API is in error state and doesn't work unless I restart it. -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Jan 4 08:17:13 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 4 Jan 2021 09:17:13 +0100 Subject: [neutron] Team meeting - Tuesday 05.01.2021 Message-ID: <20210104081713.o6lgldb577f6nsnx@p1.localdomain> Hi, I just found out that I have some internal training tomorrow and will not be able to chair our tomorrow's team meeting. As this is just second day after holidays season for many of us, lets cancel this one more meeting. See You all on the meeting next week. Happy New Year to all of You :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From skaplons at redhat.com Mon Jan 4 08:18:31 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 4 Jan 2021 09:18:31 +0100 Subject: [neutron] CI meeting - Tuesday 05.01.2021 Message-ID: <20210104081831.5raqgjrvddjtleaf@p1.localdomain> Hi, I just found out that I have some internal training tomorrow and will not be able to chair our tomorrow's CI meeting. As this is just second day after holidays season for many of us, lets cancel this one more meeting. If there is anything urgent related to the CI, please ping me on IRC or send me an email about it. See You all on the meeting next week. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From katonalala at gmail.com Mon Jan 4 08:44:44 2021 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 4 Jan 2021 09:44:44 +0100 Subject: [neutron] Bug deputy report December 28th to January 3rd Message-ID: Hi, I was Neutron bug deputy for this week, and it was a really quiet week. Opinion: - https://bugs.launchpad.net/neutron/+bug/1909160 : high cpu usage when listing security groups - The issues is reported from Rocky - On openstack-discuss was some mail about slow security-group API on Stein for example: http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019573.html - I checked some rally execution from latest master runs and haven't seen such Happy New Year! Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From deepa.kr at fingent.com Mon Jan 4 09:23:59 2021 From: deepa.kr at fingent.com (Deepa KR) Date: Mon, 4 Jan 2021 14:53:59 +0530 Subject: [Ussuri] Auto shutdown VM In-Reply-To: References: <158621605687826@mail.yandex.com> <14800219-BC67-4B94-88CE-81FE5D8A6AB2@fingent.com> Message-ID: Hi All Any suggestions highly appreciated. We are facing these issues very frequently now . On Mon, Nov 23, 2020 at 10:03 AM Deepa KR wrote: > Hi > > Can see only shutting down, reason=crashed in libvirt/qemu logs .Nothing > else . > Couldn't find anything else in neutron logs as well > > > On Wed, Nov 18, 2020 at 5:23 PM Deepa KR wrote: > >> Thanks for pointing out. Have 70 + vms and has issue with just 3 vms so i >> am really confused >> >> Sent from my iPhone >> >> On 18-Nov-2020, at 1:56 PM, rui zang wrote: >> >>  >> [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *Received unexpected >> event network-vif-unplugged-e97839a1-bbc4-4d26-af30-768ca3630ce9 for >> instance with vm_state active and task_state None.* >> >> >> Clearly the network virtual interface was somehow removed or unplugged. >> What you should look into is OVS or whatever the network solution you are >> using. >> >> >> 18.11.2020, 01:44, "Deepa KR" : >> >>  Hi All >> >> We have a Openstack setup with the *Ussuri Version* and I am *regularly >> facing auto shutdown of a few VMs (ubuntu16.04) randomly* . >> If I restart then the instance is back . >> >> From logs I was able to see the messages below . >> >> WARNING nova.compute.manager [req-2a21d455-ac04-44aa-b248-4776e5109013 >> 813f3fb52c434e38991bb90aa4771541 10b5279cb6f64ca19871f132a2cee1a3 - default >> default] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *Received >> unexpected event network-vif-unplugged-e97839a1-bbc4-4d26-af30-768ca3630ce9 >> for instance with vm_state active and task_state None.* >> INFO nova.compute.manager [-] [instance: >> 28cd861c-ef15-444a-a902-9cac643c72b5] VM Stopped (Lifecycle Event) >> INFO nova.compute.manager [req-8261f607-4f1e-459d-85d4-e269694dd477 - - >> - - -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *During >> _sync_instance_power_state the DB power_state (1) does not match the >> vm_power_state from the hypervisor (4). Updating power_state in the DB to >> match the hypervisor.* >> syslog:Nov 13 07:01:07 fgshwbucehyp04 nova-compute[2680204]: 2020-11-13 >> 07:01:07.684 2680204 WARNING nova.compute.manager >> [req-8261f607-4f1e-459d-85d4-e269694dd477 - - - - -] [instance: >> 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance shutdown by itself. >> Calling the stop API. 
Current vm_state: active, current task_state: None, >> original DB power_state: 1, current VM power_state: 4* >> nova.compute.manager [req-8261f607-4f1e-459d-85d4-e269694dd477 - - - - >> -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance is already >> powered off in the hypervisor when stop is called.* >> nova.virt.libvirt.driver [req-8261f607-4f1e-459d-85d4-e269694dd477 - - - >> - -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance already >> shutdown.* >> nova.virt.libvirt.driver [-] [instance: >> 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance destroyed successfully.* >> nova.compute.manager [req-7a0a0d03-e286-42f0-9e36-38a432f236f3 >> d9ca03b9d0884d51a26a39b6c82f02eb 304d859c43df4de4944ca5623f7f455c - default >> default] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] Get console output >> nova.virt.libvirt.driver [-] [instance: >> 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance destroyed successfully.* >> >> I searched a few blogs and forums but couldn't find a solution to it . >> >> Few mentioned to add s*ync_power_state_interval=-1 in * >> */etc/nova/nova.conf *.But understood that this will help only when nova >> stops vm. >> But in this case vm itself is shutting down (*Instance shutdown by >> itself. Calling the stop API*) >> Also no memory issue in VM nor the hypervisor. >> Also did apt-get upgrade . >> >> It would be great if anyone can shed light to this issue. >> >> Regards, >> Deepa K R >> >> Sent from my iPhone >> >> > > -- > > > Regards, > > Deepa K R | DevOps Team Lead > > > > USA | UAE | INDIA | AUSTRALIA > > > -- Regards, Deepa K R | DevOps Team Lead USA | UAE | INDIA | AUSTRALIA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature-1.gif Type: image/gif Size: 566 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: logo_for_signature.png Type: image/png Size: 10509 bytes Desc: not available URL: From lyarwood at redhat.com Mon Jan 4 09:44:48 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Mon, 4 Jan 2021 09:44:48 +0000 Subject: [Ussuri] Auto shutdown VM In-Reply-To: References: <158621605687826@mail.yandex.com> <14800219-BC67-4B94-88CE-81FE5D8A6AB2@fingent.com> Message-ID: <20210104094448.7ukywqmrnasyufqm@lyarwood-laptop.usersys.redhat.com> On 04-01-21 14:53:59, Deepa KR wrote: > Hi All > > Any suggestions highly appreciated. > We are facing these issues very frequently now . Can you pastebin the domain QEMU log from /var/log/libvirt/qemu/$domain.log? That should detail why the domain is crashing. I'd also recommend reviewing the following docs from libvirt on how to enable debug logs etc: https://libvirt.org/kbase/debuglogs.html > On Mon, Nov 23, 2020 at 10:03 AM Deepa KR wrote: > > Hi > > > > Can see only shutting down, reason=crashed in libvirt/qemu logs .Nothing > > else . > > Couldn't find anything else in neutron logs as well > > > > > > On Wed, Nov 18, 2020 at 5:23 PM Deepa KR wrote: > > > >> Thanks for pointing out. 
Have 70 + vms and has issue with just 3 vms so i > >> am really confused > >> > >> Sent from my iPhone > >> > >> On 18-Nov-2020, at 1:56 PM, rui zang wrote: > >> > >>  > >> [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *Received unexpected > >> event network-vif-unplugged-e97839a1-bbc4-4d26-af30-768ca3630ce9 for > >> instance with vm_state active and task_state None.* > >> > >> > >> Clearly the network virtual interface was somehow removed or unplugged. > >> What you should look into is OVS or whatever the network solution you are > >> using. > >> > >> > >> 18.11.2020, 01:44, "Deepa KR" : > >> > >>  Hi All > >> > >> We have a Openstack setup with the *Ussuri Version* and I am *regularly > >> facing auto shutdown of a few VMs (ubuntu16.04) randomly* . > >> If I restart then the instance is back . > >> > >> From logs I was able to see the messages below . > >> > >> WARNING nova.compute.manager [req-2a21d455-ac04-44aa-b248-4776e5109013 > >> 813f3fb52c434e38991bb90aa4771541 10b5279cb6f64ca19871f132a2cee1a3 - default > >> default] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *Received > >> unexpected event network-vif-unplugged-e97839a1-bbc4-4d26-af30-768ca3630ce9 > >> for instance with vm_state active and task_state None.* > >> INFO nova.compute.manager [-] [instance: > >> 28cd861c-ef15-444a-a902-9cac643c72b5] VM Stopped (Lifecycle Event) > >> INFO nova.compute.manager [req-8261f607-4f1e-459d-85d4-e269694dd477 - - > >> - - -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *During > >> _sync_instance_power_state the DB power_state (1) does not match the > >> vm_power_state from the hypervisor (4). Updating power_state in the DB to > >> match the hypervisor.* > >> syslog:Nov 13 07:01:07 fgshwbucehyp04 nova-compute[2680204]: 2020-11-13 > >> 07:01:07.684 2680204 WARNING nova.compute.manager > >> [req-8261f607-4f1e-459d-85d4-e269694dd477 - - - - -] [instance: > >> 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance shutdown by itself. > >> Calling the stop API. Current vm_state: active, current task_state: None, > >> original DB power_state: 1, current VM power_state: 4* > >> nova.compute.manager [req-8261f607-4f1e-459d-85d4-e269694dd477 - - - - > >> -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance is already > >> powered off in the hypervisor when stop is called.* > >> nova.virt.libvirt.driver [req-8261f607-4f1e-459d-85d4-e269694dd477 - - - > >> - -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance already > >> shutdown.* > >> nova.virt.libvirt.driver [-] [instance: > >> 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance destroyed successfully.* > >> nova.compute.manager [req-7a0a0d03-e286-42f0-9e36-38a432f236f3 > >> d9ca03b9d0884d51a26a39b6c82f02eb 304d859c43df4de4944ca5623f7f455c - default > >> default] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] Get console output > >> nova.virt.libvirt.driver [-] [instance: > >> 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance destroyed successfully.* > >> > >> I searched a few blogs and forums but couldn't find a solution to it . > >> > >> Few mentioned to add s*ync_power_state_interval=-1 in * > >> */etc/nova/nova.conf *.But understood that this will help only when nova > >> stops vm. > >> But in this case vm itself is shutting down (*Instance shutdown by > >> itself. Calling the stop API*) > >> Also no memory issue in VM nor the hypervisor. > >> Also did apt-get upgrade . > >> > >> It would be great if anyone can shed light to this issue. 
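A concrete first pass on the hypervisor side, along the lines of the suggestions in this thread (rough sketch; the domain log file name below is illustrative -- use the libvirt domain name of the affected instance):

sudo tail -n 100 /var/log/libvirt/qemu/instance-0000002a.log
sudo dmesg -T | grep -iE 'oom|out of memory|killed process'
sudo grep -iE 'oom-killer|killed process' /var/log/syslog

"shutting down, reason=crashed" with nothing useful inside the guest usually means the qemu process died from outside (a crash or a kill), so the host-side kernel log is worth checking as well as the per-domain qemu log.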
> >> > >> Regards, > >> Deepa K R > >> > >> Sent from my iPhone > >> > >> > > > > -- > > > > > > Regards, > > > > Deepa K R | DevOps Team Lead > > > > > > > > USA | UAE | INDIA | AUSTRALIA > > > > > > > > -- > > > Regards, > > Deepa K R | DevOps Team Lead > > > > USA | UAE | INDIA | AUSTRALIA -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From pierre at stackhpc.com Mon Jan 4 10:14:47 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 4 Jan 2021 11:14:47 +0100 Subject: [Ussuri] Auto shutdown VM In-Reply-To: <9A9C606C-3658-473C-83DC-6305D496A6CD@fingent.com> References: <9A9C606C-3658-473C-83DC-6305D496A6CD@fingent.com> Message-ID: Hi Deepa, You mention checking dmesg *inside* the VM. But have you checked dmesg on the hypervisor? It's possible your qemu-kvm processes are terminated by the kernel out-of-memory (OOM) killer because they try to allocate more memory than available. Best wishes, Pierre Riteau (priteau) On Wed, 18 Nov 2020 at 03:44, Deepa KR wrote: > > Hello Mohammed > > Thanks for the response. > No error message inside vm. Have checked dmesg, syslog etc . > > I mentioned vm is shutting down itself because of error messages Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4 from hypervisor. > > Sent from my iPhone > > > On 17-Nov-2020, at 11:35 PM, Mohammed Naser wrote: > > > > On Tue, Nov 17, 2020 at 12:46 PM Deepa KR wrote: > >> > >>  Hi All > >> > >> We have a Openstack setup with the Ussuri Version and I am regularly facing auto shutdown of a few VMs (ubuntu16.04) randomly . > >> If I restart then the instance is back . > >> > >> From logs I was able to see the messages below . > >> > >> WARNING nova.compute.manager [req-2a21d455-ac04-44aa-b248-4776e5109013 813f3fb52c434e38991bb90aa4771541 10b5279cb6f64ca19871f132a2cee1a3 - default default] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] Received unexpected event network-vif-unplugged-e97839a1-bbc4-4d26-af30-768ca3630ce9 for instance with vm_state active and task_state None. > >> INFO nova.compute.manager [-] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] VM Stopped (Lifecycle Event) > >> INFO nova.compute.manager [req-8261f607-4f1e-459d-85d4-e269694dd477 - - - - -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] During _sync_instance_power_state the DB power_state (1) does not match the vm_power_state from the hypervisor (4). Updating power_state in the DB to match the hypervisor. > >> syslog:Nov 13 07:01:07 fgshwbucehyp04 nova-compute[2680204]: 2020-11-13 07:01:07.684 2680204 WARNING nova.compute.manager [req-8261f607-4f1e-459d-85d4-e269694dd477 - - - - -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4 > >> nova.compute.manager [req-8261f607-4f1e-459d-85d4-e269694dd477 - - - - -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] Instance is already powered off in the hypervisor when stop is called. > >> nova.virt.libvirt.driver [req-8261f607-4f1e-459d-85d4-e269694dd477 - - - - -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] Instance already shutdown. 
> >> nova.virt.libvirt.driver [-] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] Instance destroyed successfully. > >> nova.compute.manager [req-7a0a0d03-e286-42f0-9e36-38a432f236f3 d9ca03b9d0884d51a26a39b6c82f02eb 304d859c43df4de4944ca5623f7f455c - default default] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] Get console output > >> nova.virt.libvirt.driver [-] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] Instance destroyed successfully. > >> > >> I searched a few blogs and forums but couldn't find a solution to it . > >> > >> Few mentioned to add sync_power_state_interval=-1 in /etc/nova/nova.conf .But understood that this will help only when nova stops vm. > >> But in this case vm itself is shutting down (Instance shutdown by itself. Calling the stop API) > >> Also no memory issue in VM nor the hypervisor. > >> Also did apt-get upgrade . > >> > >> It would be great if anyone can shed light to this issue. > > > > You should check and see if there is anything inside `dmesg` that > > shows the VM dying (any segfaults?). Also, it's possible that the VM > > itself is shutting off so maybe you should check ni its logs. > > > >> Regards, > >> Deepa K R > >> > >> Sent from my iPhone > > > > > > > > -- > > Mohammed Naser > > VEXXHOST, Inc. > From ionut at fleio.com Mon Jan 4 12:07:24 2021 From: ionut at fleio.com (Ionut Biru) Date: Mon, 4 Jan 2021 14:07:24 +0200 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: Hi Erik, Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ I have kubernetes 12.0.1 installed in env. On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson < eandersson at blizzard.com> wrote: > Maybe something similar to this? > https://github.com/kubernetes-client/python/issues/1158 > > What does lsof say? > > ------------------------------ > *From:* Erik Olof Gunnar Andersson > *Sent:* Saturday, January 2, 2021 4:54 PM > *To:* Ionut Biru ; feilong > *Cc:* openstack-discuss > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > Are you sure you aren't just looking at the connection pool expanding? > Each worker has a max number of connections it can use. Maybe look at > lowering rpc_conn_pool_size. By default I believe each worker might > create a pool of up to 30 connections. > > Looking at the code it could also be have something to do with the k8s > client. Since it creates a new instance each time it does an health check. > What version of the k8s client do you have installed? > > > ------------------------------ > *From:* Ionut Biru > *Sent:* Tuesday, December 29, 2020 2:20 PM > *To:* feilong > *Cc:* openstack-discuss > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > Hi, > > Not sure if my suspicion is true but I think for each update a new > notifier is prepared and used without closing the connection but my > understanding of oslo is nonexistent. > > > https://opendev.org/openstack/magnum/src/branch/master/magnum/conductor/utils.py#L147 > > > https://opendev.org/openstack/magnum/src/branch/master/magnum/common/rpc.py#L173 > > > On Tue, Dec 29, 2020 at 11:52 PM Ionut Biru wrote: > > Hi Feilong, > > I found out that each time the update_health_status periodic task is run, > a new connection(for each uwsgi) is made to rabbitmq. 
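On the lsof angle from earlier in the thread, a simple way to watch whether a worker's descriptor count keeps growing (sketch; the pgrep pattern just assumes 'magnum' appears in the uwsgi command line):

for pid in $(pgrep -f 'uwsgi.*magnum'); do echo "$pid $(sudo ls /proc/$pid/fd | wc -l)"; done

Repeating that every few minutes shows whether the count climbs steadily toward the ulimit, which would explain the 'Too many open files' error, and comparing it with the 5672 connection counts quoted below shows whether the leaked descriptors are in fact AMQP sockets.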
> > root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l > 229 > root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l > 234 > root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l > 238 > root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l > 241 > root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l > 244 > > Not sure > > Dec 29 21:51:22 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:22.024 262800 DEBUG > magnum.service.periodic [req-3b495326-cf80-481e-b3c6-c741f05b7f0e - - - - > -] > Dec 29 21:51:22 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:22.024 262800 DEBUG > oslo_service.periodic_task [-] Running periodic task > MagnumPeriodicTasks.sync > Dec 29 21:51:16 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262804]: 2020-12-29 21:51:16.462 262804 DEBUG > magnum.conductor.handlers.cluster_conductor > [req-284ac12b-d76a-4e50-8e74-5bfb > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.573 262800 DEBUG > magnum.service.periodic [-] Status for cluster 118 updated to HEALTHY > ({'api' > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262805]: 2020-12-29 21:51:15.572 262805 DEBUG > magnum.conductor.handlers.cluster_conductor > [req-3fc29ee9-4051-42e7-ae19-3a49 > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.572 262800 DEBUG > magnum.service.periodic [-] Status for cluster 121 updated to HEALTHY > ({'api' > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.572 262800 DEBUG > magnum.service.periodic [-] Status for cluster 122 updated to HEALTHY > ({'api' > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.553 262800 DEBUG > magnum.service.periodic [-] Updating health status for cluster 122 > update_hea > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.544 262800 DEBUG > magnum.service.periodic [-] Updating health status for cluster 121 > update_hea > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.535 262800 DEBUG > magnum.service.periodic [-] Updating health status for cluster 118 > update_hea > Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a > magnum-conductor[262800]: 2020-12-29 21:51:15.494 262800 DEBUG > magnum.service.periodic [req-405b1fed-0b8a-4a60-b6ae-834f548b21d1 - - - > > > 2020-12-29 21:51:14.082 [info] <0.953.1293> accepting AMQP connection > <0.953.1293> (172.29.93.14:48474 > > -> 172.29.95.38:5672 > > ) > 2020-12-29 21:51:14.083 [info] <0.953.1293> Connection <0.953.1293> ( > 172.29.93.14:48474 > > -> 172.29.95.38:5672 > ) > has a client-provided name: > uwsgi:262739:f86c0570-8739-4b74-8102-76b5357acd71 > 2020-12-29 21:51:14.084 [info] <0.953.1293> connection <0.953.1293> ( > 172.29.93.14:48474 > > -> 172.29.95.38:5672 > > - uwsgi:262739:f86c0570-8739-4b74-8102-76b5357acd71): user 'magnum' > authenticated and granted access to vhost '/magnum' > 2020-12-29 21:51:15.560 [info] <0.1656.1283> accepting AMQP connection > <0.1656.1283> (172.29.93.14:48548 > > -> 172.29.95.38:5672 > > ) > 2020-12-29 21:51:15.561 [info] <0.1656.1283> Connection <0.1656.1283> ( > 172.29.93.14:48548 > > -> 172.29.95.38:5672 > ) > has a client-provided name: > 
uwsgi:262744:2c9792ab-9198-493a-970c-f6ccfd9947d3 > 2020-12-29 21:51:15.561 [info] <0.1656.1283> connection <0.1656.1283> ( > 172.29.93.14:48548 > > -> 172.29.95.38:5672 > > - uwsgi:262744:2c9792ab-9198-493a-970c-f6ccfd9947d3): user 'magnum' > authenticated and granted access to vhost '/magnum' > > On Tue, Dec 22, 2020 at 4:12 AM feilong wrote: > > Hi Ionut, > > I didn't see this before on our production. Magnum auto healer just simply > sends a POST request to Magnum api to update the health status. So I would > suggest write a small script or even use curl to see if you can reproduce > this firstly. > > > On 19/12/20 2:27 am, Ionut Biru wrote: > > Hi again, > > I failed to mention that is stable/victoria with couples of patches from > review. Ignore the fact that in logs it shows the 19.1.4 version in venv > path. > > On Fri, Dec 18, 2020 at 3:22 PM Ionut Biru wrote: > > Hi guys, > > I have an issue with magnum api returning an error after a while: > Server-side error: "[('system library', 'fopen', 'Too many open files'), > ('BIO routines', 'BIO_new_file', 'system lib'), ('x509 certificate > routines', 'X509_load_cert_crl_file', 'system lib')]" > > Log file: https://paste.xinu.at/6djE/ > > > This started to appear after I enabled the > template auto_healing_controller = magnum-auto-healer, > magnum_auto_healer_tag = v1.19.0. > > Currently, I only have 4 clusters. > > After that the API is in error state and doesn't work unless I restart it. > > > -- > Ionut Biru - https://fleio.com > > > > > -- > Ionut Biru - https://fleio.com > > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > > > -- > Ionut Biru - https://fleio.com > > > > > -- > Ionut Biru - https://fleio.com > > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ikatzir at infinidat.com Mon Jan 4 13:31:12 2021 From: ikatzir at infinidat.com (Igal Katzir) Date: Mon, 4 Jan 2021 15:31:12 +0200 Subject: [tripleO] Customised Cinder-Volume fails at 'Paunch 5' during overcloud deployment In-Reply-To: References: <2D1F2693-49C0-4CA2-8F8E-F9E837D6A232@infinidat.com> Message-ID: Hello Alan, Thanks for your reply! I am afraid that the reason for my deployment failure might be concerned with the environment file I use to configure my cinder backend. The configuration is quite similar to https://github.com/Infinidat/tripleo-deployment-configs/blob/dev/RHOSP15/cinder-infinidat-config.yaml So I wonder if it is possible to run a deployment where I tell 'TripleO' to use my customize container, using containers-prepare-parameter.yaml, but without the environment file =cinder-infinidat-config.yaml, and configure the backend / start cinder-volume services manually? Or I must have a minimum config as I find in: '/usr/share/openstack-tripleo-heat-templates/deployment/cinder/' (for other vendors)? If I do need such a cinder-volume-VENDOR-puppet.yaml config to be integrated during overcloud deployment, where is documentation that explains how to construct this? Do I need to use cinder-base.yaml as a template? 
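For context on the 'minimum config' question: the custom-backend mechanism that cinder-infinidat-config.yaml already uses does not need a cinder-volume-VENDOR-puppet.yaml at all; an environment file of roughly this shape is enough (all values below are placeholders for illustration -- the real ones are in the linked Infinidat config):

parameter_defaults:
  ControllerExtraConfig:
    cinder::config::cinder_config:
      infinidat/volume_driver:
        value: cinder.volume.drivers.infinidat.InfiniboxVolumeDriver  # assumed driver class path, confirm against the cinder driver module
      infinidat/san_ip:
        value: 10.0.0.10  # placeholder management address
      infinidat/volume_backend_name:
        value: infinidat
    cinder_user_enabled_backends: ['infinidat']

The per-vendor cinder-volume-VENDOR-puppet.yaml templates in tripleo-heat-templates only exist for backends with first-class puppet-cinder integration, so treat the above as an outline of the approach already in use rather than something new to add.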
When looking at the web for "cinder-volume-container-puppet.yaml" I found the Git Page of overcloud-resource-registry-puppet.j2.yaml and found also https://opendev.org/openstack/tripleo-heat-templates/../deployment but it is not so explanatory. I have opened a case with RedHat as well and they are checking who from their R&D could help since it's out of the scope of support. Regards, Igal On Thu, Dec 31, 2020 at 9:15 PM Alan Bishop wrote: > > > On Thu, Dec 31, 2020 at 5:26 AM Igal Katzir wrote: > >> Hello all, >> >> I am trying to deploy RHOSP16.1 (based on ‘*train’ *distribution) for Certification >> purposes. >> I have build a container for our cinder driver and trying to deploy it. >> Deployment runs almost till the end and fails at stage when it tries to >> configure Pacemaker; >> Here is the last message: >> >> "Info: Applying configuration version '1609231063'", "Notice: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_bind_addr]/ensure: created", "Info: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_bind_addr]: Scheduling refresh of Service[pcsd]", "Info: /Stage[main]/Pacemaker::Service/Service[pcsd]: Unscheduling all events on Service[pcsd]", "Info: Class[Pacemaker::Corosync]: Unscheduling all events on Class[Pacemaker::Corosync]", "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Resource::Bundle[openstack-cinder-volume]/Pcmk_bundle[openstack-cinder-volume]: Dependency Pcmk_property[property-overcloud-controller-0-cinder-volume-role] has failures: true", "Info: Creating state file /var/lib/puppet/state/state.yaml", "Notice: Applied catalog in 382.92 seconds", "Changes:", " Total: 1", "Events:", " Success: 1", " Failure: 2", " Total: 3", >> >> >> I have verified that all packages on my container-image >> (Pacemaker,Corosync, libqb,and pcs) are installed with same versions as >> the overcloud-controller. >> > > Hi Igal, > > Thank you for checking these package versions and stating they match the > ones installed on the overcloud node. This rules out one of the common > reasons for failures when trying to run a customized cinder-volume > container image. > > But seems that something is still missing, because deployment with the >> default openstack-cinder-volume image completes successfully. >> > > This is also good to know. > > Can anyone help with debugging this? Let me know if more info needed. >> > > More info is needed, but it's hard to predict exactly where to look for > the root cause of the failure. I'd start by looking for something at the > cinder log file > to determine whether the cinder-volume service is even trying to start. > Look for /var/log/containers/cinder/cinder-volume.log on the node where > pacemaker is trying to run the service. Are there logs indicating the > service is trying to start? Or maybe the service is launched, but fails > early during startup? > > Another possibility is podman fails to launch the container itself. If > that's happening then check for errors in /var/log/messages. One source of > this type of failure is you've specified a container bind mount, but the > source directory doesn't exist (docker would auto-create the source > directory, but podman does not). > > You specifically mentioned RHOSP, so if you need additional support then I > recommend opening a support case with Red Hat. That will provide a forum > for posting private data, such as details of your overcloud deployment and > full sosreports. 
> > Alan > > >> >> Thanks in advance, >> Igal >> > -- Regards, *Igal Katzir* Cell +972-54-5597086 Interoperability Team *INFINIDAT* -------------- next part -------------- An HTML attachment was scrubbed... URL: From hrhosseini at hotmail.com Mon Jan 4 13:37:58 2021 From: hrhosseini at hotmail.com (Hamidreza Hosseini) Date: Mon, 4 Jan 2021 13:37:58 +0000 Subject: [swift] Openstack controller/keystone cause not uploading file to swift storage Message-ID: Hi, I've installed swift storage and swift-proxy and keystone for object storage I'm using memcached for caching keystone's data But suddenly in some moments my cpu of keystone servers goes up till 50% (it is apache and keystone proccess) and the swift proxies wouldn't upload anything!!! As I said servers have free cpu and ram and kernel tuned well which parameter should I check for my problem? How can I solve this strange problem? -------------- next part -------------- An HTML attachment was scrubbed... URL: From elfosardo at gmail.com Mon Jan 4 09:44:37 2021 From: elfosardo at gmail.com (Riccardo Pittau) Date: Mon, 4 Jan 2021 10:44:37 +0100 Subject: some error abount ironic-python-agent-builder In-Reply-To: References: Message-ID: Hello Ankele, The IPA_SOURCE_DIR variable is used to set the location of the ironic-python-agent source code that will be installed in the ironic-python-agent tinycore-based image (tinyipa). It defaults to /opt/stack/ironic-python-agent but you can use any other dir as far as you set the correct path in the environment variable. You need to have that populated before trying to build the image. FYI tinyipa is a test-oriented image thought specifically to be used in CI, and should not be used in production environments. If you need an image suited for production on real hardware, I suggest to build one based on diskimage-builder https://docs.openstack.org/ironic-python-agent-builder/latest/admin/dib.html Cheers, Riccardo On Mon, Dec 28, 2020 at 4:59 PM Ankele zhang wrote: > Hi~ > I have an OpenStack platform in Rocky version. > I use ironic-python-agent-builder to build a tinyipa image to customing > HardwareManager for 'RAID configuration' in Ironic cleaning steps. > While I follow the steps ' > https://docs.openstack.org/ironic-python-agent-builder/latest/admin/tinyipa.html#building-ramdisk' > to build tinyipa image, it occurs error as following: > > [image: image.png] > > [image: image.png] > > so, what the IPA_SOURCE_DIR means? Do I need to download the source code > of the ironic-python-agent and copy it to /opt/stack/ before this? > > Looking forward to your reply. > > Ankele > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 11770 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 63866 bytes Desc: not available URL: From marios at redhat.com Mon Jan 4 14:58:49 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 4 Jan 2021 16:58:49 +0200 Subject: [tripleo] next irc meeting Tuesday Jan 05 @ 1400 UTC in #tripleo Message-ID: The next TripleO irc meeting is: ** Tuesday 05th January at 1400 UTC in #tripleo. 
** ** https://wiki.openstack.org/wiki/Meetings/TripleO ** ** https://etherpad.opendev.org/p/tripleo-meeting-items ** Please add any one-off items you want to hilight at https://etherpad.opendev.org/p/tripleo-meeting-items This can include review requests, blocking issues or to socialise ongoing or planned work, or anything else you want to share. Our last meeting was held on Dec 22nd - you can find the logs there http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-12-22-14.00.html Hope to see you there! -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Jan 4 15:15:45 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 4 Jan 2021 16:15:45 +0100 Subject: [release] Status: RED - pip resolver issue with publish-openstack-releasenotes-python3 Message-ID: Dear release managers, We are currently experiencing an issue with the new pip resolver in our release job (publish-openstack-releasenotes-python3), so, please hold all the release's validations for now. Indeed, today we faced an issue [1][2] during the releasing tripleo-image-elements [3]. The problem here is that this repos haven't doc/requirements.txt file and by default in this case zuul will use the test-requirements.txt file to pull requirements [4] This requirements file contains extra requirements like flake8 that collided with those allowed in our job environment and so the new pip resolver fails to install these requirements and the job exits in error. All the repo who fit the same conditions (no doc/requirements.txt available) will fail too. We've almost identified all of these repos [5], I'll get the full list to you in a while. In parallel, fungi (Jeremy Stanley) will bring this topic on the zuul side too, to see if we can add something there to override this rule (test-requirements.txt as the default case). Thanks for reading [1] https://zuul.opendev.org/t/openstack/build/d82e8c8db7754394907459895f3f58fa [2} http://paste.openstack.org/show/801385/ [3] https://review.opendev.org/c/openstack/releases/+/768237 [4] https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-sphinx/tasks/main.yaml#L36 [5] http://paste.openstack.org/show/801396/ -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hberaud at redhat.com Mon Jan 4 15:34:06 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 4 Jan 2021 16:34:06 +0100 Subject: [release] Status: RED - pip resolver issue with publish-openstack-releasenotes-python3 In-Reply-To: References: Message-ID: Here is the filtered list of projects that meet the conditions leading to the bug, and who should be fixed to completely solve our issue: adjutant-ui api-site blazar blazar-nova blazar-tempest-plugin ceilometermiddleware cinder-tempest-plugin cloudkitty cloudkitty-tempest-plugin contributor-guide cyborg-tempest-plugin debtcollector designate-tempest-plugin etcd3gw freezer-api freezer-dr glance-tempest-plugin governance governance-sigs governance-website i18n ideas ironic-python-agent-builder kuryr-libnetwork manila-image-elements manila-tempest-plugin molteniron monasca-api monasca-events-api monasca-log-api monasca-persister monasca-statsd monasca-tempest-plugin murano-dashboard networking-baremetal networking-hyperv neutron-dynamic-routing neutron-tempest-plugin neutron-vpnaas openstack-manuals openstack-virtual-baremetal openstack-zuul-roles os-apply-config os-collect-config os-refresh-config os-service-types osprofiler ossa pycadf pyeclib pymod2pkg python-cyborgclient python-masakariclient python-monascaclient release-test security-analysis security-doc senlin-tempest-plugin solum-dashboard sushy sushy-tools telemetry-tempest-plugin tempest-lib tempest-stress training-guides tripleo-common tripleo-common-tempest-plugin tripleo-heat-templates tripleo-image-elements tripleo-puppet-elements tripleo-quickstart-extras tripleo-repos tripleo-upgrade trove-dashboard virtualbmc vitrage-tempest-plugin whereto workload-ref-archs zaqar-tempest-plugin zun-tempest-plugin Notice that some of these projects aren't deliverables but if possible it could be worth fixing them too. These projects have an incompatibility between entries in their test-requirements.txt, and they're missing a doc/requirements.txt file. The more straightforward path to unlock our job " publish-openstack-releasenotes-python3" is to create a doc/requirements.txt file that only contains the needed dependencies to reduce the possibility of pip resolver issues. I personally think that we could use the latest allowed version of requirements (sphinx, reno, etc...). I propose to track the related advancement by using the "fix-relmgt-pip-doc" gerrit topic, when all the projects will be fixed we would be able to update our status. Also it could be worth fixing test-requirements.txt incompatibilities but this task is more on the projects teams sides and this task could be done with a follow up patch. Thoughts? Le lun. 4 janv. 2021 à 16:15, Herve Beraud a écrit : > Dear release managers, > > We are currently experiencing an issue with the new pip resolver in our > release job (publish-openstack-releasenotes-python3), so, please hold all > the release's validations for now. > > Indeed, today we faced an issue [1][2] during the releasing > tripleo-image-elements [3]. > > The problem here is that this repos haven't doc/requirements.txt file and > by default in this case zuul will use the test-requirements.txt file to > pull requirements [4] > > This requirements file contains extra requirements like flake8 that > collided with those allowed in our job environment and so the new pip > resolver fails to install these requirements and the job exits in error. > > All the repo who fit the same conditions (no doc/requirements.txt > available) will fail too. 
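For reference, the minimal doc/requirements.txt proposed above would look roughly like this (the pins are illustrative and should be aligned with each project's own constraints rather than copied verbatim):

sphinx>=2.0.0,!=2.1.0  # BSD
openstackdocstheme>=2.2.1  # Apache-2.0
reno>=3.1.0  # Apache-2.0

Keeping only the documentation and release-notes dependencies in that file keeps the linters from test-requirements.txt out of the release-notes job, which is what trips the new pip resolver.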
We've almost identified all of these repos [5], I'll > get the full list to you in a while. > > In parallel, fungi (Jeremy Stanley) will bring this topic on the zuul side > too, to see if we can add something there to override this rule > (test-requirements.txt as the default case). > > Thanks for reading > > [1] > https://zuul.opendev.org/t/openstack/build/d82e8c8db7754394907459895f3f58fa > [2} http://paste.openstack.org/show/801385/ > [3] https://review.opendev.org/c/openstack/releases/+/768237 > [4] > https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/ensure-sphinx/tasks/main.yaml#L36 > [5] http://paste.openstack.org/show/801396/ > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Jan 4 15:54:05 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Jan 2021 15:54:05 +0000 Subject: [release] Status: RED - pip resolver issue with publish-openstack-releasenotes-python3 In-Reply-To: References: Message-ID: <20210104155404.elmbglroodeihbwv@yuggoth.org> On 2021-01-04 16:15:45 +0100 (+0100), Herve Beraud wrote: [...] > In parallel, fungi (Jeremy Stanley) will bring this topic on the zuul side > too, to see if we can add something there to override this rule > (test-requirements.txt as the default case). [...] Well, what I was originally suggesting was that we'd want some way to have it use doc/requirements.txt instead of test-requirements.txt, but on closer reading of the role it already actually does that automatically, so I'm not sure there's much else to be done on the Zuul end of things. 
Really as you point out this needs to be fixed in the individual projects by either correcting these incompatibilities between intentionally unconstrained linters/static analyzers in their test-requirements.txt files, or by following OpenStack's long-standing docs PTI recommendations (and ideally doing both): "List python dependencies needed for documentation in doc/requirements.txt" https://governance.openstack.org/tc/reference/project-testing-interface.html#documentation -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From abishop at redhat.com Mon Jan 4 16:44:15 2021 From: abishop at redhat.com (Alan Bishop) Date: Mon, 4 Jan 2021 08:44:15 -0800 Subject: [tripleO] Customised Cinder-Volume fails at 'Paunch 5' during overcloud deployment In-Reply-To: References: <2D1F2693-49C0-4CA2-8F8E-F9E837D6A232@infinidat.com> Message-ID: On Mon, Jan 4, 2021 at 5:31 AM Igal Katzir wrote: > Hello Alan, > Thanks for your reply! > > I am afraid that the reason for my deployment failure might be concerned > with the environment file I use to configure my cinder backend. > The configuration is quite similar to > https://github.com/Infinidat/tripleo-deployment-configs/blob/dev/RHOSP15/cinder-infinidat-config.yaml > So I wonder if it is possible to run a deployment where I tell 'TripleO' > to use my customize container, using containers-prepare-parameter.yaml, but > without the environment file =cinder-infinidat-config.yaml, and configure > the backend / start cinder-volume services manually? > No, your cinder-infinidat-config.yaml file looks fine. It's responsible for getting TripleO to configure cinder to use your driver, and that phase was completed successfully prior to the deployment failure. > Or I must have a minimum config as I find in: > '/usr/share/openstack-tripleo-heat-templates/deployment/cinder/' (for other > vendors)? > If I do need such a cinder-volume-VENDOR-puppet.yaml config to be > integrated during overcloud deployment, where is documentation that > explains how to construct this? Do I need to use cinder-base.yaml as a > template? > When looking at the web for "cinder-volume-container-puppet.yaml" I found > the Git Page of overcloud-resource-registry-puppet.j2.yaml > > > and found also > https://opendev.org/openstack/tripleo-heat-templates/../deployment > > but it is not so explanatory. > Your cinder-infinidat-config.yaml uses a low-level puppet mechanism for configuring what's referred to as a "custom" block storage backend. This is perfectly fine. If you want better integration with TripleO (and puppet) then you'll need to develop 3 separate patches, 1 each in puppet-cinder, puppet-tripleo and tripleo-heat-templates. Undertaking that would be a good future goal, but isn't necessary in order for you to get past your current deployment issue. > I have opened a case with RedHat as well and they are checking who from > their R&D could help since it's out of the scope of support. > I think you're starting to see responses from Red Hat that should help identify and resolve the problem. Alan > > Regards, > Igal > > On Thu, Dec 31, 2020 at 9:15 PM Alan Bishop wrote: > >> >> >> On Thu, Dec 31, 2020 at 5:26 AM Igal Katzir >> wrote: >> >>> Hello all, >>> >>> I am trying to deploy RHOSP16.1 (based on ‘*train’ *distribution) for Certification >>> purposes. >>> I have build a container for our cinder driver and trying to deploy it. 
>>> Deployment runs almost till the end and fails at stage when it tries to >>> configure Pacemaker; >>> Here is the last message: >>> >>> "Info: Applying configuration version '1609231063'", "Notice: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_bind_addr]/ensure: created", "Info: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_bind_addr]: Scheduling refresh of Service[pcsd]", "Info: /Stage[main]/Pacemaker::Service/Service[pcsd]: Unscheduling all events on Service[pcsd]", "Info: Class[Pacemaker::Corosync]: Unscheduling all events on Class[Pacemaker::Corosync]", "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Resource::Bundle[openstack-cinder-volume]/Pcmk_bundle[openstack-cinder-volume]: Dependency Pcmk_property[property-overcloud-controller-0-cinder-volume-role] has failures: true", "Info: Creating state file /var/lib/puppet/state/state.yaml", "Notice: Applied catalog in 382.92 seconds", "Changes:", " Total: 1", "Events:", " Success: 1", " Failure: 2", " Total: 3", >>> >>> >>> I have verified that all packages on my container-image >>> (Pacemaker,Corosync, libqb,and pcs) are installed with same versions as >>> the overcloud-controller. >>> >> >> Hi Igal, >> >> Thank you for checking these package versions and stating they match the >> ones installed on the overcloud node. This rules out one of the common >> reasons for failures when trying to run a customized cinder-volume >> container image. >> >> But seems that something is still missing, because deployment with the >>> default openstack-cinder-volume image completes successfully. >>> >> >> This is also good to know. >> >> Can anyone help with debugging this? Let me know if more info needed. >>> >> >> More info is needed, but it's hard to predict exactly where to look for >> the root cause of the failure. I'd start by looking for something at the >> cinder log file >> to determine whether the cinder-volume service is even trying to start. >> Look for /var/log/containers/cinder/cinder-volume.log on the node where >> pacemaker is trying to run the service. Are there logs indicating the >> service is trying to start? Or maybe the service is launched, but fails >> early during startup? >> >> Another possibility is podman fails to launch the container itself. If >> that's happening then check for errors in /var/log/messages. One source of >> this type of failure is you've specified a container bind mount, but the >> source directory doesn't exist (docker would auto-create the source >> directory, but podman does not). >> >> You specifically mentioned RHOSP, so if you need additional support then >> I recommend opening a support case with Red Hat. That will provide a forum >> for posting private data, such as details of your overcloud deployment and >> full sosreports. >> >> Alan >> >> >>> >>> Thanks in advance, >>> Igal >>> >> > > -- > Regards, > > *Igal Katzir* > Cell +972-54-5597086 > Interoperability Team > *INFINIDAT* > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From radoslaw.piliszek at gmail.com Mon Jan 4 17:02:23 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 4 Jan 2021 18:02:23 +0100 Subject: [release] Status: RED - pip resolver issue with publish-openstack-releasenotes-python3 In-Reply-To: References: Message-ID: On Mon, Jan 4, 2021 at 4:34 PM Herve Beraud wrote: > > Here is the filtered list of projects that meet the conditions leading to the bug, and who should be fixed to completely solve our issue: > > ... > etcd3gw > ... > python-masakariclient > ... > > Notice that some of these projects aren't deliverables but if possible it could be worth fixing them too. > > These projects have an incompatibility between entries in their test-requirements.txt, and they're missing a doc/requirements.txt file. > > The more straightforward path to unlock our job "publish-openstack-releasenotes-python3" is to create a doc/requirements.txt file that only contains the needed dependencies to reduce the possibility of pip resolver issues. I personally think that we could use the latest allowed version of requirements (sphinx, reno, etc...). > > I propose to track the related advancement by using the "fix-relmgt-pip-doc" gerrit topic, when all the projects will be fixed we would be able to update our status. > > Also it could be worth fixing test-requirements.txt incompatibilities but this task is more on the projects teams sides and this task could be done with a follow up patch. > > Thoughts? Thanks, Hervé! Done for python-masakariclient in [1]. etcd3gw needs more love in general but I will have this split in mind. [1] https://review.opendev.org/c/openstack/python-masakariclient/+/769163 -yoctozepto From hberaud at redhat.com Mon Jan 4 17:23:18 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 4 Jan 2021 18:23:18 +0100 Subject: [release] Status: RED - pip resolver issue with publish-openstack-releasenotes-python3 In-Reply-To: References: Message-ID: Thanks all! Here we can track our advancement: https://review.opendev.org/q/topic:%22fix-relmgt-pip-doc%22+(status:open%20OR%20status:merged) Le lun. 4 janv. 2021 à 18:02, Radosław Piliszek a écrit : > On Mon, Jan 4, 2021 at 4:34 PM Herve Beraud wrote: > > > > Here is the filtered list of projects that meet the conditions leading > to the bug, and who should be fixed to completely solve our issue: > > > > ... > > etcd3gw > > ... > > python-masakariclient > > ... > > > > Notice that some of these projects aren't deliverables but if possible > it could be worth fixing them too. > > > > These projects have an incompatibility between entries in their > test-requirements.txt, and they're missing a doc/requirements.txt file. > > > > The more straightforward path to unlock our job > "publish-openstack-releasenotes-python3" is to create a > doc/requirements.txt file that only contains the needed dependencies to > reduce the possibility of pip resolver issues. I personally think that we > could use the latest allowed version of requirements (sphinx, reno, etc...). > > > > I propose to track the related advancement by using the > "fix-relmgt-pip-doc" gerrit topic, when all the projects will be fixed we > would be able to update our status. > > > > Also it could be worth fixing test-requirements.txt incompatibilities > but this task is more on the projects teams sides and this task could be > done with a follow up patch. > > > > Thoughts? > > Thanks, Hervé! > > Done for python-masakariclient in [1]. 
> > etcd3gw needs more love in general but I will have this split in mind. > > [1] https://review.opendev.org/c/openstack/python-masakariclient/+/769163 > > -yoctozepto > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Mon Jan 4 17:41:06 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 4 Jan 2021 18:41:06 +0100 Subject: [infra][magnum][ci] Issues installing bashate and coverage In-Reply-To: <20201208171248.6dffedoymqj7dgkr@yuggoth.org> References: <20201208171248.6dffedoymqj7dgkr@yuggoth.org> Message-ID: On Tue, 8 Dec 2020 at 18:24, Jeremy Stanley wrote: > > On 2020-12-08 13:36:09 +0100 (+0100), Spyros Trigazis wrote: > > openstack-tox-lower-constraints fails for bashate and coverage. > > (Maybe more, I bumped bashate and it failed for coverage. I don;t > > want to waste more resources on our CI) > > eg https://review.opendev.org/c/openstack/magnum/+/765881 > > https://review.opendev.org/c/openstack/magnum/+/765979 > > > > Do we miss something? > > Pip 20.3.0, released 8 days ago, turned on a new and much more > thorough dependency resolver. Earlier versions of pip did not try > particularly hard to make sure the dependencies claimed by packages > were all satisfied. Virtualenv 20.2.2 released yesterday and > increased the version of pip it's vendoring to a version which uses > the new solver as well. These changes mean that latent version > conflicts are now being correctly identified as bugs, and these jobs > will do a far better job of actually confirming the declared > versions of dependencies are able to be tested. > > One thing which looks really weird and completely contradictory to > me is that your lower-constraints job on change 765881 is applying > both upper and lower constraints lists to the pip install command. > Maybe the lower constraints list is expected to override the earlier > upper constraints, but is that really going to represent a > compatible set? That aside, trying to reproduce locally I run into > yet a third error: > > Could not find a version that satisfies the requirement > warlock!=1.3.0,<2,>=1.0.1 (from python-glanceclient) > > And indeed, python-glanceclient insists warlock 1.3.0 should be > skipped, while magnum's lower-constraints.txt says you must install > warlock==1.3.0 so that's a clear contradiction as well. > > My recommendation is to work on reproducing this locally first and > play a bit of whack-a-mole with the entries in your > lower-constraints.txt to find versions of things which will actually > be coinstallable with current versions of pip. 
You don't need to run > the full tox testenv, just try installing your constrainted deps > into a venv with upgraded pip like so: > > python3.8 -m venv foo > foo/bin/pip install -U pip > foo/bin/pip install -c lower-constraints.txt \ > -r test-requirements.txt -r requirements.txt > > You'll also likely want to delete and recreate the venv each time > you try, since pip will now also try to take the requirements of > already installed packages into account, and that might further > change the behavior you see. Hope that helps! > -- > Jeremy Stanley Hi Jeremy, Sorry for hijacking this thread, but I am dealing with the same sort of issues in blazar. The lower-constraints job is failing [1], which has broken our gate. However, I cannot reproduce the issue locally: `tox -e lower-constraints` works fine, and so do your recommended commands with the foo venv. I've tried on multiple operating systems, including Ubuntu 20.04 (with Python 3.8 and pip 20.3.3). So I have been playing whack-a-mole [2] from Zuul logs based on whichever package fails to install, but it's a very slow process. Do you know what I could be missing for reproducing this locally? Thanks, Pierre [1] https://zuul.opendev.org/t/openstack/build/34897587cf954016b8027670117213f6 [2] https://review.opendev.org/c/openstack/blazar/+/767593/ From fungi at yuggoth.org Mon Jan 4 18:08:16 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Jan 2021 18:08:16 +0000 Subject: [infra][magnum][ci] Issues installing bashate and coverage In-Reply-To: References: <20201208171248.6dffedoymqj7dgkr@yuggoth.org> Message-ID: <20210104180816.prb2x3oyuxi4jpkq@yuggoth.org> On 2021-01-04 18:41:06 +0100 (+0100), Pierre Riteau wrote: [...] > I cannot reproduce the issue locally: `tox -e lower-constraints` > works fine, and so do your recommended commands with the foo venv. > I've tried on multiple operating systems, including Ubuntu 20.04 > (with Python 3.8 and pip 20.3.3). [...] > Do you know what I could be missing for reproducing this locally? [...] At this point it may be easier to reproduce with the latest version of virtualenv (20.2.2 at time of writing) since it started pulling in pip 20.3.1 on its own. Just upgrade virtualenv and try your tox command again and see if things are any different for you. It reproduces exactly on my workstation with latest tox/virtualenv just running `tox -e lower-constraints` in a current blazar repo master branch checkout. As for the doubled-constraints I mentioned in the earlier post, looks like blazar is hitting that as well. To correct it you need to move your upper-constraints addition into the default testenv deps list rather than adding it to the default install_command string. Right now it's getting inherited in testenv:lower-constraints too which can't be a good thing. Compare the approach in blazar with that of nova: https://opendev.org/openstack/blazar/src/commit/cb7c142a890e84a2b3171395832d9839b2d66a63/tox.ini#L11 https://opendev.org/openstack/nova/src/commit/b0f241e5425c99866223bae4b404a4aa1abdfddf/tox.ini#L27 Changing it in my local blazar checkout gets farther, though it looks like the old version of markupsafe you're trying to use may not work with latest versions of setuptools (ImportError: cannot import name 'Feature' from 'setuptools'). Hope that helps! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From eandersson at blizzard.com Mon Jan 4 19:49:16 2021 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Mon, 4 Jan 2021 19:49:16 +0000 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: Sure looks like RabbitMQ. How many workers do have you configured? Could you try to change it to workers=1 (or processes=1) and then see if it goes beyond 30 connections to amqp. Best Regards, Erik Olof Gunnar Andersson Technical Lead, Senior Cloud Engineer From: Ionut Biru Sent: Monday, January 4, 2021 4:07 AM To: Erik Olof Gunnar Andersson Cc: feilong ; openstack-discuss Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi Erik, Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ I have kubernetes 12.0.1 installed in env. On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson > wrote: Maybe something similar to this? https://github.com/kubernetes-client/python/issues/1158 What does lsof say? ________________________________ From: Erik Olof Gunnar Andersson > Sent: Saturday, January 2, 2021 4:54 PM To: Ionut Biru >; feilong > Cc: openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Are you sure you aren't just looking at the connection pool expanding? Each worker has a max number of connections it can use. Maybe look at lowering rpc_conn_pool_size. By default I believe each worker might create a pool of up to 30 connections. Looking at the code it could also be have something to do with the k8s client. Since it creates a new instance each time it does an health check. What version of the k8s client do you have installed? ________________________________ From: Ionut Biru > Sent: Tuesday, December 29, 2020 2:20 PM To: feilong > Cc: openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi, Not sure if my suspicion is true but I think for each update a new notifier is prepared and used without closing the connection but my understanding of oslo is nonexistent. https://opendev.org/openstack/magnum/src/branch/master/magnum/conductor/utils.py#L147 https://opendev.org/openstack/magnum/src/branch/master/magnum/common/rpc.py#L173 On Tue, Dec 29, 2020 at 11:52 PM Ionut Biru > wrote: Hi Feilong, I found out that each time the update_health_status periodic task is run, a new connection(for each uwsgi) is made to rabbitmq. 
root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 229 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 234 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 238 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 241 root at ctrl1cj-magnum-container-7a7a412a:~# netstat -npt | grep 5672 | wc -l 244 Not sure Dec 29 21:51:22 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:22.024 262800 DEBUG magnum.service.periodic [req-3b495326-cf80-481e-b3c6-c741f05b7f0e - - - - -] Dec 29 21:51:22 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:22.024 262800 DEBUG oslo_service.periodic_task [-] Running periodic task MagnumPeriodicTasks.sync Dec 29 21:51:16 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262804]: 2020-12-29 21:51:16.462 262804 DEBUG magnum.conductor.handlers.cluster_conductor [req-284ac12b-d76a-4e50-8e74-5bfb Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.573 262800 DEBUG magnum.service.periodic [-] Status for cluster 118 updated to HEALTHY ({'api' Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262805]: 2020-12-29 21:51:15.572 262805 DEBUG magnum.conductor.handlers.cluster_conductor [req-3fc29ee9-4051-42e7-ae19-3a49 Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.572 262800 DEBUG magnum.service.periodic [-] Status for cluster 121 updated to HEALTHY ({'api' Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.572 262800 DEBUG magnum.service.periodic [-] Status for cluster 122 updated to HEALTHY ({'api' Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.553 262800 DEBUG magnum.service.periodic [-] Updating health status for cluster 122 update_hea Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.544 262800 DEBUG magnum.service.periodic [-] Updating health status for cluster 121 update_hea Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.535 262800 DEBUG magnum.service.periodic [-] Updating health status for cluster 118 update_hea Dec 29 21:51:15 ctrl1cj-magnum-container-7a7a412a magnum-conductor[262800]: 2020-12-29 21:51:15.494 262800 DEBUG magnum.service.periodic [req-405b1fed-0b8a-4a60-b6ae-834f548b21d1 - - - 2020-12-29 21:51:14.082 [info] <0.953.1293> accepting AMQP connection <0.953.1293> (172.29.93.14:48474 -> 172.29.95.38:5672) 2020-12-29 21:51:14.083 [info] <0.953.1293> Connection <0.953.1293> (172.29.93.14:48474 -> 172.29.95.38:5672) has a client-provided name: uwsgi:262739:f86c0570-8739-4b74-8102-76b5357acd71 2020-12-29 21:51:14.084 [info] <0.953.1293> connection <0.953.1293> (172.29.93.14:48474 -> 172.29.95.38:5672 - uwsgi:262739:f86c0570-8739-4b74-8102-76b5357acd71): user 'magnum' authenticated and granted access to vhost '/magnum' 2020-12-29 21:51:15.560 [info] <0.1656.1283> accepting AMQP connection <0.1656.1283> (172.29.93.14:48548 -> 172.29.95.38:5672) 2020-12-29 21:51:15.561 [info] <0.1656.1283> Connection <0.1656.1283> (172.29.93.14:48548 -> 172.29.95.38:5672) has a client-provided name: uwsgi:262744:2c9792ab-9198-493a-970c-f6ccfd9947d3 2020-12-29 21:51:15.561 [info] <0.1656.1283> connection <0.1656.1283> (172.29.93.14:48548 -> 172.29.95.38:5672 - uwsgi:262744:2c9792ab-9198-493a-970c-f6ccfd9947d3): user 'magnum' 
authenticated and granted access to vhost '/magnum' On Tue, Dec 22, 2020 at 4:12 AM feilong > wrote: Hi Ionut, I didn't see this before on our production. Magnum auto healer just simply sends a POST request to Magnum api to update the health status. So I would suggest write a small script or even use curl to see if you can reproduce this firstly. On 19/12/20 2:27 am, Ionut Biru wrote: Hi again, I failed to mention that is stable/victoria with couples of patches from review. Ignore the fact that in logs it shows the 19.1.4 version in venv path. On Fri, Dec 18, 2020 at 3:22 PM Ionut Biru > wrote: Hi guys, I have an issue with magnum api returning an error after a while: Server-side error: "[('system library', 'fopen', 'Too many open files'), ('BIO routines', 'BIO_new_file', 'system lib'), ('x509 certificate routines', 'X509_load_cert_crl_file', 'system lib')]" Log file: https://paste.xinu.at/6djE/ This started to appear after I enabled the template auto_healing_controller = magnum-auto-healer, magnum_auto_healer_tag = v1.19.0. Currently, I only have 4 clusters. After that the API is in error state and doesn't work unless I restart it. -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From eandersson at blizzard.com Mon Jan 4 19:52:58 2021 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Mon, 4 Jan 2021 19:52:58 +0000 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: Sure looks like RabbitMQ. How many workers do have you configured? Could you try to changing the uwsgi configuration to workers=1 (or processes=1) and then see if it goes beyond 30 connections to amqp. From: Ionut Biru Sent: Monday, January 4, 2021 4:07 AM To: Erik Olof Gunnar Andersson Cc: feilong ; openstack-discuss Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi Erik, Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ I have kubernetes 12.0.1 installed in env. On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson > wrote: Maybe something similar to this? https://github.com/kubernetes-client/python/issues/1158 What does lsof say? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Mon Jan 4 21:24:07 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 4 Jan 2021 16:24:07 -0500 Subject: [tc] weekly meeting Message-ID: Hi there, Hope everyone enjoyed the holidays! Here's an update on what happened in the OpenStack TC these past few days. You can get more information by checking for changes in openstack/governance repository. 
# Patches ## Open Reviews - Add Resolution of TC stance on the OpenStackClient | https://review.opendev.org/c/openstack/governance/+/759904 - Remove Karbor project team | https://review.opendev.org/c/openstack/governance/+/767056 - Add glance-tempest-plugin to Glance | https://review.opendev.org/c/openstack/governance/+/767666 ## Other Reminders - Our next TC Weekly meeting is scheduled for January 7th at 1500 UTC. If you would like to add topics for discussion, please go to https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting and fill out your suggestions by Wednesday, January 6th, at 2100 UTC. Thanks for reading! Mohammed & Kendall -- Mohammed Naser VEXXHOST, Inc. From kennelson11 at gmail.com Mon Jan 4 22:32:13 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 4 Jan 2021 14:32:13 -0800 Subject: [SIG] [First Contact] First Meeting of the Year! Message-ID: Hello! We will be holding our meeting tomorrow (at 00:00 UTC in #openstack-upstream-institute) with a pretty short agenda[1] so feel free to add as you see fit! One thing I do want to try is to set some concrete goal or two for this year to work towards. So if you're new to the SIG this is the perfect time to come meet us and get involved! -Kendall (diablo_rojo) [1] https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda -------------- next part -------------- An HTML attachment was scrubbed... URL: From deepa.kr at fingent.com Tue Jan 5 06:41:10 2021 From: deepa.kr at fingent.com (Deepa KR) Date: Tue, 5 Jan 2021 12:11:10 +0530 Subject: [Ussuri] Auto shutdown VM In-Reply-To: <20210104094448.7ukywqmrnasyufqm@lyarwood-laptop.usersys.redhat.com> References: <158621605687826@mail.yandex.com> <14800219-BC67-4B94-88CE-81FE5D8A6AB2@fingent.com> <20210104094448.7ukywqmrnasyufqm@lyarwood-laptop.usersys.redhat.com> Message-ID: Hello Lee Below is the only information from qemu logs. 
2020-12-22T07:20:10.251883Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48CH).vmx-invept-single-context -noglobals [bit 43] 2020-12-22T07:20:10.251887Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(480H).vmx-ins-outs [bit 54] 2020-12-22T07:20:10.251890Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(480H).vmx-true-ctls [bit 55] 2020-12-22T07:20:10.251894Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(491H).vmx-eptp-switching [bit 0 ] *2021-01-05 01:39:20.896+0000: shutting down, reason=crashed <<< no logs before or after this from Timestamp* 2021-01-05 06:21:37.682+0000: starting up libvirt version: 6.0.0, package: 0ubuntu8~cloud0 (Openstack Ubuntu Testing Bot Mon, 20 Apr 2020 18:44:06 +0000), qemu version: 4.2.0Debian 1:4.2-3ubuntu6~cloud0, kernel: 4.15.0-106-generi c, hostname: fgshwbucehyp02.maas LC_ALL=C \ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin \ HOME=/var/lib/libvirt/qemu/domain-147-instance-0000057b \ XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-147-instance-0000057b/.local/share \ XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-147-instance-0000057b/.cache \ XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-147-instance-0000057b/.config \ These VMs are migrated from VMware to Openstack (if this tip helps in someway) May VMs get auto shutdown randomly instance-0000032c.log:2021-01-03 17:30:32.980+0000: shutting down, reason=crashed instance-000004d3.log:2020-12-10 21:20:45.807+0000: shutting down, reason=crashed instance-000004d3.log:2020-12-13 21:06:43.683+0000: shutting down, reason=crashed instance-000004d3.log:2020-12-18 23:02:30.727+0000: shutting down, reason=crashed instance-000004d3.log:2020-12-23 16:39:22.194+0000: shutting down, reason=crashed instance-000004d3.log:2020-12-29 23:43:03.554+0000: shutting down, reason=crashed instance-000004d3.log:2021-01-03 19:13:08.850+0000: shutting down, reason=crashed instance-0000057b.log:2020-12-07 04:20:44.540+0000: shutting down, reason=crashed instance-0000057b.log:2020-12-09 18:22:08.652+0000: shutting down, reason=crashed instance-0000057b.log:2020-12-12 19:37:34.824+0000: shutting down, reason=crashed instance-0000057b.log:2020-12-15 16:38:50.268+0000: shutting down, reason=crashed instance-0000057b.log:2020-12-16 17:31:29.975+0000: shutting down, reason=crashed instance-0000057b.log:2020-12-21 18:55:58.644+0000: shutting down, reason=crashed instance-0000057b.log:2021-01-05 01:39:20.896+0000: shutting down, reason=crashed Not sure what could i do more to prevent the auto shutdown On Mon, Jan 4, 2021 at 3:14 PM Lee Yarwood wrote: > On 04-01-21 14:53:59, Deepa KR wrote: > > Hi All > > > > Any suggestions highly appreciated. > > We are facing these issues very frequently now . > > Can you pastebin the domain QEMU log from > /var/log/libvirt/qemu/$domain.log? That should detail why the domain is > crashing. > > I'd also recommend reviewing the following docs from libvirt on how to > enable debug logs etc: > > https://libvirt.org/kbase/debuglogs.html > > > On Mon, Nov 23, 2020 at 10:03 AM Deepa KR wrote: > > > Hi > > > > > > Can see only shutting down, reason=crashed in libvirt/qemu logs > .Nothing > > > else . > > > Couldn't find anything else in neutron logs as well > > > > > > > > > On Wed, Nov 18, 2020 at 5:23 PM Deepa KR wrote: > > > > > >> Thanks for pointing out. 
Have 70 + vms and has issue with just 3 vms > so i > > >> am really confused > > >> > > >> Sent from my iPhone > > >> > > >> On 18-Nov-2020, at 1:56 PM, rui zang wrote: > > >> > > >>  > > >> [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *Received unexpected > > >> event network-vif-unplugged-e97839a1-bbc4-4d26-af30-768ca3630ce9 for > > >> instance with vm_state active and task_state None.* > > >> > > >> > > >> Clearly the network virtual interface was somehow removed or > unplugged. > > >> What you should look into is OVS or whatever the network solution you > are > > >> using. > > >> > > >> > > >> 18.11.2020, 01:44, "Deepa KR" : > > >> > > >>  Hi All > > >> > > >> We have a Openstack setup with the *Ussuri Version* and I am > *regularly > > >> facing auto shutdown of a few VMs (ubuntu16.04) randomly* . > > >> If I restart then the instance is back . > > >> > > >> From logs I was able to see the messages below . > > >> > > >> WARNING nova.compute.manager > [req-2a21d455-ac04-44aa-b248-4776e5109013 > > >> 813f3fb52c434e38991bb90aa4771541 10b5279cb6f64ca19871f132a2cee1a3 - > default > > >> default] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *Received > > >> unexpected event > network-vif-unplugged-e97839a1-bbc4-4d26-af30-768ca3630ce9 > > >> for instance with vm_state active and task_state None.* > > >> INFO nova.compute.manager [-] [instance: > > >> 28cd861c-ef15-444a-a902-9cac643c72b5] VM Stopped (Lifecycle Event) > > >> INFO nova.compute.manager [req-8261f607-4f1e-459d-85d4-e269694dd477 > - - > > >> - - -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *During > > >> _sync_instance_power_state the DB power_state (1) does not match the > > >> vm_power_state from the hypervisor (4). Updating power_state in the > DB to > > >> match the hypervisor.* > > >> syslog:Nov 13 07:01:07 fgshwbucehyp04 nova-compute[2680204]: > 2020-11-13 > > >> 07:01:07.684 2680204 WARNING nova.compute.manager > > >> [req-8261f607-4f1e-459d-85d4-e269694dd477 - - - - -] [instance: > > >> 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance shutdown by itself. > > >> Calling the stop API. Current vm_state: active, current task_state: > None, > > >> original DB power_state: 1, current VM power_state: 4* > > >> nova.compute.manager [req-8261f607-4f1e-459d-85d4-e269694dd477 - - > - - > > >> -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance is > already > > >> powered off in the hypervisor when stop is called.* > > >> nova.virt.libvirt.driver [req-8261f607-4f1e-459d-85d4-e269694dd477 - > - - > > >> - -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance > already > > >> shutdown.* > > >> nova.virt.libvirt.driver [-] [instance: > > >> 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance destroyed > successfully.* > > >> nova.compute.manager [req-7a0a0d03-e286-42f0-9e36-38a432f236f3 > > >> d9ca03b9d0884d51a26a39b6c82f02eb 304d859c43df4de4944ca5623f7f455c - > default > > >> default] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] Get console > output > > >> nova.virt.libvirt.driver [-] [instance: > > >> 28cd861c-ef15-444a-a902-9cac643c72b5] *Instance destroyed > successfully.* > > >> > > >> I searched a few blogs and forums but couldn't find a solution to it . > > >> > > >> Few mentioned to add s*ync_power_state_interval=-1 in * > > >> */etc/nova/nova.conf *.But understood that this will help only when > nova > > >> stops vm. > > >> But in this case vm itself is shutting down (*Instance shutdown by > > >> itself. Calling the stop API*) > > >> Also no memory issue in VM nor the hypervisor. 
> > >> Also did apt-get upgrade . > > >> > > >> It would be great if anyone can shed light to this issue. > > >> > > >> Regards, > > >> Deepa K R > > >> > > >> Sent from my iPhone > > >> > > >> > > > > > > -- > > > > > > > > > Regards, > > > > > > Deepa K R | DevOps Team Lead > > > > > > > > > > > > USA | UAE | INDIA | AUSTRALIA > > > > > > > > > > > > > -- > > > > > > Regards, > > > > Deepa K R | DevOps Team Lead > > > > > > > > USA | UAE | INDIA | AUSTRALIA > > > > > -- > Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 > 2D76 > -- Regards, Deepa K R | DevOps Team Lead USA | UAE | INDIA | AUSTRALIA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature-1.gif Type: image/gif Size: 566 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: logo_for_signature.png Type: image/png Size: 10509 bytes Desc: not available URL: From deepa.kr at fingent.com Tue Jan 5 07:25:31 2021 From: deepa.kr at fingent.com (Deepa KR) Date: Tue, 5 Jan 2021 12:55:31 +0530 Subject: [Ussuri] Auto shutdown VM In-Reply-To: References: <9A9C606C-3658-473C-83DC-6305D496A6CD@fingent.com> Message-ID: Hello Pierre Yeah have checked hypervisor too ..No error related to kernel out-of-memory (OOM) in hypervisor On Mon, Jan 4, 2021 at 3:45 PM Pierre Riteau wrote: > Hi Deepa, > > You mention checking dmesg *inside* the VM. But have you checked dmesg > on the hypervisor? It's possible your qemu-kvm processes are > terminated by the kernel out-of-memory (OOM) killer because they try > to allocate more memory than available. > > Best wishes, > Pierre Riteau (priteau) > > > On Wed, 18 Nov 2020 at 03:44, Deepa KR wrote: > > > > Hello Mohammed > > > > Thanks for the response. > > No error message inside vm. Have checked dmesg, syslog etc . > > > > I mentioned vm is shutting down itself because of error messages > Instance shutdown by itself. Calling the stop API. Current vm_state: > active, current task_state: None, original DB power_state: 1, current VM > power_state: 4 from hypervisor. > > > > Sent from my iPhone > > > > > On 17-Nov-2020, at 11:35 PM, Mohammed Naser > wrote: > > > > > > On Tue, Nov 17, 2020 at 12:46 PM Deepa KR > wrote: > > >> > > >>  Hi All > > >> > > >> We have a Openstack setup with the Ussuri Version and I am regularly > facing auto shutdown of a few VMs (ubuntu16.04) randomly . > > >> If I restart then the instance is back . > > >> > > >> From logs I was able to see the messages below . > > >> > > >> WARNING nova.compute.manager > [req-2a21d455-ac04-44aa-b248-4776e5109013 813f3fb52c434e38991bb90aa4771541 > 10b5279cb6f64ca19871f132a2cee1a3 - default default] [instance: > 28cd861c-ef15-444a-a902-9cac643c72b5] Received unexpected event > network-vif-unplugged-e97839a1-bbc4-4d26-af30-768ca3630ce9 for instance > with vm_state active and task_state None. > > >> INFO nova.compute.manager [-] [instance: > 28cd861c-ef15-444a-a902-9cac643c72b5] VM Stopped (Lifecycle Event) > > >> INFO nova.compute.manager [req-8261f607-4f1e-459d-85d4-e269694dd477 - > - - - -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] During > _sync_instance_power_state the DB power_state (1) does not match the > vm_power_state from the hypervisor (4). Updating power_state in the DB to > match the hypervisor. 
> > >> syslog:Nov 13 07:01:07 fgshwbucehyp04 nova-compute[2680204]: > 2020-11-13 07:01:07.684 2680204 WARNING nova.compute.manager > [req-8261f607-4f1e-459d-85d4-e269694dd477 - - - - -] [instance: > 28cd861c-ef15-444a-a902-9cac643c72b5] Instance shutdown by itself. Calling > the stop API. Current vm_state: active, current task_state: None, original > DB power_state: 1, current VM power_state: 4 > > >> nova.compute.manager [req-8261f607-4f1e-459d-85d4-e269694dd477 - - - > - -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] Instance is already > powered off in the hypervisor when stop is called. > > >> nova.virt.libvirt.driver [req-8261f607-4f1e-459d-85d4-e269694dd477 - > - - - -] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] Instance already > shutdown. > > >> nova.virt.libvirt.driver [-] [instance: > 28cd861c-ef15-444a-a902-9cac643c72b5] Instance destroyed successfully. > > >> nova.compute.manager [req-7a0a0d03-e286-42f0-9e36-38a432f236f3 > d9ca03b9d0884d51a26a39b6c82f02eb 304d859c43df4de4944ca5623f7f455c - default > default] [instance: 28cd861c-ef15-444a-a902-9cac643c72b5] Get console output > > >> nova.virt.libvirt.driver [-] [instance: > 28cd861c-ef15-444a-a902-9cac643c72b5] Instance destroyed successfully. > > >> > > >> I searched a few blogs and forums but couldn't find a solution to it . > > >> > > >> Few mentioned to add sync_power_state_interval=-1 in > /etc/nova/nova.conf .But understood that this will help only when nova > stops vm. > > >> But in this case vm itself is shutting down (Instance shutdown by > itself. Calling the stop API) > > >> Also no memory issue in VM nor the hypervisor. > > >> Also did apt-get upgrade . > > >> > > >> It would be great if anyone can shed light to this issue. > > > > > > You should check and see if there is anything inside `dmesg` that > > > shows the VM dying (any segfaults?). Also, it's possible that the VM > > > itself is shutting off so maybe you should check ni its logs. > > > > > >> Regards, > > >> Deepa K R > > >> > > >> Sent from my iPhone > > > > > > > > > > > > -- > > > Mohammed Naser > > > VEXXHOST, Inc. > > > -- Regards, Deepa K R | DevOps Team Lead USA | UAE | INDIA | AUSTRALIA -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature-1.gif Type: image/gif Size: 566 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: logo_for_signature.png Type: image/png Size: 10509 bytes Desc: not available URL: From cjeanner at redhat.com Tue Jan 5 07:43:05 2021 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 5 Jan 2021 08:43:05 +0100 Subject: [tripleo] Package dependencies, podman, dnf modules Message-ID: <6ab7507a-b5b8-c124-3e04-c3d410de6ff7@redhat.com> Hello there, Since we introduced Podman instead of Docker in tripleo, we are running after the right version. For now, the version we should use is provided by a specific container-tools module stream (container-tools:2.0). The default stream being container-tools:rhel8, we might end with a deployed undercloud running the wrong podman version, leading to some nice issues, especially in a SELinux-enabled environment. Currently, the main dependency tree is: python3-tripleoclient requires podman That's mostly all. 
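(For reference, getting the intended version onto a host today means switching module streams by hand, roughly like this — the exact target stream is of course subject to change over time:

sudo dnf module disable -y container-tools
sudo dnf module enable -y container-tools:2.0
sudo dnf install -y podman

and that is precisely the kind of manual step people skip, or automate without checking the doc.)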
While we can, of course, edit the python3-tripleoclient spec file in order to pin the precise podman version (or, at least, set upper and lower constraints) in order to target the version provided by the right stream, it's not ideal: - package install error will lead to more questions (since ppl might skip doc for #reason) - if we change the target version, we'll need to ensure it's present in a stream (and, well, update the doc if we need to switch the stream) - and probably some other reasons. Since we can NOT depend on a specific stream in rpm (for instance, we can't put "Requires @container-tools:2.0/podman" like we might in ansible), we're a bit stuck. An ugly hack is tested in the spec file[1], using a bash thing in order to check the activated stream, but this is not ideal either, especially since it makes the whole package building/checking fail during the "mock" stage. In order to make it pass, we'd need to modify the whole RDO in order to take into account stream switching. This isn't impossible, but it leads to other concerns, especially when it comes to "hey we need to switch the stream for ". And I'm pretty sure no one will want to dig into it, for good reasons ;). This leads to a last possibility: drop the "Requires podman" from tripleo dependency tree, and rely on tripleo-ansible to actually enable the correct module, and install podman. This means podman will be installed during the undercloud deploy, for instance as a host_prep_tasks (step_0). While I'm not that happy with this idea, it's probably not as bad as the other hacks we've tested so far, and would, at last, prevent once for all the usual "it's not working" due to a wrong podman version (and, yes, we get plenty of them, especially in the OSP version). What do you think? Any other way of solving this issue? Remember, we're talking about "ensuring we get the right version, coming from a specific stream" and, especially, "ensure operator can't install the wrong one" (if they don't follow the doc, of if they are using automation that doesn't check actual requirements in the doc"... Thank you for your thoughts! Cheers, C. [1] https://review.rdoproject.org/r/#/c/31411/ -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From ionut at fleio.com Tue Jan 5 08:35:42 2021 From: ionut at fleio.com (Ionut Biru) Date: Tue, 5 Jan 2021 10:35:42 +0200 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: Hi, I tried with process=1 and it reached 1016 connections to rabbitmq. lsof https://paste.xinu.at/jGg/ i think it goes into error when it reaches 1024 file descriptors. I'm out of ideas of how to resolve this. I only have 3 clusters available and it's kinda weird and It doesn't scale. On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson < eandersson at blizzard.com> wrote: > Sure looks like RabbitMQ. How many workers do have you configured? > > > > Could you try to changing the uwsgi configuration to workers=1 (or > processes=1) and then see if it goes beyond 30 connections to amqp. 
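(Side note, mostly to rule out the descriptor ceiling itself while debugging: the limit the running process actually has can be checked, and raised if the service runs under systemd — pid and unit name below are only placeholders:

# what the uwsgi/magnum-api process is allowed right now
grep 'open files' /proc/<uwsgi-pid>/limits

# raise it with a systemd drop-in, then daemon-reload + restart the unit:
# [Service]
# LimitNOFILE=4096

That only buys headroom though, it doesn't explain why the connections keep piling up.)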
> > > > *From:* Ionut Biru > *Sent:* Monday, January 4, 2021 4:07 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* feilong ; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > Hi Erik, > > > > Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ > > > > > I have kubernetes 12.0.1 installed in env. > > > > > > On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Maybe something similar to this? > https://github.com/kubernetes-client/python/issues/1158 > > > What does lsof say? > > > > > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Tue Jan 5 08:56:47 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 5 Jan 2021 10:56:47 +0200 Subject: [tripleo] Package dependencies, podman, dnf modules In-Reply-To: <6ab7507a-b5b8-c124-3e04-c3d410de6ff7@redhat.com> References: <6ab7507a-b5b8-c124-3e04-c3d410de6ff7@redhat.com> Message-ID: On Tue, Jan 5, 2021 at 9:44 AM Cédric Jeanneret wrote: > Hello there, > > Since we introduced Podman instead of Docker in tripleo, we are running > after the right version. > For now, the version we should use is provided by a specific > container-tools module stream (container-tools:2.0). The default stream > being container-tools:rhel8, we might end with a deployed undercloud > running the wrong podman version, leading to some nice issues, > especially in a SELinux-enabled environment. > o/ Tengu thanks for socialising that issue, it's a fun one for sure... first question is _where_ is that the default and it is mostly rhetorical as I can guess. In general though i think it's worth filing a LP bug with some of these details so we can also track whatever is decided as a fix here. > Currently, the main dependency tree is: > python3-tripleoclient requires podman > That's mostly all. > > While we can, of course, edit the python3-tripleoclient spec file in > order to pin the precise podman version (or, at least, set upper and > lower constraints) in order to target the version provided by the right > stream, it's not ideal: > - package install error will lead to more questions (since ppl might > skip doc for #reason) > - if we change the target version, we'll need to ensure it's present in > a stream (and, well, update the doc if we need to switch the stream) > - and probably some other reasons. > > Since we can NOT depend on a specific stream in rpm (for instance, we > can't put "Requires @container-tools:2.0/podman" like we might in > ansible), we're a bit stuck. > > An ugly hack is tested in the spec file[1], using a bash thing in order > to check the activated stream, but this is not ideal either, especially > since it makes the whole package building/checking fail during the > "mock" stage. In order to make it pass, we'd need to modify the whole > RDO in order to take into account stream switching. This isn't > impossible, but it leads to other concerns, especially when it comes to > "hey we need to switch the stream for ". And I'm pretty sure no > one will want to dig into it, for good reasons ;). > > This leads to a last possibility: > drop the "Requires podman" from tripleo dependency tree, and rely on > so how does that work in practice though I mean what about python-tripleoclient. Quick grep just now tells me at least the heat-launcher is directly using podman. 
How do we drop that requirement. > tripleo-ansible to actually enable the correct module, and install podman. > This means podman will be installed during the undercloud deploy, for > instance as a host_prep_tasks (step_0). > > I am missing how using tripleo-ansible helps us with the stream question. Do you mean add logic in tripleo-ansible that tests the stream and sets the correct version ? While I'm not that happy with this idea, it's probably not as bad as the > other hacks we've tested so far, and would, at last, prevent once for > all the usual "it's not working" due to a wrong podman version (and, > yes, we get plenty of them, especially in the OSP version). > > What do you think? Any other way of solving this issue? Remember, we're > talking about "ensuring we get the right version, coming from a specific > stream" and, especially, "ensure operator can't install the wrong one" > (if they don't follow the doc, of if they are using automation that > doesn't check actual requirements in the doc"... > in general i lean more towards 'ansible tasks' rather than 'bash thing in the specfile' to solve that. Either way though this is us reacting (with some fix/hack/workaround) to a problem that isn't anything to do with TripleO or even OpenStack but rather an OS/packaging issue. So do we have any longer term plans on what you mentioned on IRC just now regarding getting module support for RPM deps (do you mean something in the spec file here?) or is that a complete dead end? thanks, marios > Thank you for your thoughts! > > Cheers, > > C. > > > [1] https://review.rdoproject.org/r/#/c/31411/ > > > -- > Cédric Jeanneret (He/Him/His) > Sr. Software Engineer - OpenStack Platform > Deployment Framework TC > Red Hat EMEA > https://www.redhat.com/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Tue Jan 5 08:59:47 2021 From: strigazi at gmail.com (Spyros Trigazis) Date: Tue, 5 Jan 2021 09:59:47 +0100 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: On Tue, Jan 5, 2021 at 9:36 AM Ionut Biru wrote: > Hi, > > I tried with process=1 and it reached 1016 connections to rabbitmq. > lsof > https://paste.xinu.at/jGg/ > > i think it goes into error when it reaches 1024 file descriptors. > > I'm out of ideas of how to resolve this. I only have 3 clusters available > and it's kinda weird and It doesn't scale. > No issues here with 100s of clusters. Not sure what doesn't scale. * Maybe your rabbit is flooded with notifications that are not consumed? * You can use way more than 1024 file descriptors, maybe 2^10? Spyros > > On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > >> Sure looks like RabbitMQ. How many workers do have you configured? >> >> >> >> Could you try to changing the uwsgi configuration to workers=1 (or >> processes=1) and then see if it goes beyond 30 connections to amqp. >> >> >> >> *From:* Ionut Biru >> *Sent:* Monday, January 4, 2021 4:07 AM >> *To:* Erik Olof Gunnar Andersson >> *Cc:* feilong ; openstack-discuss < >> openstack-discuss at lists.openstack.org> >> *Subject:* Re: [magnum][api] Error system library fopen too many open >> files with magnum-auto-healer >> >> >> >> Hi Erik, >> >> >> >> Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ >> >> >> >> >> I have kubernetes 12.0.1 installed in env. 
>> >> >> >> >> >> On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson < >> eandersson at blizzard.com> wrote: >> >> Maybe something similar to this? >> https://github.com/kubernetes-client/python/issues/1158 >> >> >> What does lsof say? >> >> >> >> >> > > > -- > Ionut Biru - https://fleio.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eandersson at blizzard.com Tue Jan 5 09:22:57 2021 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Tue, 5 Jan 2021 09:22:57 +0000 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> , Message-ID: Yea - tested locally as well and wasn't able to reproduce it either. I changed the health service job to run every second and maxed out at about 42 connections to RabbitMQ with two conductor workers. /etc/magnum/magnun.conf [conductor] workers = 2 ________________________________ From: Spyros Trigazis Sent: Tuesday, January 5, 2021 12:59 AM To: Ionut Biru Cc: Erik Olof Gunnar Andersson ; feilong ; openstack-discuss Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer On Tue, Jan 5, 2021 at 9:36 AM Ionut Biru > wrote: Hi, I tried with process=1 and it reached 1016 connections to rabbitmq. lsof https://paste.xinu.at/jGg/ i think it goes into error when it reaches 1024 file descriptors. I'm out of ideas of how to resolve this. I only have 3 clusters available and it's kinda weird and It doesn't scale. No issues here with 100s of clusters. Not sure what doesn't scale. * Maybe your rabbit is flooded with notifications that are not consumed? * You can use way more than 1024 file descriptors, maybe 2^10? Spyros On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson > wrote: Sure looks like RabbitMQ. How many workers do have you configured? Could you try to changing the uwsgi configuration to workers=1 (or processes=1) and then see if it goes beyond 30 connections to amqp. From: Ionut Biru > Sent: Monday, January 4, 2021 4:07 AM To: Erik Olof Gunnar Andersson > Cc: feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi Erik, Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ I have kubernetes 12.0.1 installed in env. On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson > wrote: Maybe something similar to this? https://github.com/kubernetes-client/python/issues/1158 What does lsof say? -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionut at fleio.com Tue Jan 5 09:50:54 2021 From: ionut at fleio.com (Ionut Biru) Date: Tue, 5 Jan 2021 11:50:54 +0200 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: Hi, Here is my config. maybe something is fishy. I did have around 300 messages in the queue in notification.info and notification.err and I purged them. https://paste.xinu.at/woMt/ On Tue, Jan 5, 2021 at 11:23 AM Erik Olof Gunnar Andersson < eandersson at blizzard.com> wrote: > Yea - tested locally as well and wasn't able to reproduce it either. I > changed the health service job to run every second and maxed out at about > 42 connections to RabbitMQ with two conductor workers. 
> > /etc/magnum/magnun.conf > > [conductor] > workers = 2 > > > ------------------------------ > *From:* Spyros Trigazis > *Sent:* Tuesday, January 5, 2021 12:59 AM > *To:* Ionut Biru > *Cc:* Erik Olof Gunnar Andersson ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > On Tue, Jan 5, 2021 at 9:36 AM Ionut Biru wrote: > > Hi, > > I tried with process=1 and it reached 1016 connections to rabbitmq. > lsof > https://paste.xinu.at/jGg/ > > > i think it goes into error when it reaches 1024 file descriptors. > > I'm out of ideas of how to resolve this. I only have 3 clusters available > and it's kinda weird and It doesn't scale. > > > No issues here with 100s of clusters. Not sure what doesn't scale. > > * Maybe your rabbit is flooded with notifications that are not consumed? > * You can use way more than 1024 file descriptors, maybe 2^10? > > Spyros > > > On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Sure looks like RabbitMQ. How many workers do have you configured? > > > > Could you try to changing the uwsgi configuration to workers=1 (or > processes=1) and then see if it goes beyond 30 connections to amqp. > > > > *From:* Ionut Biru > *Sent:* Monday, January 4, 2021 4:07 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* feilong ; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > Hi Erik, > > > > Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ > > > > > I have kubernetes 12.0.1 installed in env. > > > > > > On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Maybe something similar to this? > https://github.com/kubernetes-client/python/issues/1158 > > > What does lsof say? > > > > > > > > -- > Ionut Biru - https://fleio.com > > > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue Jan 5 10:12:56 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 5 Jan 2021 11:12:56 +0100 Subject: [infra][magnum][ci] Issues installing bashate and coverage In-Reply-To: <20210104180816.prb2x3oyuxi4jpkq@yuggoth.org> References: <20201208171248.6dffedoymqj7dgkr@yuggoth.org> <20210104180816.prb2x3oyuxi4jpkq@yuggoth.org> Message-ID: On Mon, 4 Jan 2021 at 19:18, Jeremy Stanley wrote: > > On 2021-01-04 18:41:06 +0100 (+0100), Pierre Riteau wrote: > [...] > > I cannot reproduce the issue locally: `tox -e lower-constraints` > > works fine, and so do your recommended commands with the foo venv. > > I've tried on multiple operating systems, including Ubuntu 20.04 > > (with Python 3.8 and pip 20.3.3). > [...] > > Do you know what I could be missing for reproducing this locally? > [...] > > At this point it may be easier to reproduce with the latest version > of virtualenv (20.2.2 at time of writing) since it started pulling > in pip 20.3.1 on its own. Just upgrade virtualenv and try your tox > command again and see if things are any different for you. It > reproduces exactly on my workstation with latest tox/virtualenv just > running `tox -e lower-constraints` in a current blazar repo master > branch checkout. > > As for the doubled-constraints I mentioned in the earlier post, > looks like blazar is hitting that as well. 
To correct it you need to > move your upper-constraints addition into the default testenv deps > list rather than adding it to the default install_command string. > Right now it's getting inherited in testenv:lower-constraints too > which can't be a good thing. Compare the approach in blazar with > that of nova: > > https://opendev.org/openstack/blazar/src/commit/cb7c142a890e84a2b3171395832d9839b2d66a63/tox.ini#L11 > > https://opendev.org/openstack/nova/src/commit/b0f241e5425c99866223bae4b404a4aa1abdfddf/tox.ini#L27 > > Changing it in my local blazar checkout gets farther, though it > looks like the old version of markupsafe you're trying to use may > not work with latest versions of setuptools (ImportError: cannot > import name 'Feature' from 'setuptools'). Hope that helps! > -- > Jeremy Stanley Hi Jeremy, Thank you very much, this helped a lot. Combining the install_command fix with the latest virtualenv, I managed to reproduce the job failure locally and updated outdated lower constraints. Somehow my patch was working locally while still failing in Zuul with "oslo-db 4.40.0 depends on alembic>=0.9.6", but after bumping alembic it now succeeds. Cheers, Pierre From marios at redhat.com Tue Jan 5 10:13:42 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 5 Jan 2021 12:13:42 +0200 Subject: [release] Status: RED - pip resolver issue with publish-openstack-releasenotes-python3 In-Reply-To: References: Message-ID: On Mon, Jan 4, 2021 at 7:25 PM Herve Beraud wrote: > Thanks all! > > Here we can track our advancement: > > https://review.opendev.org/q/topic:%22fix-relmgt-pip-doc%22+(status:open%20OR%20status:merged) > > Herve thanks very much for your efforts - I was just getting caught up on this to post the tripleo changes but I see you already beat me to it. I'll check your list to see if there are any more and will post. I'll bring this to the tripleo irc meeting today as well so we can get help merging those asap. thank you marios > Le lun. 4 janv. 2021 à 18:02, Radosław Piliszek < > radoslaw.piliszek at gmail.com> a écrit : > >> On Mon, Jan 4, 2021 at 4:34 PM Herve Beraud wrote: >> > >> > Here is the filtered list of projects that meet the conditions leading >> to the bug, and who should be fixed to completely solve our issue: >> > >> > ... >> > etcd3gw >> > ... >> > python-masakariclient >> > ... >> > >> > Notice that some of these projects aren't deliverables but if possible >> it could be worth fixing them too. >> > >> > These projects have an incompatibility between entries in their >> test-requirements.txt, and they're missing a doc/requirements.txt file. >> > >> > The more straightforward path to unlock our job >> "publish-openstack-releasenotes-python3" is to create a >> doc/requirements.txt file that only contains the needed dependencies to >> reduce the possibility of pip resolver issues. I personally think that we >> could use the latest allowed version of requirements (sphinx, reno, etc...). >> > >> > I propose to track the related advancement by using the >> "fix-relmgt-pip-doc" gerrit topic, when all the projects will be fixed we >> would be able to update our status. >> > >> > Also it could be worth fixing test-requirements.txt incompatibilities >> but this task is more on the projects teams sides and this task could be >> done with a follow up patch. >> > >> > Thoughts? >> >> Thanks, Hervé! >> >> Done for python-masakariclient in [1]. >> >> etcd3gw needs more love in general but I will have this split in mind. 
>> >> [1] https://review.opendev.org/c/openstack/python-masakariclient/+/769163 >> >> -yoctozepto >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eandersson at blizzard.com Tue Jan 5 10:21:22 2021 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Tue, 5 Jan 2021 10:21:22 +0000 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> , Message-ID: Sorry, being repetitive here, but maybe try adding this to your magnum config as well. If you have A LOT of cores it could add up to a crazy amount of connections. [conductor] workers = 2 ________________________________ From: Ionut Biru Sent: Tuesday, January 5, 2021 1:50 AM To: Erik Olof Gunnar Andersson Cc: Spyros Trigazis ; feilong ; openstack-discuss Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi, Here is my config. maybe something is fishy. I did have around 300 messages in the queue in notification.info and notification.err and I purged them. https://paste.xinu.at/woMt/ On Tue, Jan 5, 2021 at 11:23 AM Erik Olof Gunnar Andersson > wrote: Yea - tested locally as well and wasn't able to reproduce it either. I changed the health service job to run every second and maxed out at about 42 connections to RabbitMQ with two conductor workers. /etc/magnum/magnun.conf [conductor] workers = 2 ________________________________ From: Spyros Trigazis > Sent: Tuesday, January 5, 2021 12:59 AM To: Ionut Biru > Cc: Erik Olof Gunnar Andersson >; feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer On Tue, Jan 5, 2021 at 9:36 AM Ionut Biru > wrote: Hi, I tried with process=1 and it reached 1016 connections to rabbitmq. lsof https://paste.xinu.at/jGg/ i think it goes into error when it reaches 1024 file descriptors. I'm out of ideas of how to resolve this. I only have 3 clusters available and it's kinda weird and It doesn't scale. No issues here with 100s of clusters. Not sure what doesn't scale. * Maybe your rabbit is flooded with notifications that are not consumed? * You can use way more than 1024 file descriptors, maybe 2^10? Spyros On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson > wrote: Sure looks like RabbitMQ. How many workers do have you configured? Could you try to changing the uwsgi configuration to workers=1 (or processes=1) and then see if it goes beyond 30 connections to amqp. 
From: Ionut Biru > Sent: Monday, January 4, 2021 4:07 AM To: Erik Olof Gunnar Andersson > Cc: feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi Erik, Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ I have kubernetes 12.0.1 installed in env. On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson > wrote: Maybe something similar to this? https://github.com/kubernetes-client/python/issues/1158 What does lsof say? -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Tue Jan 5 10:48:09 2021 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 5 Jan 2021 11:48:09 +0100 Subject: [tripleo] Package dependencies, podman, dnf modules In-Reply-To: References: <6ab7507a-b5b8-c124-3e04-c3d410de6ff7@redhat.com> Message-ID: <2f0efe53-be85-b9d7-4f77-68b665cb50b5@redhat.com> On 1/5/21 9:56 AM, Marios Andreou wrote: > > > On Tue, Jan 5, 2021 at 9:44 AM Cédric Jeanneret > wrote: > > Hello there, > > Since we introduced Podman instead of Docker in tripleo, we are running > after the right version. > For now, the version we should use is provided by a specific > container-tools module stream (container-tools:2.0). The default stream > being container-tools:rhel8, we might end with a deployed undercloud > running the wrong podman version, leading to some nice issues, > especially in a SELinux-enabled environment. > > > o/ Tengu thanks for socialising that issue, it's a fun one for sure... > > first question is _where_ is that the default and it is mostly > rhetorical as I can guess. In general though i think it's worth filing a > LP bug with some of these details so we can also track whatever is > decided as a fix here. Right - always hard to find the right starting point ;). Though a mail would be a nice thing in order to get directions/ideas - and, hey, you just pointed some already! Here's the LP: https://bugs.launchpad.net/tripleo/+bug/1910217 > > > Currently, the main dependency tree is: > python3-tripleoclient requires podman > That's mostly all. > > While we can, of course, edit the python3-tripleoclient spec file in > order to pin the precise podman version (or, at least, set upper and > lower constraints) in order to target the version provided by the right > stream, it's not ideal: > - package install error will lead to more questions (since ppl might > skip doc for #reason) > - if we change the target version, we'll need to ensure it's present in > a stream (and, well, update the doc if we need to switch the stream) > - and probably some other reasons. > > Since we can NOT depend on a specific stream in rpm (for instance, we > can't put "Requires @container-tools:2.0/podman" like we might in > ansible), we're a bit stuck. > > An ugly hack is tested in the spec file[1], using a bash thing in order > to check the activated stream, but this is not ideal either, especially > since it makes the whole package building/checking fail during the > "mock" stage. In order to make it pass, we'd need to modify the whole > RDO in order to take into account stream switching. This isn't > impossible, but it leads to other concerns, especially when it comes to > "hey we need to switch the stream for ". And I'm pretty sure no > one will want to dig into it, for good reasons ;). 
> > This leads to a last possibility: > drop the "Requires podman" from tripleo dependency tree, and rely on > > > so how does that work in practice though I mean what about > python-tripleoclient. Quick grep just now tells me at least the > heat-launcher is directly using podman. How do we drop that requirement. Good catch. Maybe by ordering a bit things and calling the podman install bit before calling the heat-launcher? Might be ugly and, well, against some of the common practices, but at least it would ensure we get what we actually want regarding versions and sources. >   > > tripleo-ansible to actually enable the correct module, and install > podman. > This means podman will be installed during the undercloud deploy, for > instance as a host_prep_tasks (step_0). > > > I am missing how using tripleo-ansible helps us with the stream > question. Do you mean add logic in tripleo-ansible that tests the stream > and sets the correct version ? exactly. Ansible is able to switch module streams, and install packages. Basically, we'd "just" need to extend a bit this role/task: https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_podman/tasks/tripleo_podman_install.yml That would make things cleaner, nicer, and more reliable. > > While I'm not that happy with this idea, it's probably not as bad as the > other hacks we've tested so far, and would, at last, prevent once for > all the usual "it's not working" due to a wrong podman version (and, > yes, we get plenty of them, especially in the OSP version). > > What do you think? Any other way of solving this issue? Remember, we're > talking about "ensuring we get the right version, coming from a specific > stream" and, especially, "ensure operator can't install the wrong one" > (if they don't follow the doc, of if they are using automation that > doesn't check actual requirements in the doc"... > > > in general i lean more towards 'ansible tasks' rather than 'bash thing > in the specfile' to solve that. Either way though this is us reacting > (with some fix/hack/workaround) to a problem that isn't anything to do > with TripleO or even OpenStack but rather an OS/packaging issue. So do > we have any longer term plans on what you mentioned on IRC just now > regarding  getting module support for RPM deps (do you mean something in > the spec file here?) or is that a complete dead end? AFAIK, module support within RPM spec file as a standard thing is not in the roadmap. Apparently, the ppl behind this "module" concept aren't willing to add any sort of support. I've worked with Lon (Red Hat side) for the patch in the spec file, we've tried multiple ways and, well...... it's really not something I'm happy with. One idea I just got was about rpm hooks - iirc there's a thing in dnf that might help, have to read some more things about that. But I have the feeling this is still the wrong way - unless it's considered as a "last security net just to be really-really sure", instead of the current usage we have (which is, basically, ensure ppl do read the doc and follow it). So the main thing: it's not a TripleO issue per se - we're just using a feature that has a weak support in some cases, and, well, we happen to hit one of those cases. As usuall I'd say ;). Cheers, C. > > thanks, marios > > > Thank you for your thoughts! > > Cheers, > > C. > > > [1] https://review.rdoproject.org/r/#/c/31411/ > > > > -- > Cédric Jeanneret (He/Him/His) > Sr. 
Software Engineer - OpenStack Platform > Deployment Framework TC > Red Hat EMEA > https://www.redhat.com/ > -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From cjeanner at redhat.com Tue Jan 5 11:02:52 2021 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 5 Jan 2021 12:02:52 +0100 Subject: [tripleo] Package dependencies, podman, dnf modules In-Reply-To: <2f0efe53-be85-b9d7-4f77-68b665cb50b5@redhat.com> References: <6ab7507a-b5b8-c124-3e04-c3d410de6ff7@redhat.com> <2f0efe53-be85-b9d7-4f77-68b665cb50b5@redhat.com> Message-ID: <11ff7da7-e6ec-ceff-06f1-90ea6f8e7305@redhat.com> On 1/5/21 11:48 AM, Cédric Jeanneret wrote: > > > On 1/5/21 9:56 AM, Marios Andreou wrote: >> >> >> On Tue, Jan 5, 2021 at 9:44 AM Cédric Jeanneret > > wrote: >> >> Hello there, >> >> Since we introduced Podman instead of Docker in tripleo, we are running >> after the right version. >> For now, the version we should use is provided by a specific >> container-tools module stream (container-tools:2.0). The default stream >> being container-tools:rhel8, we might end with a deployed undercloud >> running the wrong podman version, leading to some nice issues, >> especially in a SELinux-enabled environment. >> >> >> o/ Tengu thanks for socialising that issue, it's a fun one for sure... >> >> first question is _where_ is that the default and it is mostly >> rhetorical as I can guess. In general though i think it's worth filing a >> LP bug with some of these details so we can also track whatever is >> decided as a fix here. > > Right - always hard to find the right starting point ;). Though a mail > would be a nice thing in order to get directions/ideas - and, hey, you > just pointed some already! > Here's the LP: > https://bugs.launchpad.net/tripleo/+bug/1910217 > >> >> >> Currently, the main dependency tree is: >> python3-tripleoclient requires podman >> That's mostly all. >> >> While we can, of course, edit the python3-tripleoclient spec file in >> order to pin the precise podman version (or, at least, set upper and >> lower constraints) in order to target the version provided by the right >> stream, it's not ideal: >> - package install error will lead to more questions (since ppl might >> skip doc for #reason) >> - if we change the target version, we'll need to ensure it's present in >> a stream (and, well, update the doc if we need to switch the stream) >> - and probably some other reasons. >> >> Since we can NOT depend on a specific stream in rpm (for instance, we >> can't put "Requires @container-tools:2.0/podman" like we might in >> ansible), we're a bit stuck. >> >> An ugly hack is tested in the spec file[1], using a bash thing in order >> to check the activated stream, but this is not ideal either, especially >> since it makes the whole package building/checking fail during the >> "mock" stage. In order to make it pass, we'd need to modify the whole >> RDO in order to take into account stream switching. This isn't >> impossible, but it leads to other concerns, especially when it comes to >> "hey we need to switch the stream for ". And I'm pretty sure no >> one will want to dig into it, for good reasons ;). 
>> >> This leads to a last possibility: >> drop the "Requires podman" from tripleo dependency tree, and rely on >> >> >> so how does that work in practice though I mean what about >> python-tripleoclient. Quick grep just now tells me at least the >> heat-launcher is directly using podman. How do we drop that requirement. > > Good catch. Maybe by ordering a bit things and calling the podman > install bit before calling the heat-launcher? Might be ugly and, well, > against some of the common practices, but at least it would ensure we > get what we actually want regarding versions and sources. > >>   >> >> tripleo-ansible to actually enable the correct module, and install >> podman. >> This means podman will be installed during the undercloud deploy, for >> instance as a host_prep_tasks (step_0). >> >> >> I am missing how using tripleo-ansible helps us with the stream >> question. Do you mean add logic in tripleo-ansible that tests the stream >> and sets the correct version ? > > exactly. Ansible is able to switch module streams, and install packages. > Basically, we'd "just" need to extend a bit this role/task: > https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_podman/tasks/tripleo_podman_install.yml > > That would make things cleaner, nicer, and more reliable. Maybe we can keep the dependency as-is, and "just" ensure tripleo-ansible "tripleo_podman" role sets the right things during the deploy time? For instance, we can add a task ensuring we're using the right stream, then install podman (and buildah). That means heat-launcher might use the wrong podman version (bleh...) but the finished undercloud should be fine? Not 100% sure it's really possible, especially since module switching might fail under some conditions when a package is installed :/. Guess some testing will be needed as well. > > >> >> While I'm not that happy with this idea, it's probably not as bad as the >> other hacks we've tested so far, and would, at last, prevent once for >> all the usual "it's not working" due to a wrong podman version (and, >> yes, we get plenty of them, especially in the OSP version). >> >> What do you think? Any other way of solving this issue? Remember, we're >> talking about "ensuring we get the right version, coming from a specific >> stream" and, especially, "ensure operator can't install the wrong one" >> (if they don't follow the doc, of if they are using automation that >> doesn't check actual requirements in the doc"... >> >> >> in general i lean more towards 'ansible tasks' rather than 'bash thing >> in the specfile' to solve that. Either way though this is us reacting >> (with some fix/hack/workaround) to a problem that isn't anything to do >> with TripleO or even OpenStack but rather an OS/packaging issue. So do >> we have any longer term plans on what you mentioned on IRC just now >> regarding  getting module support for RPM deps (do you mean something in >> the spec file here?) or is that a complete dead end? > > AFAIK, module support within RPM spec file as a standard thing is not in > the roadmap. Apparently, the ppl behind this "module" concept aren't > willing to add any sort of support. > I've worked with Lon (Red Hat side) for the patch in the spec file, > we've tried multiple ways and, well...... it's really not something I'm > happy with. One idea I just got was about rpm hooks - iirc there's a > thing in dnf that might help, have to read some more things about that. 
> But I have the feeling this is still the wrong way - unless it's > considered as a "last security net just to be really-really sure", > instead of the current usage we have (which is, basically, ensure ppl do > read the doc and follow it). > > So the main thing: it's not a TripleO issue per se - we're just using a > feature that has a weak support in some cases, and, well, we happen to > hit one of those cases. As usuall I'd say ;). > > Cheers, > > C. > >> >> thanks, marios >> >> >> Thank you for your thoughts! >> >> Cheers, >> >> C. >> >> >> [1] https://review.rdoproject.org/r/#/c/31411/ >> >> >> >> -- >> Cédric Jeanneret (He/Him/His) >> Sr. Software Engineer - OpenStack Platform >> Deployment Framework TC >> Red Hat EMEA >> https://www.redhat.com/ >> > -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From chacon.piza at gmail.com Tue Jan 5 11:05:28 2021 From: chacon.piza at gmail.com (Martin Chacon Piza) Date: Tue, 5 Jan 2021 12:05:28 +0100 Subject: [release] Status: RED - pip resolver issue with publish-openstack-releasenotes-python3 In-Reply-To: References: Message-ID: Hi Herve, I have added this topic to the Monasca irc meeting today. Thank you, Martin (chaconpiza) El lun, 4 de ene. de 2021 a la(s) 18:30, Herve Beraud (hberaud at redhat.com) escribió: > Thanks all! > > Here we can track our advancement: > > https://review.opendev.org/q/topic:%22fix-relmgt-pip-doc%22+(status:open%20OR%20status:merged) > > Le lun. 4 janv. 2021 à 18:02, Radosław Piliszek < > radoslaw.piliszek at gmail.com> a écrit : > >> On Mon, Jan 4, 2021 at 4:34 PM Herve Beraud wrote: >> > >> > Here is the filtered list of projects that meet the conditions leading >> to the bug, and who should be fixed to completely solve our issue: >> > >> > ... >> > etcd3gw >> > ... >> > python-masakariclient >> > ... >> > >> > Notice that some of these projects aren't deliverables but if possible >> it could be worth fixing them too. >> > >> > These projects have an incompatibility between entries in their >> test-requirements.txt, and they're missing a doc/requirements.txt file. >> > >> > The more straightforward path to unlock our job >> "publish-openstack-releasenotes-python3" is to create a >> doc/requirements.txt file that only contains the needed dependencies to >> reduce the possibility of pip resolver issues. I personally think that we >> could use the latest allowed version of requirements (sphinx, reno, etc...). >> > >> > I propose to track the related advancement by using the >> "fix-relmgt-pip-doc" gerrit topic, when all the projects will be fixed we >> would be able to update our status. >> > >> > Also it could be worth fixing test-requirements.txt incompatibilities >> but this task is more on the projects teams sides and this task could be >> done with a follow up patch. >> > >> > Thoughts? >> >> Thanks, Hervé! >> >> Done for python-masakariclient in [1]. >> >> etcd3gw needs more love in general but I will have this split in mind. 
>> >> [1] https://review.opendev.org/c/openstack/python-masakariclient/+/769163 >> >> -yoctozepto >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- *Martín Chacón Pizá* *chacon.piza at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Tue Jan 5 11:45:24 2021 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 5 Jan 2021 12:45:24 +0100 Subject: [tripleo] Package dependencies, podman, dnf modules In-Reply-To: <11ff7da7-e6ec-ceff-06f1-90ea6f8e7305@redhat.com> References: <6ab7507a-b5b8-c124-3e04-c3d410de6ff7@redhat.com> <2f0efe53-be85-b9d7-4f77-68b665cb50b5@redhat.com> <11ff7da7-e6ec-ceff-06f1-90ea6f8e7305@redhat.com> Message-ID: <6495030f-0af7-f68e-23a6-5f72f469fabb@redhat.com> On 1/5/21 12:02 PM, Cédric Jeanneret wrote: > > > On 1/5/21 11:48 AM, Cédric Jeanneret wrote: >> >> >> On 1/5/21 9:56 AM, Marios Andreou wrote: >>> >>> >>> On Tue, Jan 5, 2021 at 9:44 AM Cédric Jeanneret >> > wrote: >>> >>> Hello there, >>> >>> Since we introduced Podman instead of Docker in tripleo, we are running >>> after the right version. >>> For now, the version we should use is provided by a specific >>> container-tools module stream (container-tools:2.0). The default stream >>> being container-tools:rhel8, we might end with a deployed undercloud >>> running the wrong podman version, leading to some nice issues, >>> especially in a SELinux-enabled environment. >>> >>> >>> o/ Tengu thanks for socialising that issue, it's a fun one for sure... >>> >>> first question is _where_ is that the default and it is mostly >>> rhetorical as I can guess. In general though i think it's worth filing a >>> LP bug with some of these details so we can also track whatever is >>> decided as a fix here. >> >> Right - always hard to find the right starting point ;). Though a mail >> would be a nice thing in order to get directions/ideas - and, hey, you >> just pointed some already! >> Here's the LP: >> https://bugs.launchpad.net/tripleo/+bug/1910217 >> >>> >>> >>> Currently, the main dependency tree is: >>> python3-tripleoclient requires podman >>> That's mostly all. 
>>> >>> While we can, of course, edit the python3-tripleoclient spec file in >>> order to pin the precise podman version (or, at least, set upper and >>> lower constraints) in order to target the version provided by the right >>> stream, it's not ideal: >>> - package install error will lead to more questions (since ppl might >>> skip doc for #reason) >>> - if we change the target version, we'll need to ensure it's present in >>> a stream (and, well, update the doc if we need to switch the stream) >>> - and probably some other reasons. >>> >>> Since we can NOT depend on a specific stream in rpm (for instance, we >>> can't put "Requires @container-tools:2.0/podman" like we might in >>> ansible), we're a bit stuck. >>> >>> An ugly hack is tested in the spec file[1], using a bash thing in order >>> to check the activated stream, but this is not ideal either, especially >>> since it makes the whole package building/checking fail during the >>> "mock" stage. In order to make it pass, we'd need to modify the whole >>> RDO in order to take into account stream switching. This isn't >>> impossible, but it leads to other concerns, especially when it comes to >>> "hey we need to switch the stream for ". And I'm pretty sure no >>> one will want to dig into it, for good reasons ;). >>> >>> This leads to a last possibility: >>> drop the "Requires podman" from tripleo dependency tree, and rely on >>> >>> >>> so how does that work in practice though I mean what about >>> python-tripleoclient. Quick grep just now tells me at least the >>> heat-launcher is directly using podman. How do we drop that requirement. >> >> Good catch. Maybe by ordering a bit things and calling the podman >> install bit before calling the heat-launcher? Might be ugly and, well, >> against some of the common practices, but at least it would ensure we >> get what we actually want regarding versions and sources. >> >>> >>> >>> tripleo-ansible to actually enable the correct module, and install >>> podman. >>> This means podman will be installed during the undercloud deploy, for >>> instance as a host_prep_tasks (step_0). >>> >>> >>> I am missing how using tripleo-ansible helps us with the stream >>> question. Do you mean add logic in tripleo-ansible that tests the stream >>> and sets the correct version ? >> >> exactly. Ansible is able to switch module streams, and install packages. >> Basically, we'd "just" need to extend a bit this role/task: >> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_podman/tasks/tripleo_podman_install.yml >> >> That would make things cleaner, nicer, and more reliable. > > Maybe we can keep the dependency as-is, and "just" ensure > tripleo-ansible "tripleo_podman" role sets the right things during the > deploy time? > For instance, we can add a task ensuring we're using the right stream, > then install podman (and buildah). That means heat-launcher might use > the wrong podman version (bleh...) but the finished undercloud should be > fine? Not 100% sure it's really possible, especially since module > switching might fail under some conditions when a package is installed :/. I propose to use in RPM specs Requires: podman Conflicts: podman < some_lower_constraint_version that would allow to fail early if, a wrong version is going to be installed via existing repos and streams. And failing early is much better than waiting up to the moment it starts running ansible IMHO. And at that later point it would only re-ensure the wanted version. 
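To make that a bit more concrete, a rough sketch of how such a guard could look in the python3-tripleoclient spec - the exact bounds are only an assumption here and would have to track whatever podman build the targeted container-tools stream actually ships (1.6.x for container-tools:2.0, if I remember correctly):

# hypothetical version guard; keep the bounds in sync with the podman
# version provided by the wanted container-tools stream
Requires:  podman >= 1.6.4
Conflicts: podman >= 2.0.0

With something like that, dnf refuses the transaction up front instead of the deploy failing much later, although as noted elsewhere in the thread the resulting error message alone doesn't tell the operator to switch the module stream.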
> > Guess some testing will be needed as well. > > >> >> >>> >>> While I'm not that happy with this idea, it's probably not as bad as the >>> other hacks we've tested so far, and would, at last, prevent once for >>> all the usual "it's not working" due to a wrong podman version (and, >>> yes, we get plenty of them, especially in the OSP version). >>> >>> What do you think? Any other way of solving this issue? Remember, we're >>> talking about "ensuring we get the right version, coming from a specific >>> stream" and, especially, "ensure operator can't install the wrong one" >>> (if they don't follow the doc, of if they are using automation that >>> doesn't check actual requirements in the doc"... >>> >>> >>> in general i lean more towards 'ansible tasks' rather than 'bash thing >>> in the specfile' to solve that. Either way though this is us reacting >>> (with some fix/hack/workaround) to a problem that isn't anything to do >>> with TripleO or even OpenStack but rather an OS/packaging issue. So do >>> we have any longer term plans on what you mentioned on IRC just now >>> regarding  getting module support for RPM deps (do you mean something in >>> the spec file here?) or is that a complete dead end? >> >> AFAIK, module support within RPM spec file as a standard thing is not in >> the roadmap. Apparently, the ppl behind this "module" concept aren't >> willing to add any sort of support. >> I've worked with Lon (Red Hat side) for the patch in the spec file, >> we've tried multiple ways and, well...... it's really not something I'm >> happy with. One idea I just got was about rpm hooks - iirc there's a >> thing in dnf that might help, have to read some more things about that. >> But I have the feeling this is still the wrong way - unless it's >> considered as a "last security net just to be really-really sure", >> instead of the current usage we have (which is, basically, ensure ppl do >> read the doc and follow it). >> >> So the main thing: it's not a TripleO issue per se - we're just using a >> feature that has a weak support in some cases, and, well, we happen to >> hit one of those cases. As usuall I'd say ;). >> >> Cheers, >> >> C. >> >>> >>> thanks, marios >>> >>> >>> Thank you for your thoughts! >>> >>> Cheers, >>> >>> C. >>> >>> >>> [1] https://review.rdoproject.org/r/#/c/31411/ >>> >>> >>> >>> -- >>> Cédric Jeanneret (He/Him/His) >>> Sr. Software Engineer - OpenStack Platform >>> Deployment Framework TC >>> Red Hat EMEA >>> https://www.redhat.com/ >>> >> > -- Best regards, Bogdan Dobrelya, Irc #bogdando From cjeanner at redhat.com Tue Jan 5 12:33:01 2021 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 5 Jan 2021 13:33:01 +0100 Subject: [tripleo] Package dependencies, podman, dnf modules In-Reply-To: <6495030f-0af7-f68e-23a6-5f72f469fabb@redhat.com> References: <6ab7507a-b5b8-c124-3e04-c3d410de6ff7@redhat.com> <2f0efe53-be85-b9d7-4f77-68b665cb50b5@redhat.com> <11ff7da7-e6ec-ceff-06f1-90ea6f8e7305@redhat.com> <6495030f-0af7-f68e-23a6-5f72f469fabb@redhat.com> Message-ID: <59ffb8f5-d0a9-3fd6-1e12-dd72e8a8f2a4@redhat.com> On 1/5/21 12:45 PM, Bogdan Dobrelya wrote: > On 1/5/21 12:02 PM, Cédric Jeanneret wrote: >> >> >> On 1/5/21 11:48 AM, Cédric Jeanneret wrote: >>> >>> >>> On 1/5/21 9:56 AM, Marios Andreou wrote: >>>> >>>> >>>> On Tue, Jan 5, 2021 at 9:44 AM Cédric Jeanneret >>> > wrote: >>>> >>>>      Hello there, >>>> >>>>      Since we introduced Podman instead of Docker in tripleo, we are >>>> running >>>>      after the right version. 
>>>>      For now, the version we should use is provided by a specific >>>>      container-tools module stream (container-tools:2.0). The >>>> default stream >>>>      being container-tools:rhel8, we might end with a deployed >>>> undercloud >>>>      running the wrong podman version, leading to some nice issues, >>>>      especially in a SELinux-enabled environment. >>>> >>>> >>>> o/ Tengu thanks for socialising that issue, it's a fun one for sure... >>>> >>>> first question is _where_ is that the default and it is mostly >>>> rhetorical as I can guess. In general though i think it's worth >>>> filing a >>>> LP bug with some of these details so we can also track whatever is >>>> decided as a fix here. >>> >>> Right - always hard to find the right starting point ;). Though a mail >>> would be a nice thing in order to get directions/ideas - and, hey, you >>> just pointed some already! >>> Here's the LP: >>> https://bugs.launchpad.net/tripleo/+bug/1910217 >>> >>>> >>>> >>>>      Currently, the main dependency tree is: >>>>      python3-tripleoclient requires podman >>>>      That's mostly all. >>>> >>>>      While we can, of course, edit the python3-tripleoclient spec >>>> file in >>>>      order to pin the precise podman version (or, at least, set >>>> upper and >>>>      lower constraints) in order to target the version provided by >>>> the right >>>>      stream, it's not ideal: >>>>      - package install error will lead to more questions (since ppl >>>> might >>>>      skip doc for #reason) >>>>      - if we change the target version, we'll need to ensure it's >>>> present in >>>>      a stream (and, well, update the doc if we need to switch the >>>> stream) >>>>      - and probably some other reasons. >>>> >>>>      Since we can NOT depend on a specific stream in rpm (for >>>> instance, we >>>>      can't put "Requires @container-tools:2.0/podman" like we might in >>>>      ansible), we're a bit stuck. >>>> >>>>      An ugly hack is tested in the spec file[1], using a bash thing >>>> in order >>>>      to check the activated stream, but this is not ideal either, >>>> especially >>>>      since it makes the whole package building/checking fail during the >>>>      "mock" stage. In order to make it pass, we'd need to modify the >>>> whole >>>>      RDO in order to take into account stream switching. This isn't >>>>      impossible, but it leads to other concerns, especially when it >>>> comes to >>>>      "hey we need to switch the stream for ". And I'm pretty >>>> sure no >>>>      one will want to dig into it, for good reasons ;). >>>> >>>>      This leads to a last possibility: >>>>      drop the "Requires podman" from tripleo dependency tree, and >>>> rely on >>>> >>>> >>>> so how does that work in practice though I mean what about >>>> python-tripleoclient. Quick grep just now tells me at least the >>>> heat-launcher is directly using podman. How do we drop that >>>> requirement. >>> >>> Good catch. Maybe by ordering a bit things and calling the podman >>> install bit before calling the heat-launcher? Might be ugly and, well, >>> against some of the common practices, but at least it would ensure we >>> get what we actually want regarding versions and sources. >>> >>>>   >>>>      tripleo-ansible to actually enable the correct module, and install >>>>      podman. >>>>      This means podman will be installed during the undercloud >>>> deploy, for >>>>      instance as a host_prep_tasks (step_0). 
>>>> >>>> >>>> I am missing how using tripleo-ansible helps us with the stream >>>> question. Do you mean add logic in tripleo-ansible that tests the >>>> stream >>>> and sets the correct version ? >>> >>> exactly. Ansible is able to switch module streams, and install packages. >>> Basically, we'd "just" need to extend a bit this role/task: >>> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_podman/tasks/tripleo_podman_install.yml >>> >>> >>> That would make things cleaner, nicer, and more reliable. >> >> Maybe we can keep the dependency as-is, and "just" ensure >> tripleo-ansible "tripleo_podman" role sets the right things during the >> deploy time? >> For instance, we can add a task ensuring we're using the right stream, >> then install podman (and buildah). That means heat-launcher might use >> the wrong podman version (bleh...) but the finished undercloud should be >> fine? Not 100% sure it's really possible, especially since module >> switching might fail under some conditions when a package is installed >> :/. > > I propose to use in RPM specs > > Requires: podman > Conflicts: podman < some_lower_constraint_version > > that would allow to fail early if, a wrong version is going to be > installed via existing repos and streams. And failing early is much > better than waiting up to the moment it starts running ansible IMHO. And > at that later point it would only re-ensure the wanted version. We'll need to set an upper constraint - in some conditions, the default version is newer than the one we actually want. This is already done, more or less, downstream with an OSP specific package. While it sounds appealing, I still see issues when it comes to version changes. I'd rather ensure we have the proper stream activated, especially since your proposal will lead to more LP/BZ being opened by ppl that don't read the doc© - resulting in endless bounces and more complains about the non-user-friendliness of this approach :(. Ah, also.... we seem to have the same kind of constrain with some "virt" module - default is "rhel" but we apparently need another one (iirc "8") - though this seems to be downstream-only, and only for the overcloud (or standalone)... So the issue isn't "just" for podman, and a better, generic thing would be really nice :). Cheers, C. > >> >> Guess some testing will be needed as well. >> >> >>> >>> >>>> >>>>      While I'm not that happy with this idea, it's probably not as >>>> bad as the >>>>      other hacks we've tested so far, and would, at last, prevent >>>> once for >>>>      all the usual "it's not working" due to a wrong podman version >>>> (and, >>>>      yes, we get plenty of them, especially in the OSP version). >>>> >>>>      What do you think? Any other way of solving this issue? >>>> Remember, we're >>>>      talking about "ensuring we get the right version, coming from a >>>> specific >>>>      stream" and, especially, "ensure operator can't install the >>>> wrong one" >>>>      (if they don't follow the doc, of if they are using automation >>>> that >>>>      doesn't check actual requirements in the doc"... >>>> >>>> >>>> in general i lean more towards 'ansible tasks' rather than 'bash thing >>>> in the specfile' to solve that. Either way though this is us reacting >>>> (with some fix/hack/workaround) to a problem that isn't anything to do >>>> with TripleO or even OpenStack but rather an OS/packaging issue. 
So do >>>> we have any longer term plans on what you mentioned on IRC just now >>>> regarding  getting module support for RPM deps (do you mean >>>> something in >>>> the spec file here?) or is that a complete dead end? >>> >>> AFAIK, module support within RPM spec file as a standard thing is not in >>> the roadmap. Apparently, the ppl behind this "module" concept aren't >>> willing to add any sort of support. >>> I've worked with Lon (Red Hat side) for the patch in the spec file, >>> we've tried multiple ways and, well...... it's really not something I'm >>> happy with. One idea I just got was about rpm hooks - iirc there's a >>> thing in dnf that might help, have to read some more things about that. >>> But I have the feeling this is still the wrong way - unless it's >>> considered as a "last security net just to be really-really sure", >>> instead of the current usage we have (which is, basically, ensure ppl do >>> read the doc and follow it). >>> >>> So the main thing: it's not a TripleO issue per se - we're just using a >>> feature that has a weak support in some cases, and, well, we happen to >>> hit one of those cases. As usuall I'd say ;). >>> >>> Cheers, >>> >>> C. >>> >>>> >>>> thanks, marios >>>> >>>> >>>>      Thank you for your thoughts! >>>> >>>>      Cheers, >>>> >>>>      C. >>>> >>>> >>>>      [1] https://review.rdoproject.org/r/#/c/31411/ >>>>      >>>> >>>> >>>>      -- >>>>      Cédric Jeanneret (He/Him/His) >>>>      Sr. Software Engineer - OpenStack Platform >>>>      Deployment Framework TC >>>>      Red Hat EMEA >>>>      https://www.redhat.com/ >>>> >>> >> > > -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From mkopec at redhat.com Tue Jan 5 14:25:05 2021 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 5 Jan 2021 15:25:05 +0100 Subject: [qa][tempest] Update language in tempest code base In-Reply-To: <175fbcdda1f.11b0cc28f634324.3835419687523632284@ghanshyammann.com> References: <173344b91ec.122943da3630997.4524106110681904507@ghanshyammann.com> <2383106.3VsfAaAtOV@whitebase.usersys.redhat.com> <173348b1df7.b5898c11633886.9175405555090897907@ghanshyammann.com> <175fbcdda1f.11b0cc28f634324.3835419687523632284@ghanshyammann.com> Message-ID: Hi, following the stestr's example I proposed a change in Tempest: https://review.opendev.org/c/openstack/tempest/+/768583 Feel free to review and comment. See also other proposed changes by following the 'inclusive_jargon' topic: https://review.opendev.org/q/topic:inclusive_jargon+(status:open OR status:merged) Kind regards, On Tue, 24 Nov 2020 at 20:56, Ghanshyam Mann wrote: > ---- On Tue, 24 Nov 2020 12:58:47 -0600 Matthew Treinish < > mtreinish at kortar.org> wrote ---- > > On Thu, Jul 09, 2020 at 12:06:39PM -0500, Ghanshyam Mann wrote: > > > ---- On Thu, 09 Jul 2020 11:45:19 -0500 Arx Cruz > wrote ---- > > > > Yes, that's the idea. > > > > We can keep the old interface for a few cycles, with warning > deprecation message advertising to use the new one, and then remove in the > future. > > > > > > Deprecating things leads to two situations which really need some > good reason before doing it: > > > > > > - If we keep the deprecated interfaces working along with new > interfaces then it is confusion for users > > > as well as maintenance effort. 
In my experience, very less migration > happen to new things if old keep working. > > > > Just a heads up the recent stestr 3.1.0 release > > (https://github.com/mtreinish/stestr/releases/tag/3.1.0) did this > first step > > and deprecated things with: > > > > https://github.com/mtreinish/stestr/pull/297 > > > > There were multiple recent user requests to start this process sooner > rather > > than later. So regardless of what timetable and decision we make for > tempest's > > interfaces we should probably update tempest's internal stestr api > usage to > > reflect the new interface sooner rather than later (it will require > bumping the > > min stestr version to 3.1.0 when that change is made). > > > > > - If we remove them in future then it is breaking change. > > > > For stestr my plan is to make this breaking change eventually as part > of 4.0.0 > > release. The exact timetable for that I'm not clear on yet since we try > to avoid > > making breaking changes. The previous 2 backwards incompatible changes > were > > removing python 2 support (which was 3.0.0) and switching to cliff for > the cli, > > which wasn't strictly backwards incompatible we just made it 2.0.0 as a > > precaution because there were potential edge cases with cliff we were > worried > > about. So there really isn't a established pattern for this kind of > deprecation > > removal. I don't expect it to be a quick process though. > > Thanks matt for the updates and providing new interfaces in stestr, It > will surely help Tempest to > move towards those new deprecate these interface/wording. As discussed in > PTG/Forum for > overall direction in OpenStack, I am ok to do similar changes in Tempest. > > For branchless Tempest, as you mentioned we need to bump the min stestr > version to 3.1.0 > for all supported stable branches which are stein onwards for now. Hope > that is fine from a requirement > perspective. > > We can move Tempest to new stestr 3.1.0 soon and project side usage of > stestr in unit/functional > tests runner is also not much. Seems only two repo: > - > https://codesearch.opendev.org/?q=stestr%20run%20--black&i=nope&files=tox.ini&excludeFiles=&repos= > > -gmann > > > > > > > -Matt Treinish > > > > > IMO, we need to first ask/analyse whether name changes are worth to > do with above things as results. Or in other > > > team we should first define what is 'outdated naming conventions' and > how worth to fix those. > > > > > > -gmann > > > > > > > > > > Kind regards, > > > > > > > > On Thu, Jul 9, 2020 at 6:15 PM Luigi Toscano > wrote: > > > > > > > > > > > > -- > > > > Arx Cruz > > > > Software Engineer > > > > Red Hat EMEA > > > > arxcruz at redhat.com > > > > > @RedHat > Red Hat Red Hat > > > > > On Thursday, 9 July 2020 17:57:14 CEST Ghanshyam Mann > wrote: > > > > > ---- On Thu, 09 Jul 2020 10:14:58 -0500 Arx Cruz < > arxcruz at redhat.com> wrote > > > > > ---- > > > > > > Hello, > > > > > > I would like to start a discussion regarding the topic. > > > > > > At this moment in time we have an opportunity to be a more > open and > > > > > > inclusive project by eliminating outdated naming conventions > from > > > > > > tempest codebase, such as blacklist, whitelist.We should take > the > > > > > > opportunity and do our best to replace outdated terms with > their more > > > > > > inclusive alternatives.As you can see in [1] the TripleO > project is > > > > > > already working on this initiative, and I would like to work > on this as > > > > > > well on the tempest side. 
> > > > > Thanks Arx for raising it. > > > > > > > > > > I always have hard time to understand the definition of > 'outdated naming > > > > > conventions ' are they outdated from coding language perspective > or > > > > > outdated as English language perspective? I do not see naming > used in > > > > > coding language should be matched with English as > grammar/outdated/new > > > > > style language. As long as they are not so bad (hurt anyone > culture, > > > > > abusing word etc) it is fine to keep them as it is and start > adopting new > > > > > names for new things we code. > > > > > > > > > > For me, naming convention are the things which always can be > improved over > > > > > time, none of the name is best suited for everyone in open > source. But we > > > > > need to understand whether it is worth to do in term of 1. > effort of > > > > > changing those 2. un- comfortness of adopting new names 3. again > changing > > > > > in future. > > > > > > > > > > At least from Tempest perspective, blacklist is very known > common word used > > > > > for lot of interfaces and dependent testing tool. I cannot > debate on how > > > > > good it is or bad but i can debate on not-worth to change now. > For new > > > > > interface, we can always use best-suggested name as per that > > > > > time/culture/maintainers. We have tried few of such improvement > in past but > > > > > end up not-successful. Example: - > > > > > > https://opendev.org/openstack/tempest/src/commit/e1eebfa8451d4c28bef0669e4a > > > > > 7f493b6086cab9/tempest/test.py#L43 > > > > > > > > > > > > > That's not the only used terminology for list of things, though. > We could > > > > always add new interfaces and keep the old ones are deprecated > (but not > > > > advertised) for the foreseable future. The old code won't be > broken and the > > > > new one would use the new terminology, I'd say it's a good > solution. > > > > > > > > > > > > -- > > > > Luigi > > > > > > > > > > > > > > > > > > > -- Martin Kopec Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Tue Jan 5 14:57:19 2021 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 5 Jan 2021 07:57:19 -0700 Subject: [tripleo] Package dependencies, podman, dnf modules In-Reply-To: <6ab7507a-b5b8-c124-3e04-c3d410de6ff7@redhat.com> References: <6ab7507a-b5b8-c124-3e04-c3d410de6ff7@redhat.com> Message-ID: On Tue, Jan 5, 2021 at 12:53 AM Cédric Jeanneret wrote: > > Hello there, > > Since we introduced Podman instead of Docker in tripleo, we are running > after the right version. > For now, the version we should use is provided by a specific > container-tools module stream (container-tools:2.0). The default stream > being container-tools:rhel8, we might end with a deployed undercloud > running the wrong podman version, leading to some nice issues, > especially in a SELinux-enabled environment. > > Currently, the main dependency tree is: > python3-tripleoclient requires podman > That's mostly all. 
> > While we can, of course, edit the python3-tripleoclient spec file in > order to pin the precise podman version (or, at least, set upper and > lower constraints) in order to target the version provided by the right > stream, it's not ideal: > - package install error will lead to more questions (since ppl might > skip doc for #reason) > - if we change the target version, we'll need to ensure it's present in > a stream (and, well, update the doc if we need to switch the stream) > - and probably some other reasons. > So as mentioned we did this and it leads to poor UX because the conflict message doesn't help direct people to fixing their container-tools module. > Since we can NOT depend on a specific stream in rpm (for instance, we > can't put "Requires @container-tools:2.0/podman" like we might in > ansible), we're a bit stuck. > IMHO This really needs to be in the RPM spec ASAP to make modules viable in the long term but that's not an OpenStack issue. > An ugly hack is tested in the spec file[1], using a bash thing in order > to check the activated stream, but this is not ideal either, especially > since it makes the whole package building/checking fail during the > "mock" stage. In order to make it pass, we'd need to modify the whole > RDO in order to take into account stream switching. This isn't > impossible, but it leads to other concerns, especially when it comes to > "hey we need to switch the stream for ". And I'm pretty sure no > one will want to dig into it, for good reasons ;). > > This leads to a last possibility: > drop the "Requires podman" from tripleo dependency tree, and rely on > tripleo-ansible to actually enable the correct module, and install podman. > This means podman will be installed during the undercloud deploy, for > instance as a host_prep_tasks (step_0). > Since we may or may not consume podman in code we're packaging we likely should not drop the requirement. > While I'm not that happy with this idea, it's probably not as bad as the > other hacks we've tested so far, and would, at last, prevent once for > all the usual "it's not working" due to a wrong podman version (and, > yes, we get plenty of them, especially in the OSP version). > > What do you think? Any other way of solving this issue? Remember, we're > talking about "ensuring we get the right version, coming from a specific > stream" and, especially, "ensure operator can't install the wrong one" > (if they don't follow the doc, of if they are using automation that > doesn't check actual requirements in the doc"... > IMHO, as mentioned on IRC, we need to improve the initial setup process to reduce the likelihood of a misconfiguration. I had chatted with the CI folks on the best way to solve this in a single place that would be useful for users and in CI. We generally agreed to combine the efforts into tripleo-repos because it can be installed independently of everything else and can be a single source of truth for version info. I proposed a WIP for a version information structure so we can have a single source of truth for repo and version information. https://review.opendev.org/c/openstack/tripleo-repos/+/767214 Today we require the user to manually setup repositories and module versions prior to installing the packages. I think it's best if we work on reducing the errors that occur during this process. We could add something into the validation bits, but we will still need a single source of truth for the required versions so we're not duplicating the version information. 
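To illustrate the shape of that single source of truth (purely a hypothetical sketch - the real layout is whatever lands in the tripleo-repos review linked above), it could be a small per-release map of repos and module streams that both the client and CI read before installing anything:

# hypothetical structure only, not the format of the WIP patch
centos8:
  modules:
    container-tools: "2.0"
  repos:
    - current-tripleo
    - ceph

A tool like tripleo-repos could then enable the listed module streams first, so repo/module setup mistakes are caught in one place instead of showing up later as a wrong podman version at runtime.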
Thanks, -Alex > Thank you for your thoughts! > > Cheers, > > C. > > > [1] https://review.rdoproject.org/r/#/c/31411/ > > > -- > Cédric Jeanneret (He/Him/His) > Sr. Software Engineer - OpenStack Platform > Deployment Framework TC > Red Hat EMEA > https://www.redhat.com/ >

From knikolla at bu.edu Tue Jan 5 15:15:51 2021 From: knikolla at bu.edu (Nikolla, Kristi) Date: Tue, 5 Jan 2021 15:15:51 +0000 Subject: [keystone] No meeting Jan 5 Message-ID: <4626967D-ED2C-4152-B8C7-4FD1403330F9@bu.edu> Hi all, There will be no keystone IRC meeting on Jan 5. Best, Kristi Nikolla

From ankelezhang at gmail.com Tue Jan 5 03:22:52 2021 From: ankelezhang at gmail.com (Ankele zhang) Date: Tue, 5 Jan 2021 11:22:52 +0800 Subject: tinyipa cannot boot OS of baremetal node Message-ID: Hi~ My Rocky OpenStack platform was deployed following the official documents and includes Keystone/Cinder/Neutron/Nova and Ironic. I used to boot my baremetal nodes with the CoreOS image downloaded from https://tarballs.opendev.org/openstack/ironic-python-agent/coreos/files/ Since I want to customize my own HardwareManager for configuring RAID, I have built the TinyIPA images tinyipa.tar.gz and tinyipa.vmlinuz with ironic-python-agent-builder (master branch) and ironic-python-agent (rocky branch). Here are all the products of the build process. [image: image.png] Then I used these two images to create the baremetal node and boot a nova server, but I didn't get the result I wanted: the node never enters the ramdisk and stays in the 'wait call-back' state, as shown below. [image: image.png] I got nothing in /var/log/ironic/ironic-conductor.log and /var/log/nova/nova-compute.log. I don't know if these two images (tinyipa.tar.gz and tinyipa.vmlinuz) are valid for Ironic. If not, how can I customize my HardwareManager? Looking forward to hearing from you. Ankele -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 126537 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 27968 bytes Desc: not available URL:

From jxfu at 163.com Tue Jan 5 14:27:16 2021 From: jxfu at 163.com (=?GBK?B?uLbWvruq?=) Date: Tue, 5 Jan 2021 22:27:16 +0800 (CST) Subject: [Nova]ask for help: how to plug vif to ovs when ovs is not together with nova-compute Message-ID: <77980c77.716f.176d2f1e342.Coremail.jxfu@163.com> hello, I encountered some problems deploying a smart network interface card (SmartNIC) where Nova-compute is not co-located with OVS. For example, OVS is deployed on the SmartNIC and Nova-compute is on the host (compute node); how does Nova plug the virtual interface of the VM into the OVS bridge? Does the latest version of OpenStack support this? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ionut at fleio.com Tue Jan 5 16:36:28 2021 From: ionut at fleio.com (Ionut Biru) Date: Tue, 5 Jan 2021 18:36:28 +0200 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: Hi, I found this story: https://storyboard.openstack.org/#!/story/2008308 regarding disabling cluster update notifications in rabbitmq. I think this will help me.
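For anyone hitting the same queue growth in the meantime: if nothing in the deployment actually consumes those notifications (no ceilometer or other listener), one possible stop-gap - untested here, and assuming the notifications are not needed for billing/telemetry - is to stop emitting them altogether in magnum.conf:

[oslo_messaging_notifications]
# the default messagingv2 driver publishes to RabbitMQ; noop drops them
driver = noop

That at least keeps the notification.info and notification.err queues from piling up in RabbitMQ while the per-update notifier connection issue is fixed properly.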
On Tue, Jan 5, 2021 at 12:21 PM Erik Olof Gunnar Andersson < eandersson at blizzard.com> wrote: > Sorry, being repetitive here, but maybe try adding this to your magnum > config as well. If you have A LOT of cores it could add up to a crazy > amount of connections. > > [conductor] > workers = 2 > > ------------------------------ > *From:* Ionut Biru > *Sent:* Tuesday, January 5, 2021 1:50 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* Spyros Trigazis ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > Hi, > > Here is my config. maybe something is fishy. > > I did have around 300 messages in the queue in notification.info > > and notification.err and I purged them. > > https://paste.xinu.at/woMt/ > > > > > On Tue, Jan 5, 2021 at 11:23 AM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Yea - tested locally as well and wasn't able to reproduce it either. I > changed the health service job to run every second and maxed out at about > 42 connections to RabbitMQ with two conductor workers. > > /etc/magnum/magnun.conf > > [conductor] > workers = 2 > > > ------------------------------ > *From:* Spyros Trigazis > *Sent:* Tuesday, January 5, 2021 12:59 AM > *To:* Ionut Biru > *Cc:* Erik Olof Gunnar Andersson ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > On Tue, Jan 5, 2021 at 9:36 AM Ionut Biru wrote: > > Hi, > > I tried with process=1 and it reached 1016 connections to rabbitmq. > lsof > https://paste.xinu.at/jGg/ > > > i think it goes into error when it reaches 1024 file descriptors. > > I'm out of ideas of how to resolve this. I only have 3 clusters available > and it's kinda weird and It doesn't scale. > > > No issues here with 100s of clusters. Not sure what doesn't scale. > > * Maybe your rabbit is flooded with notifications that are not consumed? > * You can use way more than 1024 file descriptors, maybe 2^10? > > Spyros > > > On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Sure looks like RabbitMQ. How many workers do have you configured? > > > > Could you try to changing the uwsgi configuration to workers=1 (or > processes=1) and then see if it goes beyond 30 connections to amqp. > > > > *From:* Ionut Biru > *Sent:* Monday, January 4, 2021 4:07 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* feilong ; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > Hi Erik, > > > > Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ > > > > > I have kubernetes 12.0.1 installed in env. > > > > > > On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Maybe something similar to this? > https://github.com/kubernetes-client/python/issues/1158 > > > What does lsof say? > > > > > > > > -- > Ionut Biru - https://fleio.com > > > > > -- > Ionut Biru - https://fleio.com > > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Tue Jan 5 17:02:06 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 05 Jan 2021 17:02:06 +0000 Subject: [Nova]ask for help: how to plug vif to ovs when ovs is not together with nova-compute In-Reply-To: <77980c77.716f.176d2f1e342.Coremail.jxfu@163.com> References: <77980c77.716f.176d2f1e342.Coremail.jxfu@163.com> Message-ID: On Tue, 2021-01-05 at 22:27 +0800, 付志华 wrote:
> hello,
> I encountered some problems when deploying a smart network interface card (SmartNIC) where nova-compute is not co-located with OVS. For example, OVS is deployed on the SmartNIC and nova-compute is on the host (compute node): how does Nova plug the virtual interface of the VM into the OVS bridge?
> Does the latest version of OpenStack support this?
No, it does not. There are ways you could attempt to make this work. Namely, you could change [os_vif_ovs]/ovsdb_connection https://github.com/openstack/os-vif/blob/master/vif_plug_ovs/ovs.py#L59-L64 so that os-vif talks to the OVS instance on the SmartNIC, but that would not work for the cases where libvirt plugs the interface. Even in the cases where os-vif plugs the interface, qemu/libvirt will create the VM tap device on the host, not on the SmartNIC, so OVS will not be able to see it or manage it.
Running OVS on a SmartNIC instead of the host CPU would require other changes as well, so this is a non-trivial change. There were some proposals to allow the neutron L2 agent to manage a remote OVS instance, but they were not accepted.
As it stands, without a lot of work there is no easy way to deploy OpenStack in that topology, and at this point in the cycle I'm not sure it's something that could be completed this cycle even if we wanted to. It's certainly something that can be discussed, but I would suggest that any work to enable it should probably take place in X at the earliest.
> Thanks.
From rodrigo.barbieri2010 at gmail.com Tue Jan 5 17:17:58 2021 From: rodrigo.barbieri2010 at gmail.com (Rodrigo Barbieri) Date: Tue, 5 Jan 2021 14:17:58 -0300 Subject: [nova] workarounds and operator experience around bug 1522307/1908133 Message-ID: Hi Nova folks and OpenStack operators!
I have had some trouble recently where, while using the "images_type = rbd" libvirt option, my ceph cluster got filled up without me noticing and froze all my nova services and instances.
I started digging and investigating why and how I could prevent or work around this issue, but I didn't find a very reliable clean way.
I documented all my steps and investigation in bug 1908133 [0]. It has been marked as a duplicate of 1522307 [1] which has been around for quite some time, so I am wondering if any operators have been using nova + ceph in production with "images_type = rbd" config set and how you have been handling/working around the issue.
Thanks in advance!
[0] https://bugs.launchpad.net/nova/+bug/1908133 [1] https://bugs.launchpad.net/nova/+bug/1522307 -- Rodrigo Barbieri MSc Computer Scientist OpenStack Manila Core Contributor Federal University of São Carlos
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From smooney at redhat.com Tue Jan 5 18:38:14 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 05 Jan 2021 18:38:14 +0000 Subject: [nova] workarounds and operator experience around bug 1522307/1908133 In-Reply-To: References: Message-ID: <7b4f6f10e682698dfaed22a86397f5b174fed7e8.camel@redhat.com> On Tue, 2021-01-05 at 14:17 -0300, Rodrigo Barbieri wrote: > Hi Nova folks and OpenStack operators!
> > I have had some trouble recently where, while using the "images_type = rbd"
> libvirt option, my ceph cluster got filled up without me noticing and froze
> all my nova services and instances.
> > I started digging and investigating why and how I could prevent or
> work around this issue, but I didn't find a very reliable clean way.
> > I documented all my steps and investigation in bug 1908133 [0]. It has been
> marked as a duplicate of 1522307 [1] which has been around for quite some
> time, so I am wondering if any operators have been using nova + ceph in
> production with "images_type = rbd" config set and how you have been
> handling/working around the issue.
This is indeed a known issue, and the long-term plan to fix it was to track shared storage as a sharing resource provider in placement. That never happened, so there is currently no mechanism available to prevent this explicitly in nova.
The disk filter, which is no longer used, could prevent the boot of a VM that would fill the ceph pool, but it could not protect against two concurrent requests filling the pool.
Placement can protect against that due to the transactional nature of allocations, which serialises all resource usage; however, since each host reports the total size of the ceph pool as its local storage, that won't work out of the box.
As a quick hack, what you can do is set [DEFAULT]/disk_allocation_ratio=(1/number of compute nodes) https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.disk_allocation_ratio in each of your compute agents' configs.
That will prevent oversubscription; however, it has other negative side effects, mainly that you will fail to schedule instances that could otherwise boot whenever a host exceeds its 1/n usage, so unless you have perfectly balanced consumption this is not a good approach.
A better approach, but one that requires external scripting, is to have a cron job that updates the reserved usage of each of the DISK_GB inventories to the actual amount of storage allocated from the pool.
The real fix, however, is for nova to track its shared usage in placement correctly as a sharing resource provider.
It's possible you might be able to do that via the provider.yaml file, by overriding the local DISK_GB to 0 on all compute nodes and then creating a single sharing resource provider of DISK_GB that models the ceph pool. https://specs.openstack.org/openstack/nova-specs/specs/ussuri/approved/provider-config-file.html
Currently that does not support adding providers to placement aggregates, so while it could be used to zero out the compute node disk inventories and to create a sharing provider with the MISC_SHARES_VIA_AGGREGATE trait, it can't do the final step of mapping which compute nodes can consume from the sharing provider via the aggregate, but you could do that step manually. That assumes that "sharing resource providers" actually work.
Basically, what it comes down to today is that you need to monitor the available resources yourself externally and ensure you never run out of space. That sucks, but until we properly track things in placement there is nothing we can really do. The two approaches I suggested above might work for a subset of use cases, but really this is a feature that needs native support in nova to address properly.
> > Thanks in advance!
> > [0] https://bugs.launchpad.net/nova/+bug/1908133 > [1] https://bugs.launchpad.net/nova/+bug/1522307 > From pierre at stackhpc.com Tue Jan 5 21:32:58 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 5 Jan 2021 22:32:58 +0100 Subject: [all][tc] Thoughts on Python 3.7 support Message-ID: Hello, There have been many patches submitted to drop the Python 3.7 classifier from setup.cfg: https://review.opendev.org/q/%2522remove+py37%2522 The justification is that Wallaby tested runtimes only include 3.6 and 3.8. Most projects are merging these patches, but I've seen a couple of objections from ironic and horizon: - https://review.opendev.org/c/openstack/python-ironicclient/+/769044 - https://review.opendev.org/c/openstack/horizon/+/769237 What are the thoughts of the TC and of the overall community on this? Should we really drop these classifiers when there are no corresponding CI jobs, even though more Python versions may well be supported? Best wishes, Pierre Riteau (priteau) From fungi at yuggoth.org Tue Jan 5 21:51:08 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 5 Jan 2021 21:51:08 +0000 Subject: [all][tc] Thoughts on Python 3.7 support In-Reply-To: References: Message-ID: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> On 2021-01-05 22:32:58 +0100 (+0100), Pierre Riteau wrote: > There have been many patches submitted to drop the Python 3.7 > classifier from setup.cfg: > https://review.opendev.org/q/%2522remove+py37%2522 > The justification is that Wallaby tested runtimes only include 3.6 and 3.8. > > Most projects are merging these patches, but I've seen a couple of > objections from ironic and horizon: > > - https://review.opendev.org/c/openstack/python-ironicclient/+/769044 > - https://review.opendev.org/c/openstack/horizon/+/769237 > > What are the thoughts of the TC and of the overall community on this? > Should we really drop these classifiers when there are no > corresponding CI jobs, even though more Python versions may well be > supported? My recollection of the many discussions we held was that the runtime document would recommend the default python3 available in our targeted platforms, but that we would also make a best effort to test with the latest python3 available to us at the start of the cycle as well. It was suggested more than once that we should test all minor versions in between, but this was ruled out based on the additional CI resources it would consume for minimal gain. Instead we deemed that testing our target version and the latest available would give us sufficient confidence that, if those worked, the versions in between them were likely fine as well. Based on that, I think the versions projects claim to work with should be contiguous ranges, not contiguous lists of the exact versions tested (noting that those aren't particularly *exact* versions to begin with). Apologies for the lack of references to old discussions, I can probably dig some up from the ML and TC meetings several years back of folks think it will help inform this further. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Tue Jan 5 22:08:15 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 5 Jan 2021 22:08:15 +0000 Subject: [all][tc] Thoughts on Python 3.7 support In-Reply-To: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> References: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> Message-ID: <20210105220815.66kggyddjnwjyoi6@yuggoth.org> On 2021-01-05 21:51:08 +0000 (+0000), Jeremy Stanley wrote: [...] > not contiguous lists of the exact versions tested [...] Er, sorry, "not INcontiguous lists" is what I meant to type. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From janders at redhat.com Wed Jan 6 00:36:22 2021 From: janders at redhat.com (Jacob Anders) Date: Wed, 6 Jan 2021 10:36:22 +1000 Subject: [ironic] inspector auto-discovery disabled by default in bifrost Message-ID: Hi Ironicers, The Ironic team recently decided to change the default inspector auto-discovery behavior in bifrost. In the past, this feature was enabled by default. However, this was causing issues in deployments where the operators had no intention of using auto-discovery but the IPA somehow booted anyway, causing port conflicts and creating failure modes which were particularly tricky to debug. For this reason, inspector auto-discovery has been disabled by default with this change which merged yesterday: https://review.opendev.org/c/openstack/bifrost/+/762998 This is to make sure that the operators utilising inspector auto-discovery in bifrost are aware of this change and can re-enable inspector discovery if desired. Best regards, Jacob -------------- next part -------------- An HTML attachment was scrubbed... URL: From emiller at genesishosting.com Wed Jan 6 01:44:37 2021 From: emiller at genesishosting.com (Eric K. Miller) Date: Tue, 5 Jan 2021 19:44:37 -0600 Subject: [kolla-ansible] Upgrading and skipping a release? Message-ID: <046E9C0290DD9149B106B72FC9156BEA04814FB6@gmsxchsvr01.thecreation.com> Hi, We're working on some upgrades of OpenStack deployed with Kolla Ansible (Stein and later), and wasn't sure if it was practical or possible to upgrade, say, from Stein to Victoria, or Train to Victoria, directly, instead of upgrading in 3 steps from Stein->Train->Ussuri->Victoria. Any reason skipping a release will cause any known harm? Anybody done this successfully? Thanks! Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Jan 6 08:58:36 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 6 Jan 2021 08:58:36 +0000 Subject: [kolla-ansible] Upgrading and skipping a release? In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA04814FB6@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA04814FB6@gmsxchsvr01.thecreation.com> Message-ID: On Wed, 6 Jan 2021 at 01:45, Eric K. Miller wrote: > > Hi, > > > > We're working on some upgrades of OpenStack deployed with Kolla Ansible (Stein and later), and wasn't sure if it was practical or possible to upgrade, say, from Stein to Victoria, or Train to Victoria, directly, instead of upgrading in 3 steps from Stein->Train->Ussuri->Victoria. Any reason skipping a release will cause any known harm? Anybody done this successfully? Hi Eric. What you're referring to is a fast-forward upgrade [1]. 
This is not officially supported by Kolla Ansible, and not something we test upstream. There was an effort a few years ago to implement it, but it stalled. The test matrix does increase somewhat when you allow FFUs, which is why I believe many projects such as Tripleo define supported FFU jumps. One thing in particular that may catch you out is some DB migrations or cleanup processes that happen at runtime may not get executed. In short, I'd suggest going one release at a time. It's a pain, but upgrades in Kolla are relatively sane. [1] https://wiki.openstack.org/wiki/Fast_forward_upgrades#:~:text=9%20Gotchas-,What%20is%20a%20Fast%20Forward%20Upgrade%3F,to%20your%20desired%20final%20version. Mark > > Thanks! > > > Eric > > From eandersson at blizzard.com Wed Jan 6 10:23:25 2021 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Wed, 6 Jan 2021 10:23:25 +0000 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> , Message-ID: I pushed a couple of patches that you can try out. This is the most likely culprit. https://review.opendev.org/c/openstack/magnum/+/769471 - Re-use rpc client I also created this one, but doubt this is an issue as the implementation here is the same as I use in Designate https://review.opendev.org/c/openstack/magnum/+/769457 - [WIP] Singleton notifier Finally I also created a PR to add magnum-api testing using uwsgi. https://review.opendev.org/c/openstack/magnum/+/769450 Let me know if any of these patches help! ________________________________ From: Ionut Biru Sent: Tuesday, January 5, 2021 8:36 AM To: Erik Olof Gunnar Andersson Cc: Spyros Trigazis ; feilong ; openstack-discuss Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi, I found this story: https://storyboard.openstack.org/#!/story/2008308 regarding disabling cluster update notifications in rabbitmq. I think this will help me. On Tue, Jan 5, 2021 at 12:21 PM Erik Olof Gunnar Andersson > wrote: Sorry, being repetitive here, but maybe try adding this to your magnum config as well. If you have A LOT of cores it could add up to a crazy amount of connections. [conductor] workers = 2 ________________________________ From: Ionut Biru > Sent: Tuesday, January 5, 2021 1:50 AM To: Erik Olof Gunnar Andersson > Cc: Spyros Trigazis >; feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi, Here is my config. maybe something is fishy. I did have around 300 messages in the queue in notification.info and notification.err and I purged them. https://paste.xinu.at/woMt/ On Tue, Jan 5, 2021 at 11:23 AM Erik Olof Gunnar Andersson > wrote: Yea - tested locally as well and wasn't able to reproduce it either. I changed the health service job to run every second and maxed out at about 42 connections to RabbitMQ with two conductor workers. /etc/magnum/magnun.conf [conductor] workers = 2 ________________________________ From: Spyros Trigazis > Sent: Tuesday, January 5, 2021 12:59 AM To: Ionut Biru > Cc: Erik Olof Gunnar Andersson >; feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer On Tue, Jan 5, 2021 at 9:36 AM Ionut Biru > wrote: Hi, I tried with process=1 and it reached 1016 connections to rabbitmq. lsof https://paste.xinu.at/jGg/ i think it goes into error when it reaches 1024 file descriptors. 
I'm out of ideas of how to resolve this. I only have 3 clusters available and it's kinda weird and It doesn't scale. No issues here with 100s of clusters. Not sure what doesn't scale. * Maybe your rabbit is flooded with notifications that are not consumed? * You can use way more than 1024 file descriptors, maybe 2^10? Spyros On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson > wrote: Sure looks like RabbitMQ. How many workers do have you configured? Could you try to changing the uwsgi configuration to workers=1 (or processes=1) and then see if it goes beyond 30 connections to amqp. From: Ionut Biru > Sent: Monday, January 4, 2021 4:07 AM To: Erik Olof Gunnar Andersson > Cc: feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi Erik, Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ I have kubernetes 12.0.1 installed in env. On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson > wrote: Maybe something similar to this? https://github.com/kubernetes-client/python/issues/1158 What does lsof say? -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Jan 6 10:34:03 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 6 Jan 2021 11:34:03 +0100 Subject: [all][tc] Thoughts on Python 3.7 support In-Reply-To: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> References: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> Message-ID: On Tue, Jan 5, 2021 at 10:53 PM Jeremy Stanley wrote: > On 2021-01-05 22:32:58 +0100 (+0100), Pierre Riteau wrote: > > There have been many patches submitted to drop the Python 3.7 > > classifier from setup.cfg: > > https://review.opendev.org/q/%2522remove+py37%2522 > > The justification is that Wallaby tested runtimes only include 3.6 and > 3.8. > > > > Most projects are merging these patches, but I've seen a couple of > > objections from ironic and horizon: > > > > - https://review.opendev.org/c/openstack/python-ironicclient/+/769044 > > - https://review.opendev.org/c/openstack/horizon/+/769237 > > > > What are the thoughts of the TC and of the overall community on this? > > Should we really drop these classifiers when there are no > > corresponding CI jobs, even though more Python versions may well be > > supported? > > My recollection of the many discussions we held was that the runtime > document would recommend the default python3 available in our > targeted platforms, but that we would also make a best effort to > test with the latest python3 available to us at the start of the > cycle as well. It was suggested more than once that we should test > all minor versions in between, but this was ruled out based on the > additional CI resources it would consume for minimal gain. Instead > we deemed that testing our target version and the latest available > would give us sufficient confidence that, if those worked, the > versions in between them were likely fine as well. Based on that, I > think the versions projects claim to work with should be contiguous > ranges, not contiguous lists of the exact versions tested (noting > that those aren't particularly *exact* versions to begin with). > This is precisely my expectation: if we support 3.6 and 3.8, it's reasonable to suggest we support 3.7. Not supporting it gains us nothing. 
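Concretely, keeping the range contiguous just means the classifier list in setup.cfg keeps the middle version even though only the ends are gated -- a minimal sketch for a hypothetical project:

    classifier =
        Programming Language :: Python :: 3
        Programming Language :: Python :: 3.6
        Programming Language :: Python :: 3.7
        Programming Language :: Python :: 3.8

i.e. we would only drop 3.7 once the tested range no longer straddles it.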
Dmitry > > Apologies for the lack of references to old discussions, I can > probably dig some up from the ML and TC meetings several years back > of folks think it will help inform this further. > -- > Jeremy Stanley > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed Jan 6 11:26:35 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 6 Jan 2021 12:26:35 +0100 Subject: [all][tc] Thoughts on Python 3.7 support In-Reply-To: References: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> Message-ID: Sorry for top posting but just a general remark: Do note Debian 10 is using Python 3.7 and that is what Kolla is testing too. I know Debian is not considered a tested platform but people use it successfully. My opinion is, therefore, that we should keep 3.7 in classifiers. -yoctozepto On Wed, Jan 6, 2021 at 11:37 AM Dmitry Tantsur wrote: > > > > On Tue, Jan 5, 2021 at 10:53 PM Jeremy Stanley wrote: >> >> On 2021-01-05 22:32:58 +0100 (+0100), Pierre Riteau wrote: >> > There have been many patches submitted to drop the Python 3.7 >> > classifier from setup.cfg: >> > https://review.opendev.org/q/%2522remove+py37%2522 >> > The justification is that Wallaby tested runtimes only include 3.6 and 3.8. >> > >> > Most projects are merging these patches, but I've seen a couple of >> > objections from ironic and horizon: >> > >> > - https://review.opendev.org/c/openstack/python-ironicclient/+/769044 >> > - https://review.opendev.org/c/openstack/horizon/+/769237 >> > >> > What are the thoughts of the TC and of the overall community on this? >> > Should we really drop these classifiers when there are no >> > corresponding CI jobs, even though more Python versions may well be >> > supported? >> >> My recollection of the many discussions we held was that the runtime >> document would recommend the default python3 available in our >> targeted platforms, but that we would also make a best effort to >> test with the latest python3 available to us at the start of the >> cycle as well. It was suggested more than once that we should test >> all minor versions in between, but this was ruled out based on the >> additional CI resources it would consume for minimal gain. Instead >> we deemed that testing our target version and the latest available >> would give us sufficient confidence that, if those worked, the >> versions in between them were likely fine as well. Based on that, I >> think the versions projects claim to work with should be contiguous >> ranges, not contiguous lists of the exact versions tested (noting >> that those aren't particularly *exact* versions to begin with). > > > This is precisely my expectation: if we support 3.6 and 3.8, it's reasonable to suggest we support 3.7. Not supporting it gains us nothing. > > Dmitry > >> >> >> Apologies for the lack of references to old discussions, I can >> probably dig some up from the ML and TC meetings several years back >> of folks think it will help inform this further. 
>> -- >> Jeremy Stanley > > > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill From ionut at fleio.com Wed Jan 6 11:37:23 2021 From: ionut at fleio.com (Ionut Biru) Date: Wed, 6 Jan 2021 13:37:23 +0200 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: Hi Erik, Thanks a lot for the patch. Indeed 769471 fixes my problem at first glance. I'll let it run for a couple of days. On Wed, Jan 6, 2021 at 12:23 PM Erik Olof Gunnar Andersson < eandersson at blizzard.com> wrote: > I pushed a couple of patches that you can try out. > > This is the most likely culprit. > https://review.opendev.org/c/openstack/magnum/+/769471 - Re-use rpc client > > I also created this one, but doubt this is an issue as the implementation > here is the same as I use in Designate > https://review.opendev.org/c/openstack/magnum/+/769457 - [WIP] Singleton > notifier > > Finally I also created a PR to add magnum-api testing using uwsgi. > https://review.opendev.org/c/openstack/magnum/+/769450 > > Let me know if any of these patches help! > > ------------------------------ > *From:* Ionut Biru > *Sent:* Tuesday, January 5, 2021 8:36 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* Spyros Trigazis ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > Hi, > > I found this story: https://storyboard.openstack.org/#!/story/2008308 > > regarding disabling cluster update notifications in rabbitmq. > > I think this will help me. > > On Tue, Jan 5, 2021 at 12:21 PM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Sorry, being repetitive here, but maybe try adding this to your magnum > config as well. If you have A LOT of cores it could add up to a crazy > amount of connections. > > [conductor] > workers = 2 > > ------------------------------ > *From:* Ionut Biru > *Sent:* Tuesday, January 5, 2021 1:50 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* Spyros Trigazis ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > Hi, > > Here is my config. maybe something is fishy. > > I did have around 300 messages in the queue in notification.info > > and notification.err and I purged them. > > https://paste.xinu.at/woMt/ > > > > > On Tue, Jan 5, 2021 at 11:23 AM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Yea - tested locally as well and wasn't able to reproduce it either. I > changed the health service job to run every second and maxed out at about > 42 connections to RabbitMQ with two conductor workers. 
> > /etc/magnum/magnun.conf > > [conductor] > workers = 2 > > > ------------------------------ > *From:* Spyros Trigazis > *Sent:* Tuesday, January 5, 2021 12:59 AM > *To:* Ionut Biru > *Cc:* Erik Olof Gunnar Andersson ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > On Tue, Jan 5, 2021 at 9:36 AM Ionut Biru wrote: > > Hi, > > I tried with process=1 and it reached 1016 connections to rabbitmq. > lsof > https://paste.xinu.at/jGg/ > > > i think it goes into error when it reaches 1024 file descriptors. > > I'm out of ideas of how to resolve this. I only have 3 clusters available > and it's kinda weird and It doesn't scale. > > > No issues here with 100s of clusters. Not sure what doesn't scale. > > * Maybe your rabbit is flooded with notifications that are not consumed? > * You can use way more than 1024 file descriptors, maybe 2^10? > > Spyros > > > On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Sure looks like RabbitMQ. How many workers do have you configured? > > > > Could you try to changing the uwsgi configuration to workers=1 (or > processes=1) and then see if it goes beyond 30 connections to amqp. > > > > *From:* Ionut Biru > *Sent:* Monday, January 4, 2021 4:07 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* feilong ; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > Hi Erik, > > > > Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ > > > > > I have kubernetes 12.0.1 installed in env. > > > > > > On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Maybe something similar to this? > https://github.com/kubernetes-client/python/issues/1158 > > > What does lsof say? > > > > > > > > -- > Ionut Biru - https://fleio.com > > > > > -- > Ionut Biru - https://fleio.com > > > > > -- > Ionut Biru - https://fleio.com > > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Jan 6 11:47:08 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 6 Jan 2021 12:47:08 +0100 Subject: [release] Status: RED - pip resolver issue with publish-openstack-releasenotes-python3 In-Reply-To: References: Message-ID: @release mangaers: For now I think we can restart validating projects that aren't present in the previous list (c.f http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019612.html ). Normally they aren't impacted by this problem. I'll move to the "Orange" state when all the projects of list will be patched or at least when a related patch will be present in the list (c.f https://review.opendev.org/q/topic:%2522fix-relmgt-pip-doc%2522+(status:open+OR+status:merged)). For now my monitoring indicates that ~50 projects still need related changes. So, for now, please, ensure that the repos aren't listed here before validate a patch http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019612.html Thanks to everyone who helped here! Much appreciated! Le mar. 5 janv. 2021 à 12:05, Martin Chacon Piza a écrit : > Hi Herve, > > I have added this topic to the Monasca irc meeting today. > > Thank you, > Martin (chaconpiza) > > > > El lun, 4 de ene. 
de 2021 a la(s) 18:30, Herve Beraud (hberaud at redhat.com) > escribió: > >> Thanks all! >> >> Here we can track our advancement: >> >> https://review.opendev.org/q/topic:%22fix-relmgt-pip-doc%22+(status:open%20OR%20status:merged) >> >> Le lun. 4 janv. 2021 à 18:02, Radosław Piliszek < >> radoslaw.piliszek at gmail.com> a écrit : >> >>> On Mon, Jan 4, 2021 at 4:34 PM Herve Beraud wrote: >>> > >>> > Here is the filtered list of projects that meet the conditions leading >>> to the bug, and who should be fixed to completely solve our issue: >>> > >>> > ... >>> > etcd3gw >>> > ... >>> > python-masakariclient >>> > ... >>> > >>> > Notice that some of these projects aren't deliverables but if possible >>> it could be worth fixing them too. >>> > >>> > These projects have an incompatibility between entries in their >>> test-requirements.txt, and they're missing a doc/requirements.txt file. >>> > >>> > The more straightforward path to unlock our job >>> "publish-openstack-releasenotes-python3" is to create a >>> doc/requirements.txt file that only contains the needed dependencies to >>> reduce the possibility of pip resolver issues. I personally think that we >>> could use the latest allowed version of requirements (sphinx, reno, etc...). >>> > >>> > I propose to track the related advancement by using the >>> "fix-relmgt-pip-doc" gerrit topic, when all the projects will be fixed we >>> would be able to update our status. >>> > >>> > Also it could be worth fixing test-requirements.txt incompatibilities >>> but this task is more on the projects teams sides and this task could be >>> done with a follow up patch. >>> > >>> > Thoughts? >>> >>> Thanks, Hervé! >>> >>> Done for python-masakariclient in [1]. >>> >>> etcd3gw needs more love in general but I will have this split in mind. 
>>> >>> [1] >>> https://review.opendev.org/c/openstack/python-masakariclient/+/769163 >>> >>> -yoctozepto >>> >>> >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> > > -- > *Martín Chacón Pizá* > *chacon.piza at gmail.com * > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Jan 6 12:00:31 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 6 Jan 2021 13:00:31 +0100 Subject: [oslo][TC] Dropping lower-constraints testing In-Reply-To: References: <1767687c4ad.10b9514f7310538.2670005890023858557@ghanshyammann.com> Message-ID: Hello everyone, Here is an email to officialize our position on Olso concerning our lower-constraints testing policy. Indeed we reached a consensus during our last meeting and all the Oslo cores who spoke agreed with this decision to drop the L.C tests [1]. So we already started to drop lower-constraints jobs on Oslo to unlock our gates. Thanks to everyone who joined the discussion and thanks for reading! [1] http://eavesdrop.openstack.org/meetings/oslo/2021/oslo.2021-01-04-16.00.log.txt #topic Dropping lower-constraints testing Le ven. 18 déc. 2020 à 20:34, Moises Guimaraes de Medeiros < moguimar at redhat.com> a écrit : > +1 > > On Fri, Dec 18, 2020 at 4:46 PM Ghanshyam Mann > wrote: > >> ---- On Fri, 18 Dec 2020 08:54:26 -0600 hberaud >> wrote ---- >> > Hello, >> > As you already surely know, we (the openstack project) currently face >> some issues with our lower-constraints jobs due to pip's latest resolver >> feature. 
>> > By discussing this topic with Thierry Carrez (ttx) from an oslo point >> of view, we reached the same conclusion that it is more appropriate to drop >> this kind of tests because the complexity and recurring pain neededto >> maintain them now exceeds the benefits provided by this mechanismes. >> > Also we should notice that the number of active maintainers is >> declining, so we think that this is the shortest path to solve this problem >> on oslo for now and for the future too. >> > >> > In a first time I tried to fix our gates by fixing our >> lower-constraints project by project but with around ~36 projects to >> maintain this is a painful task, especially due to nested oslo layers >> inside oslo himself... I saw the face of the hell of dependencies. >> > >> > So, in a second time I submitted a series of patches to drop these >> tests [1]. >> > But before moving further with that we would appreciate discussing >> this with the TC. For now the patches are ready and we just have to push >> the good button accordingly to our choices (+W or abandon). >> > >> > Normally all the oslo projects that need to be fixed are covered by >> [1]. >> > >> > Thoughts? >> >> +1, I think it's not worth to keep maintaining them which is taking too >> much effort. >> >> -gmann >> >> > >> > Thanks for reading. >> > >> > [1] >> https://review.opendev.org/q/topic:%22oslo_lc_drop%22+(status:open%20OR%20status:merged) >> > -- >> > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps:// >> github.com/4383/https://twitter.com/4383hberaud >> > -----BEGIN PGP SIGNATURE----- >> > >> > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> > v6rDpkeNksZ9fFSyoY2o >> > =ECSj >> > -----END PGP SIGNATURE----- >> > >> > >> >> > > -- > > Moisés Guimarães > > Software Engineer > > Red Hat > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Wed Jan 6 12:43:18 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 06 Jan 2021 13:43:18 +0100 Subject: [oslo][TC] Dropping lower-constraints testing In-Reply-To: References: Message-ID: <7679064.EuSn42jlT6@p1> Hi, Dnia środa, 6 stycznia 2021 13:00:31 CET Herve Beraud pisze: > Hello everyone, > > Here is an email to officialize our position on Olso concerning our > lower-constraints testing policy. > Indeed we reached a consensus during our last meeting and all the Oslo > cores who spoke agreed with this decision to drop the L.C tests [1]. > > So we already started to drop lower-constraints jobs on Oslo to unlock our > gates. Sorry if my question shouldn't be in that thread but does that mean that other projects should/can drop lower constraints jobs too? Is it some general, OpenStack wide decision or it depends on the project? > > Thanks to everyone who joined the discussion and thanks for reading! > > [1] > http://eavesdrop.openstack.org/meetings/oslo/2021/oslo.2021-01-04-16.00.log. > txt #topic Dropping lower-constraints testing > > Le ven. 18 déc. 2020 à 20:34, Moises Guimaraes de Medeiros < > > moguimar at redhat.com> a écrit : > > +1 > > > > On Fri, Dec 18, 2020 at 4:46 PM Ghanshyam Mann > > > > wrote: > >> ---- On Fri, 18 Dec 2020 08:54:26 -0600 hberaud > >> > >> wrote ---- > >> > >> > Hello, > >> > As you already surely know, we (the openstack project) currently face > >> > >> some issues with our lower-constraints jobs due to pip's latest resolver > >> feature. > >> > >> > By discussing this topic with Thierry Carrez (ttx) from an oslo point > >> > >> of view, we reached the same conclusion that it is more appropriate to > >> drop > >> this kind of tests because the complexity and recurring pain neededto > >> maintain them now exceeds the benefits provided by this mechanismes. > >> > >> > Also we should notice that the number of active maintainers is > >> > >> declining, so we think that this is the shortest path to solve this > >> problem > >> on oslo for now and for the future too. > >> > >> > In a first time I tried to fix our gates by fixing our > >> > >> lower-constraints project by project but with around ~36 projects to > >> maintain this is a painful task, especially due to nested oslo layers > >> inside oslo himself... I saw the face of the hell of dependencies. > >> > >> > So, in a second time I submitted a series of patches to drop these > >> > >> tests [1]. > >> > >> > But before moving further with that we would appreciate discussing > >> > >> this with the TC. For now the patches are ready and we just have to push > >> the good button accordingly to our choices (+W or abandon). > >> > >> > Normally all the oslo projects that need to be fixed are covered by > >> > >> [1]. > >> > >> > Thoughts? > >> > >> +1, I think it's not worth to keep maintaining them which is taking too > >> much effort. > >> > >> -gmann > >> > >> > Thanks for reading. 
> >> > > >> > [1] > >> > >> https://review.opendev.org/q/topic:%22oslo_lc_drop%22+(status:open%20OR%2 > >> 0status:merged)>> > >> > -- > >> > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps:// > >> > >> github.com/4383/https://twitter.com/4383hberaud > >> > >> > -----BEGIN PGP SIGNATURE----- > >> > > >> > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > >> > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > >> > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > >> > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > >> > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > >> > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > >> > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > >> > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > >> > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > >> > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > >> > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > >> > v6rDpkeNksZ9fFSyoY2o > >> > =ECSj > >> > -----END PGP SIGNATURE----- > > > > -- > > > > Moisés Guimarães > > > > Software Engineer > > > > Red Hat > > > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From victoria at vmartinezdelacruz.com Wed Jan 6 12:46:09 2021 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Wed, 6 Jan 2021 09:46:09 -0300 Subject: [all][tc] Thoughts on Python 3.7 support In-Reply-To: References: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> Message-ID: +1 I don't see good reasons for removing py3.7 Thanks! On Wed, Jan 6, 2021 at 8:28 AM Radosław Piliszek < radoslaw.piliszek at gmail.com> wrote: > Sorry for top posting but just a general remark: > > Do note Debian 10 is using Python 3.7 and that is what Kolla is testing > too. > I know Debian is not considered a tested platform but people use it > successfully. > > My opinion is, therefore, that we should keep 3.7 in classifiers. > > -yoctozepto > > On Wed, Jan 6, 2021 at 11:37 AM Dmitry Tantsur > wrote: > > > > > > > > On Tue, Jan 5, 2021 at 10:53 PM Jeremy Stanley > wrote: > >> > >> On 2021-01-05 22:32:58 +0100 (+0100), Pierre Riteau wrote: > >> > There have been many patches submitted to drop the Python 3.7 > >> > classifier from setup.cfg: > >> > https://review.opendev.org/q/%2522remove+py37%2522 > >> > The justification is that Wallaby tested runtimes only include 3.6 > and 3.8. > >> > > >> > Most projects are merging these patches, but I've seen a couple of > >> > objections from ironic and horizon: > >> > > >> > - https://review.opendev.org/c/openstack/python-ironicclient/+/769044 > >> > - https://review.opendev.org/c/openstack/horizon/+/769237 > >> > > >> > What are the thoughts of the TC and of the overall community on this? > >> > Should we really drop these classifiers when there are no > >> > corresponding CI jobs, even though more Python versions may well be > >> > supported? 
> >> > >> My recollection of the many discussions we held was that the runtime > >> document would recommend the default python3 available in our > >> targeted platforms, but that we would also make a best effort to > >> test with the latest python3 available to us at the start of the > >> cycle as well. It was suggested more than once that we should test > >> all minor versions in between, but this was ruled out based on the > >> additional CI resources it would consume for minimal gain. Instead > >> we deemed that testing our target version and the latest available > >> would give us sufficient confidence that, if those worked, the > >> versions in between them were likely fine as well. Based on that, I > >> think the versions projects claim to work with should be contiguous > >> ranges, not contiguous lists of the exact versions tested (noting > >> that those aren't particularly *exact* versions to begin with). > > > > > > This is precisely my expectation: if we support 3.6 and 3.8, it's > reasonable to suggest we support 3.7. Not supporting it gains us nothing. > > > > Dmitry > > > >> > >> > >> Apologies for the lack of references to old discussions, I can > >> probably dig some up from the ML and TC meetings several years back > >> of folks think it will help inform this further. > >> -- > >> Jeremy Stanley > > > > > > > > -- > > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > > Commercial register: Amtsgericht Muenchen, HRB 153243, > > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jan 6 13:21:07 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 06 Jan 2021 13:21:07 +0000 Subject: [oslo][TC] Dropping lower-constraints testing In-Reply-To: <7679064.EuSn42jlT6@p1> References: <7679064.EuSn42jlT6@p1> Message-ID: <2f450b85dae98c9c5ae7d14cd2e734036bf78889.camel@redhat.com> On Wed, 2021-01-06 at 13:43 +0100, Slawek Kaplonski wrote: > Hi, > > Dnia środa, 6 stycznia 2021 13:00:31 CET Herve Beraud pisze: > > Hello everyone, > > > > Here is an email to officialize our position on Olso concerning our > > lower-constraints testing policy. > > Indeed we reached a consensus during our last meeting and all the Oslo > > cores who spoke agreed with this decision to drop the L.C tests [1]. > > > > So we already started to drop lower-constraints jobs on Oslo to unlock our > > gates. > > Sorry if my question shouldn't be in that thread but does that mean that other > projects should/can drop lower constraints jobs too? Is it some general, > OpenStack wide decision or it depends on the project? lower constratis is not a required part fo the PTI https://governance.openstack.org/tc/reference/pti/python.html https://governance.openstack.org/tc/reference/project-testing-interface.html so its technially a per project desision as far as i am aware. while project were encouraged to adopt lower constratit testing we did not require all project to provide it and i belive there are some that dont unless im mistaken. so neutron could drop lc testing too i belive if it desired too. unless im misinterperting things > > > > > Thanks to everyone who joined the discussion and thanks for reading! > > > > [1] > > http://eavesdrop.openstack.org/meetings/oslo/2021/oslo.2021-01-04-16.00.log. > > txt #topic Dropping lower-constraints testing > > > > Le ven. 18 déc. 
2020 à 20:34, Moises Guimaraes de Medeiros < > > > > moguimar at redhat.com> a écrit : > > > +1 > > > > > > On Fri, Dec 18, 2020 at 4:46 PM Ghanshyam Mann > > > > > > wrote: > > > >  ---- On Fri, 18 Dec 2020 08:54:26 -0600 hberaud > > > > > > > > wrote ---- > > > > > > > >  > Hello, > > > >  > As you already surely know, we (the openstack project) currently face > > > > > > > > some issues with our lower-constraints jobs due to pip's latest resolver > > > > feature. > > > > > > > >  > By discussing this topic with Thierry Carrez (ttx) from an oslo point > > > > > > > > of view, we reached the same conclusion that it is more appropriate to > > > > drop > > > > this kind of tests because the complexity and recurring pain neededto > > > > maintain them now exceeds the benefits provided by this mechanismes. > > > > > > > >  > Also we should notice that the number of active maintainers is > > > > > > > > declining, so we think that this is the shortest path to solve this > > > > problem > > > > on oslo for now and for the future too. > > > > > > > >  > In a first time I tried to fix our gates by fixing our > > > > > > > > lower-constraints project by project but with around ~36 projects to > > > > maintain this is a painful task, especially due to nested oslo layers > > > > inside oslo himself... I saw the face of the hell of dependencies. > > > > > > > >  > So, in a second time I submitted a series of patches to drop these > > > > > > > > tests [1]. > > > > > > > >  > But before moving further with that we would appreciate discussing > > > > > > > > this with the TC. For now the patches are ready and we just have to push > > > > the good button accordingly to our choices (+W or abandon). > > > > > > > >  > Normally all the oslo projects that need to be fixed are covered by > > > > > > > > [1]. > > > > > > > >  > Thoughts? > > > > > > > > +1, I think it's not worth to keep maintaining them which is taking too > > > > much effort. > > > > > > > > -gmann > > > > > > > >  > Thanks for reading. 
> > > >  > > > > >  > [1] > > > > > > > > https://review.opendev.org/q/topic:%22oslo_lc_drop%22+(status:open%20OR%2 > > > > 0status:merged)>> > > > >  > -- > > > >  > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps:// > > > > > > > > github.com/4383/https://twitter.com/4383hberaud > > > > > > > >  > -----BEGIN PGP SIGNATURE----- > > > >  > > > > >  > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > >  > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > >  > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > >  > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > >  > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > >  > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > >  > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > >  > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > >  > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > >  > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > >  > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > >  > v6rDpkeNksZ9fFSyoY2o > > > >  > =ECSj > > > >  > -----END PGP SIGNATURE----- > > > > > > -- > > > > > > Moisés Guimarães > > > > > > Software Engineer > > > > > > Red Hat > > > > > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > From smooney at redhat.com Wed Jan 6 13:27:28 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 06 Jan 2021 13:27:28 +0000 Subject: [all][tc] Thoughts on Python 3.7 support In-Reply-To: References: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> Message-ID: On Wed, 2021-01-06 at 09:46 -0300, Victoria Martínez de la Cruz wrote: > +1 > > I don't see good reasons for removing py3.7 based on teh agreed testing runtimes for wallaby https://github.com/openstack/governance/blob/master/reference/runtimes/wallaby.rst there is no requiremetn for project to maintain testing for py 3.7 but that does not mean the cant elect to test it as an option addtional runtime provided they test teh minium reuiqred vers which are python 3.6 and 3.8 py 3.7 could be maintined as an optional runtime as we do for python 3.9 but i think part of the motivation of droping the 3.7 jobs is to conserve ci bandwith in general and ensure we have enough to test with 3.9 were we can. > > Thanks! > > On Wed, Jan 6, 2021 at 8:28 AM Radosław Piliszek < > radoslaw.piliszek at gmail.com> wrote: > > > Sorry for top posting but just a general remark: > > > > Do note Debian 10 is using Python 3.7 and that is what Kolla is testing > > too. > > I know Debian is not considered a tested platform but people use it > > successfully. > > > > My opinion is, therefore, that we should keep 3.7 in classifiers. > > > > -yoctozepto > > > > On Wed, Jan 6, 2021 at 11:37 AM Dmitry Tantsur > > wrote: > > > > > > > > > > > > On Tue, Jan 5, 2021 at 10:53 PM Jeremy Stanley > > wrote: > > > > > > > > On 2021-01-05 22:32:58 +0100 (+0100), Pierre Riteau wrote: > > > > > There have been many patches submitted to drop the Python 3.7 > > > > > classifier from setup.cfg: > > > > > https://review.opendev.org/q/%2522remove+py37%2522 > > > > > The justification is that Wallaby tested runtimes only include 3.6 > > and 3.8. 
> > > > > > > > > > Most projects are merging these patches, but I've seen a couple of > > > > > objections from ironic and horizon: > > > > > > > > > > - https://review.opendev.org/c/openstack/python-ironicclient/+/769044 > > > > > - https://review.opendev.org/c/openstack/horizon/+/769237 > > > > > > > > > > What are the thoughts of the TC and of the overall community on this? > > > > > Should we really drop these classifiers when there are no > > > > > corresponding CI jobs, even though more Python versions may well be > > > > > supported? > > > > > > > > My recollection of the many discussions we held was that the runtime > > > > document would recommend the default python3 available in our > > > > targeted platforms, but that we would also make a best effort to > > > > test with the latest python3 available to us at the start of the > > > > cycle as well. It was suggested more than once that we should test > > > > all minor versions in between, but this was ruled out based on the > > > > additional CI resources it would consume for minimal gain. Instead > > > > we deemed that testing our target version and the latest available > > > > would give us sufficient confidence that, if those worked, the > > > > versions in between them were likely fine as well. Based on that, I > > > > think the versions projects claim to work with should be contiguous > > > > ranges, not contiguous lists of the exact versions tested (noting > > > > that those aren't particularly *exact* versions to begin with). > > > > > > > > > This is precisely my expectation: if we support 3.6 and 3.8, it's > > reasonable to suggest we support 3.7. Not supporting it gains us nothing. > > > > > > Dmitry > > > > > > > > > > > > > > > Apologies for the lack of references to old discussions, I can > > > > probably dig some up from the ML and TC meetings several years back > > > > of folks think it will help inform this further. > > > > -- > > > > Jeremy Stanley > > > > > > > > > > > > -- > > > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > > > Commercial register: Amtsgericht Muenchen, HRB 153243, > > > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > > O'Neill > > > > From eblock at nde.ag Wed Jan 6 13:48:22 2021 From: eblock at nde.ag (Eugen Block) Date: Wed, 06 Jan 2021 13:48:22 +0000 Subject: [nova] workarounds and operator experience around bug 1522307/1908133 In-Reply-To: <7b4f6f10e682698dfaed22a86397f5b174fed7e8.camel@redhat.com> References: <7b4f6f10e682698dfaed22a86397f5b174fed7e8.camel@redhat.com> Message-ID: <20210106134822.Horde.dGo06NkPF_G8_AogeK9L2i7@webmail.nde.ag> Hi, we're using OpenStack with Ceph in production and also have customers doing that. From my point of view fixing nova to be able to deal with shared storage of course would improve many things, but it doesn't liberate you from monitoring your systems. Filling up a ceph cluster should be avoided and therefore proper monitoring is required. I assume you were able to resolve the frozen instances? Regards, Eugen Zitat von Sean Mooney : > On Tue, 2021-01-05 at 14:17 -0300, Rodrigo Barbieri wrote: >> Hi Nova folks and OpenStack operators! >> >> I have had some trouble recently where while using the "images_type = rbd" >> libvirt option my ceph cluster got filled up without I noticing and froze >> all my nova services and instances. >> >> I started digging and investigating why and how I could prevent or >> workaround this issue, but I didn't find a very reliable clean way. 
>> >> I documented all my steps and investigation in bug 1908133 [0]. It has been >> marked as a duplicate of 1522307 [1] which has been around for quite some >> time, so I am wondering if any operators have been using nova + ceph in >> production with "images_type = rbd" config set and how you have been >> handling/working around the issue. > > this is indeed a known issue and the long term plan to fix it was to > track shared storage > as a sharing resource provider in placement. that never happened so > there is currently no mechanism > available to prevent this explicitly in nova. > > the disk filter which is no longer used could prevent the boot of a > vm that would fill the ceph pool but > it could not protect against two concurrent requests from filling the pool. > > placement can protect against that due to the transactional nature of > allocations which serialise > all resource usage however since each host reports the total size of > the ceph pool as its local storage that won't work out of the box. > > as a quick hack what you can do is set the > [DEFAULT]/disk_allocation_ratio=(1/number of compute nodes) > https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.disk_allocation_ratio > on each of your compute agents' configs. > > > that will prevent oversubscription however it has other negative side effects. > mainly that you will fail to schedule instances that could boot if a > host exceeds its 1/n usage > so unless you have perfectly balanced consumption this is not a good approach. > > a better approach but one that requires external scripting is to have > a cron job that will update the reserved > usage of each of the disk_gb inventories to the actual amount of > storage allocated from the pool. > > the real fix however is for nova to track its shared usage in > placement correctly as a sharing resource provider. > > it's possible you might be able to do that via the provider.yaml file > > by overriding the local disk_gb to 0 on all compute nodes > then creating a single sharing resource provider of disk_gb that > models the ceph pool. > > https://specs.openstack.org/openstack/nova-specs/specs/ussuri/approved/provider-config-file.html > currently that does not support the addition of providers to placement > aggregates so while it could be used to 0 out the compute node > disk inventories and to create a sharing provider with the > MISC_SHARES_VIA_AGGREGATE trait it can't do the final step of mapping > which compute nodes can consume from the sharing provider via the > aggregate but you could do that step manually. > that assumes that "sharing resource providers" actually work. > > > basically what it comes down to today is you need to monitor the > available resources yourself externally and ensure you never run out of > space. > that sucks but until we properly track things in placement there is > nothing we can really do. > the two approaches i suggested above might work for a subset of > use cases but really this is a feature that needs native support in > nova to address properly. > >> >> Thanks in advance! >> >> [0] https://bugs.launchpad.net/nova/+bug/1908133 >> [1] https://bugs.launchpad.net/nova/+bug/1522307 >> From fungi at yuggoth.org Wed Jan 6 15:11:42 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 6 Jan 2021 15:11:42 +0000 Subject: [all][tc] Thoughts on Python 3.7 support In-Reply-To: References: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> Message-ID: <20210106151141.iqeedudrfkogwnka@yuggoth.org> On 2021-01-06 13:27:28 +0000 (+0000), Sean Mooney wrote: [...]
> there is no requiremetn for project to maintain testing for py 3.7 > but that does not mean the cant elect to test it as an option > addtional runtime provided they test teh minium reuiqred vers > which are python 3.6 and 3.8 > > py 3.7 could be maintined as an optional runtime as we do for > python 3.9 but i think part of the motivation of droping the 3.7 > jobs is to conserve ci bandwith in general and ensure we have > enough to test with 3.9 were we can. [...] Sure, but that wasn't the question, it was in fact: On 2021-01-05 22:32:58 +0100 (+0100), Pierre Riteau wrote: > There have been many patches submitted to drop the Python 3.7 > classifier from setup.cfg: > https://review.opendev.org/q/%2522remove+py37%2522 The > justification is that Wallaby tested runtimes only include 3.6 and > 3.8. [...] > Should we really drop these classifiers when there are no > corresponding CI jobs, even though more Python versions may well > be supported? [...] -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From emiller at genesishosting.com Wed Jan 6 15:48:19 2021 From: emiller at genesishosting.com (Eric K. Miller) Date: Wed, 6 Jan 2021 09:48:19 -0600 Subject: [kolla-ansible] Upgrading and skipping a release? In-Reply-To: References: <046E9C0290DD9149B106B72FC9156BEA04814FB6@gmsxchsvr01.thecreation.com> Message-ID: <046E9C0290DD9149B106B72FC9156BEA04814FBF@gmsxchsvr01.thecreation.com> > Hi Eric. What you're referring to is a fast-forward upgrade [1]. This > is not officially supported by Kolla Ansible, and not something we > test upstream. There was an effort a few years ago to implement it, > but it stalled. The test matrix does increase somewhat when you allow > FFUs, which is why I believe many projects such as Tripleo define > supported FFU jumps. One thing in particular that may catch you out is > some DB migrations or cleanup processes that happen at runtime may not > get executed. > > In short, I'd suggest going one release at a time. It's a pain, but > upgrades in Kolla are relatively sane. Thank you mark! Much appreciated. We were hoping to avoid any potential pitfalls with the Python 2.x to 3.x transition, but it looks like we need to simply test quick upgrades from one to the next. Regarding the QEMU/KVM kernel module version on the host, I'm assuming we need to plan for VM shutdown/restart since I "think" we may need a newer version for Ussuri or Victoria. Looks like we're running: (on CentOS 7) Compiled against library: libvirt 4.5.0 Using library: libvirt 4.5.0 Using API: QEMU 4.5.0 Running hypervisor: QEMU 2.12.0 Out of curiosity, have you used CloudLinux' KernelCare or any other hot-swap kernel module components to avoid downtime of a KVM host? Eric From hberaud at redhat.com Wed Jan 6 15:56:54 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 6 Jan 2021 16:56:54 +0100 Subject: [oslo][TC] Dropping lower-constraints testing In-Reply-To: <2f450b85dae98c9c5ae7d14cd2e734036bf78889.camel@redhat.com> References: <7679064.EuSn42jlT6@p1> <2f450b85dae98c9c5ae7d14cd2e734036bf78889.camel@redhat.com> Message-ID: Concerning Oslo this is a team decision/position and for now this isn't something general, so, you can still continue to test LC if it works for you. The TC still needs to decide if we should continue with that or not in an official manner. Sorry if my precedent message is misleading some of you. Le mer. 6 janv. 
2021 à 14:25, Sean Mooney a écrit : > On Wed, 2021-01-06 at 13:43 +0100, Slawek Kaplonski wrote: > > Hi, > > > > Dnia środa, 6 stycznia 2021 13:00:31 CET Herve Beraud pisze: > > > Hello everyone, > > > > > > Here is an email to officialize our position on Olso concerning our > > > lower-constraints testing policy. > > > Indeed we reached a consensus during our last meeting and all the Oslo > > > cores who spoke agreed with this decision to drop the L.C tests [1]. > > > > > > So we already started to drop lower-constraints jobs on Oslo to unlock > our > > > gates. > > > > Sorry if my question shouldn't be in that thread but does that mean that > other > > projects should/can drop lower constraints jobs too? Is it some general, > > OpenStack wide decision or it depends on the project? > lower constratis is not a required part fo the PTI > https://governance.openstack.org/tc/reference/pti/python.html > > https://governance.openstack.org/tc/reference/project-testing-interface.html > so its technially a per project desision as far as i am aware. > > while project were encouraged to adopt lower constratit testing we did not > require all project > to provide it and i belive there are some that dont unless im mistaken. > > so neutron could drop lc testing too i belive if it desired too. > unless im misinterperting things > > > > > > > > > Thanks to everyone who joined the discussion and thanks for reading! > > > > > > [1] > > > > http://eavesdrop.openstack.org/meetings/oslo/2021/oslo.2021-01-04-16.00.log > . > > > txt #topic Dropping lower-constraints testing > > > > > > Le ven. 18 déc. 2020 à 20:34, Moises Guimaraes de Medeiros < > > > > > > moguimar at redhat.com> a écrit : > > > > +1 > > > > > > > > On Fri, Dec 18, 2020 at 4:46 PM Ghanshyam Mann < > gmann at ghanshyammann.com> > > > > > > > > wrote: > > > > > ---- On Fri, 18 Dec 2020 08:54:26 -0600 hberaud < > hberaud at redhat.com> > > > > > > > > > > wrote ---- > > > > > > > > > > > Hello, > > > > > > As you already surely know, we (the openstack project) > currently face > > > > > > > > > > some issues with our lower-constraints jobs due to pip's latest > resolver > > > > > feature. > > > > > > > > > > > By discussing this topic with Thierry Carrez (ttx) from an oslo > point > > > > > > > > > > of view, we reached the same conclusion that it is more > appropriate to > > > > > drop > > > > > this kind of tests because the complexity and recurring pain > neededto > > > > > maintain them now exceeds the benefits provided by this > mechanismes. > > > > > > > > > > > Also we should notice that the number of active maintainers is > > > > > > > > > > declining, so we think that this is the shortest path to solve this > > > > > problem > > > > > on oslo for now and for the future too. > > > > > > > > > > > In a first time I tried to fix our gates by fixing our > > > > > > > > > > lower-constraints project by project but with around ~36 projects > to > > > > > maintain this is a painful task, especially due to nested oslo > layers > > > > > inside oslo himself... I saw the face of the hell of dependencies. > > > > > > > > > > > So, in a second time I submitted a series of patches to drop > these > > > > > > > > > > tests [1]. > > > > > > > > > > > But before moving further with that we would appreciate > discussing > > > > > > > > > > this with the TC. For now the patches are ready and we just have > to push > > > > > the good button accordingly to our choices (+W or abandon). 
> > > > > > > > > > > Normally all the oslo projects that need to be fixed are > covered by > > > > > > > > > > [1]. > > > > > > > > > > > Thoughts? > > > > > > > > > > +1, I think it's not worth to keep maintaining them which is > taking too > > > > > much effort. > > > > > > > > > > -gmann > > > > > > > > > > > Thanks for reading. > > > > > > > > > > > > [1] > > > > > > > > > > > https://review.opendev.org/q/topic:%22oslo_lc_drop%22+(status:open%20OR%2 > > > > > 0status:merged)>> > > > > > > -- > > > > > > Hervé BeraudSenior Software Engineer at Red Hatirc: > hberaudhttps:// > > > > > > > > > > github.com/4383/https://twitter.com/4383hberaud > > > > > > > > > > > -----BEGIN PGP SIGNATURE----- > > > > > > > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > > > v6rDpkeNksZ9fFSyoY2o > > > > > > =ECSj > > > > > > -----END PGP SIGNATURE----- > > > > > > > > -- > > > > > > > > Moisés Guimarães > > > > > > > > Software Engineer > > > > > > > > Red Hat > > > > > > > > > > > > > > -- > > > Hervé Beraud > > > Senior Software Engineer at Red Hat > > > irc: hberaud > > > https://github.com/4383/ > > > https://twitter.com/4383hberaud > > > > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed Jan 6 16:30:52 2021 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 6 Jan 2021 10:30:52 -0600 Subject: [oslo][TC] Dropping lower-constraints testing In-Reply-To: References: <7679064.EuSn42jlT6@p1> <2f450b85dae98c9c5ae7d14cd2e734036bf78889.camel@redhat.com> Message-ID: <67f45735-6270-3c33-62d4-ea07ea9285b6@nemebean.com> On 1/6/21 9:56 AM, Herve Beraud wrote: > Concerning Oslo this is a team decision/position and for now this isn't > something general, so, you can still continue to test LC if it works for > you. 
The only issue I see is that based on our discussions, the big problem with lower-constraints testing is that it requires all of your dependencies to have compatible l-c too, so if Oslo drops testing of that it's going to make it extremely difficult for any other OpenStack projects to maintain it. > > The TC still needs to  decide if we should continue with that or not in > an official manner. > > Sorry if my precedent message is misleading some of you. > > Le mer. 6 janv. 2021 à 14:25, Sean Mooney > a écrit : > > On Wed, 2021-01-06 at 13:43 +0100, Slawek Kaplonski wrote: > > Hi, > > > > Dnia środa, 6 stycznia 2021 13:00:31 CET Herve Beraud pisze: > > > Hello everyone, > > > > > > Here is an email to officialize our position on Olso concerning our > > > lower-constraints testing policy. > > > Indeed we reached a consensus during our last meeting and all > the Oslo > > > cores who spoke agreed with this decision to drop the L.C tests > [1]. > > > > > > So we already started to drop lower-constraints jobs on Oslo to > unlock our > > > gates. > > > > Sorry if my question shouldn't be in that thread but does that > mean that other > > projects should/can drop lower constraints jobs too? Is it some > general, > > OpenStack wide decision or it depends on the project? > lower constratis is not a required part fo the PTI > https://governance.openstack.org/tc/reference/pti/python.html > https://governance.openstack.org/tc/reference/project-testing-interface.html > so its technially a per project desision as far as i am aware. > > while project were encouraged to adopt lower constratit testing we > did not require all project > to provide it and i belive there are some that dont unless im mistaken. > > so neutron could drop lc testing too i belive if it desired too. > unless im misinterperting things > > > > > > > > > Thanks to everyone who joined the discussion and thanks for > reading! > > > > > > [1] > > > > http://eavesdrop.openstack.org/meetings/oslo/2021/oslo.2021-01-04-16.00.log. > > > txt #topic Dropping lower-constraints testing > > > > > > Le ven. 18 déc. 2020 à 20:34, Moises Guimaraes de Medeiros < > > > > > > moguimar at redhat.com > a écrit : > > > > +1 > > > > > > > > On Fri, Dec 18, 2020 at 4:46 PM Ghanshyam Mann > > > > > > > > > > wrote: > > > > >  ---- On Fri, 18 Dec 2020 08:54:26 -0600 hberaud > > > > > > > > > > > > wrote ---- > > > > > > > > > >  > Hello, > > > > >  > As you already surely know, we (the openstack project) > currently face > > > > > > > > > > some issues with our lower-constraints jobs due to pip's > latest  resolver > > > > > feature. > > > > > > > > > >  > By discussing this topic with Thierry Carrez (ttx) from > an oslo point > > > > > > > > > > of view, we reached the same conclusion that it is more > appropriate to > > > > > drop > > > > > this kind of tests because the complexity and recurring > pain neededto > > > > > maintain them now exceeds the benefits provided by this > mechanismes. > > > > > > > > > >  > Also we should notice that the number of active > maintainers is > > > > > > > > > > declining, so we think that this is the shortest path to > solve this > > > > > problem > > > > > on oslo for now and for the future too. > > > > > > > > > >  > In a first time I tried to fix our gates by fixing our > > > > > > > > > > lower-constraints project by project but with around ~36 > projects to > > > > > maintain this is a painful task, especially due to nested > oslo layers > > > > > inside oslo himself... I saw the face of the hell of > dependencies. 
> > > > > > > > > >  > So, in a second time I submitted a series of patches to > drop these > > > > > > > > > > tests [1]. > > > > > > > > > >  > But before moving further with that we would appreciate > discussing > > > > > > > > > > this with the TC. For now the patches are ready and we just > have to push > > > > > the good button accordingly to our choices (+W or abandon). > > > > > > > > > >  > Normally all the oslo projects that need to be fixed are > covered by > > > > > > > > > > [1]. > > > > > > > > > >  > Thoughts? > > > > > > > > > > +1, I think it's not worth to keep maintaining them which > is taking too > > > > > much effort. > > > > > > > > > > -gmann > > > > > > > > > >  > Thanks for reading. > > > > >  > > > > > >  > [1] > > > > > > > > > > > https://review.opendev.org/q/topic:%22oslo_lc_drop%22+(status:open%20OR%2 > > > > > 0status:merged)>> > > > > >  > -- > > > > >  > Hervé BeraudSenior Software Engineer at Red Hatirc: > hberaudhttps:// > > > > > > > > > > github.com/4383/https://twitter.com/4383hberaud > > > > > > > > > > >  > -----BEGIN PGP SIGNATURE----- > > > > >  > > > > > >  > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > > >  > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > > >  > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > > >  > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > > >  > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > > >  > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > > >  > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > > >  > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > > >  > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > > >  > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > > >  > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > > >  > v6rDpkeNksZ9fFSyoY2o > > > > >  > =ECSj > > > > >  > -----END PGP SIGNATURE----- > > > > > > > > -- > > > > > > > > Moisés Guimarães > > > > > > > > Software Engineer > > > > > > > > Red Hat > > > > > > > > > > > > > > -- > > > Hervé Beraud > > > Senior Software Engineer at Red Hat > > > irc: hberaud > > > https://github.com/4383/ > > > https://twitter.com/4383hberaud > > > > > > > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From openstack at nemebean.com Wed Jan 6 16:34:35 2021 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 6 Jan 2021 10:34:35 -0600 Subject: [all][tc] Thoughts on Python 3.7 support 
In-Reply-To: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> References: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> Message-ID: <68d4b804-5729-e313-7f29-6c7b14166c5c@nemebean.com> On 1/5/21 3:51 PM, Jeremy Stanley wrote: > On 2021-01-05 22:32:58 +0100 (+0100), Pierre Riteau wrote: >> There have been many patches submitted to drop the Python 3.7 >> classifier from setup.cfg: >> https://review.opendev.org/q/%2522remove+py37%2522 >> The justification is that Wallaby tested runtimes only include 3.6 and 3.8. >> >> Most projects are merging these patches, but I've seen a couple of >> objections from ironic and horizon: >> >> - https://review.opendev.org/c/openstack/python-ironicclient/+/769044 >> - https://review.opendev.org/c/openstack/horizon/+/769237 >> >> What are the thoughts of the TC and of the overall community on this? >> Should we really drop these classifiers when there are no >> corresponding CI jobs, even though more Python versions may well be >> supported? > > My recollection of the many discussions we held was that the runtime > document would recommend the default python3 available in our > targeted platforms, but that we would also make a best effort to > test with the latest python3 available to us at the start of the > cycle as well. It was suggested more than once that we should test > all minor versions in between, but this was ruled out based on the > additional CI resources it would consume for minimal gain. Instead > we deemed that testing our target version and the latest available > would give us sufficient confidence that, if those worked, the > versions in between them were likely fine as well. Based on that, I > think the versions projects claim to work with should be contiguous > ranges, not contiguous lists of the exact versions tested (noting > that those aren't particularly *exact* versions to begin with). > > Apologies for the lack of references to old discussions, I can > probably dig some up from the ML and TC meetings several years back > of folks think it will help inform this further. > For what little it's worth, that jives with my hazy memories of the discussion too. The assumption was that if we tested the upper and lower bounds of our Python versions then the ones in the middle would be unlikely to break. It was a compromise to support multiple versions of Python without spending a ton of testing resources on it. From fungi at yuggoth.org Wed Jan 6 17:21:39 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 6 Jan 2021 17:21:39 +0000 Subject: [oslo][TC] Dropping lower-constraints testing In-Reply-To: <67f45735-6270-3c33-62d4-ea07ea9285b6@nemebean.com> References: <7679064.EuSn42jlT6@p1> <2f450b85dae98c9c5ae7d14cd2e734036bf78889.camel@redhat.com> <67f45735-6270-3c33-62d4-ea07ea9285b6@nemebean.com> Message-ID: <20210106172138.33smfj3ba45nse57@yuggoth.org> On 2021-01-06 10:30:52 -0600 (-0600), Ben Nemec wrote: [...] > The only issue I see is that based on our discussions, the big > problem with lower-constraints testing is that it requires all of > your dependencies to have compatible l-c too, so if Oslo drops > testing of that it's going to make it extremely difficult for any > other OpenStack projects to maintain it. [...] I hardly see how projects could ever expect this to be the case anyway, as the vast majority of OpenStack dependencies are maintained outside the OpenStack community and don't maintain any lower-bounds testing of their own. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Wed Jan 6 17:49:34 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 06 Jan 2021 11:49:34 -0600 Subject: [all][tc] Thoughts on Python 3.7 support In-Reply-To: <68d4b804-5729-e313-7f29-6c7b14166c5c@nemebean.com> References: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> <68d4b804-5729-e313-7f29-6c7b14166c5c@nemebean.com> Message-ID: <176d8d1747f.e25ad718873974.5391430581306589135@ghanshyammann.com> ---- On Wed, 06 Jan 2021 10:34:35 -0600 Ben Nemec wrote ---- > > > On 1/5/21 3:51 PM, Jeremy Stanley wrote: > > On 2021-01-05 22:32:58 +0100 (+0100), Pierre Riteau wrote: > >> There have been many patches submitted to drop the Python 3.7 > >> classifier from setup.cfg: > >> https://review.opendev.org/q/%2522remove+py37%2522 > >> The justification is that Wallaby tested runtimes only include 3.6 and 3.8. > >> > >> Most projects are merging these patches, but I've seen a couple of > >> objections from ironic and horizon: > >> > >> - https://review.opendev.org/c/openstack/python-ironicclient/+/769044 > >> - https://review.opendev.org/c/openstack/horizon/+/769237 > >> > >> What are the thoughts of the TC and of the overall community on this? > >> Should we really drop these classifiers when there are no > >> corresponding CI jobs, even though more Python versions may well be > >> supported? > > > > My recollection of the many discussions we held was that the runtime > > document would recommend the default python3 available in our > > targeted platforms, but that we would also make a best effort to > > test with the latest python3 available to us at the start of the > > cycle as well. It was suggested more than once that we should test > > all minor versions in between, but this was ruled out based on the > > additional CI resources it would consume for minimal gain. Instead > > we deemed that testing our target version and the latest available > > would give us sufficient confidence that, if those worked, the > > versions in between them were likely fine as well. Based on that, I > > think the versions projects claim to work with should be contiguous > > ranges, not contiguous lists of the exact versions tested (noting > > that those aren't particularly *exact* versions to begin with). > > > > Apologies for the lack of references to old discussions, I can > > probably dig some up from the ML and TC meetings several years back > > of folks think it will help inform this further. > > > > For what little it's worth, that jives with my hazy memories of the > discussion too. The assumption was that if we tested the upper and lower > bounds of our Python versions then the ones in the middle would be > unlikely to break. It was a compromise to support multiple versions of > Python without spending a ton of testing resources on it. Exactly, py3.7 is not broken for OpenStack so declaring it not supported is not the right thing. I remember the discussion when we declared the wallaby (probably from Victoria) testing runtime, we decided if we test py3.6 and py3.8 it means we are not going to break py3.7 support so indirectly it is tested and supported. And testing runtime does not mean we have to drop everything else testing means projects are all welcome to keep running the py3.7 testing job on the gate there is no harm in that. In both cases, either project has an explicit py3.7 job or not we should not remove it from classifiers. 
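(For reference, what these patches actually change is just the trove classifier metadata in setup.cfg, something like:

    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.6
    Programming Language :: Python :: 3.7
    Programming Language :: Python :: 3.8

so the 3.7 line is only a statement of what we claim to support; adding or removing it does not change which jobs run in the gate.)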
-gmann > > From gmann at ghanshyammann.com Wed Jan 6 17:59:24 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 06 Jan 2021 11:59:24 -0600 Subject: [all][tc] Dropping lower-constraints testing from all projects Message-ID: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> Hello Everyone, You might have seen the discussion around dropping the lower constraints testing as it becomes more challenging than the current value of doing it. Few of the ML thread around this discussion: - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019521.html - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019390.html As Oslo and many other project dropping or already dropped it, we should decide it for all other projects also otherwise it can be more changing than it is currently. We have not defined it in PTI or testing runtime so it is always up to projects if they still want to keep it but we should decide a general recommendation here. -gmann From gmann at ghanshyammann.com Wed Jan 6 18:00:12 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 06 Jan 2021 12:00:12 -0600 Subject: [oslo][TC] Dropping lower-constraints testing In-Reply-To: <7679064.EuSn42jlT6@p1> References: <7679064.EuSn42jlT6@p1> Message-ID: <176d8db30c5.11e87476b874375.1651955499533176435@ghanshyammann.com> ---- On Wed, 06 Jan 2021 06:43:18 -0600 Slawek Kaplonski wrote ---- > Hi, > > Dnia środa, 6 stycznia 2021 13:00:31 CET Herve Beraud pisze: > > Hello everyone, > > > > Here is an email to officialize our position on Olso concerning our > > lower-constraints testing policy. > > Indeed we reached a consensus during our last meeting and all the Oslo > > cores who spoke agreed with this decision to drop the L.C tests [1]. > > > > So we already started to drop lower-constraints jobs on Oslo to unlock our > > gates. > > Sorry if my question shouldn't be in that thread but does that mean that other > projects should/can drop lower constraints jobs too? Is it some general, > OpenStack wide decision or it depends on the project? Yeah this is a good question till now we have been discussing specific project wise dropping like oslo in this thread and Ironic and other projects in another discussion. I am starting a separate thread to discuss it for all the projects as a general recommendation. -http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019672.html -gmann > > > > > Thanks to everyone who joined the discussion and thanks for reading! > > > > [1] > > http://eavesdrop.openstack.org/meetings/oslo/2021/oslo.2021-01-04-16.00.log. > > txt #topic Dropping lower-constraints testing > > > > Le ven. 18 déc. 2020 à 20:34, Moises Guimaraes de Medeiros < > > > > moguimar at redhat.com> a écrit : > > > +1 > > > > > > On Fri, Dec 18, 2020 at 4:46 PM Ghanshyam Mann > > > > > > wrote: > > >> ---- On Fri, 18 Dec 2020 08:54:26 -0600 hberaud > > >> > > >> wrote ---- > > >> > > >> > Hello, > > >> > As you already surely know, we (the openstack project) currently face > > >> > > >> some issues with our lower-constraints jobs due to pip's latest resolver > > >> feature. > > >> > > >> > By discussing this topic with Thierry Carrez (ttx) from an oslo point > > >> > > >> of view, we reached the same conclusion that it is more appropriate to > > >> drop > > >> this kind of tests because the complexity and recurring pain neededto > > >> maintain them now exceeds the benefits provided by this mechanismes. 
> > >> > > >> > Also we should notice that the number of active maintainers is > > >> > > >> declining, so we think that this is the shortest path to solve this > > >> problem > > >> on oslo for now and for the future too. > > >> > > >> > In a first time I tried to fix our gates by fixing our > > >> > > >> lower-constraints project by project but with around ~36 projects to > > >> maintain this is a painful task, especially due to nested oslo layers > > >> inside oslo himself... I saw the face of the hell of dependencies. > > >> > > >> > So, in a second time I submitted a series of patches to drop these > > >> > > >> tests [1]. > > >> > > >> > But before moving further with that we would appreciate discussing > > >> > > >> this with the TC. For now the patches are ready and we just have to push > > >> the good button accordingly to our choices (+W or abandon). > > >> > > >> > Normally all the oslo projects that need to be fixed are covered by > > >> > > >> [1]. > > >> > > >> > Thoughts? > > >> > > >> +1, I think it's not worth to keep maintaining them which is taking too > > >> much effort. > > >> > > >> -gmann > > >> > > >> > Thanks for reading. > > >> > > > >> > [1] > > >> > > >> https://review.opendev.org/q/topic:%22oslo_lc_drop%22+(status:open%20OR%2 > > >> 0status:merged)>> > > >> > -- > > >> > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps:// > > >> > > >> github.com/4383/https://twitter.com/4383hberaud > > >> > > >> > -----BEGIN PGP SIGNATURE----- > > >> > > > >> > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > >> > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > >> > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > >> > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > >> > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > >> > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > >> > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > >> > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > >> > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > >> > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > >> > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > >> > v6rDpkeNksZ9fFSyoY2o > > >> > =ECSj > > >> > -----END PGP SIGNATURE----- > > > > > > -- > > > > > > Moisés Guimarães > > > > > > Software Engineer > > > > > > Red Hat > > > > > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From strigazi at gmail.com Wed Jan 6 18:22:48 2021 From: strigazi at gmail.com (Spyros Trigazis) Date: Wed, 6 Jan 2021 19:22:48 +0100 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> Message-ID: Should it also be dropped from stable branches? eg in magnum, it blocks the gate for stable/victoria atm. Cheers, Spyros On Wed, Jan 6, 2021 at 7:00 PM Ghanshyam Mann wrote: > Hello Everyone, > > You might have seen the discussion around dropping the lower constraints > testing as it becomes more challenging than the current value of doing it. 
> > Few of the ML thread around this discussion: > > - > http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019521.html > - > http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019390.html > > As Oslo and many other project dropping or already dropped it, we should > decide it for all > other projects also otherwise it can be more changing than it is > currently. > > We have not defined it in PTI or testing runtime so it is always up to > projects if they still > want to keep it but we should decide a general recommendation here. > > -gmann > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Jan 6 18:33:42 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 6 Jan 2021 18:33:42 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> Message-ID: <20210106183342.x53p5vbf5sw4nxz7@yuggoth.org> On 2021-01-06 19:22:48 +0100 (+0100), Spyros Trigazis wrote: > Should it also be dropped from stable branches? > eg in magnum, it blocks the gate for stable/victoria atm. [...] If it's broken, I'd say you have three choices: 1. fix the job on those branches 2. drop the job from those branches 3. EOL those branches It's a bit soon to be considering #3 for stable/victoria, so you're really left with options #1 and #2 there. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mighani6406 at gmail.com Wed Jan 6 07:08:19 2021 From: mighani6406 at gmail.com (mohammad mighani) Date: Wed, 6 Jan 2021 10:38:19 +0330 Subject: failed to launch openstack instance when integrating with odl Message-ID: Hi everyone I tried to integrate Openstack train with Opendaylight magnesium and my references were [1] and [2] but after doing all the steps, the instance failed to launch. I changed port_binding_controller in [ml2_odl] section in the ml2_conf.ini file from pseudo-agentdb-binding to legacy-port-binding and then the instance launched but the status of router interfaces was still down. I have a controller node and a compute node. and Opendaylight runs on the controller node. nova-compute.log: INFO nova.virt.libvirt.driver [-] [instance: 975fa79e-6567-4385-be87-9d12a8eb3e94] Instance destroyed successfully. 2021-01-02 12:33:23.383 25919 ERROR vif_plug_ovs.ovsdb.impl_vsctl [req-a0a7ebf0-7e63-4c60-a8d2-07c05f1aa4f4 04c7685a2166481a9ace54eb5e71f6e5 ca28ee1038254649ad133d5f09f7a186 - default default] Unable to execute ['ovs-vsctl', '--timeout=120', '--oneline', '--format=json', '--db=tcp: 127.0.0.1:6640', '--', '--if-exists', 'del-port', 'br-int', 'tap50eb0b68-a4']. Exception: Unexpected error while running command. Command: ovs-vsctl --timeout=120 --oneline --format=json --db=tcp: 127.0.0.1:6640 -- --if-exists del-port br-int tap50eb0b68-a4 Exit code: 1 Stdout: '' Stderr: 'ovs-vsctl: tcp:127.0.0.1:6640: database connection failed (Connection refused)\n': oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 
controller ovs: Manager "tcp:192.168.222.48:6640" is_connected: true Bridge br-int Controller "tcp:192.168.222.48:6653" is_connected: true fail_mode: secure Port tune52c5c73a50 Interface tune52c5c73a50 type: vxlan options: {key=flow, local_ip="10.0.0.31", remote_ip="10.0.0.11"} Port br-int Interface br-int type: internal Bridge br-ex Port br-ex Interface br-ex type: internal Port ens160 Interface ens160 ovs_version: "2.13.1" compute ovs: Manager "tcp:192.168.222.48:6640" is_connected: true Bridge br-int Controller "tcp:192.168.222.48:6653" is_connected: true fail_mode: secure Port br-int Interface br-int type: internal Port tun34b3712d975 Interface tun34b3712d975 type: vxlan options: {key=flow, local_ip="10.0.0.11", remote_ip="10.0.0.11"} Port tund5123ce5b8a Interface tund5123ce5b8a type: vxlan options: {key=flow, local_ip="10.0.0.11", remote_ip="10.0.0.31"} ovs_version: "2.13.1" -------------- next part -------------- An HTML attachment was scrubbed... URL: From elfosardo at gmail.com Wed Jan 6 09:09:56 2021 From: elfosardo at gmail.com (Riccardo Pittau) Date: Wed, 6 Jan 2021 10:09:56 +0100 Subject: tinyipa cannot boot OS of baremetal node In-Reply-To: References: Message-ID: Hello Ankele, if you're using ironic on production servers I suggest you to build and use ironic-python-agent images based on diskimage-builder as explained here: https://docs.openstack.org/ironic-python-agent-builder/latest/admin/dib.html The tinyipa image is best suited for testing and development and not recommended for production usage. Thanks, Riccardo On Tue, Jan 5, 2021 at 5:45 PM Ankele zhang wrote: > Hi~ > > My Rocky OpenStack platform deployed with official documents, includes > Keystone/Cinder/Neutron/Nova and Ironic. > > I used to boot my baremetal nodes by CoreOS downloaded on > https://tarballs.opendev.org/openstack/ironic-python-agent/coreos/files/ > > > Since I want to customize my own HardwareManager for configuring RAID, I > have build TinyIPA image tinyipa.tar.gz and tinyipa.vmlinuz with > ironic-python-agent-builder(master branch) and ironic-python-agent(rocky > branch). Here are all the products of the build process. > [image: image.png] > Then I used these two images to create the baremetal node, and boot nova > server, but I didn't get the results I wanted, it couldn't enter the > ramdisk and always in 'wait call-back' state. as following > > [image: image.png] > I got nothing in /var/log/ironic/ironig-conductor.log and > /var/log/nova/nova-compute.log > > I don't know if these two image (tinyipa.tar.gz and tinyipa.vmlinuz) are > valid for Ironic. If not, how can I customize HardwareManager? > > Looking forward to hearing from you. > > Ankele > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 126537 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 27968 bytes Desc: not available URL: From dtantsur at redhat.com Wed Jan 6 17:08:57 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 6 Jan 2021 18:08:57 +0100 Subject: tinyipa cannot boot OS of baremetal node In-Reply-To: References: Message-ID: Hi, TinyIPA may not work for real hardware, you need to either build a custom CoreOS image or a DIB-based image. 
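As a rough sketch of the DIB route (going from the ironic-python-agent-builder docs Riccardo linked; the repo URL, branch and output names below are only placeholders, and the DIB_REPOLOCATION/DIB_REPOREF variables assume you keep your custom HardwareManager in your own ironic-python-agent fork):

  pip install ironic-python-agent-builder
  # point diskimage-builder at the IPA tree that carries your hardware manager
  export DIB_REPOLOCATION_ironic_python_agent=https://example.com/your/ironic-python-agent.git
  export DIB_REPOREF_ironic_python_agent=your-hardware-manager-branch
  ironic-python-agent-builder -o my-ipa centos

That should produce my-ipa.kernel and my-ipa.initramfs, which you then use as the node's deploy kernel and ramdisk instead of the tinyipa files.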
Dmitry On Tue, Jan 5, 2021 at 5:39 PM Ankele zhang wrote: > Hi~ > > My Rocky OpenStack platform deployed with official documents, includes > Keystone/Cinder/Neutron/Nova and Ironic. > > I used to boot my baremetal nodes by CoreOS downloaded on > https://tarballs.opendev.org/openstack/ironic-python-agent/coreos/files/ > > > Since I want to customize my own HardwareManager for configuring RAID, I > have build TinyIPA image tinyipa.tar.gz and tinyipa.vmlinuz with > ironic-python-agent-builder(master branch) and ironic-python-agent(rocky > branch). Here are all the products of the build process. > [image: image.png] > Then I used these two images to create the baremetal node, and boot nova > server, but I didn't get the results I wanted, it couldn't enter the > ramdisk and always in 'wait call-back' state. as following > > [image: image.png] > I got nothing in /var/log/ironic/ironig-conductor.log and > /var/log/nova/nova-compute.log > > I don't know if these two image (tinyipa.tar.gz and tinyipa.vmlinuz) are > valid for Ironic. If not, how can I customize HardwareManager? > > Looking forward to hearing from you. > > Ankele > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 126537 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 27968 bytes Desc: not available URL: From gmann at ghanshyammann.com Wed Jan 6 18:55:15 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 06 Jan 2021 12:55:15 -0600 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <20210106183342.x53p5vbf5sw4nxz7@yuggoth.org> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <20210106183342.x53p5vbf5sw4nxz7@yuggoth.org> Message-ID: <176d90d9718.b0b2e245876074.7056276878731396330@ghanshyammann.com> ---- On Wed, 06 Jan 2021 12:33:42 -0600 Jeremy Stanley wrote ---- > On 2021-01-06 19:22:48 +0100 (+0100), Spyros Trigazis wrote: > > Should it also be dropped from stable branches? > > eg in magnum, it blocks the gate for stable/victoria atm. > [...] > > If it's broken, I'd say you have three choices: > > 1. fix the job on those branches > > 2. drop the job from those branches > > 3. EOL those branches > > It's a bit soon to be considering #3 for stable/victoria, so you're > really left with options #1 and #2 there. I will suggest dropping the job as it can end up spending the same amount of effort to fix the job on stable or master. If we keep fixing on stable then we can fix on the master and backport so it is better to drop from all gates including the stable branch. 
-gmann > -- > Jeremy Stanley > From strigazi at gmail.com Wed Jan 6 19:01:50 2021 From: strigazi at gmail.com (Spyros Trigazis) Date: Wed, 6 Jan 2021 20:01:50 +0100 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <176d90d9718.b0b2e245876074.7056276878731396330@ghanshyammann.com> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <20210106183342.x53p5vbf5sw4nxz7@yuggoth.org> <176d90d9718.b0b2e245876074.7056276878731396330@ghanshyammann.com> Message-ID: Let's decide on a topic and/or story to track it? eg lc_drop Spyros On Wed, Jan 6, 2021 at 7:55 PM Ghanshyam Mann wrote: > ---- On Wed, 06 Jan 2021 12:33:42 -0600 Jeremy Stanley > wrote ---- > > On 2021-01-06 19:22:48 +0100 (+0100), Spyros Trigazis wrote: > > > Should it also be dropped from stable branches? > > > eg in magnum, it blocks the gate for stable/victoria atm. > > [...] > > > > If it's broken, I'd say you have three choices: > > > > 1. fix the job on those branches > > > > 2. drop the job from those branches > > > > 3. EOL those branches > > > > It's a bit soon to be considering #3 for stable/victoria, so you're > > really left with options #1 and #2 there. > > I will suggest dropping the job as it can end up spending the same amount > of > effort to fix the job on stable or master. If we keep fixing on stable then > we can fix on the master and backport so it is better to drop from all > gates > including the stable branch. > > -gmann > > > -- > > Jeremy Stanley > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Wed Jan 6 20:32:43 2021 From: zigo at debian.org (Thomas Goirand) Date: Wed, 6 Jan 2021 21:32:43 +0100 Subject: [neutron] Next step for BGP routed networks over segmented provider infrastructure segments Message-ID: Hi Ryan, and all of the Neutron team, Today, I'm happy to let you know that I've been able to finish the patch and that it's merged: https://review.opendev.org/c/openstack/neutron/+/669395 I also managed to add some docs to Neutron about it: https://docs.openstack.org/neutron/latest/admin/config-bgp-floating-ip-over-l2-segmented-network.html We've used it in a pre-production environment, and it just works as expected, it's kind of great. However, there's some feature gaps that would need to be addressed. Namely: - external-gateway of routers aren't advertized - we can't do direct attach of public IPs to VMs - I failed adding IPv6 dual stack to this setup Let me go into more details for each of these 3 points. 
1/ No BGP advertizing for the router default gateways When doing: openstack router set --external-gateway we then get this type of port: # openstack port show -c binding_vif_details -c binding_vif_type +---------------------+-------------------------------------------------------------------------------------------------------------+ | Field | Value | +---------------------+-------------------------------------------------------------------------------------------------------------+ | binding_vif_details | bridge_name='br-int', connectivity='l2', datapath_type='system', ovs_hybrid_plug='True', port_filter='True' | | binding_vif_type | ovs | +---------------------+-------------------------------------------------------------------------------------------------------------+ which doesn't match the type of port we have for floating IPs: # openstack port show -c binding_vif_details -c binding_vif_type +---------------------+---------+ | Field | Value | +---------------------+---------+ | binding_vif_details | | | binding_vif_type | unbound | +---------------------+---------+ and then, the next HOP for the router gateway isn't advertized over BGP. Do you know how we could get neutron-dynamic-routing to do that advertizing, with the next HOP on the network node(s)? Where should that code be patch? Inside Neutron, or in neutron-dynamic-routing? Is this really related to the port type as I've showed above? 2/ No direct attach to VM ports We can't attach a port with an IP network:routed directly to a VM. I tried to add the subnet type "compute:nova" to the floating IP subnet, but that didn't do it: Neutron refuses to attach the port to a VM. Do you know why? How and what and where should we patch Neutron to fix this? 3/ IPv6 dual stack I tried to setup a dual-stack network, and failed. How should this be done? Should we add v6 subnets to segments and one subnet with the type --service-type 'network:router_gateway' as well? This is what I tried but it didn't work for me. Should tenants create their own v6 subnet out of the v6 subnet pool I provisioned as admin? Cheers, Thomas Goirand (zigo) P.S: Please keep my Infomaniak colleagues as Cc. From zigo at debian.org Wed Jan 6 20:54:22 2021 From: zigo at debian.org (Thomas Goirand) Date: Wed, 6 Jan 2021 21:54:22 +0100 Subject: [all][tc] Thoughts on Python 3.7 support In-Reply-To: References: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> Message-ID: <8d1356d3-7836-19d1-860e-cdad80be71f0@debian.org> On 1/6/21 12:26 PM, Radosław Piliszek wrote: > Sorry for top posting but just a general remark: > > Do note Debian 10 is using Python 3.7 and that is what Kolla is testing too. > I know Debian is not considered a tested platform but people use it > successfully. > > My opinion is, therefore, that we should keep 3.7 in classifiers. > > -yoctozepto I also confirm that I do run Debian 10 in production with all versions of OpenStack from Rocky to Wallaby, and that I will report if I see anything that doesn't work. I also would like to remind everyone that Bullseye is currently running Python 3.9, and that we should switch the upper bound to at least that... 
Cheers, Thomas Goirand (zigo) From zigo at debian.org Wed Jan 6 20:58:03 2021 From: zigo at debian.org (Thomas Goirand) Date: Wed, 6 Jan 2021 21:58:03 +0100 Subject: [oslo][TC] Dropping lower-constraints testing In-Reply-To: References: Message-ID: <5755ea96-5cc7-2756-970e-dcbf5184b2a2@debian.org> On 12/18/20 3:54 PM, hberaud wrote: > In a first time I tried to fix our gates by fixing our lower-constraints > project by project but with around ~36 projects to maintain this is a > painful task, especially due to nested oslo layers inside oslo > himself... I saw the face of the hell of dependencies. Welcome to my world! > Thoughts? Couldn't someone address the dependency loops in Oslo? It's IMO anyway needed. Just my 2 cents, not sure if that helps... Cheers, Thomas Goirand (zigo) From zigo at debian.org Wed Jan 6 21:04:34 2021 From: zigo at debian.org (Thomas Goirand) Date: Wed, 6 Jan 2021 22:04:34 +0100 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> Message-ID: <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> Hi, On 1/6/21 6:59 PM, Ghanshyam Mann wrote: > Hello Everyone, > > You might have seen the discussion around dropping the lower constraints > testing as it becomes more challenging than the current value of doing it. As a downstream distribution package maintainer, I see this as a major regression of the code quality that upstream is shipping. Without l-c tests, there's no assurance of the reality of a lower-bound dependency. So then we're back to 5 years ago, when OpenStack just artificially was setting very high lower bound because we just didn't know... Please don't do it. Cheers, Thomas Goirand (zigo) From dhana.sys at gmail.com Wed Jan 6 21:20:48 2021 From: dhana.sys at gmail.com (Dhanasekar Kandasamy) Date: Thu, 7 Jan 2021 02:50:48 +0530 Subject: [neutron] Performance impact for attaching Security Group with more number of rules Message-ID: Hi, We have an OpenStack Environment with 5000+ VMs running currently. I want to apply some common Security Group to all my running VMs. - Common Security Group (SEC_GRP_COMMON) has around 700 rules. - This Security Group (SEC_GRP_COMMON) has been shared to all the OpenStack Projects using Role-Based Access Control (RBAC). - Wanted to apply this Security Group (SEC_GRP_COMMON) to all the running VMs in the Cloud *Question 1*: With the above scenario, what will happen if I attach this Security Group(with 700+ rules) to all the 5000+ VMs? Will there be any performance issue/impact for the same (CPU utilization, Memory etc. in the Compute Server or Performance issues in application running in the VMs) *Question 2*: Is there any recommendations or benchmark data for maximum number of rules in the Security Group in OpenStack cloud? Thanks, Dhana -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pierre at stackhpc.com Wed Jan 6 21:23:56 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 6 Jan 2021 22:23:56 +0100 Subject: [all][tc] Thoughts on Python 3.7 support In-Reply-To: <176d8d1747f.e25ad718873974.5391430581306589135@ghanshyammann.com> References: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> <68d4b804-5729-e313-7f29-6c7b14166c5c@nemebean.com> <176d8d1747f.e25ad718873974.5391430581306589135@ghanshyammann.com> Message-ID: On Wed, 6 Jan 2021 at 18:58, Ghanshyam Mann wrote: > > ---- On Wed, 06 Jan 2021 10:34:35 -0600 Ben Nemec wrote ---- > > > > > > On 1/5/21 3:51 PM, Jeremy Stanley wrote: > > > On 2021-01-05 22:32:58 +0100 (+0100), Pierre Riteau wrote: > > >> There have been many patches submitted to drop the Python 3.7 > > >> classifier from setup.cfg: > > >> https://review.opendev.org/q/%2522remove+py37%2522 > > >> The justification is that Wallaby tested runtimes only include 3.6 and 3.8. > > >> > > >> Most projects are merging these patches, but I've seen a couple of > > >> objections from ironic and horizon: > > >> > > >> - https://review.opendev.org/c/openstack/python-ironicclient/+/769044 > > >> - https://review.opendev.org/c/openstack/horizon/+/769237 > > >> > > >> What are the thoughts of the TC and of the overall community on this? > > >> Should we really drop these classifiers when there are no > > >> corresponding CI jobs, even though more Python versions may well be > > >> supported? > > > > > > My recollection of the many discussions we held was that the runtime > > > document would recommend the default python3 available in our > > > targeted platforms, but that we would also make a best effort to > > > test with the latest python3 available to us at the start of the > > > cycle as well. It was suggested more than once that we should test > > > all minor versions in between, but this was ruled out based on the > > > additional CI resources it would consume for minimal gain. Instead > > > we deemed that testing our target version and the latest available > > > would give us sufficient confidence that, if those worked, the > > > versions in between them were likely fine as well. Based on that, I > > > think the versions projects claim to work with should be contiguous > > > ranges, not contiguous lists of the exact versions tested (noting > > > that those aren't particularly *exact* versions to begin with). > > > > > > Apologies for the lack of references to old discussions, I can > > > probably dig some up from the ML and TC meetings several years back > > > of folks think it will help inform this further. > > > > > > > For what little it's worth, that jives with my hazy memories of the > > discussion too. The assumption was that if we tested the upper and lower > > bounds of our Python versions then the ones in the middle would be > > unlikely to break. It was a compromise to support multiple versions of > > Python without spending a ton of testing resources on it. > > > Exactly, py3.7 is not broken for OpenStack so declaring it not supported is not the right thing. > I remember the discussion when we declared the wallaby (probably from Victoria) testing runtime, > we decided if we test py3.6 and py3.8 it means we are not going to break py3.7 support so indirectly > it is tested and supported. > > And testing runtime does not mean we have to drop everything else testing means projects are all > welcome to keep running the py3.7 testing job on the gate there is no harm in that. 
> > In both cases, either project has an explicit py3.7 job or not we should not remove it from classifiers. > > > -gmann Thanks everyone for your input. Then should we request that those patches dropping the 3.7 classifier are abandoned, or reverted if already merged? From pierre at stackhpc.com Wed Jan 6 21:33:38 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 6 Jan 2021 22:33:38 +0100 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> Message-ID: On Wed, 6 Jan 2021 at 19:07, Ghanshyam Mann wrote: > > Hello Everyone, > > You might have seen the discussion around dropping the lower constraints > testing as it becomes more challenging than the current value of doing it. > > Few of the ML thread around this discussion: > > - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019521.html > - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019390.html > > As Oslo and many other project dropping or already dropped it, we should decide it for all > other projects also otherwise it can be more changing than it is currently. > > We have not defined it in PTI or testing runtime so it is always up to projects if they still > want to keep it but we should decide a general recommendation here. I would suggest dropping the lower-constraints job only in projects where it becomes too difficult to maintain. I fixed those jobs in Blazar yesterday. It was a bit painful, but in the process I discovered several requirements were incorrectly defined, as we were using features not available in the minimum version required… From fungi at yuggoth.org Wed Jan 6 21:40:49 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 6 Jan 2021 21:40:49 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> Message-ID: <20210106214048.3gau2fzexn5ivdek@yuggoth.org> On 2021-01-06 22:04:34 +0100 (+0100), Thomas Goirand wrote: [...] > As a downstream distribution package maintainer, I see this as a major > regression of the code quality that upstream is shipping. Without l-c > tests, there's no assurance of the reality of a lower-bound dependency. > > So then we're back to 5 years ago, when OpenStack just artificially was > setting very high lower bound because we just didn't know... > > Please don't do it. The tidbit you're missing here is that we never actually had working lower-bounds checks. The recent update to make pip correctly confirm requested versions of packages get installed, which has caused these jobs to all fall over, proves that. So it's not a regression. I'm personally in favor of doing lower-bounds checking of our software, always have been, but nobody's done the work to correctly implement it. The old jobs we had punted on it, and now we can see clearly that they weren't actually testing what people thought. Properly calculating transitive dependency lower-bounds requires a modified dependency solver with consistent inverse sorting, and that doesn't presently exist. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jp.methot at planethoster.info Wed Jan 6 23:26:24 2021 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Wed, 6 Jan 2021 18:26:24 -0500 Subject: [nova] Nova evacuate issue Message-ID: Hi, We’re running Openstack Rocky on a high-availability setup with neutron openvswitch. The setup has roughly 50 compute nodes and 2 controller nodes. We’ve run into an issue when we’re trying to evacuate a dead compute node where the first instance evacuate goes through, but the second one fails (we evacuate our instances one by one). The reason why the second one fails seems to be because Neutron is trying to plug the port back on the dead compute, as nova instructs it to do. Here’s an example of nova-api log output after compute22 died and we’ve been trying to evacuate an instance. f3839ca64f58ac779f6f810758c0 61e62a49d34a44f9b1161a338a7f1fdd - default default] Creating event network-vif-unplugged:80371c01-930d-4ea2-9d28-14438e948b65 for instance 4aeb7761-cb23-4c51-93dd-79b55afbc7dc on compute22 2021-01-06 13:31:31.750 2858 INFO nova.osapi_compute.wsgi.server [req-4f9b3e17-1a9d-48f0-961a-bbabdf922ad6 0d0ef3839ca64f58ac779f6f810758c0 61e62a49d34a44f9b1161a338a7f1fdd - default default] 10.30.1.224 "POST /v2.1/os-server-external-events HTTP/1.1" status: 200 len: 1091 time: 0.4987640 2021-01-06 13:31:40.145 2863 INFO nova.osapi_compute.wsgi.server [req-abaac9df-7338-4d10-9326-4006021ff54d 6cb55894e59c47b3800f97a27c9c4ee9 ccfa9d8d76b8409f8c5a8d71ce32625a - default default] 10.30.1.224 "GET /v2.1 HTTP/1.1" status: 302 len: 318 time: 0.0072701 2021-01-06 13:31:40.156 2863 INFO nova.osapi_compute.wsgi.server [req-c393e74b-a118-4a98-8a83-be6007913dc0 6cb55894e59c47b3800f97a27c9c4ee9 ccfa9d8d76b8409f8c5a8d71ce32625a - default default] 10.30.1.224 "GET /v2.1/ HTTP/1.1" status: 200 len: 789 time: 0.0070350 2021-01-06 13:31:43.289 2865 INFO nova.osapi_compute.wsgi.server [req-b87268b7-a673-44c1-9162-f9564647ec33 6cb55894e59c47b3800f97a27c9c4ee9 ccfa9d8d76b8409f8c5a8d71ce32625a - default default] 10.30.1.224 "GET /v2.1/servers/4aeb7761-cb23-4c51-93dd-79b55afbc7dc HTTP/1.1" status: 200 len: 5654 time: 2.7543190 2021-01-06 13:31:43.413 2863 INFO nova.osapi_compute.wsgi.server [req-4cab23ba-c5cb-4dda-bf42-bc452d004783 6cb55894e59c47b3800f97a27c9c4ee9 ccfa9d8d76b8409f8c5a8d71ce32625a - default default] 10.30.1.224 "GET /v2.1/servers/4aeb7761-cb23-4c51-93dd-79b55afbc7dc/os-volume_attachments HTTP/1.1" status: 200 len: 770 time: 0.1135709 2021-01-06 13:31:43.883 2865 INFO nova.osapi_compute.wsgi.server [req-f5e5a586-65f3-4798-b03b-98e01326a00b 6cb55894e59c47b3800f97a27c9c4ee9 ccfa9d8d76b8409f8c5a8d71ce32625a - default default] 10.30.1.224 "GET /v2.1/flavors/574a7152-f079-4337-b1eb-b7eca4370b73 HTTP/1.1" status: 200 len: 877 time: 0.5751688 2021-01-06 13:31:47.194 2864 INFO nova.api.openstack.compute.server_external_events [req-7e639b1f-8408-4e8e-9bb8-54588290edfe 0d0ef3839ca64f58ac779f6f810758c0 61e62a49d34a44f9b1161a338a7f1fdd - default default] Creating event network-vif-plugged:80371c01-930d-4ea2-9d28-14438e948b65 for instance 4aeb7761-cb23-4c51-93dd-79b55afbc7dc on compute22 As you can see, Nova "creates an event" as the virtual interface is unplugged but then immediately creates another event to plug the virtual interface in the same compute node that is dead. However, at the same time, the instance is being created on another compute node. Is this a known bug? 
I have not found anything about this in the bug database. Additionally, I am not able to reproduce in our staging environment which is smaller and running on Stein. Jean-Philippe Méthot Senior Openstack system administrator Administrateur système Openstack sénior PlanetHoster inc. 4414-4416 Louis B Mayer Laval, QC, H7P 0G1, Canada TEL : +1.514.802.1644 - Poste : 2644 FAX : +1.514.612.0678 CA/US : 1.855.774.4678 FR : 01 76 60 41 43 UK : 0808 189 0423 -------------- next part -------------- An HTML attachment was scrubbed... URL: From eandersson at blizzard.com Thu Jan 7 03:12:10 2021 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Thu, 7 Jan 2021 03:12:10 +0000 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> , Message-ID: Glad it helped . Going to work with the magnum team to get it merged. Would it be possible for you to document the issue and create a bug here https://storyboard.openstack.org/#!/project/openstack/magnum ________________________________ From: Ionut Biru Sent: Wednesday, January 6, 2021 3:37 AM To: Erik Olof Gunnar Andersson Cc: Spyros Trigazis ; feilong ; openstack-discuss Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi Erik, Thanks a lot for the patch. Indeed 769471 fixes my problem at first glance. I'll let it run for a couple of days. On Wed, Jan 6, 2021 at 12:23 PM Erik Olof Gunnar Andersson > wrote: I pushed a couple of patches that you can try out. This is the most likely culprit. https://review.opendev.org/c/openstack/magnum/+/769471 - Re-use rpc client I also created this one, but doubt this is an issue as the implementation here is the same as I use in Designate https://review.opendev.org/c/openstack/magnum/+/769457 - [WIP] Singleton notifier Finally I also created a PR to add magnum-api testing using uwsgi. https://review.opendev.org/c/openstack/magnum/+/769450 Let me know if any of these patches help! ________________________________ From: Ionut Biru > Sent: Tuesday, January 5, 2021 8:36 AM To: Erik Olof Gunnar Andersson > Cc: Spyros Trigazis >; feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi, I found this story: https://storyboard.openstack.org/#!/story/2008308 regarding disabling cluster update notifications in rabbitmq. I think this will help me. On Tue, Jan 5, 2021 at 12:21 PM Erik Olof Gunnar Andersson > wrote: Sorry, being repetitive here, but maybe try adding this to your magnum config as well. If you have A LOT of cores it could add up to a crazy amount of connections. [conductor] workers = 2 ________________________________ From: Ionut Biru > Sent: Tuesday, January 5, 2021 1:50 AM To: Erik Olof Gunnar Andersson > Cc: Spyros Trigazis >; feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi, Here is my config. maybe something is fishy. I did have around 300 messages in the queue in notification.info and notification.err and I purged them. https://paste.xinu.at/woMt/ On Tue, Jan 5, 2021 at 11:23 AM Erik Olof Gunnar Andersson > wrote: Yea - tested locally as well and wasn't able to reproduce it either. I changed the health service job to run every second and maxed out at about 42 connections to RabbitMQ with two conductor workers. 
/etc/magnum/magnun.conf [conductor] workers = 2 ________________________________ From: Spyros Trigazis > Sent: Tuesday, January 5, 2021 12:59 AM To: Ionut Biru > Cc: Erik Olof Gunnar Andersson >; feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer On Tue, Jan 5, 2021 at 9:36 AM Ionut Biru > wrote: Hi, I tried with process=1 and it reached 1016 connections to rabbitmq. lsof https://paste.xinu.at/jGg/ i think it goes into error when it reaches 1024 file descriptors. I'm out of ideas of how to resolve this. I only have 3 clusters available and it's kinda weird and It doesn't scale. No issues here with 100s of clusters. Not sure what doesn't scale. * Maybe your rabbit is flooded with notifications that are not consumed? * You can use way more than 1024 file descriptors, maybe 2^10? Spyros On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson > wrote: Sure looks like RabbitMQ. How many workers do have you configured? Could you try to changing the uwsgi configuration to workers=1 (or processes=1) and then see if it goes beyond 30 connections to amqp. From: Ionut Biru > Sent: Monday, January 4, 2021 4:07 AM To: Erik Olof Gunnar Andersson > Cc: feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi Erik, Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ I have kubernetes 12.0.1 installed in env. On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson > wrote: Maybe something similar to this? https://github.com/kubernetes-client/python/issues/1158 What does lsof say? -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionut at fleio.com Thu Jan 7 07:15:07 2021 From: ionut at fleio.com (Ionut Biru) Date: Thu, 7 Jan 2021 09:15:07 +0200 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: Hi Erik, Here is the story: https://storyboard.openstack.org/#!/story/2008494 On Thu, Jan 7, 2021 at 5:12 AM Erik Olof Gunnar Andersson < eandersson at blizzard.com> wrote: > Glad it helped . Going to work with the magnum team to get it merged. > > Would it be possible for you to document the issue and create a bug here > https://storyboard.openstack.org/#!/project/openstack/magnum > > ------------------------------ > *From:* Ionut Biru > *Sent:* Wednesday, January 6, 2021 3:37 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* Spyros Trigazis ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > Hi Erik, > > Thanks a lot for the patch. Indeed 769471 fixes my problem at first glance. > > I'll let it run for a couple of days. > > > On Wed, Jan 6, 2021 at 12:23 PM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > I pushed a couple of patches that you can try out. > > This is the most likely culprit. 
> https://review.opendev.org/c/openstack/magnum/+/769471 > > - Re-use rpc client > > I also created this one, but doubt this is an issue as the implementation > here is the same as I use in Designate > https://review.opendev.org/c/openstack/magnum/+/769457 > > - [WIP] Singleton notifier > > Finally I also created a PR to add magnum-api testing using uwsgi. > https://review.opendev.org/c/openstack/magnum/+/769450 > > > Let me know if any of these patches help! > > ------------------------------ > *From:* Ionut Biru > *Sent:* Tuesday, January 5, 2021 8:36 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* Spyros Trigazis ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > Hi, > > I found this story: https://storyboard.openstack.org/#!/story/2008308 > > regarding disabling cluster update notifications in rabbitmq. > > I think this will help me. > > On Tue, Jan 5, 2021 at 12:21 PM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Sorry, being repetitive here, but maybe try adding this to your magnum > config as well. If you have A LOT of cores it could add up to a crazy > amount of connections. > > [conductor] > workers = 2 > > ------------------------------ > *From:* Ionut Biru > *Sent:* Tuesday, January 5, 2021 1:50 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* Spyros Trigazis ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > Hi, > > Here is my config. maybe something is fishy. > > I did have around 300 messages in the queue in notification.info > > and notification.err and I purged them. > > https://paste.xinu.at/woMt/ > > > > > On Tue, Jan 5, 2021 at 11:23 AM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Yea - tested locally as well and wasn't able to reproduce it either. I > changed the health service job to run every second and maxed out at about > 42 connections to RabbitMQ with two conductor workers. > > /etc/magnum/magnun.conf > > [conductor] > workers = 2 > > > ------------------------------ > *From:* Spyros Trigazis > *Sent:* Tuesday, January 5, 2021 12:59 AM > *To:* Ionut Biru > *Cc:* Erik Olof Gunnar Andersson ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > On Tue, Jan 5, 2021 at 9:36 AM Ionut Biru wrote: > > Hi, > > I tried with process=1 and it reached 1016 connections to rabbitmq. > lsof > https://paste.xinu.at/jGg/ > > > i think it goes into error when it reaches 1024 file descriptors. > > I'm out of ideas of how to resolve this. I only have 3 clusters available > and it's kinda weird and It doesn't scale. > > > No issues here with 100s of clusters. Not sure what doesn't scale. > > * Maybe your rabbit is flooded with notifications that are not consumed? > * You can use way more than 1024 file descriptors, maybe 2^10? > > Spyros > > > On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Sure looks like RabbitMQ. How many workers do have you configured? > > > > Could you try to changing the uwsgi configuration to workers=1 (or > processes=1) and then see if it goes beyond 30 connections to amqp. 
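(For reference, that suggestion is a one-line change in whatever uwsgi ini file launches magnum-api; a minimal sketch, with the file path and the rest of the options depending on the deployment tool:

  [uwsgi]
  processes = 1
  threads = 1

The idea is simply to see whether the connection count still grows past the ~30 connections a single worker's pool would explain.)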
> > > > *From:* Ionut Biru > *Sent:* Monday, January 4, 2021 4:07 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* feilong ; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > Hi Erik, > > > > Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ > > > > > I have kubernetes 12.0.1 installed in env. > > > > > > On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Maybe something similar to this? > https://github.com/kubernetes-client/python/issues/1158 > > > What does lsof say? > > > > > > > > -- > Ionut Biru - https://fleio.com > > > > > -- > Ionut Biru - https://fleio.com > > > > > -- > Ionut Biru - https://fleio.com > > > > > -- > Ionut Biru - https://fleio.com > > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Jan 7 08:17:59 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 7 Jan 2021 09:17:59 +0100 Subject: [neutron] Performance impact for attaching Security Group with more number of rules In-Reply-To: References: Message-ID: <20210107081759.igz5skypxaahlvlf@p1.localdomain> Hi, On Thu, Jan 07, 2021 at 02:50:48AM +0530, Dhanasekar Kandasamy wrote: > Hi, > > We have an OpenStack Environment with 5000+ VMs running currently. I want > to apply some common Security Group to all my running VMs. > > - Common Security Group (SEC_GRP_COMMON) has around 700 rules. > - This Security Group (SEC_GRP_COMMON) has been shared to all the > OpenStack Projects using Role-Based Access Control (RBAC). > - Wanted to apply this Security Group (SEC_GRP_COMMON) to all the > running VMs in the Cloud > > *Question 1*: With the above scenario, what will happen if I attach this > Security Group(with 700+ rules) to all the 5000+ VMs? Will there be any > performance issue/impact for the same (CPU utilization, Memory etc. in the > Compute Server or Performance issues in application running in the VMs) It all depends on what plugin and agents You are using. E.g. if You are using ML2 plugin with openvswitch agents on compute nodes, for sure You will see issues with performance of agent as it will take some time to apply those rules to all ports on the compute nodes. Of course if You have many compute nodes and only few VMs on each of them, then it should be pretty fast. If You have compute nodes with e.g. 100-200 VMs on one compute, it can take more than 10 minutes to apply those SG to all ports. Another question is what firewall driver You are using. In case of ML2/OVS it can be iptables_hybrid or openvswitch driver and performance of both can be different. You can find some comparisons and benchmarks in the Internet. For example [1] or [2]. IIRC there were also some talks about that on summits so You can look for some recordings too. > > *Question 2*: Is there any recommendations or benchmark data for maximum > number of rules in the Security Group in OpenStack cloud? 
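Regarding the firewall driver point above: that choice is made per compute node in the L2 agent configuration, typically something like the following in /etc/neutron/plugins/ml2/openvswitch_agent.ini (the option names are the standard ones, the value is only an example, not a recommendation for a given deployment):

  [securitygroup]
  enable_security_group = true
  firewall_driver = openvswitch
  # alternatively: iptables_hybrid

See [1] and [2] below for comparisons of how the two drivers behave.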
> > > Thanks, > > Dhana [1] https://thesaitech.wordpress.com/2019/02/15/a-comparative-study-of-openstack-networking-architectures/ [2] https://software.intel.com/content/www/us/en/develop/articles/implementing-an-openstack-security-group-firewall-driver-using-ovs-learn-actions.html?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+ISNMain+%28Intel+Developer+Zone+Articles+Feed%29 -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From mark at stackhpc.com Thu Jan 7 08:59:49 2021 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 7 Jan 2021 08:59:49 +0000 Subject: [kolla-ansible] Upgrading and skipping a release? In-Reply-To: <046E9C0290DD9149B106B72FC9156BEA04814FBF@gmsxchsvr01.thecreation.com> References: <046E9C0290DD9149B106B72FC9156BEA04814FB6@gmsxchsvr01.thecreation.com> <046E9C0290DD9149B106B72FC9156BEA04814FBF@gmsxchsvr01.thecreation.com> Message-ID: On Wed, 6 Jan 2021 at 15:48, Eric K. Miller wrote: > > > Hi Eric. What you're referring to is a fast-forward upgrade [1]. This > > is not officially supported by Kolla Ansible, and not something we > > test upstream. There was an effort a few years ago to implement it, > > but it stalled. The test matrix does increase somewhat when you allow > > FFUs, which is why I believe many projects such as Tripleo define > > supported FFU jumps. One thing in particular that may catch you out is > > some DB migrations or cleanup processes that happen at runtime may not > > get executed. > > > > In short, I'd suggest going one release at a time. It's a pain, but > > upgrades in Kolla are relatively sane. > > Thank you mark! Much appreciated. We were hoping to avoid any potential pitfalls with the Python 2.x to 3.x transition, but it looks like we need to simply test quick upgrades from one to the next. > > Regarding the QEMU/KVM kernel module version on the host, I'm assuming we need to plan for VM shutdown/restart since I "think" we may need a newer version for Ussuri or Victoria. Looks like we're running: > > (on CentOS 7) > Compiled against library: libvirt 4.5.0 > Using library: libvirt 4.5.0 > Using API: QEMU 4.5.0 > Running hypervisor: QEMU 2.12.0 If you are running CentOS then the CentOS 8 upgrade is quite involved. See https://docs.openstack.org/kolla-ansible/train/user/centos8.html. You won't need to worry about qemu/kvm versions if you migrate VMs from CentOS 7 to 8 host as you go. > > Out of curiosity, have you used CloudLinux' KernelCare or any other hot-swap kernel module components to avoid downtime of a KVM host? Haven't tried it. > > Eric > From stephenfin at redhat.com Thu Jan 7 10:15:59 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 07 Jan 2021 10:15:59 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> Message-ID: <1c2ec93642e847aa6650f9e9af34a6fae5f9278b.camel@redhat.com> On Wed, 2021-01-06 at 11:59 -0600, Ghanshyam Mann wrote: > Hello Everyone, > > You might have seen the discussion around dropping the lower constraints > testing as it becomes more challenging than the current value of doing it. 
> > Few of the ML thread around this discussion: > > - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019521.html > - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019390.html > > As Oslo and many other project dropping or already dropped it, we should decide it for all > other projects also otherwise it can be more changing than it is currently. > > We have not defined it in PTI or testing runtime so it is always up to projects if they still > want to keep it but we should decide a general recommendation here. Out of curiosity, would limiting the list in lower-constraints to the set of requirements listed in 'requirements.txt' help matters? That would at least ensure the lower version of our explicit dependencies worked. The main issue I could see with this is potentially a lot of thrashing from pip as it attempts to find versions of implicit dependencies that satisfy the various constraints, but I guess we'll have to address that when we come to it. Stephen From martin.golasowski at vsb.cz Thu Jan 7 12:20:29 2021 From: martin.golasowski at vsb.cz (Golasowski Martin) Date: Thu, 7 Jan 2021 12:20:29 +0000 Subject: OpenStack Ansible - Telemetry Message-ID: <2A904BE8-8679-4C8A-89C3-DF9D7AF89E44@vsb.cz> Dear All, I would like to know which monitoring solution is currently supported by OSA? We are operating a small cloud (~ 6 nodes) and we are interested in collecting performance metrics, events and logs. So, as far as I know, the official OSA solution is ceilometer/aodh/panko with Gnocchi as DB backend. However Gnocchi project seems abandoned at the moment and the grafana plugin is not compatible with latest Grafana. Then there is solution based on collectd with this plugin (https://github.com/signalfx/collectd-openstack ) with Graphite or InfluxDB as backend. This supports only performance metrics and not the events. Then there are also some Prometheus exporters available, again, metrics only. What do you guys use these days? What would you recommend? Thanks! Best regards, Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3632 bytes Desc: not available URL: From skaplons at redhat.com Thu Jan 7 12:30:04 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 7 Jan 2021 13:30:04 +0100 Subject: [neutron] Drivers team meeting agenda Message-ID: <20210107123004.aeahdzu4a3zro2gj@p1.localdomain> Hi, Agenda for first drivers team meeting in 2021 is available at [1] We have 3 RFEs to discuss: * https://bugs.launchpad.net/neutron/+bug/1909100 * https://bugs.launchpad.net/neutron/+bug/1900934 For that one there is also spec proposed https://review.opendev.org/c/openstack/neutron-specs/+/768588 and we already discussed that few times. So I think it's time to decide if we want to go with that solution or not :) * https://bugs.launchpad.net/neutron/+bug/1910533 Thx and see You tomorrow on the meeting. [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From dtantsur at redhat.com Thu Jan 7 13:28:45 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 7 Jan 2021 14:28:45 +0100 Subject: [ironic] SPUC in 2021 Message-ID: Hi folks! 
I feel like SPUC (our Sanity Preserving Un-Conference) has been very successful in 2020, what do you think about continuing it as before: Friday, 10am UTC (in https://bluejeans.com/643711802) Friday, 5pm UTC (in https://bluejeans.com/313987753) ? Dmitry -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Thu Jan 7 14:26:38 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 7 Jan 2021 14:26:38 +0000 Subject: [nova] Nova evacuate issue In-Reply-To: References: Message-ID: <20210107142638.3u2duxgdsrj5fe5y@lyarwood-laptop.usersys.redhat.com> On 06-01-21 18:26:24, Jean-Philippe Méthot wrote: > Hi, > > We’re running Openstack Rocky on a high-availability setup with > neutron openvswitch. The setup has roughly 50 compute nodes and 2 > controller nodes. We’ve run into an issue when we’re trying to > evacuate a dead compute node where the first instance evacuate goes > through, but the second one fails (we evacuate our instances one by > one). The reason why the second one fails seems to be because Neutron > is trying to plug the port back on the dead compute, as nova instructs > it to do. Here’s an example of nova-api log output after compute22 > died and we’ve been trying to evacuate an instance. > > f3839ca64f58ac779f6f810758c0 61e62a49d34a44f9b1161a338a7f1fdd - default default] Creating event network-vif-unplugged:80371c01-930d-4ea2-9d28-14438e948b65 for instance 4aeb7761-cb23-4c51-93dd-79b55afbc7dc on compute22 > 2021-01-06 13:31:31.750 2858 INFO nova.osapi_compute.wsgi.server [req-4f9b3e17-1a9d-48f0-961a-bbabdf922ad6 0d0ef3839ca64f58ac779f6f810758c0 61e62a49d34a44f9b1161a338a7f1fdd - default default] 10.30.1.224 "POST /v2.1/os-server-external-events HTTP/1.1" status: 200 len: 1091 time: 0.4987640 > 2021-01-06 13:31:40.145 2863 INFO nova.osapi_compute.wsgi.server [req-abaac9df-7338-4d10-9326-4006021ff54d 6cb55894e59c47b3800f97a27c9c4ee9 ccfa9d8d76b8409f8c5a8d71ce32625a - default default] 10.30.1.224 "GET /v2.1 HTTP/1.1" status: 302 len: 318 time: 0.0072701 > 2021-01-06 13:31:40.156 2863 INFO nova.osapi_compute.wsgi.server [req-c393e74b-a118-4a98-8a83-be6007913dc0 6cb55894e59c47b3800f97a27c9c4ee9 ccfa9d8d76b8409f8c5a8d71ce32625a - default default] 10.30.1.224 "GET /v2.1/ HTTP/1.1" status: 200 len: 789 time: 0.0070350 > 2021-01-06 13:31:43.289 2865 INFO nova.osapi_compute.wsgi.server [req-b87268b7-a673-44c1-9162-f9564647ec33 6cb55894e59c47b3800f97a27c9c4ee9 ccfa9d8d76b8409f8c5a8d71ce32625a - default default] 10.30.1.224 "GET /v2.1/servers/4aeb7761-cb23-4c51-93dd-79b55afbc7dc HTTP/1.1" status: 200 len: 5654 time: 2.7543190 > 2021-01-06 13:31:43.413 2863 INFO nova.osapi_compute.wsgi.server [req-4cab23ba-c5cb-4dda-bf42-bc452d004783 6cb55894e59c47b3800f97a27c9c4ee9 ccfa9d8d76b8409f8c5a8d71ce32625a - default default] 10.30.1.224 "GET /v2.1/servers/4aeb7761-cb23-4c51-93dd-79b55afbc7dc/os-volume_attachments HTTP/1.1" status: 200 len: 770 time: 0.1135709 > 2021-01-06 13:31:43.883 2865 INFO nova.osapi_compute.wsgi.server [req-f5e5a586-65f3-4798-b03b-98e01326a00b 6cb55894e59c47b3800f97a27c9c4ee9 ccfa9d8d76b8409f8c5a8d71ce32625a - default default] 10.30.1.224 "GET /v2.1/flavors/574a7152-f079-4337-b1eb-b7eca4370b73 HTTP/1.1" status: 200 len: 877 time: 0.5751688 > 2021-01-06 13:31:47.194 2864 INFO 
nova.api.openstack.compute.server_external_events [req-7e639b1f-8408-4e8e-9bb8-54588290edfe 0d0ef3839ca64f58ac779f6f810758c0 61e62a49d34a44f9b1161a338a7f1fdd - default default] Creating event network-vif-plugged:80371c01-930d-4ea2-9d28-14438e948b65 for instance 4aeb7761-cb23-4c51-93dd-79b55afbc7dc on compute22 > > As you can see, Nova "creates an event" as the virtual interface is > unplugged but then immediately creates another event to plug the > virtual interface in the same compute node that is dead. However, at > the same time, the instance is being created on another compute node. > Is this a known bug? I have not found anything about this in the bug > database. Additionally, I am not able to reproduce in our staging > environment which is smaller and running on Stein. Would you be able to trace an example evacuation request fully and pastebin it somewhere using `openstack server event list $instance [1]` output to determine the request-id etc? Feel free to also open a bug about this and we can just triage there instead of the ML. The fact that q-api has sent the network-vif-plugged:80371c01-930d-4ea2-9d28-14438e948b65 to n-api suggests that the q-agt is actually alive on compute22, was that the case? Note that a pre-condition of calling the evacuation API is that the source host has been fenced [2]. That all said I wonder if this is somehow related too the following stein change: https://review.opendev.org/c/openstack/nova/+/603844 Cheers, Lee [1] https://docs.openstack.org/python-openstackclient/rocky/cli/command-objects/server-event.html#server-event-list [2] https://docs.openstack.org/api-ref/compute/?expanded=evacuate-server-evacuate-action-detail#evacuate-server-evacuate-action -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From ralonsoh at redhat.com Thu Jan 7 14:45:22 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Thu, 7 Jan 2021 15:45:22 +0100 Subject: [neutron] SR-IOV mechanism driver configuration, plugin.ini In-Reply-To: <3641168.yvBDRoByMW@p1> References: <3641168.yvBDRoByMW@p1> Message-ID: Hello: You need to specify two config files: the neutron.conf and the ML2 plugin one; in this case /etc/neutron/plugins/ml2/ml2_conf.ini. Other installers create a ML2 plugin config file per backend but this is not necessary. Regards. On Wed, Dec 30, 2020 at 8:29 AM Slawek Kaplonski wrote: > Hi, > > Dnia środa, 30 grudnia 2020 01:07:42 CET Gabriel Omar Gamero Montenegro > pisze: > > Dear all, > > > > I'm following the OpenStack guide for > > the implementation of SR-IOV mechanism driver. > > I'm planning to incorporate this driver to > > my current OpenStack deployment (Queens). > > > > Config SR-IOV Guide: > > https://docs.openstack.org/neutron/queens/admin/config-sriov.html > > > > At point 2, section "Configure neutron-server (Controller)" > > they said that I have to add the 'plugin.ini' file > > as a parameter to the neutron-server service. > > To do this they require to > > < > neutron-server service to load the plugin configuration file>>: > > --config-file /etc/neutron/neutron.conf > > --config-file /etc/neutron/plugin.ini > > > > I'd like to know a few things: > > > > (1) Which plugin.ini file are talking about? > > That is IMO good question. 
I see this file for the first time now :) > Looking at the commit [1] and commits which this patch reference to I > think > that this may be some old leftover which should be cleaned. > But maybe Rodolfo will know more as he s our SR-IOV expert in the team. > > > (2) How to set up the neutron-server initialization script > > to add the plugin.ini file? > > I understand that this varies between OS distro > > (I'm currently using Ubuntu 16.04 LTS server) > > > > Here are some things I tried... > > > > I got the following results executing this command: > > > > systemctl status neutron-server.service > > ● neutron-server.service - OpenStack Neutron Server > > Loaded: loaded (/lib/systemd/system/neutron-server.service; > > enabled; vendor preset: enabled) > > Active: active (running) since Tue 2020-12-29 18:13:50 -05 > > Main PID: 38590 (neutron-server) > > Tasks: 44 > > Memory: 738.8M > > CPU: 29.322s > > CGroup: /system.slice/neutron-server.service > > ├─38590 /usr/bin/python2 /usr/bin/neutron-server > > --config-file=/etc/neutron/neutron.conf > > --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini > > --log-file=/var/log/neutron/neutron-server.log > > ... > > > > I see 2 things: > > > > (i) When neutron-server is exectured, > > the following parameters are passed: > > --config-file=/etc/neutron/neutron.conf > > --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini > > --log-file=/var/log/neutron/neutron-server.log > > > > (ii) The file '/lib/systemd/system/neutron-server.service' > > is loaded and it has the following content: > > ... > > ExecStart=/etc/init.d/neutron-server systemd-start > > ... > > > > This indicates me that it's executing > > '/etc/init.d/neutron-server' script. > > So I suppose this is the file indicated to add the parameters > > of the SR-IOV OpenStack documentation, > > but I have no idea where to put them. > > > > For Red-Hat distros I found this documentation > > with the following configuration: > > https://access.redhat.com/documentation/en-us/ > > red_hat_enterprise_linux_openstack_platform/7/html/networking_guide > > /sr-iov-support-for-virtual-networking > > > > vi /usr/lib/systemd/system/neutron-server.service > > ... > > ExecStart=/usr/bin/neutron-server > > --config-file /usr/share/neutron/neutron-dist.conf > > --config-file /etc/neutron/neutron.conf > > --config-file /etc/neutron/plugin.ini > > --config-file /etc/neutron/plugins/ml2/ml2_conf_sriov.ini > > --log-file /var/log/neutron/server.log > > > > Thanks in advance, > > Gabriel Gamero > > [1] https://github.com/openstack/neutron/commit/ > c4e76908ae0d8c1e5bcb7f839df5e22094805299 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Thu Jan 7 15:26:23 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 07 Jan 2021 09:26:23 -0600 Subject: [stable][grenade][qa][nova][swift] s-proxy unable to start due to missing runtime deps In-Reply-To: <20201223095407.jdlkppm66t7gizye@lyarwood-laptop.usersys.redhat.com> References: <20201222170602.bqlwjlikjwdhoc7c@lyarwood-laptop.usersys.redhat.com> <1768bc057cf.f1a8241c453709.6689913672331717101@ghanshyammann.com> <1768c8071d4.1064de03a458204.3580547345653933493@ghanshyammann.com> <20201223095407.jdlkppm66t7gizye@lyarwood-laptop.usersys.redhat.com> Message-ID: <176dd74bb72.ea0374eb914568.5337396488943512176@ghanshyammann.com> ---- On Wed, 23 Dec 2020 03:54:07 -0600 Lee Yarwood wrote ---- > On 22-12-20 16:09:56, Ghanshyam Mann wrote: > > ---- On Tue, 22 Dec 2020 12:40:07 -0600 Ghanshyam Mann wrote ---- > > > ---- On Tue, 22 Dec 2020 11:06:02 -0600 Lee Yarwood wrote ---- > > > > Hello all, > > > > > > > > I wanted to raise awareness of the following issue and to seek some > > > > feedback on my approach to workaround it: > > > > > > > > ImportError: No module named keystonemiddleware.auth_token > > > > https://bugs.launchpad.net/swift/+bug/1909018 > > > > > > > > This was introduced after I landed the following devstack backport > > > > stopping projects from installing their test-requirements.txt deps: > > > > > > > > Stop installing test-requirements with projects > > > > https://review.opendev.org/q/I8f24b839bf42e2fb9803dc7df3a30ae20cf264eb > > > > > > > > For the time being to workaround this in various other gates I've > > > > suggested that we disable Swift in Grenade on stable/train: > > > > > > > > zuul: Disable swift services until bug #1909018 is resolved > > > > https://review.opendev.org/c/openstack/grenade/+/768224 > > > > > > > > This finally allowed openstack/nova to pass on stable/train with the > > > > following changes to lower-constraints.txt and test-requirements.txt: > > > > > > > > [stable-only] Cap bandit to 1.6.2 and raise hacking, flake8 and stestr > > > > https://review.opendev.org/c/openstack/nova/+/766171/ > > > > > > > > Are there any objections to disabling Swift in Grenade for the time > > > > being on stable/train? > > > > > > > > Would anyone have any objections to also disabling it on stable/stein > > > > via devstack-gate? > > > > > > Thanks, Lee for reporting this. > > > > > > keystonemiddleware is listed as an extras requirement in swift > > > - https://github.com/openstack/swift/blob/e0d46d77fa740768f1dd5b989a63be85ff1fec20/setup.cfg#L79 > > > > > > But devstack does not install any extras requirement for swift. I am trying to install > > > the swift's keystone extras and see if it work fine. > > > > > > - https://review.opendev.org/q/I02c692e95d70017eea03d82d75ae6c5e87bde8b1 > > > > This fix working fine tested in https://review.opendev.org/c/openstack/swift/+/766214 > > > > grenade job will be working once we merge the devstack fixes in stable branches > > ACK thanks, I hope you don't mind but I've addressed some nits raised in > the review this morning. I'll repropose backports once it's in the gate. Devstack fixes until stable/stein are merged. This is not occurring on stable/rocky and queens so I will abandon the fix for those branches. 
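For anyone hitting the same ImportError elsewhere: the mechanism involved is just setuptools extras. keystonemiddleware sits in an extras group of swift's setup.cfg rather than in requirements.txt, so it is only installed when that group is requested explicitly, roughly (assuming the group is named 'keystone' as in swift's setup.cfg):

  pip install './swift[keystone]'

A plain install of the project only pulls in requirements.txt, which is why keystonemiddleware.auth_token went missing once test-requirements stopped being installed by devstack.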
- https://review.opendev.org/q/topic:%22bug%252F1909018%22+(status:open%20OR%20status:merged) -gmann > > -- > Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 > From ikatzir at infinidat.com Thu Jan 7 15:33:49 2021 From: ikatzir at infinidat.com (Igal Katzir) Date: Thu, 7 Jan 2021 17:33:49 +0200 Subject: [tripleO] Customised Cinder-Volume fails at 'Paunch 5' during overcloud deployment In-Reply-To: References: <2D1F2693-49C0-4CA2-8F8E-F9E837D6A232@infinidat.com> Message-ID: Just an update on this issue; The problem was fixed after I removed the following line from my Dockerfile- 'RUN pip install --no-cache-dir -U setuptools' Apparently, This caused a problem to run /usr/sbin/pcs command which is required during overcloud deployment. Another question I have is about re-starting a container, if I have re-built the openstack-cinder-volume image on my overcloud-controller and want to test something, how can I start the container from the new image? Do I need to redeploy the entire overcloud? (it doesn't make sense) Thanks for the help, Igal On Mon, Jan 4, 2021 at 6:44 PM Alan Bishop wrote: > > On Mon, Jan 4, 2021 at 5:31 AM Igal Katzir wrote: > >> Hello Alan, >> Thanks for your reply! >> >> I am afraid that the reason for my deployment failure might be concerned >> with the environment file I use to configure my cinder backend. >> The configuration is quite similar to >> https://github.com/Infinidat/tripleo-deployment-configs/blob/dev/RHOSP15/cinder-infinidat-config.yaml >> So I wonder if it is possible to run a deployment where I tell 'TripleO' >> to use my customize container, using containers-prepare-parameter.yaml, but >> without the environment file =cinder-infinidat-config.yaml, and configure >> the backend / start cinder-volume services manually? >> > > No, your cinder-infinidat-config.yaml file looks fine. It's responsible > for getting TripleO to configure cinder to use your driver, and that phase > was completed successfully prior to the deployment failure. > > >> Or I must have a minimum config as I find in: >> '/usr/share/openstack-tripleo-heat-templates/deployment/cinder/' (for other >> vendors)? >> If I do need such a cinder-volume-VENDOR-puppet.yaml config to be >> integrated during overcloud deployment, where is documentation that >> explains how to construct this? Do I need to use cinder-base.yaml as a >> template? >> When looking at the web for "cinder-volume-container-puppet.yaml" I found >> the Git Page of overcloud-resource-registry-puppet.j2.yaml >> >> >> and found also >> https://opendev.org/openstack/tripleo-heat-templates/../deployment >> >> but it is not so explanatory. >> > > Your cinder-infinidat-config.yaml uses a low-level puppet mechanism for > configuring what's referred to as a "custom" block storage backend. This is > perfectly fine. If you want better integration with TripleO (and puppet) > then you'll need to develop 3 separate patches, 1 each in puppet-cinder, > puppet-tripleo and tripleo-heat-templates. Undertaking that would be a > good future goal, but isn't necessary in order for you to get past your > current deployment issue. > > >> I have opened a case with RedHat as well and they are checking who from >> their R&D could help since it's out of the scope of support. >> > > I think you're starting to see responses from Red Hat that should help > identify and resolve the problem. 
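For reference, the image that now deploys cleanly is just a thin layer on top of the base cinder-volume image. A hypothetical minimal version of such a Dockerfile (the base image tag, package name and user below are placeholders, not the exact build):

  FROM registry.redhat.io/rhosp-rhel8/openstack-cinder-volume:16.1
  USER root
  # install only the vendor driver dependencies; do not upgrade setuptools,
  # since that is what broke /usr/sbin/pcs during the pacemaker step
  RUN pip install --no-cache-dir infinisdk
  USER cinder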
> > Alan > > >> >> Regards, >> Igal >> >> On Thu, Dec 31, 2020 at 9:15 PM Alan Bishop wrote: >> >>> >>> >>> On Thu, Dec 31, 2020 at 5:26 AM Igal Katzir >>> wrote: >>> >>>> Hello all, >>>> >>>> I am trying to deploy RHOSP16.1 (based on ‘*train’ *distribution) for Certification >>>> purposes. >>>> I have build a container for our cinder driver and trying to deploy it. >>>> Deployment runs almost till the end and fails at stage when it tries to >>>> configure Pacemaker; >>>> Here is the last message: >>>> >>>> "Info: Applying configuration version '1609231063'", "Notice: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_bind_addr]/ensure: created", "Info: /Stage[main]/Pacemaker::Corosync/File_line[pcsd_bind_addr]: Scheduling refresh of Service[pcsd]", "Info: /Stage[main]/Pacemaker::Service/Service[pcsd]: Unscheduling all events on Service[pcsd]", "Info: Class[Pacemaker::Corosync]: Unscheduling all events on Class[Pacemaker::Corosync]", "Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Cinder::Volume_bundle/Pacemaker::Resource::Bundle[openstack-cinder-volume]/Pcmk_bundle[openstack-cinder-volume]: Dependency Pcmk_property[property-overcloud-controller-0-cinder-volume-role] has failures: true", "Info: Creating state file /var/lib/puppet/state/state.yaml", "Notice: Applied catalog in 382.92 seconds", "Changes:", " Total: 1", "Events:", " Success: 1", " Failure: 2", " Total: 3", >>>> >>>> >>>> I have verified that all packages on my container-image >>>> (Pacemaker,Corosync, libqb,and pcs) are installed with same versions as >>>> the overcloud-controller. >>>> >>> >>> Hi Igal, >>> >>> Thank you for checking these package versions and stating they match the >>> ones installed on the overcloud node. This rules out one of the common >>> reasons for failures when trying to run a customized cinder-volume >>> container image. >>> >>> But seems that something is still missing, because deployment with the >>>> default openstack-cinder-volume image completes successfully. >>>> >>> >>> This is also good to know. >>> >>> Can anyone help with debugging this? Let me know if more info needed. >>>> >>> >>> More info is needed, but it's hard to predict exactly where to look for >>> the root cause of the failure. I'd start by looking for something at the >>> cinder log file >>> to determine whether the cinder-volume service is even trying to start. >>> Look for /var/log/containers/cinder/cinder-volume.log on the node where >>> pacemaker is trying to run the service. Are there logs indicating the >>> service is trying to start? Or maybe the service is launched, but fails >>> early during startup? >>> >>> Another possibility is podman fails to launch the container itself. If >>> that's happening then check for errors in /var/log/messages. One source of >>> this type of failure is you've specified a container bind mount, but the >>> source directory doesn't exist (docker would auto-create the source >>> directory, but podman does not). >>> >>> You specifically mentioned RHOSP, so if you need additional support then >>> I recommend opening a support case with Red Hat. That will provide a forum >>> for posting private data, such as details of your overcloud deployment and >>> full sosreports. 
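(In concrete terms, those checks boil down to something like the following on the controller where the bundle is failing; the container name is illustrative, it may differ per deployment:

  sudo tail -f /var/log/containers/cinder/cinder-volume.log
  sudo grep -i error /var/log/messages
  sudo podman ps -a --filter name=cinder-volume

)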
>>> >>> Alan >>> >>> >>>> >>>> Thanks in advance, >>>> Igal >>>> >>> >> >> -- >> Regards, >> >> *Igal Katzir* >> Cell +972-54-5597086 >> Interoperability Team >> *INFINIDAT* >> >> >> >> >> -- Regards, *Igal Katzir* Cell +972-54-5597086 Interoperability Team *INFINIDAT* -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Jan 7 16:34:21 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 7 Jan 2021 16:34:21 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <1c2ec93642e847aa6650f9e9af34a6fae5f9278b.camel@redhat.com> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <1c2ec93642e847aa6650f9e9af34a6fae5f9278b.camel@redhat.com> Message-ID: <20210107163421.oxslpa27b6fis5up@yuggoth.org> On 2021-01-07 10:15:59 +0000 (+0000), Stephen Finucane wrote: [...] > Out of curiosity, would limiting the list in lower-constraints to > the set of requirements listed in 'requirements.txt' help matters? > That would at least ensure the lower version of our explicit > dependencies worked. The main issue I could see with this is > potentially a lot of thrashing from pip as it attempts to find > versions of implicit dependencies that satisfy the various > constraints, but I guess we'll have to address that when we come > to it. You can try it locally easily enough, but my recollections from before is that what you'll find for larger projects is old releases of some dependencies don't pin upper bounds of their own dependencies and wind up not being usable because they drag in something newer than they can actually use, so it'll be an iterative effort to figure those out. Which has essentially been my problem with that lower bounds testing model, it's a manual effort to figure out what versions of modules in the transitive set will actually be compatible with one another, and then you basically get to redo that work any time you want to adjust a lower bound for something. But do give it a shot and let us know if it winds up being easier than all that. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From dtantsur at redhat.com Thu Jan 7 16:39:09 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 7 Jan 2021 17:39:09 +0100 Subject: [ironic] SPUC in 2021 In-Reply-To: References: Message-ID: Okay, in case somebody still wants to do it, "thanks" to Bluejeans I need to change meeting IDs: Friday, 10am UTC: https://bluejeans.com/772893798 Friday, 5pm UTC: https://bluejeans.com/250125662 Dmitry On Thu, Jan 7, 2021 at 2:28 PM Dmitry Tantsur wrote: > Hi folks! > > I feel like SPUC (our Sanity Preserving Un-Conference) has been very > successful in 2020, what do you think about continuing it as before: > Friday, 10am UTC (in https://bluejeans.com/643711802) > Friday, 5pm UTC (in https://bluejeans.com/313987753) > ? > > Dmitry > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Thu Jan 7 16:52:50 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 07 Jan 2021 10:52:50 -0600 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <20210107163421.oxslpa27b6fis5up@yuggoth.org> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <1c2ec93642e847aa6650f9e9af34a6fae5f9278b.camel@redhat.com> <20210107163421.oxslpa27b6fis5up@yuggoth.org> Message-ID: <176ddc3e162.b89daf27919574.878082988451964097@ghanshyammann.com> ---- On Thu, 07 Jan 2021 10:34:21 -0600 Jeremy Stanley wrote ---- > On 2021-01-07 10:15:59 +0000 (+0000), Stephen Finucane wrote: > [...] > > Out of curiosity, would limiting the list in lower-constraints to > > the set of requirements listed in 'requirements.txt' help matters? > > That would at least ensure the lower version of our explicit > > dependencies worked. The main issue I could see with this is > > potentially a lot of thrashing from pip as it attempts to find > > versions of implicit dependencies that satisfy the various > > constraints, but I guess we'll have to address that when we come > > to it. > > You can try it locally easily enough, but my recollections from > before is that what you'll find for larger projects is old releases > of some dependencies don't pin upper bounds of their own > dependencies and wind up not being usable because they drag in > something newer than they can actually use, so it'll be an iterative > effort to figure those out. Which has essentially been my problem > with that lower bounds testing model, it's a manual effort to figure > out what versions of modules in the transitive set will actually be > compatible with one another, and then you basically get to redo that > work any time you want to adjust a lower bound for something. > > But do give it a shot and let us know if it winds up being easier > than all that. I have not tested it yet but from past testing observation, I remember I end up adding some implicit deps in l-c as they were not compatible with project explicit deps and their deps compatibility so it has to be in l-c explicitly. So I am not sure if restricting the l-c with requirements.txt deps can work or not but good to try. -gmann > -- > Jeremy Stanley > From hberaud at redhat.com Thu Jan 7 16:53:44 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 7 Jan 2021 17:53:44 +0100 Subject: [release] Status: ORANGE - pip resolver issue with publish-openstack-releasenotes-python3 In-Reply-To: References: Message-ID: Hello everyone, @release managers: all impacted projects now have fixes submitted, so before validating a patch you only have to ensure that the released projects aren't in the list of opened patches: https://review.opendev.org/q/topic:%2522fix-relmgt-pip-doc%2522+(status:open) I move our status to ORANGE as the situation seems improving for now and also because we can easily monitor the state. @all: Notice that some projects have been ignored here because they aren't released, here is the list: i18n ideas openstack-manuals openstack-zuul-roles os-apply-config os-collect-config os-refresh-config ossa pyeclib security-analysis security-doc tempest-lib tempest-stress training-guides workload-ref-archs However it could be worth it to uniformize them, but we leave it to the teams to update them. 
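For teams picking this up, the doc/requirements.txt added by the fix-relmgt-pip-doc patches is typically just the few dependencies needed to build the docs and release notes, along these lines (exact minimum versions vary per project, this is only the usual shape of the file):

  sphinx>=2.0.0,!=2.1.0 # BSD
  openstackdocstheme>=2.2.1 # Apache-2.0
  reno>=3.1.0 # Apache-2.0

Keeping that file small is what reduces the chance of the pip resolver tripping over unrelated test-requirements entries.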
Also notice that we proposed to add the capabilities to zuul to retrieve requirements from a dedicated place: https://review.opendev.org/c/zuul/zuul-jobs/+/769292 It will help projects that haven't documentation but that produce release notes to split their requirements more properly. If you've questions do not hesitate to ping us on #openstack-release Thanks for your reading Le mer. 6 janv. 2021 à 12:47, Herve Beraud a écrit : > @release mangaers: For now I think we can restart validating projects that > aren't present in the previous list (c.f > http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019612.html > ). > Normally they aren't impacted by this problem. > > I'll move to the "Orange" state when all the projects of list will be > patched or at least when a related patch will be present in the list (c.f > https://review.opendev.org/q/topic:%2522fix-relmgt-pip-doc%2522+(status:open+OR+status:merged)). > For now my monitoring indicates that ~50 projects still need related > changes. > > So, for now, please, ensure that the repos aren't listed here before > validate a patch > http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019612.html > > Thanks to everyone who helped here! Much appreciated! > > Le mar. 5 janv. 2021 à 12:05, Martin Chacon Piza > a écrit : > >> Hi Herve, >> >> I have added this topic to the Monasca irc meeting today. >> >> Thank you, >> Martin (chaconpiza) >> >> >> >> El lun, 4 de ene. de 2021 a la(s) 18:30, Herve Beraud (hberaud at redhat.com) >> escribió: >> >>> Thanks all! >>> >>> Here we can track our advancement: >>> >>> https://review.opendev.org/q/topic:%22fix-relmgt-pip-doc%22+(status:open%20OR%20status:merged) >>> >>> Le lun. 4 janv. 2021 à 18:02, Radosław Piliszek < >>> radoslaw.piliszek at gmail.com> a écrit : >>> >>>> On Mon, Jan 4, 2021 at 4:34 PM Herve Beraud wrote: >>>> > >>>> > Here is the filtered list of projects that meet the conditions >>>> leading to the bug, and who should be fixed to completely solve our issue: >>>> > >>>> > ... >>>> > etcd3gw >>>> > ... >>>> > python-masakariclient >>>> > ... >>>> > >>>> > Notice that some of these projects aren't deliverables but if >>>> possible it could be worth fixing them too. >>>> > >>>> > These projects have an incompatibility between entries in their >>>> test-requirements.txt, and they're missing a doc/requirements.txt file. >>>> > >>>> > The more straightforward path to unlock our job >>>> "publish-openstack-releasenotes-python3" is to create a >>>> doc/requirements.txt file that only contains the needed dependencies to >>>> reduce the possibility of pip resolver issues. I personally think that we >>>> could use the latest allowed version of requirements (sphinx, reno, etc...). >>>> > >>>> > I propose to track the related advancement by using the >>>> "fix-relmgt-pip-doc" gerrit topic, when all the projects will be fixed we >>>> would be able to update our status. >>>> > >>>> > Also it could be worth fixing test-requirements.txt incompatibilities >>>> but this task is more on the projects teams sides and this task could be >>>> done with a follow up patch. >>>> > >>>> > Thoughts? >>>> >>>> Thanks, Hervé! >>>> >>>> Done for python-masakariclient in [1]. >>>> >>>> etcd3gw needs more love in general but I will have this split in mind. 
>>>> >>>> [1] >>>> https://review.opendev.org/c/openstack/python-masakariclient/+/769163 >>>> >>>> -yoctozepto >>>> >>>> >>> >>> -- >>> Hervé Beraud >>> Senior Software Engineer at Red Hat >>> irc: hberaud >>> https://github.com/4383/ >>> https://twitter.com/4383hberaud >>> -----BEGIN PGP SIGNATURE----- >>> >>> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >>> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >>> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >>> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >>> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >>> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >>> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >>> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >>> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >>> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >>> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >>> v6rDpkeNksZ9fFSyoY2o >>> =ECSj >>> -----END PGP SIGNATURE----- >>> >>> >> >> -- >> *Martín Chacón Pizá* >> *chacon.piza at gmail.com * >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From radoslaw.piliszek at gmail.com Thu Jan 7 16:55:44 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 7 Jan 2021 17:55:44 +0100 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <20210107163421.oxslpa27b6fis5up@yuggoth.org> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <1c2ec93642e847aa6650f9e9af34a6fae5f9278b.camel@redhat.com> <20210107163421.oxslpa27b6fis5up@yuggoth.org> Message-ID: On Thu, Jan 7, 2021 at 5:42 PM Jeremy Stanley wrote: > > On 2021-01-07 10:15:59 +0000 (+0000), Stephen Finucane wrote: > [...] > > Out of curiosity, would limiting the list in lower-constraints to > > the set of requirements listed in 'requirements.txt' help matters? > > That would at least ensure the lower version of our explicit > > dependencies worked. The main issue I could see with this is > > potentially a lot of thrashing from pip as it attempts to find > > versions of implicit dependencies that satisfy the various > > constraints, but I guess we'll have to address that when we come > > to it. > > You can try it locally easily enough, but my recollections from > before is that what you'll find for larger projects is old releases > of some dependencies don't pin upper bounds of their own > dependencies and wind up not being usable because they drag in > something newer than they can actually use This is also why we can't really have a smart-enough solver trying to minimize dep versions as some have no bounds on either side. What would the verdict then be? If it was to install the oldest version ever, I bet it would fail most of the time. For me, lower constraints are well too complicated to really get right, and, moreover, checking only unit tests with them is likely not useful enough to warrant that they result in working deployments. I don't envy distro packagers but really the only thing they can bet on is deciding on a set of packaged deps and running tempest against the curated deployment (plus unit tests as they are much cheaper anyhow). Thanks to upper-constraints we know there is at least one combination of deps that will work. We can't really ensure any other in a simple manner. -yoctozepto From fungi at yuggoth.org Thu Jan 7 17:22:24 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 7 Jan 2021 17:22:24 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <1c2ec93642e847aa6650f9e9af34a6fae5f9278b.camel@redhat.com> <20210107163421.oxslpa27b6fis5up@yuggoth.org> Message-ID: <20210107172224.vvm7fur2fnpdl2fk@yuggoth.org> On 2021-01-07 17:55:44 +0100 (+0100), Radosław Piliszek wrote: [...] > This is also why we can't really have a smart-enough solver trying to > minimize dep versions as some have no bounds on either side. What > would the verdict then be? If it was to install the oldest version > ever, I bet it would fail most of the time. Yes, I expect that too would require some manual tweaking to find appropriate versions to override with, however that wouldn't need to be redone nearly as often as what you end up with when you're fighting tools which always want to install the most recent available version. > For me, lower constraints are well too complicated to really get > right, and, moreover, checking only unit tests with them is likely not > useful enough to warrant that they result in working deployments. [...] This I agree with. 
I think lower bounds checking is theoretically possible with appropriate tools (which don't currently exist), but would still involve filling in yourself for the authors of less rigorously maintained projects in your transitive dependency set. More generally, basically nothing in the Python packaging ecosystem is designed with the idea of supporting a solution to this, and there's very little to encourage a project to even list much less keep up minimum versions of dependencies, except in order to force an upgrade. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From juliaashleykreger at gmail.com Thu Jan 7 17:42:04 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 7 Jan 2021 09:42:04 -0800 Subject: [ironic] SPUC in 2021 In-Reply-To: References: Message-ID: Sounds like a perfect way to enjoy Friday :) Thanks Dmitry! On Thu, Jan 7, 2021 at 8:47 AM Dmitry Tantsur wrote: > > Okay, in case somebody still wants to do it, "thanks" to Bluejeans I need to change meeting IDs: > > Friday, 10am UTC: https://bluejeans.com/772893798 > Friday, 5pm UTC: https://bluejeans.com/250125662 > > Dmitry > > On Thu, Jan 7, 2021 at 2:28 PM Dmitry Tantsur wrote: >> >> Hi folks! >> >> I feel like SPUC (our Sanity Preserving Un-Conference) has been very successful in 2020, what do you think about continuing it as before: >> Friday, 10am UTC (in https://bluejeans.com/643711802) >> Friday, 5pm UTC (in https://bluejeans.com/313987753) >> ? >> >> Dmitry >> -- >> Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, >> Commercial register: Amtsgericht Muenchen, HRB 153243, >> Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill > > > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill From jp.methot at planethoster.info Thu Jan 7 16:58:05 2021 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Thu, 7 Jan 2021 11:58:05 -0500 Subject: [nova] Nova evacuate issue In-Reply-To: <20210107142638.3u2duxgdsrj5fe5y@lyarwood-laptop.usersys.redhat.com> References: <20210107142638.3u2duxgdsrj5fe5y@lyarwood-laptop.usersys.redhat.com> Message-ID: Considering that this issue happened in our production environment, it’s not exactly possible to try to reproduce without shutting down servers that are currently in use. That said, If the current logs I have are enough, I will try opening a bug on the bugtracker. Compute22, the source host, was completely dead. It refused to boot up through IPMI. It is possible that that stein fix prevented me from reproducing the problem in my staging environment (production is on rocky, staging is on stein). Also, it may be important to note that our neutron is split, as we use neutron-rpc-server to answer rpc calls. It’s also HA, as we have two controllers with neutron-rpc-server and the api running (and that won’t work anymore when we upgrade production to stein, but that’s another problem entirely and probably off-topic here). Jean-Philippe Méthot Senior Openstack system administrator Administrateur système Openstack sénior PlanetHoster inc. 4414-4416 Louis B Mayer Laval, QC, H7P 0G1, Canada > Le 7 janv. 
2021 à 09:26, Lee Yarwood a écrit : > > Would you be able to trace an example evacuation request fully and > pastebin it somewhere using `openstack server event list $instance [1]` > output to determine the request-id etc? Feel free to also open a bug > about this and we can just triage there instead of the ML. > > The fact that q-api has sent the > network-vif-plugged:80371c01-930d-4ea2-9d28-14438e948b65 to n-api > suggests that the q-agt is actually alive on compute22, was that the > case? Note that a pre-condition of calling the evacuation API is that > the source host has been fenced [2]. > > That all said I wonder if this is somehow related too the following > stein change: > > https://review.opendev.org/c/openstack/nova/+/603844 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Thu Jan 7 18:57:34 2021 From: mrunge at matthias-runge.de (Matthias Runge) Date: Thu, 7 Jan 2021 19:57:34 +0100 Subject: OpenStack Ansible - Telemetry In-Reply-To: <2A904BE8-8679-4C8A-89C3-DF9D7AF89E44@vsb.cz> References: <2A904BE8-8679-4C8A-89C3-DF9D7AF89E44@vsb.cz> Message-ID: <148a3fde-c75f-ed9b-e2d0-84cca2519683@matthias-runge.de> On 07/01/2021 13:20, Golasowski Martin wrote: > Dear All, > I would like to know which monitoring solution is currently supported by > OSA? We are operating a small cloud (~ 6 nodes) and we are interested in > collecting performance metrics, events and logs. > > So, as far as I know, the official OSA solution is ceilometer/aodh/panko > with Gnocchi as DB backend. However Gnocchi project seems abandoned at > the moment and the grafana plugin is not compatible with latest Grafana. > > Then there is solution based on collectd with this plugin > (https://github.com/signalfx/collectd-openstack > ) with Graphite or > InfluxDB as backend. This supports only performance metrics and not the > events. > > Then there are also some Prometheus exporters available, again, metrics > only. > > What do you guys use these days? What would you recommend? > Hi there, with my telemetry hat on: we're working on the gnocchi issue, but gnocchi is only a metrics store anyway. Personally, I wouldn't want to store any events in panko. If you use things like autoscaling for instances, you definitely want gnocchi and aodh. With my collectd hat on: collectd supports collecting and sending metrics and events to multiple write endpoints. It is not designed to collect additional metadata, such as project or user data. You'll mostly get infrastructure related data (from baremetal nodes). The downside of using Graphite or InfluxDB in the open-source variant is that you don't get HA. There is the Service Telemetry Framework[1], but it is integrated with TripleO, not with OSA; it uses both collectd and ceilometer for collection; metrics are stored in prometheus, events in elasticsearch, which is also used for log aggregation. I am unsure whether this solution is a bit too heavy for your use case. The best thing, in the interest of this community here, would be to put some man-power on gnocchi.
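To illustrate the collectd point above: adding another write endpoint is only a few lines of configuration. A rough sketch for a Graphite target (the host and port are placeholders, not a recommendation):

```
# collectd.conf fragment - ship collected metrics to a Graphite endpoint
LoadPlugin cpu
LoadPlugin memory
LoadPlugin write_graphite

<Plugin write_graphite>
  <Node "example">
    Host "graphite.example.com"   # placeholder endpoint
    Port "2003"
    Protocol "tcp"
  </Node>
</Plugin>
```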
Matthias [1] https://infrawatch.github.io/documentation/ From martin.golasowski at vsb.cz Thu Jan 7 20:39:45 2021 From: martin.golasowski at vsb.cz (Golasowski Martin) Date: Thu, 7 Jan 2021 20:39:45 +0000 Subject: OpenStack Ansible - Telemetry In-Reply-To: <148a3fde-c75f-ed9b-e2d0-84cca2519683@matthias-runge.de> References: <2A904BE8-8679-4C8A-89C3-DF9D7AF89E44@vsb.cz> <148a3fde-c75f-ed9b-e2d0-84cca2519683@matthias-runge.de> Message-ID: <7547C949-1A4C-435C-927A-6869F25ED849@vsb.cz> Thanks! So, in that case, “builtin” ceilometer with gnocchi is the way to go. In fact, it works when deployed with OSA, the only problem is incompatible Grafana plugin. Would you recommend some other tool to visualise gnocchi metrics? We can always downgrade Grafana to the last version which was compatible with the plugin, but that may break other telemetry. Regards, Martin > On 7. 1. 2021, at 19:57, Matthias Runge wrote: > > On 07/01/2021 13:20, Golasowski Martin wrote: >> Dear All, >> I would like to know which monitoring solution is currently supported by OSA? We are operating a small cloud (~ 6 nodes) and we are interested in collecting performance metrics, events and logs. >> So, as far as I know, the official OSA solution is ceilometer/aodh/panko with Gnocchi as DB backend. However Gnocchi project seems abandoned at the moment and the grafana plugin is not compatible with latest Grafana. >> Then there is solution based on collectd with this plugin (https://github.com/signalfx/collectd-openstack ) with Graphite or InfluxDB as backend. This supports only performance metrics and not the events. >> Then there are also some Prometheus exporters available, again, metrics only. >> What do you guys use these days? What would you recommend? > > Hi there, > > with my telemetry hat on: we're working on the gnocchi issue, but gnocchi is only a metrics store anyways. Personally, I wouldn't want to store any events in panko. If you use such things like autoscaling for instances, you definitely want gnocchi and aodh. > > With my collectd hat on: collectd supports collecting and sending metrics and events to multiple write endpoints. It is not designed to collect additional metadata, such as project or user data. You'll mostly get infrastructure related data (from baremetal nodes). > The con side with using graphite or influxdb in the Open Source variant is, that you don't get HA. > > There is the Service Telemetry Framework[1], but it is integrated with TripleO, not with OSA, it uses both collectd and ceilometer for collection; metrics are stored in prometheus, events in elasticsearch, which is also used for log aggregation. I am unsure if this solution is not a bit too heavy for your use case. > > > The best interest (in this community here): put some man-power on gnocchi. > > Matthias > > > [1] https://infrawatch.github.io/documentation/ > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3632 bytes Desc: not available URL: From dvd at redhat.com Thu Jan 7 20:49:25 2021 From: dvd at redhat.com (David Vallee Delisle) Date: Thu, 7 Jan 2021 15:49:25 -0500 Subject: [nova] Nova evacuate issue In-Reply-To: References: <20210107142638.3u2duxgdsrj5fe5y@lyarwood-laptop.usersys.redhat.com> Message-ID: That would be great to have debug log level, it's easier to troubleshoot migration issues. DVD - written my phone, please ignore the tpyos On Thu., Jan. 7, 2021, 1:27 p.m. 
Jean-Philippe Méthot, < jp.methot at planethoster.info> wrote: > Considering that this issue happened in our production environment, it’s > not exactly possible to try to reproduce without shutting down servers that > are currently in use. That said, If the current logs I have are enough, I > will try opening a bug on the bugtracker. > > Compute22, the source host, was completely dead. It refused to boot up > through IPMI. > > It is possible that that stein fix prevented me from reproducing the > problem in my staging environment (production is on rocky, staging is on > stein). > > Also, it may be important to note that our neutron is split, as we use > neutron-rpc-server to answer rpc calls. It’s also HA, as we have two > controllers with neutron-rpc-server and the api running (and that won’t > work anymore when we upgrade production to stein, but that’s another > problem entirely and probably off-topic here). > > Jean-Philippe Méthot > Senior Openstack system administrator > Administrateur système Openstack sénior > PlanetHoster inc. > 4414-4416 Louis B Mayer > Laval, QC, H7P 0G1, Canada > > > > > > Le 7 janv. 2021 à 09:26, Lee Yarwood a écrit : > > Would you be able to trace an example evacuation request fully and > pastebin it somewhere using `openstack server event list $instance [1]` > output to determine the request-id etc? Feel free to also open a bug > about this and we can just triage there instead of the ML. > > The fact that q-api has sent the > network-vif-plugged:80371c01-930d-4ea2-9d28-14438e948b65 to n-api > suggests that the q-agt is actually alive on compute22, was that the > case? Note that a pre-condition of calling the evacuation API is that > the source host has been fenced [2]. > > That all said I wonder if this is somehow related too the following > stein change: > > https://review.opendev.org/c/openstack/nova/+/603844 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ankelezhang at gmail.com Thu Jan 7 00:40:30 2021 From: ankelezhang at gmail.com (Ankele zhang) Date: Thu, 7 Jan 2021 08:40:30 +0800 Subject: tinyipa cannot boot OS of baremetal node In-Reply-To: References: Message-ID: Thanks for the tip! I have make images based on dib of ironic-python-agent-builder, and it's really useful. Ankele Riccardo Pittau 于2021年1月6日周三 下午5:10写道: > Hello Ankele, > > if you're using ironic on production servers I suggest you to build and > use ironic-python-agent images based on diskimage-builder as explained here: > > https://docs.openstack.org/ironic-python-agent-builder/latest/admin/dib.html > > The tinyipa image is best suited for testing and development and not > recommended for production usage. > > Thanks, > > Riccardo > > > On Tue, Jan 5, 2021 at 5:45 PM Ankele zhang wrote: > >> Hi~ >> >> My Rocky OpenStack platform deployed with official documents, includes >> Keystone/Cinder/Neutron/Nova and Ironic. >> >> I used to boot my baremetal nodes by CoreOS downloaded on >> https://tarballs.opendev.org/openstack/ironic-python-agent/coreos/files/ >> >> >> Since I want to customize my own HardwareManager for configuring RAID, I >> have build TinyIPA image tinyipa.tar.gz and tinyipa.vmlinuz with >> ironic-python-agent-builder(master branch) and ironic-python-agent(rocky >> branch). Here are all the products of the build process. 
>> [image: image.png] >> Then I used these two images to create the baremetal node, and boot nova >> server, but I didn't get the results I wanted, it couldn't enter the >> ramdisk and always in 'wait call-back' state. as following >> >> [image: image.png] >> I got nothing in /var/log/ironic/ironig-conductor.log and >> /var/log/nova/nova-compute.log >> >> I don't know if these two image (tinyipa.tar.gz and tinyipa.vmlinuz) are >> valid for Ironic. If not, how can I customize HardwareManager? >> >> Looking forward to hearing from you. >> >> Ankele >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 126537 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 27968 bytes Desc: not available URL: From ryan at messagecloud.com Thu Jan 7 17:14:12 2021 From: ryan at messagecloud.com (Ryan Price-King) Date: Thu, 7 Jan 2021 17:14:12 +0000 Subject: Cannot login to Built trovestack image Message-ID: Hi, I am having problems with the image being deployed correctly with nova, but the communication to the guestagent is timing out and i am stuck in build stage. ./trovestack build-image ubuntu bionic true ubuntu Also with that, I dont know which mysql version it is building. I am assuming it is 5.7.29. I cannot diagnose as i cannot login to the guest image instance. I assume the user is ubuntu, but i cannot login with any password that i have tried. Can you tell me what username/password to login to the instance by console in openstack please. Regards, Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ryan at messagecloud.com Thu Jan 7 17:15:39 2021 From: ryan at messagecloud.com (Ryan Price-King) Date: Thu, 7 Jan 2021 17:15:39 +0000 Subject: Cannot login to Built trovestack image In-Reply-To: References: Message-ID: Hi, Sorry I meant the instance is being deployed in nova correctly and entering running state and console gives me login screen. Regards, Ryan On Thu, 7 Jan 2021 at 17:14, Ryan Price-King wrote: > Hi, > > I am having problems with the image being deployed correctly with nova, > but the communication to the guestagent is timing out and i am stuck in > build stage. > > ./trovestack build-image ubuntu bionic true ubuntu > > > Also with that, I dont know which mysql version it is building. > > I am assuming it is 5.7.29. > > I cannot diagnose as i cannot login to the guest image instance. > > I assume the user is ubuntu, but i cannot login with any password that i > have tried. > > Can you tell me what username/password to login to the instance by > console in openstack please. > > Regards, > Ryan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From SSelf at performair.com Thu Jan 7 21:20:54 2021 From: SSelf at performair.com (SSelf at performair.com) Date: Thu, 7 Jan 2021 21:20:54 +0000 Subject: [cinder] Cinder & Ceph Integration Error: No Valid Backend Message-ID: All; We're having problems with our Openstack/Ceph integration. The versions we're using are Ussuri & Nautilus. When trying to create a volume, the volume is created, though the status is stuck at 'ERROR'. 
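For reference, this is roughly how the failure shows up from the CLI (a representative sketch; the volume name and size are placeholders):

```
$ openstack volume create --size 1 test-vol
$ openstack volume show test-vol -c status    # ends up as 'error' instead of 'available'
$ openstack volume service list               # shows whether the cinder-volume backend is up and enabled
```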
This appears to be the most relevant line from the Cinder scheduler.log: 2021-01-07 14:00:38.473 140686 ERROR cinder.scheduler.flows.create_volume [req-f86556b5-cb2e-4b2d-b556-ed07e632289d 824c26c133b34d8b8e84a7acabbe6f91 a983323b5ffc47e18660794cd9344869 - default default] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. No weighed backends available: cinder.exception.NoValidBackend: No valid backend was found. No weighed backends available Here is the 'cinder.conf' from our Controller Node: [DEFAULT] # define own IP address my_ip = 10.0.80.40 log_dir = /var/log/cinder state_path = /var/lib/cinder auth_strategy = keystone enabled_backends = ceph glance_api_version = 2 debug = true # RabbitMQ connection info transport_url = rabbit://openstack:@10.0.80.40:5672 enable_v3_api = True # MariaDB connection info [database] connection = mysql+pymysql://cinder:@10.0.80.40/cinder # Keystone auth info [keystone_authtoken] www_authenticate_uri = http://10.0.80.40:5000 auth_url = http://10.0.80.40:5000 memcached_servers = 10.0.80.40:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = [oslo_concurrency] lock_path = $state_path/tmp [ceph] volume_driver = cinder.volume.drivers.rbd.RBDDriver volume_backend_name = ceph rbd_pool = rbd_os_volumes rbd_ceph_conf = /etc/ceph/463/ceph.conf rbd_flatten_volume_from_snapshot = false rbd_max_clone_depth = 5 rbd_store_chunk_size = 4 rados_connect_timeout = -1 rbd_user = cinder rbd_exclusive_cinder_pool = true backup_driver = cinder.backup.drivers.ceph backup_ceph_conf = /etc/ceph/300/ceph.conf backup_ceph_user = cinder-backup backup_ceph_chunk_size = 134217728 backup_ceph_pool = rbd_os_backups backup_ceph_stripe_unit = 0 backup_ceph_stripe_count = 0 restore_discard_excess_bytes = true Does anyone have any ideas as to what is going wrong? Thank you, Stephen Self IT Manager Perform Air International sself at performair.com www.performair.com From SSelf at performair.com Thu Jan 7 23:28:10 2021 From: SSelf at performair.com (SSelf at performair.com) Date: Thu, 7 Jan 2021 23:28:10 +0000 Subject: [cinder] Cinder & Ceph Integration Error: No Valid Backend In-Reply-To: References: Message-ID: All; The overall issue has been resolved. There were two major causes: Misplacement of keyring(s) (they were not within /etc/ceph/) 'openstack-cinder-volume' service was not started/enabled Thank you, Stephen Self IT Manager sself at performair.com 463 South Hamilton Court Gilbert, Arizona 85233 Phone: (480) 610-3500 Fax: (480) 610-3501 www.performair.com -----Original Message----- From: SSelf at performair.com [mailto:SSelf at performair.com] Sent: Thursday, January 7, 2021 2:21 PM To: ceph-users at ceph.io; openstack-discuss at lists.openstack.org Subject: [ceph-users] [cinder] Cinder & Ceph Integration Error: No Valid Backend All; We're having problems with our Openstack/Ceph integration. The versions we're using are Ussuri & Nautilus. When trying to create a volume, the volume is created, though the status is stuck at 'ERROR'. This appears to be the most relevant line from the Cinder scheduler.log: 2021-01-07 14:00:38.473 140686 ERROR cinder.scheduler.flows.create_volume [req-f86556b5-cb2e-4b2d-b556-ed07e632289d 824c26c133b34d8b8e84a7acabbe6f91 a983323b5ffc47e18660794cd9344869 - default default] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. 
No weighed backends available: cinder.exception.NoValidBackend: No valid backend was found. No weighed backends available Here is the 'cinder.conf' from our Controller Node: [DEFAULT] # define own IP address my_ip = 10.0.80.40 log_dir = /var/log/cinder state_path = /var/lib/cinder auth_strategy = keystone enabled_backends = ceph glance_api_version = 2 debug = true # RabbitMQ connection info transport_url = rabbit://openstack:@10.0.80.40:5672 enable_v3_api = True # MariaDB connection info [database] connection = mysql+pymysql://cinder:@10.0.80.40/cinder # Keystone auth info [keystone_authtoken] www_authenticate_uri = http://10.0.80.40:5000 auth_url = http://10.0.80.40:5000 memcached_servers = 10.0.80.40:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = cinder password = [oslo_concurrency] lock_path = $state_path/tmp [ceph] volume_driver = cinder.volume.drivers.rbd.RBDDriver volume_backend_name = ceph rbd_pool = rbd_os_volumes rbd_ceph_conf = /etc/ceph/463/ceph.conf rbd_flatten_volume_from_snapshot = false rbd_max_clone_depth = 5 rbd_store_chunk_size = 4 rados_connect_timeout = -1 rbd_user = cinder rbd_exclusive_cinder_pool = true backup_driver = cinder.backup.drivers.ceph backup_ceph_conf = /etc/ceph/300/ceph.conf backup_ceph_user = cinder-backup backup_ceph_chunk_size = 134217728 backup_ceph_pool = rbd_os_backups backup_ceph_stripe_unit = 0 backup_ceph_stripe_count = 0 restore_discard_excess_bytes = true Does anyone have any ideas as to what is going wrong? Thank you, Stephen Self IT Manager Perform Air International sself at performair.com www.performair.com _______________________________________________ ceph-users mailing list -- ceph-users at ceph.io To unsubscribe send an email to ceph-users-leave at ceph.io From anlin.kong at gmail.com Fri Jan 8 00:16:35 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Fri, 8 Jan 2021 13:16:35 +1300 Subject: Cannot login to Built trovestack image In-Reply-To: References: Message-ID: Hi, Have you read https://docs.openstack.org/trove/latest/admin/building_guest_images.html? --- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz On Fri, Jan 8, 2021 at 10:09 AM Ryan Price-King wrote: > Hi, > > Sorry I meant the instance is being deployed in nova correctly and > entering running state and console gives me login screen. > > Regards, > Ryan > > On Thu, 7 Jan 2021 at 17:14, Ryan Price-King > wrote: > >> Hi, >> >> I am having problems with the image being deployed correctly with nova, >> but the communication to the guestagent is timing out and i am stuck in >> build stage. >> >> ./trovestack build-image ubuntu bionic true ubuntu >> >> >> Also with that, I dont know which mysql version it is building. >> >> I am assuming it is 5.7.29. >> >> I cannot diagnose as i cannot login to the guest image instance. >> >> I assume the user is ubuntu, but i cannot login with any password that i >> have tried. >> >> Can you tell me what username/password to login to the instance by >> console in openstack please. >> >> Regards, >> Ryan >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Fri Jan 8 02:25:32 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Fri, 8 Jan 2021 15:25:32 +1300 Subject: Cannot login to Built trovestack image In-Reply-To: References: Message-ID: In this case, we need to check the trove-guestagent log. 
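Once you are on the guest, something like this is usually enough to see why the agent never reported back (a sketch; the exact log location depends on how the image was built, /var/log/trove/ is the usual default):

```
$ sudo tail -n 200 /var/log/trove/trove-guestagent.log
```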
For how to ssh into the guest instance, https://docs.openstack.org/trove/latest/admin/troubleshooting.html You can also jump into #openstack-trove IRC channel we could have a chat there. --- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz On Fri, Jan 8, 2021 at 1:34 PM Ryan Price-King wrote: > Hi, > > Sorry I should have clarified. The build step I am stuck on is while > spinning up the trove database on openstack horizon. I have built the qcow > image fine. Also, I can view the login prompt of the trove instance that > was created when creating a trove database. So it seems the agent is not > running in the instance properly. > > I have read that document lots and need to login to the image to see the > files and Ubuntu Ubuntu doesnt work. > > Regards, > Ryan > > On 8 Jan 2021 at 00:16, Lingxian Kong wrote: > > Hi, > > Have you read > https://docs.openstack.org/trove/latest/admin/building_guest_images.html? > > --- > Lingxian Kong > Senior Software Engineer > Catalyst Cloud > www.catalystcloud.nz > > > On Fri, Jan 8, 2021 at 10:09 AM Ryan Price-King > wrote: > >> Hi, >> >> Sorry I meant the instance is being deployed in nova correctly and >> entering running state and console gives me login screen. >> >> Regards, >> Ryan >> >> On Thu, 7 Jan 2021 at 17:14, Ryan Price-King >> wrote: >> >>> Hi, >>> >>> I am having problems with the image being deployed correctly with nova, >>> but the communication to the guestagent is timing out and i am stuck in >>> build stage. >>> >>> ./trovestack build-image ubuntu bionic true ubuntu >>> >>> >>> Also with that, I dont know which mysql version it is building. >>> >>> I am assuming it is 5.7.29. >>> >>> I cannot diagnose as i cannot login to the guest image instance. >>> >>> I assume the user is ubuntu, but i cannot login with any password that i >>> have tried. >>> >>> Can you tell me what username/password to login to the instance by >>> console in openstack please. >>> >>> Regards, >>> Ryan >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Fri Jan 8 03:02:37 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 7 Jan 2021 22:02:37 -0500 Subject: [cinder] review priorities for the next 2 weeks Message-ID: <09e56f20-6961-e663-644e-4b72c4c3e19f@gmail.com> As discussed at yesterday's cinder weekly meeting, the new driver merge deadline will soon be upon us at the Wallaby-2 milestone in two weeks [0]. Thus, all members of the cinder community should make driver reviews the top priority over the next 2 weeks. There are four drivers proposed for Wallaby that haven't merged yet: 1 - Ceph ISCSI driver https://review.opendev.org/c/openstack/cinder/+/662829 2 - Dell/EMC PowerVault ME Series driver: https://review.opendev.org/c/openstack/cinder/+/758684/ 3 - TOYOU ACS5000 driver: https://review.opendev.org/c/openstack/cinder/+/767290/ 4 - Kioxia KumoScale NVMeoF volume driver: https://review.opendev.org/c/openstack/cinder/+/768574 nvmeof connector mdraid support: https://review.opendev.org/c/openstack/os-brick/+/768575 healing agent: https://review.opendev.org/c/openstack/os-brick/+/768576 Don't forget that the cinder docs contain a helpful checklist for reviewing drivers: https://docs.openstack.org/cinder/latest/contributor/new_driver_checklist.html If you are waiting for reviews of your driver, it will be helpful for you to review other drivers. 
You may notice something that you missed or could code differently in your driver, or you may be able to suggest changes based on issues you've come across implementing your driver. Happy reviewing! brian [0] https://releases.openstack.org/wallaby/schedule.html#w-cinder-driver-deadline From hberaud at redhat.com Fri Jan 8 08:26:47 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 8 Jan 2021 09:26:47 +0100 Subject: [Release-job-failures] Release of openstack/kolla for ref refs/tags/9.3.0 failed In-Reply-To: References: Message-ID: Hello, FYI it seems that kolla met docker's new limitation during its release jobs, especially with your publish jobs, I saw that you (the kolla team) already discussed this limitation [1] on the ML. ``` 2021-01-07 17:21:03.396355 | primary | ERROR:kolla.common.utils.base:Error'd with the following message 2021-01-07 17:21:03.396465 | primary | ERROR:kolla.common.utils.base:toomanyrequests: You have reached your pull rate limit. You may increase ``` Three jobs here failed for the same reason. I don't think that reenqueue the failing jobs without a specific action to manage this limitation will help us here. Let us know if we can help us in some manner. [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019148.html Le jeu. 7 janv. 2021 à 20:41, a écrit : > Build failed. > > - openstack-upload-github-mirror > https://zuul.opendev.org/t/openstack/build/a64b4c4bc398481f9afffa2ad465a012 > : SUCCESS in 1m 03s > - release-openstack-python > https://zuul.opendev.org/t/openstack/build/2ed64dfcbcaf483abf003cfdf4f25837 > : SUCCESS in 3m 11s > - announce-release > https://zuul.opendev.org/t/openstack/build/84d803d496a2485c983f6233dafcfd71 > : SUCCESS in 4m 07s > - propose-update-constraints > https://zuul.opendev.org/t/openstack/build/d0cac6a077054bc8ba9eb92e56c21799 > : SUCCESS in 4m 15s > - kolla-publish-centos-source > https://zuul.opendev.org/t/openstack/build/5e8f6aa2f56940a7be77a8dfe1c8ecc6 > : SUCCESS in 2h 20m 47s > - kolla-publish-centos-binary > https://zuul.opendev.org/t/openstack/build/8a8a021d9f9c4ca79755b06309710cc7 > : SUCCESS in 1h 56m 32s (non-voting) > - kolla-publish-centos8-source > https://zuul.opendev.org/t/openstack/build/fb6891d3f5e4493b880fce263a92e086 > : SUCCESS in 1h 50m 57s > - kolla-publish-centos8-binary > https://zuul.opendev.org/t/openstack/build/c312c05e5d084fdbb3f372755221f186 > : SUCCESS in 1h 13m 12s (non-voting) > - kolla-publish-debian-source > https://zuul.opendev.org/t/openstack/build/e24c12751b8c4aba881adb6c9ae8dc07 > : SUCCESS in 1h 27m 27s (non-voting) > - kolla-publish-debian-source-aarch64 > https://zuul.opendev.org/t/openstack/build/1ff3b02df53847d0aa54bf12ea7fa666 > : FAILURE in 1h 51m 49s (non-voting) > - kolla-publish-debian-binary > https://zuul.opendev.org/t/openstack/build/012c2de475fe45ea83f1cd8a7420aa6d > : SUCCESS in 1h 15m 15s (non-voting) > - kolla-publish-ubuntu-source > https://zuul.opendev.org/t/openstack/build/88ca21d972514cce954ecb586324fa29 > : FAILURE in 4m 14s > - kolla-publish-ubuntu-binary > https://zuul.opendev.org/t/openstack/build/f3d44e4d1b6c4e9799161b156290b238 > : FAILURE in 4m 19s (non-voting) > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- 
wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Jan 8 08:45:02 2021 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 8 Jan 2021 08:45:02 +0000 Subject: [Release-job-failures] Release of openstack/kolla for ref refs/tags/9.3.0 failed In-Reply-To: References: Message-ID: Hi Herve, Thanks for noticing this. The pull rate limit refreshes every 6 hours, is specific to the IP used by the job, and in the case of the build/publish jobs we only require pulling a single image - the base OS image - per job. I suggest we reenqueue the failing jobs. For build/publish jobs we have discussed using the infra Docker registry mirrors, which should avoid hitting Dockerhub too often. Cheers, Mark On Fri, 8 Jan 2021 at 08:28, Herve Beraud wrote: > Hello, > > FYI it seems that kolla met docker's new limitation during its release > jobs, especially with your publish jobs, I saw that you (the kolla team) > already discussed this limitation [1] on the ML. > > ``` > > 2021-01-07 17:21:03.396355 | primary | ERROR:kolla.common.utils.base:Error'd with the following message > > 2021-01-07 17:21:03.396465 | primary | ERROR:kolla.common.utils.base:toomanyrequests: You have reached your pull rate limit. You may increase > ``` > > Three jobs here failed for the same reason. > > I don't think that reenqueue the failing jobs without a specific action to manage this limitation will help us here. > > Let us know if we can help us in some manner. > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019148.html > > > Le jeu. 7 janv. 2021 à 20:41, a écrit : > >> Build failed. 
>> >> - openstack-upload-github-mirror >> https://zuul.opendev.org/t/openstack/build/a64b4c4bc398481f9afffa2ad465a012 >> : SUCCESS in 1m 03s >> - release-openstack-python >> https://zuul.opendev.org/t/openstack/build/2ed64dfcbcaf483abf003cfdf4f25837 >> : SUCCESS in 3m 11s >> - announce-release >> https://zuul.opendev.org/t/openstack/build/84d803d496a2485c983f6233dafcfd71 >> : SUCCESS in 4m 07s >> - propose-update-constraints >> https://zuul.opendev.org/t/openstack/build/d0cac6a077054bc8ba9eb92e56c21799 >> : SUCCESS in 4m 15s >> - kolla-publish-centos-source >> https://zuul.opendev.org/t/openstack/build/5e8f6aa2f56940a7be77a8dfe1c8ecc6 >> : SUCCESS in 2h 20m 47s >> - kolla-publish-centos-binary >> https://zuul.opendev.org/t/openstack/build/8a8a021d9f9c4ca79755b06309710cc7 >> : SUCCESS in 1h 56m 32s (non-voting) >> - kolla-publish-centos8-source >> https://zuul.opendev.org/t/openstack/build/fb6891d3f5e4493b880fce263a92e086 >> : SUCCESS in 1h 50m 57s >> - kolla-publish-centos8-binary >> https://zuul.opendev.org/t/openstack/build/c312c05e5d084fdbb3f372755221f186 >> : SUCCESS in 1h 13m 12s (non-voting) >> - kolla-publish-debian-source >> https://zuul.opendev.org/t/openstack/build/e24c12751b8c4aba881adb6c9ae8dc07 >> : SUCCESS in 1h 27m 27s (non-voting) >> - kolla-publish-debian-source-aarch64 >> https://zuul.opendev.org/t/openstack/build/1ff3b02df53847d0aa54bf12ea7fa666 >> : FAILURE in 1h 51m 49s (non-voting) >> - kolla-publish-debian-binary >> https://zuul.opendev.org/t/openstack/build/012c2de475fe45ea83f1cd8a7420aa6d >> : SUCCESS in 1h 15m 15s (non-voting) >> - kolla-publish-ubuntu-source >> https://zuul.opendev.org/t/openstack/build/88ca21d972514cce954ecb586324fa29 >> : FAILURE in 4m 14s >> - kolla-publish-ubuntu-binary >> https://zuul.opendev.org/t/openstack/build/f3d44e4d1b6c4e9799161b156290b238 >> : FAILURE in 4m 19s (non-voting) >> >> _______________________________________________ >> Release-job-failures mailing list >> Release-job-failures at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hberaud at redhat.com Fri Jan 8 09:40:43 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 8 Jan 2021 10:40:43 +0100 Subject: [cinder][tripleo][release] Victoria Cycle-Trailing Release Deadline Message-ID: Hello teams with trailing projects, Next week is the Victoria cycle-trailing release deadline [1], and all projects following the cycle-trailing release model must release their Victoria deliverables by 14 January, 2021. The following trailing projects haven't been yet released for Victoria. Cinder: - cinderlib Tripleo: - os-collect-config - os-refresh-config - tripleo-ipsec This is just a friendly reminder to allow you to release these projects in time. Do not hesitate to ping us if you have any questions or concerns. [1] https://releases.openstack.org/wallaby/schedule.html#w-cycle-trail Hervé Beraud (hberaud) and the Release Management Team -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Jan 8 09:58:44 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 8 Jan 2021 10:58:44 +0100 Subject: [cinder][tripleo][release] Victoria Cycle-Trailing Release Deadline In-Reply-To: References: Message-ID: @Cinder: In my previous email I missed this patch ( https://review.opendev.org/c/openstack/releases/+/768766) , so sorry, please ignore my previous communication. Le ven. 8 janv. 2021 à 10:40, Herve Beraud a écrit : > Hello teams with trailing projects, > > Next week is the Victoria cycle-trailing release deadline [1], and all > projects following the cycle-trailing release model must release their > Victoria deliverables by 14 January, 2021. > > The following trailing projects haven't been yet released for Victoria. > > Cinder: > - cinderlib > > Tripleo: > - os-collect-config > - os-refresh-config > - tripleo-ipsec > > This is just a friendly reminder to allow you to release these projects in > time. > > Do not hesitate to ping us if you have any questions or concerns. 
> > [1] https://releases.openstack.org/wallaby/schedule.html#w-cycle-trail > > Hervé Beraud (hberaud) and the Release Management Team > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri Jan 8 11:08:04 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 8 Jan 2021 13:08:04 +0200 Subject: [cinder][tripleo][release] Victoria Cycle-Trailing Release Deadline In-Reply-To: References: Message-ID: On Fri, Jan 8, 2021 at 11:42 AM Herve Beraud wrote: > Hello teams with trailing projects, > > Next week is the Victoria cycle-trailing release deadline [1], and all > projects following the cycle-trailing release model must release their > Victoria deliverables by 14 January, 2021. > > The following trailing projects haven't been yet released for Victoria. > > Cinder: > - cinderlib > > Tripleo: > - os-collect-config > - os-refresh-config > - tripleo-ipsec > > This is just a friendly reminder to allow you to release these projects in > time. > > many thanks for the reminder I will look into this regards > Do not hesitate to ping us if you have any questions or concerns. 
> > [1] https://releases.openstack.org/wallaby/schedule.html#w-cycle-trail > > Hervé Beraud (hberaud) and the Release Management Team > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Fri Jan 8 12:58:53 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Fri, 08 Jan 2021 13:58:53 +0100 Subject: [nova] unit testing on ppc64le Message-ID: <5E9MMQ.3INH7FY465VR3@est.tech> Hi, We have a bugreport[1] showing that our unit tests are not passing on ppc. In the upstream CI we don't have test capability to run our tests on ppc. But we have the IBM Power KVM CI[2] that runs integration tests on ppc. I'm wondering if IBM could extend the CI to run nova unit and functional tests too. I've added Michael Turek (mjturek at us.ibm.com) to CC. Michael is listed as the contact person for the CI. Cheers, gibi [1]https://bugs.launchpad.net/nova/+bug/1909972 [2]https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI From C-Albert.Braden at charter.com Fri Jan 8 13:02:04 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Fri, 8 Jan 2021 13:02:04 +0000 Subject: [kolla] [train] Changing default quotas Message-ID: <3c8f6754aeb14df38ed059c0b6eb47f8@NCEMEXGP009.CORP.CHARTERCOM.com> I'm trying to change default quotas on a Train kolla POC, and I can change compute, but when I try to change a network quota such as subnets I get an error: (openstack) [root at adjutant-poc openstack]# openstack quota set --class --subnets 2 default Network quotas are ignored since quota class is not supported. When I google around I find some old documents talking about setting neutron quotas in /etc/neutron/neutron.conf but I can't find anything recent, and I don't see any mention of quotas when I look at /etc/neutron/neutron.conf in our neutron_server containers. What is the best way to change neutron default quotas in kolla? I apologize for the nonsense below. So far I have not been able to stop it from being attached to my external emails. I'm working on it. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. 
If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Fri Jan 8 13:05:31 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Fri, 8 Jan 2021 13:05:31 +0000 Subject: [nova] Nova evacuate issue In-Reply-To: References: <20210107142638.3u2duxgdsrj5fe5y@lyarwood-laptop.usersys.redhat.com> Message-ID: <20210108130531.3p7u6qb24o47wdcp@lyarwood-laptop.usersys.redhat.com> On 07-01-21 11:58:05, Jean-Philippe Méthot wrote: > Considering that this issue happened in our production environment, > it’s not exactly possible to try to reproduce without shutting down > servers that are currently in use. That said, If the current logs I > have are enough, I will try opening a bug on the bugtracker. Yup appreciate that, if you still have logs then using the event list to determine the request-id for the evacuation and then providing any n-api/n-cpu logs referencing that request-id in the bug would be great. Lots more detail in the following doc: https://docs.openstack.org/api-guide/compute/faults.html > Compute22, the source host, was completely dead. It refused to boot up > through IPMI. ACK. > It is possible that that stein fix prevented me from reproducing the > problem in my staging environment (production is on rocky, staging is > on stein). > > Also, it may be important to note that our neutron is split, as we use > neutron-rpc-server to answer rpc calls. It’s also HA, as we have two > controllers with neutron-rpc-server and the api running (and that > won’t work anymore when we upgrade production to stein, but that’s > another problem entirely and probably off-topic here). I doubt that played a part, we've fixed many many bugs with Nova's evacuation logic over the releases so for now I'm going to assume it's something within Nova. > > Le 7 janv. 2021 à 09:26, Lee Yarwood a écrit : > > > > Would you be able to trace an example evacuation request fully and > > pastebin it somewhere using `openstack server event list $instance [1]` > > output to determine the request-id etc? Feel free to also open a bug > > about this and we can just triage there instead of the ML. > > > > The fact that q-api has sent the > > network-vif-plugged:80371c01-930d-4ea2-9d28-14438e948b65 to n-api > > suggests that the q-agt is actually alive on compute22, was that the > > case? Note that a pre-condition of calling the evacuation API is that > > the source host has been fenced [2]. > > > > That all said I wonder if this is somehow related too the following > > stein change: > > > > https://review.opendev.org/c/openstack/nova/+/603844 -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From hberaud at redhat.com Fri Jan 8 13:42:14 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 8 Jan 2021 14:42:14 +0100 Subject: [Release-job-failures] Release of openstack/kolla for ref refs/tags/9.3.0 failed In-Reply-To: References: Message-ID: Also notice that yesterday we met an AFS issue [1] during the merge of the following patches: - https://review.opendev.org/c/openstack/releases/+/769325 - https://review.opendev.org/c/openstack/releases/+/769322 - https://review.opendev.org/c/openstack/releases/+/769324 The problem was that afs an server got stuck in a pathological way from 05:50 utc today until 16:10 utc when we hard rebooted the server instance. The consequence of this is that the related tarballs haven't been published: - https://tarballs.opendev.org/openstack/kolla/?C=M;O=D - https://tarballs.opendev.org/openstack/kolla-ansible/?C=M;O=D - https://tarballs.opendev.org/openstack/kayobe/?C=M;O=D And so the RDO CI fail to build them for train, ussuri and victoria: - https://review.rdoproject.org/r/#/c/31499/ - https://review.rdoproject.org/r/#/c/31498/ - https://review.rdoproject.org/r/#/c/31497/ So I think we need to reenqueue these jobs too. Thanks for reading [1] http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-07.log.html#t2021-01-07T16:48:57 Le ven. 8 janv. 2021 à 09:45, Mark Goddard a écrit : > Hi Herve, > > Thanks for noticing this. The pull rate limit refreshes every 6 hours, is > specific to the IP used by the job, and in the case of the build/publish > jobs we only require pulling a single image - the base OS image - per job. > I suggest we reenqueue the failing jobs. > > For build/publish jobs we have discussed using the infra Docker registry > mirrors, which should avoid hitting Dockerhub too often. > > Cheers, > Mark > > On Fri, 8 Jan 2021 at 08:28, Herve Beraud wrote: > >> Hello, >> >> FYI it seems that kolla met docker's new limitation during its release >> jobs, especially with your publish jobs, I saw that you (the kolla team) >> already discussed this limitation [1] on the ML. >> >> ``` >> >> 2021-01-07 17:21:03.396355 | primary | ERROR:kolla.common.utils.base:Error'd with the following message >> >> 2021-01-07 17:21:03.396465 | primary | ERROR:kolla.common.utils.base:toomanyrequests: You have reached your pull rate limit. You may increase >> ``` >> >> Three jobs here failed for the same reason. >> >> I don't think that reenqueue the failing jobs without a specific action to manage this limitation will help us here. >> >> Let us know if we can help us in some manner. >> >> >> [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019148.html >> >> >> Le jeu. 7 janv. 2021 à 20:41, a écrit : >> >>> Build failed. 
>>> >>> - openstack-upload-github-mirror >>> https://zuul.opendev.org/t/openstack/build/a64b4c4bc398481f9afffa2ad465a012 >>> : SUCCESS in 1m 03s >>> - release-openstack-python >>> https://zuul.opendev.org/t/openstack/build/2ed64dfcbcaf483abf003cfdf4f25837 >>> : SUCCESS in 3m 11s >>> - announce-release >>> https://zuul.opendev.org/t/openstack/build/84d803d496a2485c983f6233dafcfd71 >>> : SUCCESS in 4m 07s >>> - propose-update-constraints >>> https://zuul.opendev.org/t/openstack/build/d0cac6a077054bc8ba9eb92e56c21799 >>> : SUCCESS in 4m 15s >>> - kolla-publish-centos-source >>> https://zuul.opendev.org/t/openstack/build/5e8f6aa2f56940a7be77a8dfe1c8ecc6 >>> : SUCCESS in 2h 20m 47s >>> - kolla-publish-centos-binary >>> https://zuul.opendev.org/t/openstack/build/8a8a021d9f9c4ca79755b06309710cc7 >>> : SUCCESS in 1h 56m 32s (non-voting) >>> - kolla-publish-centos8-source >>> https://zuul.opendev.org/t/openstack/build/fb6891d3f5e4493b880fce263a92e086 >>> : SUCCESS in 1h 50m 57s >>> - kolla-publish-centos8-binary >>> https://zuul.opendev.org/t/openstack/build/c312c05e5d084fdbb3f372755221f186 >>> : SUCCESS in 1h 13m 12s (non-voting) >>> - kolla-publish-debian-source >>> https://zuul.opendev.org/t/openstack/build/e24c12751b8c4aba881adb6c9ae8dc07 >>> : SUCCESS in 1h 27m 27s (non-voting) >>> - kolla-publish-debian-source-aarch64 >>> https://zuul.opendev.org/t/openstack/build/1ff3b02df53847d0aa54bf12ea7fa666 >>> : FAILURE in 1h 51m 49s (non-voting) >>> - kolla-publish-debian-binary >>> https://zuul.opendev.org/t/openstack/build/012c2de475fe45ea83f1cd8a7420aa6d >>> : SUCCESS in 1h 15m 15s (non-voting) >>> - kolla-publish-ubuntu-source >>> https://zuul.opendev.org/t/openstack/build/88ca21d972514cce954ecb586324fa29 >>> : FAILURE in 4m 14s >>> - kolla-publish-ubuntu-binary >>> https://zuul.opendev.org/t/openstack/build/f3d44e4d1b6c4e9799161b156290b238 >>> : FAILURE in 4m 19s (non-voting) >>> >>> _______________________________________________ >>> Release-job-failures mailing list >>> Release-job-failures at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures >>> >> >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 
5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri Jan 8 14:02:10 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 8 Jan 2021 16:02:10 +0200 Subject: [cinder][tripleo][release] Victoria Cycle-Trailing Release Deadline In-Reply-To: References: Message-ID: On Fri, Jan 8, 2021 at 1:08 PM Marios Andreou wrote: > > > On Fri, Jan 8, 2021 at 11:42 AM Herve Beraud wrote: > >> Hello teams with trailing projects, >> >> Next week is the Victoria cycle-trailing release deadline [1], and all >> projects following the cycle-trailing release model must release their >> Victoria deliverables by 14 January, 2021. >> >> The following trailing projects haven't been yet released for Victoria. >> >> Cinder: >> - cinderlib >> >> Tripleo: >> - os-collect-config >> - os-refresh-config >> - tripleo-ipsec >> >> This is just a friendly reminder to allow you to release these projects >> in time. >> >> > many thanks for the reminder I will look into this > > regards > > there is discussion between me and Herve at https://review.opendev.org/c/openstack/releases/+/769915 if you are interested in this topic thanks, marios > > >> Do not hesitate to ping us if you have any questions or concerns. >> >> [1] https://releases.openstack.org/wallaby/schedule.html#w-cycle-trail >> >> Hervé Beraud (hberaud) and the Release Management Team >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosmaita.fossdev at gmail.com Fri Jan 8 15:21:51 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Fri, 8 Jan 2021 10:21:51 -0500 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> Message-ID: On 1/6/21 12:59 PM, Ghanshyam Mann wrote: > Hello Everyone, > > You might have seen the discussion around dropping the lower constraints > testing as it becomes more challenging than the current value of doing it. I think the TC needs to discuss this explicitly (at a meeting or two, not just on the ML) and give the projects some guidance. I agree that there's little point in maintaining the l-c if they're not actually useful to anyone in their current state, but if their helpfulness (or potential helpfulness) outweighs the maintenance burden, then we should keep them. (How's that for a profound statement?) Maybe someone can point me to where I can RTFM to get a clearer picture, but my admittedly vague idea of what the l-c are for is that it has something to do with making packaging easier. If that's the case, it would be good for the TC to reach out to some openstack packagers/distributors to find outline how they use l-c (if at all) and what changes could be done to make them actually useful, and then we can re-assess the maintenance burden. This whole experience with the new pip resolver has been painful, I think, because it hit all projects and all branches at once. My experience, however, is that if I'd been updating the minimum versions for all the cinder deliverables in their requirements.txt and l-c.txt files every cycle to reflect a pip freeze at Milestone-3 it would have been a lot easier. What do other projects do about this? In Cinder, we've just been updating the requirements on-demand, not proactively, and as a result for some dependencies we claimed that foo>=0.9.0 is OK -- but except for unit tests in the l-c job, cinder deliverables haven't been using anything other than foo>=16.0 since rocky. So in master, I took advantage of having to revise requirements and l-c to make some major jumps in minimum versions. And I'm thinking of doing a pip-freeze requirements.txt minimum version update from now on at M-3 each cycle, which will force me to make an l-c.txt update too. (Maybe I was supposed to be doing that all along? Or maybe it's a bad idea? I could use some guidance here.) It would be good for the l-c to reflect reality, but on the other hand, updating the minimum versions in requirements.txt (and hence in l-c) too aggressively probably won't help packagers at all. (Or maybe it will, I don't know.) On the other hand, having the l-c is useful from the standpoint of letting you know when your minimum acceptable version in requirements.txt will break your unit tests. But if we're updating the minimum versions of dependencies every cycle to known good minimum versions, an l-c failure is going to be pretty rare, so maybe it's not worth the trouble of maintaining the l-c.txt and CI job. One other thing: if we do keep l-c, we need to have some guidance about what's actually supposed to be in there. (Or I need to RTFM.) I've noticed that as we've added new dependencies to cinder, we've included the dependency in l-c.txt, but not its indirect dependencies. I guess we should have been adding the indirect dependencies all along, too? (Spoiler alert: we haven't.) 
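For what it's worth, the pip-freeze-at-M-3 idea above would look roughly like this in practice (an illustrative sketch only; the file names and the sed step are arbitrary, and the output still needs hand-editing for environment markers and for anything deliberately kept older):

```
# in a venv where the current deliverable and its test deps are installed
pip freeze > freeze.txt
# exact pins become a candidate lower-constraints.txt
grep -v '^#' freeze.txt > lower-constraints.txt
# the same versions, relaxed to >=, become candidate minimums for requirements.txt
sed 's/==/>=/' lower-constraints.txt > candidate-minimums.txt
```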
This email has gotten too long, so I will shut up now. cheers, brian > > Few of the ML thread around this discussion: > > - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019521.html > - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019390.html > > As Oslo and many other project dropping or already dropped it, we should decide it for all > other projects also otherwise it can be more changing than it is currently. > > We have not defined it in PTI or testing runtime so it is always up to projects if they still > want to keep it but we should decide a general recommendation here. > > -gmann > From pierre at stackhpc.com Fri Jan 8 15:29:21 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Fri, 8 Jan 2021 16:29:21 +0100 Subject: [kolla] [train] Changing default quotas In-Reply-To: <3c8f6754aeb14df38ed059c0b6eb47f8@NCEMEXGP009.CORP.CHARTERCOM.com> References: <3c8f6754aeb14df38ed059c0b6eb47f8@NCEMEXGP009.CORP.CHARTERCOM.com> Message-ID: Hi Albert, You can change your default Neutron quotas via neutron.conf. This is described in the latest Neutron documentation: https://docs.openstack.org/neutron/latest/admin/ops-quotas.html#basic-quota-configuration Kolla-Ansible doesn't provide a full configuration file for OpenStack services. Instead, it includes specific options that are changed from defaults. Create a /etc/kolla/config/neutron.conf file with your own [quotas] settings, which will be merged into the default configuration templated by Kolla-Ansible. For more details see https://docs.openstack.org/kolla-ansible/latest/admin/advanced-configuration.html#openstack-service-configuration-in-kolla Best wishes, Pierre On Fri, 8 Jan 2021 at 14:05, Braden, Albert wrote: > > I’m trying to change default quotas on a Train kolla POC, and I can change compute, but when I try to change a network quota such as subnets I get an error: > > > > (openstack) [root at adjutant-poc openstack]# openstack quota set --class --subnets 2 default > > Network quotas are ignored since quota class is not supported. > > > > When I google around I find some old documents talking about setting neutron quotas in /etc/neutron/neutron.conf but I can’t find anything recent, and I don’t see any mention of quotas when I look at /etc/neutron/neutron.conf in our neutron_server containers. What is the best way to change neutron default quotas in kolla? > > > > I apologize for the nonsense below. So far I have not been able to stop it from being attached to my external emails. I'm working on it. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. From arne.wiebalck at cern.ch Fri Jan 8 15:39:59 2021 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Fri, 8 Jan 2021 16:39:59 +0100 Subject: [baremetal-sig][ironic] Tue Jan 12, 2021, 2pm UTC: 'Multi-Tenancy in Ironic: Of Owners and Lessees' Message-ID: Dear all, Happy new year! The Bare Metal SIG will continue its monthly meetings and start again on Tue Jan 12, 2021, at 2pm UTC. 
This time there will be a 10 minute "topic-of-the-day" presentation by Tzu-Mainn Chen (tzumainn) on 'Multi-Tenancy in Ironic: Of Owners and Lessees' So, if you would like to learn how this relatively recent addition to Ironic works, you can find all the details for this meeting on the SIG's etherpad: https://etherpad.opendev.org/p/bare-metal-sig Everyone is welcome, don't miss out! Cheers, Arne From juliaashleykreger at gmail.com Fri Jan 8 16:01:32 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 8 Jan 2021 08:01:32 -0800 Subject: [ironic] Resuming meetings next week Message-ID: Greetings everyone, Now that the Holidays are over, it is time for us to resume meeting on a regular basis. Ironic will resume it's weekly meeting next Monday at 1500 UTC in #openstack-ironic on irc.freenode.net. The agenda can be found on the OpenStack wiki[0]. See you all there! -Julia [0]: https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting From ryan at messagecloud.com Fri Jan 8 00:34:58 2021 From: ryan at messagecloud.com (Ryan Price-King) Date: Thu, 7 Jan 2021 16:34:58 -0800 Subject: Cannot login to Built trovestack image In-Reply-To: References: Message-ID: Hi, Sorry I should have clarified. The build step I am stuck on is while spinning up the trove database on openstack horizon. I have built the qcow image fine. Also, I can view the login prompt of the trove instance that was created when creating a trove database. So it seems the agent is not running in the instance properly. I have read that document lots and need to login to the image to see the files and Ubuntu Ubuntu doesnt work. Regards, Ryan On 8 Jan 2021 at 00:16, Lingxian Kong wrote: Hi, Have you read https://docs.openstack.org/trove/latest/admin/building_guest_images.html? --- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz On Fri, Jan 8, 2021 at 10:09 AM Ryan Price-King wrote: > Hi, > > Sorry I meant the instance is being deployed in nova correctly and > entering running state and console gives me login screen. > > Regards, > Ryan > > On Thu, 7 Jan 2021 at 17:14, Ryan Price-King > wrote: > >> Hi, >> >> I am having problems with the image being deployed correctly with nova, >> but the communication to the guestagent is timing out and i am stuck in >> build stage. >> >> ./trovestack build-image ubuntu bionic true ubuntu >> >> >> Also with that, I dont know which mysql version it is building. >> >> I am assuming it is 5.7.29. >> >> I cannot diagnose as i cannot login to the guest image instance. >> >> I assume the user is ubuntu, but i cannot login with any password that i >> have tried. >> >> Can you tell me what username/password to login to the instance by >> console in openstack please. >> >> Regards, >> Ryan >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ankelezhang at gmail.com Fri Jan 8 06:17:45 2021 From: ankelezhang at gmail.com (Ankele zhang) Date: Fri, 8 Jan 2021 14:17:45 +0800 Subject: how to read ipa source code#ironic-python-agent Message-ID: Hi~ I have an OpenStack platform on Rocky, and I used Nova/Keystone/Cinder/Glance/Ceph/Ironic. Recently, I have build my own ironic-python-agent deployment ramdisk and kernel images by ironic-python-agent-builder, the following work is to add my customized raid configuration method to HardwareManager. Before this, I want to read the ipa's source code. But I don't know where is the 'main()' entry. Looking forward to your help. 
Ankele -------------- next part -------------- An HTML attachment was scrubbed... URL: From D.R.MacDonald at salford.ac.uk Fri Jan 8 14:57:09 2021 From: D.R.MacDonald at salford.ac.uk (Daniel Macdonald) Date: Fri, 8 Jan 2021 14:57:09 +0000 Subject: [octavia] problems configuring octavia Message-ID: Happy new year Openstack users and devs! I have been trying on and off for several months to get octavia working but I have yet to have it successfully create a loadbalancer. I have deployed OS bionic-train using the Charms telemetry bundle with the octavia overlay. Openstack is working for creating regular instances but I get various errors when trying to create a loadbalancer. The first issue I feel I should mention is that I am using bind running on our MAAS controller as a DNS server. juju doesn't work if I enable IPv6 under bind yet the octavia charm defaults to using IPv6 for its management network so I have tried creating a IPv4 management network but I'm still having problems. For more details on that please see the comments of this bug report: https://bugs.launchpad.net/charm-octavia/+bug/1897418 Bug #1897418 “feature request: have option to use ipv4 when sett...” : Bugs : OpenStack Octavia Charm By default, Octavia charm uses ipv6 for its lb-mgmt-subnet.[1] It would be nice to have the option to choose an ipv4 network from the start instead of deleting the ipv6 network and recreating the ipv4 subnet. Implementation - possible configuration option parameter when deploying. [1] https://opendev.org/openstack/charm-octavia/src/branch/master/src/lib/charm/openstack/api_crud.py#L560 bugs.launchpad.net Another notable issue I have is that after installing the charms telemetry bundle I have 2 projects call services. How do I know which is the correct one to use for Octavia? Is this following document going to be the best guide for me to follow to complete the final steps required to get Octavia (under Train) working: https://docs.openstack.org/octavia/latest/install/install-ubuntu.html#install-and-configure-components OpenStack Docs: Install and configure for Ubuntu Install and configure for Ubuntu¶. This section describes how to install and configure the Load-balancer service for Ubuntu 18.04 (LTS). docs.openstack.org I'm hoping someone has already written an easy to follow guide to using Octavia with an IPv4 management network using the Charms bundle to do most of the installation work? Thanks [University of Salford] DANIEL MACDONALD Specialist Technical Demonstrator School of Computing, Science & Engineering Room 145, Newton Building, University of Salford, Manchester M5 4WT T: +44(0) 0161 295 5242 D.R.MacDonald at salford.ac.uk / www.salford.ac.uk [CSE] -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Jan 8 16:22:32 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 8 Jan 2021 16:22:32 +0000 Subject: [dev][ironic] how to read ipa source code#ironic-python-agent In-Reply-To: References: Message-ID: <20210108162231.wi3u7cqguv3euyqd@yuggoth.org> [Keeping you in Cc since you don't seem to be subscribed to this ML] On 2021-01-08 14:17:45 +0800 (+0800), Ankele zhang wrote: > I have an OpenStack platform on Rocky, and I used > Nova/Keystone/Cinder/Glance/Ceph/Ironic. Recently, I have build my > own ironic-python-agent deployment ramdisk and kernel images by > ironic-python-agent-builder, the following work is to add my > customized raid configuration method to HardwareManager. 
Before > this, I want to read the ipa's source code. But I don't know where > is the 'main()' entry. Looking forward to your help. You specifically want the stable/rocky version? The ironic-python-agent console script entrypoint (according to the setup.cfg at the top of that repository) calls the run() function from this file: https://opendev.org/openstack/ironic-python-agent/src/branch/stable/rocky/ironic_python_agent/cmd/agent.py Hope that helps! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Fri Jan 8 16:28:24 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 8 Jan 2021 16:28:24 +0000 Subject: [Release-job-failures] Release of openstack/kolla for ref refs/tags/9.3.0 failed In-Reply-To: References: Message-ID: <20210108162824.fb7bszb33ncrnaiz@yuggoth.org> On 2021-01-08 14:42:14 +0100 (+0100), Herve Beraud wrote: > Also notice that yesterday we met an AFS issue [1] during the merge of the > following patches: > > - https://review.opendev.org/c/openstack/releases/+/769325 > - https://review.opendev.org/c/openstack/releases/+/769322 > - https://review.opendev.org/c/openstack/releases/+/769324 > > The problem was that afs an server got stuck in a pathological way from > 05:50 utc today until 16:10 utc when we hard rebooted the server instance. The patches you reference merged later, so wasn't directly impacted by write failures (the jobs actually succeeded). > The consequence of this is that the related tarballs haven't been published: > > - https://tarballs.opendev.org/openstack/kolla/?C=M;O=D > - https://tarballs.opendev.org/openstack/kolla-ansible/?C=M;O=D > - https://tarballs.opendev.org/openstack/kayobe/?C=M;O=D > > And so the RDO CI fail to build them for train, ussuri and victoria: > > - https://review.rdoproject.org/r/#/c/31499/ > - https://review.rdoproject.org/r/#/c/31498/ > - https://review.rdoproject.org/r/#/c/31497/ > > So I think we need to reenqueue these jobs too. [...] The files were written into the tarballs volume just fine, but the read-only replicas which back the tarballs.opendev.org site hadn't been synchronized. I found a stuck process (waiting since yesterday for a response from the server which had previously died) and killed it to get the periodic synchronization of the read-only volumes working again so, the site is no longer stale and has those releases on it now as of 15:15:40 UTC today. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From juliaashleykreger at gmail.com Fri Jan 8 17:03:47 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 8 Jan 2021 09:03:47 -0800 Subject: [dev][ironic] how to read ipa source code#ironic-python-agent In-Reply-To: <20210108162231.wi3u7cqguv3euyqd@yuggoth.org> References: <20210108162231.wi3u7cqguv3euyqd@yuggoth.org> Message-ID: Thanks for beating me to to reply Jeremy! With regards to Rocky, if you can consider a newer version, I would highly recommend it. A few different reasons: 1) Rocky is in extended maintenance. There will not be new releases from the Rocky branch to include new fixes. 2) There have been substantial improvements in interaction/support with Ussuri and Victoria releases. If you have any questions, please feel free to contact us. 
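For the custom RAID use case that started this thread, the usual pattern (a sketch only; every name below is made up, so please check the hardware manager documentation for the IPA release you ship) is to package an extra hardware manager and register it via an entry point, rather than touching the agent's main()/run() entry at all:

```
# my_ipa_raid/manager.py -- illustrative skeleton of a custom hardware manager
from ironic_python_agent import hardware


class ExampleRaidHardwareManager(hardware.HardwareManager):
    """Adds a vendor-specific RAID clean step on top of the generic manager."""

    HARDWARE_MANAGER_NAME = 'ExampleRaidHardwareManager'
    HARDWARE_MANAGER_VERSION = '1.0'

    def evaluate_hardware_support(self):
        # Return a level above GENERIC so this manager's steps are preferred
        # on hardware it recognises; real detection logic goes here.
        return hardware.HardwareSupport.SERVICE_PROVIDER

    def get_clean_steps(self, node, ports):
        return [{
            'step': 'create_custom_raid',
            'priority': 0,            # 0 = only runs when explicitly requested
            'interface': 'deploy',
            'reboot_requested': False,
            'abortable': True,
        }]

    def create_custom_raid(self, node, ports):
        # call out to the vendor RAID tooling here
        pass
```

The package's setup.cfg then registers the class under the ironic_python_agent.hardware_managers entry-point group (for example: example_raid = my_ipa_raid.manager:ExampleRaidHardwareManager), and the ramdisk is rebuilt with ironic-python-agent-builder so the package ends up installed in the image.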
-Julia On Fri, Jan 8, 2021 at 8:29 AM Jeremy Stanley wrote: > > [Keeping you in Cc since you don't seem to be subscribed to this ML] > > On 2021-01-08 14:17:45 +0800 (+0800), Ankele zhang wrote: > > I have an OpenStack platform on Rocky, and I used > > Nova/Keystone/Cinder/Glance/Ceph/Ironic. Recently, I have build my > > own ironic-python-agent deployment ramdisk and kernel images by > > ironic-python-agent-builder, the following work is to add my > > customized raid configuration method to HardwareManager. Before > > this, I want to read the ipa's source code. But I don't know where > > is the 'main()' entry. Looking forward to your help. > > You specifically want the stable/rocky version? The > ironic-python-agent console script entrypoint (according to the > setup.cfg at the top of that repository) calls the run() function > from this file: > > https://opendev.org/openstack/ironic-python-agent/src/branch/stable/rocky/ironic_python_agent/cmd/agent.py > > Hope that helps! > -- > Jeremy Stanley From mthode at mthode.org Fri Jan 8 17:04:41 2021 From: mthode at mthode.org (Matthew Thode) Date: Fri, 8 Jan 2021 11:04:41 -0600 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> Message-ID: <20210108170441.koyxpaxsse7qj645@mthode.org> On 21-01-08 10:21:51, Brian Rosmaita wrote: > On 1/6/21 12:59 PM, Ghanshyam Mann wrote: > > Hello Everyone, > > > > You might have seen the discussion around dropping the lower constraints > > testing as it becomes more challenging than the current value of doing it. > > I think the TC needs to discuss this explicitly (at a meeting or two, not > just on the ML) and give the projects some guidance. I agree that there's > little point in maintaining the l-c if they're not actually useful to anyone > in their current state, but if their helpfulness (or potential helpfulness) > outweighs the maintenance burden, then we should keep them. (How's that for > a profound statement?) > > Maybe someone can point me to where I can RTFM to get a clearer picture, but > my admittedly vague idea of what the l-c are for is that it has something to > do with making packaging easier. If that's the case, it would be good for > the TC to reach out to some openstack packagers/distributors to find outline > how they use l-c (if at all) and what changes could be done to make them > actually useful, and then we can re-assess the maintenance burden. > > This whole experience with the new pip resolver has been painful, I think, > because it hit all projects and all branches at once. My experience, > however, is that if I'd been updating the minimum versions for all the > cinder deliverables in their requirements.txt and l-c.txt files every cycle > to reflect a pip freeze at Milestone-3 it would have been a lot easier. > > What do other projects do about this? In Cinder, we've just been updating > the requirements on-demand, not proactively, and as a result for some > dependencies we claimed that foo>=0.9.0 is OK -- but except for unit tests > in the l-c job, cinder deliverables haven't been using anything other than > foo>=16.0 since rocky. So in master, I took advantage of having to revise > requirements and l-c to make some major jumps in minimum versions. And I'm > thinking of doing a pip-freeze requirements.txt minimum version update from > now on at M-3 each cycle, which will force me to make an l-c.txt update too. > (Maybe I was supposed to be doing that all along? 
Or maybe it's a bad idea? > I could use some guidance here.) > > It would be good for the l-c to reflect reality, but on the other hand, > updating the minimum versions in requirements.txt (and hence in l-c) too > aggressively probably won't help packagers at all. (Or maybe it will, I > don't know.) On the other hand, having the l-c is useful from the > standpoint of letting you know when your minimum acceptable version in > requirements.txt will break your unit tests. But if we're updating the > minimum versions of dependencies every cycle to known good minimum versions, > an l-c failure is going to be pretty rare, so maybe it's not worth the > trouble of maintaining the l-c.txt and CI job. > > One other thing: if we do keep l-c, we need to have some guidance about > what's actually supposed to be in there. (Or I need to RTFM.) I've noticed > that as we've added new dependencies to cinder, we've included the > dependency in l-c.txt, but not its indirect dependencies. I guess we should > have been adding the indirect dependencies all along, too? (Spoiler alert: > we haven't.) > > This email has gotten too long, so I will shut up now. > > cheers, > brian > > > > > Few of the ML thread around this discussion: > > > > - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019521.html > > - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019390.html > > > > As Oslo and many other project dropping or already dropped it, we should decide it for all > > other projects also otherwise it can be more changing than it is currently. > > > > We have not defined it in PTI or testing runtime so it is always up to projects if they still > > want to keep it but we should decide a general recommendation here. > > > > -gmann > > > > /requirements hat l-c was mainly promoted as a way to know when you are using a feature that is not in an old release. The way we generally test is with newer constraints, which don't test what we state we support (the range between the lower bound in requirements.txt and upper-contraints). While I do think it's useful to know that the range of versions of a library needs to be updated... I understand that it may not be useful, either because of the possible maintenance required by devs, the load on the testing infrastructure generated by testing lower-constraints or that downstream packagers do not use it. Search this for lower-constraints. https://docs.openstack.org/project-team-guide/dependency-management.html Indirect dependencies in lower-constraints were not encouraged iirc, both for maintenance reasons (lot of churn) and because 'hopefully' downstream deps are doing the same thing and testing their deps for changes they need. /downstream packager hat I do not look at lower-constraints, but I do look at lower-bounds in the requirements.txt file (from which lower-constraints are generated). I look for updates in the lower-bounds to know if a library that was already packaged needed updating, though I do try and target the version mentioned in upper-constraints.txt when updating. More and more I've just made sure that the entire dependency tree for openstack matches what is packaged. Even then though, if the minimum is not updated then this pushes it down on users. /user (deployer) perspective Why does $PROJECT not work, I'm going to report it as a bug to $distro, $deployment and $upstream. What they did was not update the version of pyroute2 (or something) because $project didn't update the lower bound to require it. 
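To make the relationship concrete, a made-up two-package example (versions invented for illustration): the range a project claims to support lives in requirements.txt, and lower-constraints.txt just pins the bottom of that range so the l-c tox job can exercise it:

```
# requirements.txt (what packagers and deployers read)
oslo.config>=8.0.2
pyroute2>=0.5.14

# lower-constraints.txt (what the lower-constraints unit test job installs)
oslo.config==8.0.2
pyroute2==0.5.14
```

If a patch starts relying on a feature only present in a newer release, both files need to move together, and that is exactly the update that tends to get forgotten.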
-- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mjturek at us.ibm.com Fri Jan 8 19:22:16 2021 From: mjturek at us.ibm.com (Michael J Turek) Date: Fri, 8 Jan 2021 19:22:16 +0000 Subject: [nova] unit testing on ppc64le In-Reply-To: <5E9MMQ.3INH7FY465VR3@est.tech> References: <5E9MMQ.3INH7FY465VR3@est.tech> Message-ID: An HTML attachment was scrubbed... URL: From rodrigo.barbieri2010 at gmail.com Fri Jan 8 21:27:39 2021 From: rodrigo.barbieri2010 at gmail.com (Rodrigo Barbieri) Date: Fri, 8 Jan 2021 18:27:39 -0300 Subject: [nova] workarounds and operator experience around bug 1522307/1908133 In-Reply-To: <20210106134822.Horde.dGo06NkPF_G8_AogeK9L2i7@webmail.nde.ag> References: <7b4f6f10e682698dfaed22a86397f5b174fed7e8.camel@redhat.com> <20210106134822.Horde.dGo06NkPF_G8_AogeK9L2i7@webmail.nde.ag> Message-ID: Thanks for the responses Eugen and Sean! The placement.yaml approach sounds good if it can prevent the compute host from reporting local_gb repeatedly, and then as you suggested use Placement Aggregates I can perhaps make that work for a subset of use cases. Too bad it is only available on Victoria+. I was looking for something that could work, even if partially, on Queens and Stein. The cron job updating the reservation, I'm not sure if it will clash with the host updates (being overriden, as I've described in the LP bug), but you actually gave me another idea. I may be able to create a fake allocation in the nodes to cancel out their reported values, and then rely only on the shared value through placement. Monitoring Ceph is only part of the problem. The second part, if you end up needing it (and you may if you're not very conservative in the monitoring parameters and have unpredictable workload) is to prevent new instances from being created, thus new data from being stored, to prevent it from filling up before you can react to it (think of an accidental DoS attack by running a certain storage-heavy workloads). @Eugen, yes. I was actually looking for more reliable ways to prevent it from happening. Overall, the shared placement + fake allocation sounded like the cleanest workaround for me. I will try that and report back. Thanks for the help! On Wed, Jan 6, 2021 at 10:57 AM Eugen Block wrote: > Hi, > > we're using OpenStack with Ceph in production and also have customers > doing that. > From my point of view fixing nova to be able to deal with shared > storage of course would improve many things, but it doesn't liberate > you from monitoring your systems. Filling up a ceph cluster should be > avoided and therefore proper monitoring is required. > > I assume you were able to resolve the frozen instances? > > Regards, > Eugen > > > Zitat von Sean Mooney : > > > On Tue, 2021-01-05 at 14:17 -0300, Rodrigo Barbieri wrote: > >> Hi Nova folks and OpenStack operators! > >> > >> I have had some trouble recently where while using the "images_type = > rbd" > >> libvirt option my ceph cluster got filled up without I noticing and > froze > >> all my nova services and instances. > >> > >> I started digging and investigating why and how I could prevent or > >> workaround this issue, but I didn't find a very reliable clean way. > >> > >> I documented all my steps and investigation in bug 1908133 [0]. 
It has > been > >> marked as a duplicate of 1522307 [1] which has been around for quite > some > >> time, so I am wondering if any operators have been using nova + ceph in > >> production with "images_type = rbd" config set and how you have been > >> handling/working around the issue. > > > > this is indeed a know issue and the long term plan to fix it was to > > track shared storae > > as a sharing resouce provide in plamcent. that never happend so > > there si currenlty no mechanium > > available to prevent this explcitly in nova. > > > > the disk filter which is nolonger used could prevnet the boot of a > > vm that would fill the ceph pool but > > it could not protect against two concurrent request form filling the > pool. > > > > placement can protect against that due to the transational nature of > > allocations which serialise > > all resouce useage however since each host reports the total size of > > the ceph pool as its local storage that wont work out of the box. > > > > as a quick hack what you can do is set the > > [DEFAULT]/disk_allocation_ratio=(1/number of compute nodes) > > > https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.disk_allocation_ratio > > on each of your compute agents configs. > > > > > > that will prevent over subscription however it has other negitve > sidefects. > > mainly that you will fail to scudle instance that could boot if a > > host exced its 1/n usage > > so unless you have perfectly blanced consumtion this is not a good > approch. > > > > a better appoch but one that requires external scripting is to have > > a chron job that will update the resrved > > usaage of each of the disk_gb inventores to the actull amount of of > > stoarge allocated form the pool. > > > > the real fix however is for nova to tack its shared usage in > > placment correctly as a sharing resouce provide. > > > > its possible you might be able to do that via the porvider.yaml file > > > > by overriding the local disk_gb to 0 on all comupte nodes > > then creating a singel haring resouce provider of disk_gb that > > models the ceph pool. > > > > > https://specs.openstack.org/openstack/nova-specs/specs/ussuri/approved/provider-config-file.html > > currently that does not support the addtion of providers to placment > > agggreate so while it could be used to 0 out the comptue node > > disk inventoies and to create a sharing provider it with the > > MISC_SHARES_VIA_AGGREGATE trait it cant do the final step of mapping > > which compute nodes can consume form sharing provider via the > > agggrate but you could do that form. > > that assume that "sharing resouce provdiers" actully work. > > > > > > bacialy what it comes down to today is you need to monitor the > > avaiable resouce yourslef externally and ensure you never run out of > > space. > > that sucks but untill we proably track things in plamcent there is > > nothign we can really do. > > the two approch i suggested above might work for a subset of > > usecasue but really this is a feature that need native suport in > > nova to adress properly. > > > >> > >> Thanks in advance! > >> > >> [0] https://bugs.launchpad.net/nova/+bug/1908133 > >> [1] https://bugs.launchpad.net/nova/+bug/1522307 > >> > > > > > -- Rodrigo Barbieri MSc Computer Scientist OpenStack Manila Core Contributor Federal University of São Carlos -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Fri Jan 8 22:07:09 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 08 Jan 2021 22:07:09 +0000 Subject: [nova] workarounds and operator experience around bug 1522307/1908133 In-Reply-To: References: <7b4f6f10e682698dfaed22a86397f5b174fed7e8.camel@redhat.com> <20210106134822.Horde.dGo06NkPF_G8_AogeK9L2i7@webmail.nde.ag> Message-ID: <98fb643dc6ec0d0201c8e4aea114cd07cf46fef7.camel@redhat.com> On Fri, 2021-01-08 at 18:27 -0300, Rodrigo Barbieri wrote: > Thanks for the responses Eugen and Sean! > > The placement.yaml approach sounds good if it can prevent the compute host > from reporting local_gb repeatedly, and then as you suggested use Placement > Aggregates I can perhaps make that work for a subset of use cases. Too bad > it is only available on Victoria+. I was looking for something that could > work, even if partially, on Queens and Stein. > > The cron job updating the reservation, I'm not sure if it will clash with > the host updates (being overriden, as I've described in the LP bug), but > you actually gave me another idea. I may be able to create a fake > allocation in the nodes to cancel out their reported values, and then rely > only on the shared value through placement. Well, actually, you could use the host reserved disk space config value to do that on older releases: just set it equal to the pool size. https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.reserved_host_disk_mb Not sure why that is in MB (it really should be GB), but if you set it, it will set the reserved value in placement. > > Monitoring Ceph is only part of the problem. The second part, if you end up > needing it (and you may if you're not very conservative in the monitoring > parameters and have unpredictable workload) is to prevent new instances > from being created, thus new data from being stored, to prevent it from > filling up before you can react to it (think of an accidental DoS attack by > running a certain storage-heavy workloads). > > @Eugen, yes. I was actually looking for more reliable ways to prevent it > from happening. > > Overall, the shared placement + fake allocation sounded like the cleanest > workaround for me. I will try that and report back. If I get time in the next week or two I am hoping to try and tweak our ceph CI job to test that topology in the upstream CI, but just looking at the placement functional tests it should work. This one covers the use of sharing resource providers: https://github.com/openstack/placement/blob/master/placement/tests/functional/gabbits/shared-resources.yaml The final section tests the allocation candidates endpoint and asserts we get an allocation for both providers: https://github.com/openstack/placement/blob/master/placement/tests/functional/gabbits/shared-resources.yaml#L135-L143 It is relatively simple to read this file top to bottom, and it is only 143 lines long, but it basically steps through and constructs the topology I was describing (or at least a similar one) and shows step by step what the different behavior will be as the RPs and aggregates are created. The main issue with this approach is that we don't really have a good way to upgrade existing deployments to this topology beyond live migrating everything one node at a time, so that their allocations get reshaped as a side effect of the move operation.
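If anyone wants to experiment with that topology by hand in the meantime, a rough osc-placement sketch of the sharing-provider side (names, UUIDs and the pool size are placeholders; newer osc-placement releases may also require --generation on some of these calls, and the per-compute DISK_GB inventory still has to be zeroed or reserved separately, e.g. via provider.yaml on Victoria+):

```
# one sharing resource provider that models the ceph pool
openstack resource provider create ceph-pool-disk
openstack resource provider inventory set <ceph_rp_uuid> --resource DISK_GB=<pool_size_gb>
openstack resource provider trait set --trait MISC_SHARES_VIA_AGGREGATE <ceph_rp_uuid>

# put the pool provider and every compute-node provider into the same placement aggregate
# (the aggregate is just a UUID you pick, e.g. from uuidgen)
openstack resource provider aggregate set --aggregate <agg_uuid> <ceph_rp_uuid>
openstack resource provider aggregate set --aggregate <agg_uuid> <compute_rp_uuid>
```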
looking a tthe history of this file it was added 3 years ago https://github.com/openstack/placement/commit/caeae7a41ed41535195640dfa6c5bb58a7999a9b around stien although it may also have worked before thatim not sure when we added sharing providers. > > Thanks for the help! > > On Wed, Jan 6, 2021 at 10:57 AM Eugen Block wrote: > > > Hi, > > > > we're using OpenStack with Ceph in production and also have customers > > doing that. > >  From my point of view fixing nova to be able to deal with shared > > storage of course would improve many things, but it doesn't liberate > > you from monitoring your systems. Filling up a ceph cluster should be > > avoided and therefore proper monitoring is required. > > > > I assume you were able to resolve the frozen instances? > > > > Regards, > > Eugen > > > > > > Zitat von Sean Mooney : > > > > > On Tue, 2021-01-05 at 14:17 -0300, Rodrigo Barbieri wrote: > > > > Hi Nova folks and OpenStack operators! > > > > > > > > I have had some trouble recently where while using the "images_type = > > rbd" > > > > libvirt option my ceph cluster got filled up without I noticing and > > froze > > > > all my nova services and instances. > > > > > > > > I started digging and investigating why and how I could prevent or > > > > workaround this issue, but I didn't find a very reliable clean way. > > > > > > > > I documented all my steps and investigation in bug 1908133 [0]. It has > > been > > > > marked as a duplicate of 1522307 [1] which has been around for quite > > some > > > > time, so I am wondering if any operators have been using nova + ceph in > > > > production with "images_type = rbd" config set and how you have been > > > > handling/working around the issue. > > > > > > this is indeed a know issue and the long term plan to fix it was to > > > track shared storae > > > as a sharing resouce provide in plamcent. that never happend so > > > there si currenlty no mechanium > > > available to prevent this explcitly in nova. > > > > > > the disk filter which is nolonger used could prevnet the boot of a > > > vm that would fill the ceph pool but > > > it could not protect against two concurrent request form filling the > > pool. > > > > > > placement can protect against that due to the transational nature of > > > allocations which serialise > > > all resouce useage however since each host reports the total size of > > > the ceph pool as its local storage that wont work out of the box. > > > > > > as a quick hack what you can do is set the > > > [DEFAULT]/disk_allocation_ratio=(1/number of compute nodes) > > > > > https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.disk_allocation_ratio > > > on each of your compute agents configs. > > > > > > > > > that will prevent over subscription however it has other negitve > > sidefects. > > > mainly that you will fail to scudle instance that could boot if a > > > host exced its 1/n usage > > > so unless you have perfectly blanced consumtion this is not a good > > approch. > > > > > > a better appoch but one that requires external scripting is to have > > > a chron job that will update the resrved > > >  usaage of each of the disk_gb inventores to the actull amount of of > > > stoarge allocated form the pool. > > > > > > the real fix however is for nova to tack its shared usage in > > > placment correctly as a sharing resouce provide. 
> > > > > > its possible you might be able to do that via the porvider.yaml file > > > > > > by overriding the local disk_gb to 0 on all comupte nodes > > > then creating a singel haring resouce provider of disk_gb that > > > models the ceph pool. > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/ussuri/approved/provider-config-file.html > > > currently that does not support the addtion of providers to placment > > > agggreate so while it could be used to 0 out the comptue node > > > disk inventoies and to create a sharing provider it with the > > > MISC_SHARES_VIA_AGGREGATE trait it cant do the final step of mapping > > > which compute nodes can consume form sharing provider via the > > > agggrate but you could do that form. > > > that assume that "sharing resouce provdiers" actully work. > > > > > > > > > bacialy what it comes down to today is you need to monitor the > > > avaiable resouce yourslef externally and ensure you never run out of > > > space. > > > that sucks but untill we proably track things in plamcent there is > > > nothign we can really do. > > > the two approch i suggested above might work for a subset of > > > usecasue but really this is a feature that need native suport in > > > nova to adress properly. > > > > > > > > > > > Thanks in advance! > > > > > > > > [0] https://bugs.launchpad.net/nova/+bug/1908133 > > > > [1] https://bugs.launchpad.net/nova/+bug/1522307 > > > > > > > > > > > > > > > From jgoetz at teraswitch.com Fri Jan 8 22:19:25 2021 From: jgoetz at teraswitch.com (Justin Goetz) Date: Fri, 8 Jan 2021 17:19:25 -0500 Subject: [nova] [Ussuri] Unable to perform live block migrations on ephemeral VMs without shared storage Message-ID: Hello! Happy new year to everyone! I'm experiencing some trouble in my attempts to get live block migrations working on machines who's boot drives live on ephemeral (local) host storage. Offline (cold) migrations appear to work fine, however, I need live migrations to function with no downtime to the VMs. We're running on OpenStack Ussuri, with Ubuntu 18.04.5 LTS as our hypervisor OS, running libvirt 6.0.0. Below is the following command I'm utilizing to perform the live block migration: openstack server migrate --debug --verbose --live-migration --block-migration The verbose logging of the above command shows no obvious issues, however digging deeper on the hypervisor side, we see some libvirtd errors, and the most telling is this segment here: Jan 08 16:53:02 h5 libvirtd[4118]: Unsafe migration: Migration without shared storage is unsafe Jan 08 16:53:02 h5 nova-compute[130270]: 2021-01-08 16:53:02.860 130270 ERROR nova.virt.libvirt.driver [-] [instance: 72bc9feb-b8f0-45f2-bad4-9cb61c3abff3] Live Migration failure: Unsafe migration: Migration without shared storage is unsafe: libvirt.libvirtError: Unsafe migration: Migration without shared storage is unsafe Full log output is available here: http://paste.openstack.org/show/801525/ Perhaps I'm severely mistaken, but native libvirt should have no problem whatsoever performing live migrations without shared storage on it's own. I've dug into the libvirt side for hours to try and troubleshoot what the system is seeing as "unsafe" (I confirmed disk caching is disabled, as well as no strange custom settings are present), but was unsuccessful. 
The full output of my VM's QEMU XML file can be viewed here: http://paste.openstack.org/show/801526/ As I couldn't figure out why libvirtd was not seeing the migration as "safe", I attempted to override the libvirt message by attempting to utilize the live_migration_flag and block_migration_flag options in nova.conf to invoke the --unsafe parameter when running the libvirt commands (with the flags VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_UNSAFE,VIR_MIGRATE_NON_SHARED_INC), however I later learned from a previous developers thread on the mailing list from 2016 that both of these options have been depreciated in the later versions of Nova and replaced by the "live_migration_tunnelled" setting in nova.conf. I've ensured my nova.conf files have the live_migration_tunnelled parameter set to "True". However, I have not found any method for passing along the "--unsafe" flag (or any other libvirt flag, in this case) to libvirt from nova.conf from this point on after the depreciation of the live_migration_flag parameters. Any help would be greatly appreciated, from reading around online this seems to be a fairly common setup, I'm baffled at how I've run into this issue across several separate OpenStack clusters at this point. Thank you for reading. -- Justin Goetz Systems Engineer, TeraSwitch Inc. jgoetz at teraswitch.com 412-945-7045 (NOC) | 412-459-7945 (Direct) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Sat Jan 9 18:45:02 2021 From: jungleboyj at gmail.com (Jay S. Bryant) Date: Sat, 9 Jan 2021 12:45:02 -0600 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <20210108170441.koyxpaxsse7qj645@mthode.org> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <20210108170441.koyxpaxsse7qj645@mthode.org> Message-ID: On 1/8/2021 11:04 AM, Matthew Thode wrote: > On 21-01-08 10:21:51, Brian Rosmaita wrote: >> On 1/6/21 12:59 PM, Ghanshyam Mann wrote: >>> Hello Everyone, >>> >>> You might have seen the discussion around dropping the lower constraints >>> testing as it becomes more challenging than the current value of doing it. >> I think the TC needs to discuss this explicitly (at a meeting or two, not >> just on the ML) and give the projects some guidance. I agree that there's >> little point in maintaining the l-c if they're not actually useful to anyone >> in their current state, but if their helpfulness (or potential helpfulness) >> outweighs the maintenance burden, then we should keep them. (How's that for >> a profound statement?) >> >> Maybe someone can point me to where I can RTFM to get a clearer picture, but >> my admittedly vague idea of what the l-c are for is that it has something to >> do with making packaging easier. If that's the case, it would be good for >> the TC to reach out to some openstack packagers/distributors to find outline >> how they use l-c (if at all) and what changes could be done to make them >> actually useful, and then we can re-assess the maintenance burden. >> >> This whole experience with the new pip resolver has been painful, I think, >> because it hit all projects and all branches at once. My experience, >> however, is that if I'd been updating the minimum versions for all the >> cinder deliverables in their requirements.txt and l-c.txt files every cycle >> to reflect a pip freeze at Milestone-3 it would have been a lot easier. >> >> What do other projects do about this? 
In Cinder, we've just been updating >> the requirements on-demand, not proactively, and as a result for some >> dependencies we claimed that foo>=0.9.0 is OK -- but except for unit tests >> in the l-c job, cinder deliverables haven't been using anything other than >> foo>=16.0 since rocky. So in master, I took advantage of having to revise >> requirements and l-c to make some major jumps in minimum versions. And I'm >> thinking of doing a pip-freeze requirements.txt minimum version update from >> now on at M-3 each cycle, which will force me to make an l-c.txt update too. >> (Maybe I was supposed to be doing that all along? Or maybe it's a bad idea? >> I could use some guidance here.) >> >> It would be good for the l-c to reflect reality, but on the other hand, >> updating the minimum versions in requirements.txt (and hence in l-c) too >> aggressively probably won't help packagers at all. (Or maybe it will, I >> don't know.) On the other hand, having the l-c is useful from the >> standpoint of letting you know when your minimum acceptable version in >> requirements.txt will break your unit tests. But if we're updating the >> minimum versions of dependencies every cycle to known good minimum versions, >> an l-c failure is going to be pretty rare, so maybe it's not worth the >> trouble of maintaining the l-c.txt and CI job. >> >> One other thing: if we do keep l-c, we need to have some guidance about >> what's actually supposed to be in there. (Or I need to RTFM.) I've noticed >> that as we've added new dependencies to cinder, we've included the >> dependency in l-c.txt, but not its indirect dependencies. I guess we should >> have been adding the indirect dependencies all along, too? (Spoiler alert: >> we haven't.) >> >> This email has gotten too long, so I will shut up now. >> >> cheers, >> brian >> >>> Few of the ML thread around this discussion: >>> >>> - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019521.html >>> - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019390.html >>> >>> As Oslo and many other project dropping or already dropped it, we should decide it for all >>> other projects also otherwise it can be more changing than it is currently. >>> >>> We have not defined it in PTI or testing runtime so it is always up to projects if they still >>> want to keep it but we should decide a general recommendation here. >>> >>> -gmann >>> >> > /requirements hat > > l-c was mainly promoted as a way to know when you are using a feature > that is not in an old release. The way we generally test is with newer > constraints, which don't test what we state we support (the range > between the lower bound in requirements.txt and upper-contraints). > > While I do think it's useful to know that the range of versions of a > library needs to be updated... I understand that it may not be useful, > either because of the possible maintenance required by devs, the load on > the testing infrastructure generated by testing lower-constraints or > that downstream packagers do not use it. > > Search this for lower-constraints. > https://docs.openstack.org/project-team-guide/dependency-management.html I am in the same boat as Brian that the lower-constraints have never made much sense to me.  The documentation you share above is helpful to understand how everything works but I think it maybe needs to be enhanced as it isn't clear to me as a Cinder team member what I should do to avoid breakages. 
If we can add some documentation and guidance as to how to maintain these in the branches to avoid a major breakage like this in the future I think it would be a useful effort. Jay > Indirect dependencies in lower-constraints were not encouraged iirc, > both for maintenance reasons (lot of churn) and because 'hopefully' > downstream deps are doing the same thing and testing their deps for > changes they need. > > /downstream packager hat > > I do not look at lower-constraints, but I do look at lower-bounds in the > requirements.txt file (from which lower-constraints are generated). I > look for updates in the lower-bounds to know if a library that was > already packaged needed updating, though I do try and target the version > mentioned in upper-constraints.txt when updating. More and more I've > just made sure that the entire dependency tree for openstack matches > what is packaged. Even then though, if the minimum is not updated then > this pushes it down on users. > > /user (deployer) perspective > > Why does $PROJECT not work, I'm going to report it as a bug to $distro, > $deployment and $upstream. > > What they did was not update the version of pyroute2 (or something) > because $project didn't update the lower bound to require it. > From sean.mcginnis at gmx.com Sun Jan 10 19:40:59 2021 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Sun, 10 Jan 2021 13:40:59 -0600 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <20210108170441.koyxpaxsse7qj645@mthode.org> Message-ID: <20210110194059.GA1866794@sm-workstation> > > /requirements hat > > > > l-c was mainly promoted as a way to know when you are using a feature > > that is not in an old release. The way we generally test is with newer > > constraints, which don't test what we state we support (the range > > between the lower bound in requirements.txt and upper-contraints). > > > > While I do think it's useful to know that the range of versions of a > > library needs to be updated... I understand that it may not be useful, > > either because of the possible maintenance required by devs, the load on > > the testing infrastructure generated by testing lower-constraints or > > that downstream packagers do not use it. > > > > Search this for lower-constraints. > > https://docs.openstack.org/project-team-guide/dependency-management.html > > I am in the same boat as Brian that the lower-constraints have never made > much sense to me.  The documentation you share above is helpful to > understand how everything works but I think it maybe needs to be enhanced as > it isn't clear to me as a Cinder team member what I should do to avoid > breakages. > > If we can add some documentation and guidance as to how to maintain these in > the branches to avoid a major breakage like this in the future I think it > would be a useful effort. > > Jay > I agree documentation could really be improved here. But one thing to keep in mind that I think is worth pointing out is that the current breakages are really due to enhancements in pip. Our l-c jobs have been broken for a long time, and we are now just being made painfully aware of it. But now that pip actually enforces these things, theoretically at least, once we fix the problems we have in l-c it should be much easier to maintain going forward. We shouldn't have random stable branch failures, because once we fix our requirements they should remain stable going forward. 
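For anyone who hasn't looked at how the job is actually wired up, it is usually just a tox environment that swaps in the lower-constraints file, roughly like this (a sketch of the common pattern described in the project-team-guide linked above; individual projects vary):

    [testenv:lower-constraints]
    deps =
      -c{toxinidir}/lower-constraints.txt
      -r{toxinidir}/test-requirements.txt
      -r{toxinidir}/requirements.txt

With the new resolver, pip now fails loudly when those pins contradict the lower bounds in requirements.txt, which is exactly the class of breakage we have been cleaning up.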
// 2 cents

From pangliye at inspur.com  Sat Jan 9 06:26:05 2021
From: pangliye at inspur.com (Liye Pang(逄立业))
Date: Sat, 9 Jan 2021 06:26:05 +0000
Subject: [all]Introduction to venus which is the project of log management and has been contributed to the OpenStack community
Message-ID:

Hello everyone, after feedback from a large number of operations and maintenance personnel in InCloud OpenStack, we developed the log management project “Venus” for the OpenStack projects and have contributed it to the OpenStack community. The following is an introduction to “Venus”. If there is interest in the community, we are interested in proposing it to become an official OpenStack project in the future.

Background

In the day-to-day operation and maintenance of a large-scale cloud platform, the following problems are encountered:

- Log queries become time-consuming as the number of servers grows into the thousands.
- Logs are difficult to retrieve, since there are many modules in the platform, e.g. system services, compute, storage, network and other platform services.
- The large volume and dispersion of logs make faults difficult to discover.
- Because the cloud platform is distributed and its components interact with each other, with logs scattered between components, locating problems takes more time.

About Venus

According to the key requirements of OpenStack in log storage, retrieval, analysis and so on, we introduced the Venus project, a unified log management module. This module provides a one-stop solution for log collection, cleaning, indexing, analysis, alarms, visualization, report generation and other needs, helping operators and maintainers to quickly solve retrieval problems, grasp the operational health of the platform, and improve the management capabilities of the cloud platform. Additionally, this module plans to use machine learning algorithms to quickly locate IT failures and their root causes, improving operation and maintenance efficiency.

Application scenario

Venus plays a key role in the following scenarios:

- Retrieval: Provides a simple and easy-to-use way to retrieve all logs and their context.
- Analysis: Provides log association, field value statistics, and multi-scenario, multi-dimensional visual analysis reports.
- Alerts: Converts retrievals into active alerts so that errors can be found in massive logs.
- Issue location: Establishes call-chain relationships and knowledge graphs to quickly locate problems.

Overall structure

The architecture of the log management system based on Venus and Elasticsearch is as follows:

Diagram 0: Architecture of Venus (image attachment scrubbed)

venus_api: API module, provides the REST API service.
venus_manager: Internal timing task module implementing the core functions of the log system.

Current progress

The current progress of the Venus project is as follows:

- Collection: Developed fluentd collection tasks based on collectd, with plug-ins to read, filter, format and send logs for OpenStack, operating systems, and platform services, etc.
- Index: Deals with multi-dimensional index data in Elasticsearch, and provides a more concise and comprehensive authentication interface to return query results.
- Analysis: Analyzes and displays related module errors, MariaDB connection errors, and RabbitMQ connection errors.
- Alerts: Developed alarm task code to set thresholds for the number of error logs of different modules at different times, and provides alarm and notification services.
- Location: Developed the call chain analysis function based on the global request id series, which can show the execution sequence, time and error information, etc., and provides an export operation.
- Management: Developed configuration management functions in the log system, such as alarm threshold setting, timing task management, and log retention time setting, etc.

Application examples

Two examples of Venus application scenarios are as follows.

1. A virtual machine creation operation was performed on the cloud platform and the virtual machine was not created successfully. First, we can find the request id of the operation and jump to the virtual machine creation call chain page. Then we can query the calling process, and view and download the detailed logs of the call.

2. In the cloud platform, the error logs of each module can be converted into alarms to remind the users. Further, we can retrieve the details of the error logs and error log statistics.

Next step

The next step of the Venus project is as follows:

* Collection: In addition to fluentd, other collection plugins such as logstash will be integrated.
* Analysis: Explore more operation and maintenance scenarios, and conduct statistical analysis and alarms on key data.
* Display: The configuration, analysis and alarms of Venus will be integrated into Horizon in the form of a plugin.
* Location: Cluster logs and construct a knowledge graph, and integrate an algorithm library to locate the root cause of faults.

Venus Project Registry

Venus library: https://opendev.org/inspur/venus
You can grab the source code using the following git command:
git clone https://opendev.org/inspur/venus.git

Venus Demo

Youtu.be: https://youtu.be/mE2MoEx3awM
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 3786 bytes Desc: not available URL: From balazs.gibizer at est.tech Mon Jan 11 08:09:54 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Mon, 11 Jan 2021 09:09:54 +0100 Subject: [nova] unit testing on ppc64le In-Reply-To: References: <5E9MMQ.3INH7FY465VR3@est.tech> Message-ID: On Fri, Jan 8, 2021 at 19:22, Michael J Turek wrote: > Thanks for the heads up, > > We should have the capacity to add them. At one point I think we ran > unit tests for nova but the job may have been culled in the move to > zuul v3. I've CC'd the maintainers of the CI, Aditi Dukle and > Sajauddin Mohammad. Thanks! Let me know if you need help from upstream side. > > Aditi and Sajauddin, could we add a job to pkvmci to run unit tests > for nova? Could please you also make sure that the contact information for this CI is up to date on the wiki[1]? Thanks gibi [1] https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI > > Michael Turek > Software Engineer > Power Cloud Department > 1 845 433 1290 Office > mjturek at us.ibm.com > He/Him/His > > IBM > > > >> ----- Original message ----- >> From: Balazs Gibizer >> To: OpenStack Discuss >> Cc: mjturek at us.ibm.com >> Subject: [EXTERNAL] [nova] unit testing on ppc64le >> Date: Fri, Jan 8, 2021 7:59 AM >> >> Hi, >> >> We have a bugreport[1] showing that our unit tests are not passing on >> ppc. In the upstream CI we don't have test capability to run our >> tests >> on ppc. But we have the IBM Power KVM CI[2] that runs integration >> tests >> on ppc. I'm wondering if IBM could extend the CI to run nova unit and >> functional tests too. I've added Michael Turek (mjturek at us.ibm.com) >> to >> CC. Michael is listed as the contact person for the CI. >> >> Cheers, >> gibi >> >> [1]https://bugs.launchpad.net/nova/+bug/1909972 >> [2]https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI >> >> > > From thierry at openstack.org Mon Jan 11 11:18:16 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 11 Jan 2021 12:18:16 +0100 Subject: [largescale-sig] Next meeting: January 13, 15utc Message-ID: Hi everyone, We'll have our first 2021 Large Scale SIG meeting this Wednesday in #openstack-meeting-3 on IRC, at 15UTC. You can doublecheck how it translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210113T15 Our main topic will be to reboot the Large Scale SIG in the new year and get a few work items and contributions planned. Feel free to add other topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting Talk to you all later, -- Thierry Carrez From marios at redhat.com Mon Jan 11 11:59:19 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 11 Jan 2021 13:59:19 +0200 Subject: [tripleo] move os-refresh-config, os-collect-config, tripleo-ipsec to 'release-management': none Message-ID: Hi TripleO, you may have seen the thread started by Herve at [1] around the deadline for making a victoria release for os-refresh-config, os-collect-config and tripleo-ipsec. This message is to ask if anyone is still using these? In particular would anyone mind if we stopped making tagged releases, as discussed at [2]. Would someone mind if there was no stable/victoria branch for these repos? For the os-refresh/collect-config I suspect the answer is NO - at least, we aren't using these any more in the 'normal' tripleo deployment for a good while now, since we switched to config download by default. 
We haven't even created an ussuri branch for these [3] and no one has shouted about that (or at least not loud enough I haven't heard anything). For tripleo-ipsec it *looks* like we're still using it in the sense that we carry the template and pass the parameters in tripleo-heat-templates [4]. However we aren't running that in any CI job as far as I can see, and we haven't created any branches there since Rocky. So is anyone using tripleo-ipsec? Depending on the answers here and as discussed at [2] I will move to make these as unreleased (release-management: none in openstack/governance reference/projects.yaml) and remove the release file altogether. For now however and given the deadline of this week for a victoria release I am proposing that we move forward with [2] and cut the victoria branch for these. thanks for reading and please speak up if any of the above are important to you! thanks, marios [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019730.html [2] https://review.opendev.org/c/openstack/releases/+/769915/1/deliverables/victoria/os-refresh-config.yaml [3] https://pastebin.com/raw/KJ0JxKPx [4] https://opendev.org/openstack/tripleo-heat-templates/src/commit/9fd709019fdd36d4c4821b2486e7151abf84bc3f/deployment/ipsec/ipsec-baremetal-ansible.yaml#L101-L106 -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Jan 11 13:12:29 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 11 Jan 2021 14:12:29 +0100 Subject: [all]Introduction to venus which is the project of log management and has been contributed to the OpenStack community In-Reply-To: References: Message-ID: Liye Pang(逄立业) wrote: > Hello everyone, after feedback from a large number of operations and > maintenance personnel in InCloud OpenStack, we developed the log > management project “Venus” for the OpenStack projects [...] OpenStack-aware centralized log management sounds very interesting to me... If others are interested in collaborating on producing that component, I personally think it would be a great fit for the "operations tooling" section of the OpenStack Map[1]. [1] https://www.openstack.org/software/ -- Thierry Carrez (ttx) From amy at demarco.com Mon Jan 11 13:49:09 2021 From: amy at demarco.com (Amy Marrich) Date: Mon, 11 Jan 2021 07:49:09 -0600 Subject: [Diversity] Diversity &Inclusion WG Meeting reminder Message-ID: The Diversity & Inclusion WG invites members of all OSF projects to our meeting today, at 17:00 UTC in the #openstack-diversity channel. The agenda can be found at https://etherpad.openstack.org/p/diversity-wg-agenda. Please feel free to add any other topics you wish to discuss at the meeting. Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Mon Jan 11 14:24:24 2021 From: james.slagle at gmail.com (James Slagle) Date: Mon, 11 Jan 2021 09:24:24 -0500 Subject: [Heat][tripleo] move os-refresh-config, os-collect-config, tripleo-ipsec to 'release-management': none In-Reply-To: References: Message-ID: On Mon, Jan 11, 2021 at 7:04 AM Marios Andreou wrote: > For the os-refresh/collect-config I suspect the answer is NO - at least, > we aren't using these any more in the 'normal' tripleo deployment for a > good while now, since we switched to config download by default. We haven't > even created an ussuri branch for these [3] and no one has shouted about > that (or at least not loud enough I haven't heard anything). 
> TripleO doesn't use them, but I think Heat still does for the support of SoftwareDeployment/StructuredDeployment. I've added [Heat] to the subject tags. It is as least still documented: https://docs.openstack.org/heat/latest/template_guide/software_deployment.html#software-deployment-resources which shows using the os-*-config elements from tripleo-image-elements to build an image with diskimage-builder. However, that doesn't mean we need to create stable branches or releases for these repos. The elements support installing from source, and RDO is packaging off of master it seems. -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Mon Jan 11 14:26:34 2021 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 11 Jan 2021 07:26:34 -0700 Subject: [tripleo] move os-refresh-config, os-collect-config, tripleo-ipsec to 'release-management': none In-Reply-To: References: Message-ID: On Mon, Jan 11, 2021 at 4:59 AM Marios Andreou wrote: > > Hi TripleO, > > you may have seen the thread started by Herve at [1] around the deadline for making a victoria release for os-refresh-config, os-collect-config and tripleo-ipsec. > > This message is to ask if anyone is still using these? In particular would anyone mind if we stopped making tagged releases, as discussed at [2]. Would someone mind if there was no stable/victoria branch for these repos? > > For the os-refresh/collect-config I suspect the answer is NO - at least, we aren't using these any more in the 'normal' tripleo deployment for a good while now, since we switched to config download by default. We haven't even created an ussuri branch for these [3] and no one has shouted about that (or at least not loud enough I haven't heard anything). Maybe switch to independent? That being said as James points out they are still used by Heat so maybe the ownership should be moved. > > For tripleo-ipsec it *looks* like we're still using it in the sense that we carry the template and pass the parameters in tripleo-heat-templates [4]. However we aren't running that in any CI job as far as I can see, and we haven't created any branches there since Rocky. So is anyone using tripleo-ipsec? > I think tripleo-ipsec is no longer needed as we now have proper tls-everywhere support. We might want to revisit this and deprecate/remove it. > Depending on the answers here and as discussed at [2] I will move to make these as unreleased (release-management: none in openstack/governance reference/projects.yaml) and remove the release file altogether. > > For now however and given the deadline of this week for a victoria release I am proposing that we move forward with [2] and cut the victoria branch for these. > > thanks for reading and please speak up if any of the above are important to you! > > thanks, marios > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019730.html > [2] https://review.opendev.org/c/openstack/releases/+/769915/1/deliverables/victoria/os-refresh-config.yaml > [3] https://pastebin.com/raw/KJ0JxKPx > [4] https://opendev.org/openstack/tripleo-heat-templates/src/commit/9fd709019fdd36d4c4821b2486e7151abf84bc3f/deployment/ipsec/ipsec-baremetal-ansible.yaml#L101-L106 > From akekane at redhat.com Mon Jan 11 14:44:04 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Mon, 11 Jan 2021 20:14:04 +0530 Subject: [Glance] PTL on leaves Message-ID: Hi All, I'm going on leave from 13th January and will be back on 21st January to attend my sister's wedding. 
Please direct any issues to the rest of the core team. Thanks & Best Regards, Abhishek Kekane -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Jan 11 15:07:31 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 11 Jan 2021 16:07:31 +0100 Subject: [tripleo] move os-refresh-config, os-collect-config, tripleo-ipsec to 'release-management': none In-Reply-To: References: Message-ID: Le lun. 11 janv. 2021 à 15:27, Alex Schultz a écrit : > On Mon, Jan 11, 2021 at 4:59 AM Marios Andreou wrote: > > > > Hi TripleO, > > > > you may have seen the thread started by Herve at [1] around the deadline > for making a victoria release for os-refresh-config, os-collect-config and > tripleo-ipsec. > > > > This message is to ask if anyone is still using these? In particular > would anyone mind if we stopped making tagged releases, as discussed at > [2]. Would someone mind if there was no stable/victoria branch for these > repos? > > > > For the os-refresh/collect-config I suspect the answer is NO - at least, > we aren't using these any more in the 'normal' tripleo deployment for a > good while now, since we switched to config download by default. We haven't > even created an ussuri branch for these [3] and no one has shouted about > that (or at least not loud enough I haven't heard anything). > > Maybe switch to independent? That being said as James points out they > are still used by Heat so maybe the ownership should be moved. > I agree, moving them to the independent model could be a solution, in this case the patch could be adapted to reflect that choice and we could ignore these repos from victoria deadline point of view. Concerning the "ownership" side of the question this is more an internal discussion between teams and eventually the TC, I don't think that that will impact us from a release management POV. > > > > For tripleo-ipsec it *looks* like we're still using it in the sense that > we carry the template and pass the parameters in tripleo-heat-templates > [4]. However we aren't running that in any CI job as far as I can see, and > we haven't created any branches there since Rocky. So is anyone using > tripleo-ipsec? > > > > I think tripleo-ipsec is no longer needed as we now have proper > tls-everywhere support. We might want to revisit this and > deprecate/remove it. > > > Depending on the answers here and as discussed at [2] I will move to > make these as unreleased (release-management: none in openstack/governance > reference/projects.yaml) and remove the release file altogether. > > > > For now however and given the deadline of this week for a victoria > release I am proposing that we move forward with [2] and cut the victoria > branch for these. > > > > thanks for reading and please speak up if any of the above are important > to you! 
> > > > thanks, marios > > > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019730.html > > [2] > https://review.opendev.org/c/openstack/releases/+/769915/1/deliverables/victoria/os-refresh-config.yaml > > [3] https://pastebin.com/raw/KJ0JxKPx > > [4] > https://opendev.org/openstack/tripleo-heat-templates/src/commit/9fd709019fdd36d4c4821b2486e7151abf84bc3f/deployment/ipsec/ipsec-baremetal-ansible.yaml#L101-L106 > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Jan 11 15:07:53 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 11 Jan 2021 15:07:53 +0000 Subject: [all]Introduction to venus which is the project of log management and has been contributed to the OpenStack community In-Reply-To: References: Message-ID: On Mon, 11 Jan 2021 at 13:13, Thierry Carrez wrote: > > Liye Pang(逄立业) wrote: > > Hello everyone, after feedback from a large number of operations and > > maintenance personnel in InCloud OpenStack, we developed the log > > management project “Venus” for the OpenStack projects [...] > OpenStack-aware centralized log management sounds very interesting to me... > > If others are interested in collaborating on producing that component, I > personally think it would be a great fit for the "operations tooling" > section of the OpenStack Map[1]. > > [1] https://www.openstack.org/software/ Let's not forget that Monasca has a log aggregation API [1] [1] https://wiki.openstack.org/wiki/Monasca/Logging > > -- > Thierry Carrez (ttx) > From moguimar at redhat.com Mon Jan 11 15:57:09 2021 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Mon, 11 Jan 2021 16:57:09 +0100 Subject: [barbican][oslo][nova][glance][cinder] cursive library status In-Reply-To: References: <35dfc43f-6613-757b-ed7b-b6530df21289@gmail.com> Message-ID: Hi Brian, During Oslo's last weekly meeting [1] we decided that Oslo can take cursive under its umbrella with collaboration of Barbican folks. I just waited a bit with this confirmation as the Barbican PTL was on PTO and I wanted to confirm with him. What are the next steps from here? 
[1]: http://eavesdrop.openstack.org/meetings/oslo/2021/oslo.2021-01-04-16.00.log.html#l-64 Thanks, Moisés On Fri, Dec 18, 2020 at 10:06 PM Douglas Mendizabal wrote: > On 12/16/20 12:50 PM, Ben Nemec wrote: > > > > > > On 12/16/20 12:02 PM, Brian Rosmaita wrote: > >> Hello Barbican team, > >> > >> Apologies for not including barbican in the previous thread on this > >> topic: > >> > >> > http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019430.html > >> > >> > >> The situation is that cursive is used by Nova, Glance, and Cinder and > >> we'd like to move it out of the 'x' namespace into openstack > >> governance. The question is then what team would oversee it. It > >> seems like a good fit for Oslo, and the Oslo team seems OK with that, > >> but since barbican-core is currently included in cursive-core, it make > >> sense to give the Barbican team first dibs. > >> > >> From the consuming teams' side, I don't think we have a preference as > >> long as it's clear who we need to bother about approvals if a bugfix > >> is posted for review. > >> > >> Thus my ask is that the Barbican team indicate whether they'd like to > >> move cursive to the 'openstack' namespace under their governance, or > >> whether they'd prefer Oslo to oversee the library. > > > > Note that this is not necessarily an either/or thing. Castellan is under > > Oslo governance but is co-owned by the Oslo and Barbican teams. We could > > do a similar thing with Cursive. > > > > Hi Brian and Ben, > > Sorry I missed the original thread. Given that the end of the year is > around the corner, most of the Barbican team is out on PTO and we > haven't had a chance to discuss this in our weekly meeting. > > That said, I doubt anyone would object to moving cursive into the > openstack namespace. > > I personally do not mind the Oslo team taking over maintenace, and I am > also willing to help review patches if the Oslo team would like to > co-own this library just like we currently do for Castellan. > > - Douglas Mendizábal (redrobot) > > > -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Jan 11 17:08:47 2021 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Jan 2021 11:08:47 -0600 Subject: [oslo][TC] Dropping lower-constraints testing In-Reply-To: <5755ea96-5cc7-2756-970e-dcbf5184b2a2@debian.org> References: <5755ea96-5cc7-2756-970e-dcbf5184b2a2@debian.org> Message-ID: On 1/6/21 2:58 PM, Thomas Goirand wrote: > On 12/18/20 3:54 PM, hberaud wrote: >> In a first time I tried to fix our gates by fixing our lower-constraints >> project by project but with around ~36 projects to maintain this is a >> painful task, especially due to nested oslo layers inside oslo >> himself... I saw the face of the hell of dependencies. > > Welcome to my world! > >> Thoughts? > > Couldn't someone address the dependency loops in Oslo? It's IMO anyway > needed. Which libraries have circular dependencies? It's something we've intentionally avoided, so if they exist I would consider it a bug. > > Just my 2 cents, not sure if that helps... 
> Cheers, > > Thomas Goirand (zigo) > From openstack at nemebean.com Mon Jan 11 17:09:35 2021 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 11 Jan 2021 11:09:35 -0600 Subject: [all][tc] Thoughts on Python 3.7 support In-Reply-To: References: <20210105215107.jap2c5evbkpu2u7n@yuggoth.org> <68d4b804-5729-e313-7f29-6c7b14166c5c@nemebean.com> <176d8d1747f.e25ad718873974.5391430581306589135@ghanshyammann.com> Message-ID: On 1/6/21 3:23 PM, Pierre Riteau wrote: > On Wed, 6 Jan 2021 at 18:58, Ghanshyam Mann wrote: >> >> ---- On Wed, 06 Jan 2021 10:34:35 -0600 Ben Nemec wrote ---- >> > >> > >> > On 1/5/21 3:51 PM, Jeremy Stanley wrote: >> > > On 2021-01-05 22:32:58 +0100 (+0100), Pierre Riteau wrote: >> > >> There have been many patches submitted to drop the Python 3.7 >> > >> classifier from setup.cfg: >> > >> https://review.opendev.org/q/%2522remove+py37%2522 >> > >> The justification is that Wallaby tested runtimes only include 3.6 and 3.8. >> > >> >> > >> Most projects are merging these patches, but I've seen a couple of >> > >> objections from ironic and horizon: >> > >> >> > >> - https://review.opendev.org/c/openstack/python-ironicclient/+/769044 >> > >> - https://review.opendev.org/c/openstack/horizon/+/769237 >> > >> >> > >> What are the thoughts of the TC and of the overall community on this? >> > >> Should we really drop these classifiers when there are no >> > >> corresponding CI jobs, even though more Python versions may well be >> > >> supported? >> > > >> > > My recollection of the many discussions we held was that the runtime >> > > document would recommend the default python3 available in our >> > > targeted platforms, but that we would also make a best effort to >> > > test with the latest python3 available to us at the start of the >> > > cycle as well. It was suggested more than once that we should test >> > > all minor versions in between, but this was ruled out based on the >> > > additional CI resources it would consume for minimal gain. Instead >> > > we deemed that testing our target version and the latest available >> > > would give us sufficient confidence that, if those worked, the >> > > versions in between them were likely fine as well. Based on that, I >> > > think the versions projects claim to work with should be contiguous >> > > ranges, not contiguous lists of the exact versions tested (noting >> > > that those aren't particularly *exact* versions to begin with). >> > > >> > > Apologies for the lack of references to old discussions, I can >> > > probably dig some up from the ML and TC meetings several years back >> > > of folks think it will help inform this further. >> > > >> > >> > For what little it's worth, that jives with my hazy memories of the >> > discussion too. The assumption was that if we tested the upper and lower >> > bounds of our Python versions then the ones in the middle would be >> > unlikely to break. It was a compromise to support multiple versions of >> > Python without spending a ton of testing resources on it. >> >> >> Exactly, py3.7 is not broken for OpenStack so declaring it not supported is not the right thing. >> I remember the discussion when we declared the wallaby (probably from Victoria) testing runtime, >> we decided if we test py3.6 and py3.8 it means we are not going to break py3.7 support so indirectly >> it is tested and supported. >> >> And testing runtime does not mean we have to drop everything else testing means projects are all >> welcome to keep running the py3.7 testing job on the gate there is no harm in that. 
>> >> In both cases, either project has an explicit py3.7 job or not we should not remove it from classifiers. >> >> >> -gmann > > Thanks everyone for your input. Then should we request that those > patches dropping the 3.7 classifier are abandoned, or reverted if > already merged? > That would be my takeaway from this discussion, yes. From hberaud at redhat.com Mon Jan 11 17:57:37 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 11 Jan 2021 18:57:37 +0100 Subject: [oslo][TC] Dropping lower-constraints testing In-Reply-To: References: <5755ea96-5cc7-2756-970e-dcbf5184b2a2@debian.org> Message-ID: Le lun. 11 janv. 2021 à 18:11, Ben Nemec a écrit : > > > On 1/6/21 2:58 PM, Thomas Goirand wrote: > > On 12/18/20 3:54 PM, hberaud wrote: > >> In a first time I tried to fix our gates by fixing our lower-constraints > >> project by project but with around ~36 projects to maintain this is a > >> painful task, especially due to nested oslo layers inside oslo > >> himself... I saw the face of the hell of dependencies. > > > > Welcome to my world! > > > >> Thoughts? > > > > Couldn't someone address the dependency loops in Oslo? It's IMO anyway > > needed. > AFAIK nobody is volunteering on our side, feel free to send patches to fix them.... > > Which libraries have circular dependencies? It's something we've > intentionally avoided, so if they exist I would consider it a bug. > Concerning myself, I meant that by example to fix oslo.messaging its LC you should first address those of oslo.log and etc... This isn't a circular dependency but an order to follow in our patches to fix all our L.C. Also I didn't notice "real" circular dependencies, in other words I only noticed an order to apply to fix all the L.C, but over 36 projects that order is a bit annoying, that's all. > > > > > Just my 2 cents, not sure if that helps... > > Cheers, > > > > Thomas Goirand (zigo) > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Mon Jan 11 20:43:04 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 11 Jan 2021 15:43:04 -0500 Subject: [tc] weekly update Message-ID: Hi there, Here's an update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. 
# Patches ## Open Reviews - Setting Xinran Wang as Cyborg's PTL https://review.opendev.org/c/openstack/governance/+/770075 - molteniron does not make releases https://review.opendev.org/c/openstack/governance/+/769805 - Add Resolution of TC stance on the OpenStackClient https://review.opendev.org/c/openstack/governance/+/759904 - Add doc/requirements https://review.opendev.org/c/openstack/governance/+/769696 - Add link to Xena announcement https://review.opendev.org/c/openstack/governance/+/769620 - Add glance-tempest-plugin to Glance https://review.opendev.org/c/openstack/governance/+/767666 - Remove Karbor project team | https://review.opendev.org/c/openstack/governance/+/767056 ## Projects Updates - Deprecate neutron-fwaas and neutron-fwaas-dashboard master branch https://review.opendev.org/c/openstack/governance/+/735828 # Other Reminders - Our next [TC] Weekly meeting is scheduled on January 14th at 1500 UTC. If you would like to add topics for discussion, please go to https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting and fill out your suggestions by Wednesday, January 13th, at 2100 UTC. Thanks for reading! Mohammed & Kendall -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eandersson at blizzard.com Mon Jan 11 20:45:39 2021 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Mon, 11 Jan 2021 20:45:39 +0000 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: Thanks I added it to the commit. Could you share your uwsgi config as well. Best Regards, Erik Olof Gunnar Andersson Technical Lead, Senior Cloud Engineer From: Ionut Biru Sent: Tuesday, January 5, 2021 1:51 AM To: Erik Olof Gunnar Andersson Cc: Spyros Trigazis ; feilong ; openstack-discuss Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi, Here is my config. maybe something is fishy. I did have around 300 messages in the queue in notification.info and notification.err and I purged them. https://paste.xinu.at/woMt/ On Tue, Jan 5, 2021 at 11:23 AM Erik Olof Gunnar Andersson > wrote: Yea - tested locally as well and wasn't able to reproduce it either. I changed the health service job to run every second and maxed out at about 42 connections to RabbitMQ with two conductor workers. /etc/magnum/magnun.conf [conductor] workers = 2 ________________________________ From: Spyros Trigazis > Sent: Tuesday, January 5, 2021 12:59 AM To: Ionut Biru > Cc: Erik Olof Gunnar Andersson >; feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer On Tue, Jan 5, 2021 at 9:36 AM Ionut Biru > wrote: Hi, I tried with process=1 and it reached 1016 connections to rabbitmq. lsof https://paste.xinu.at/jGg/ i think it goes into error when it reaches 1024 file descriptors. I'm out of ideas of how to resolve this. I only have 3 clusters available and it's kinda weird and It doesn't scale. No issues here with 100s of clusters. Not sure what doesn't scale. * Maybe your rabbit is flooded with notifications that are not consumed? * You can use way more than 1024 file descriptors, maybe 2^10? Spyros On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson > wrote: Sure looks like RabbitMQ. How many workers do have you configured? 
Could you try to changing the uwsgi configuration to workers=1 (or processes=1) and then see if it goes beyond 30 connections to amqp. From: Ionut Biru > Sent: Monday, January 4, 2021 4:07 AM To: Erik Olof Gunnar Andersson > Cc: feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi Erik, Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ I have kubernetes 12.0.1 installed in env. On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson > wrote: Maybe something similar to this? https://github.com/kubernetes-client/python/issues/1158 What does lsof say? -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Jan 11 21:48:55 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 11 Jan 2021 15:48:55 -0600 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <20210108170441.koyxpaxsse7qj645@mthode.org> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <20210108170441.koyxpaxsse7qj645@mthode.org> Message-ID: <176f36c63fa.f6d7dcc91078291.3697415809509428960@ghanshyammann.com> ---- On Fri, 08 Jan 2021 11:04:41 -0600 Matthew Thode wrote ---- > On 21-01-08 10:21:51, Brian Rosmaita wrote: > > On 1/6/21 12:59 PM, Ghanshyam Mann wrote: > > > Hello Everyone, > > > > > > You might have seen the discussion around dropping the lower constraints > > > testing as it becomes more challenging than the current value of doing it. > > > > I think the TC needs to discuss this explicitly (at a meeting or two, not > > just on the ML) and give the projects some guidance. I agree that there's > > little point in maintaining the l-c if they're not actually useful to anyone > > in their current state, but if their helpfulness (or potential helpfulness) > > outweighs the maintenance burden, then we should keep them. (How's that for > > a profound statement?) > > > > Maybe someone can point me to where I can RTFM to get a clearer picture, but > > my admittedly vague idea of what the l-c are for is that it has something to > > do with making packaging easier. If that's the case, it would be good for > > the TC to reach out to some openstack packagers/distributors to find outline > > how they use l-c (if at all) and what changes could be done to make them > > actually useful, and then we can re-assess the maintenance burden. > > > > This whole experience with the new pip resolver has been painful, I think, > > because it hit all projects and all branches at once. My experience, > > however, is that if I'd been updating the minimum versions for all the > > cinder deliverables in their requirements.txt and l-c.txt files every cycle > > to reflect a pip freeze at Milestone-3 it would have been a lot easier. > > > > What do other projects do about this? In Cinder, we've just been updating > > the requirements on-demand, not proactively, and as a result for some > > dependencies we claimed that foo>=0.9.0 is OK -- but except for unit tests > > in the l-c job, cinder deliverables haven't been using anything other than > > foo>=16.0 since rocky. So in master, I took advantage of having to revise > > requirements and l-c to make some major jumps in minimum versions. And I'm > > thinking of doing a pip-freeze requirements.txt minimum version update from > > now on at M-3 each cycle, which will force me to make an l-c.txt update too. 
> > (Maybe I was supposed to be doing that all along? Or maybe it's a bad idea? > > I could use some guidance here.) > > > > It would be good for the l-c to reflect reality, but on the other hand, > > updating the minimum versions in requirements.txt (and hence in l-c) too > > aggressively probably won't help packagers at all. (Or maybe it will, I > > don't know.) On the other hand, having the l-c is useful from the > > standpoint of letting you know when your minimum acceptable version in > > requirements.txt will break your unit tests. But if we're updating the > > minimum versions of dependencies every cycle to known good minimum versions, > > an l-c failure is going to be pretty rare, so maybe it's not worth the > > trouble of maintaining the l-c.txt and CI job. > > > > One other thing: if we do keep l-c, we need to have some guidance about > > what's actually supposed to be in there. (Or I need to RTFM.) I've noticed > > that as we've added new dependencies to cinder, we've included the > > dependency in l-c.txt, but not its indirect dependencies. I guess we should > > have been adding the indirect dependencies all along, too? (Spoiler alert: > > we haven't.) > > > > This email has gotten too long, so I will shut up now. > > > > cheers, > > brian > > > > > > > > Few of the ML thread around this discussion: > > > > > > - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019521.html > > > - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019390.html > > > > > > As Oslo and many other project dropping or already dropped it, we should decide it for all > > > other projects also otherwise it can be more changing than it is currently. > > > > > > We have not defined it in PTI or testing runtime so it is always up to projects if they still > > > want to keep it but we should decide a general recommendation here. > > > > > > -gmann > > > > > > > > > /requirements hat > > l-c was mainly promoted as a way to know when you are using a feature > that is not in an old release. The way we generally test is with newer > constraints, which don't test what we state we support (the range > between the lower bound in requirements.txt and upper-contraints). > > While I do think it's useful to know that the range of versions of a > library needs to be updated... I understand that it may not be useful, > either because of the possible maintenance required by devs, the load on > the testing infrastructure generated by testing lower-constraints or > that downstream packagers do not use it. > > Search this for lower-constraints. > https://docs.openstack.org/project-team-guide/dependency-management.html > > Indirect dependencies in lower-constraints were not encouraged iirc, > both for maintenance reasons (lot of churn) and because 'hopefully' > downstream deps are doing the same thing and testing their deps for > changes they need. > > /downstream packager hat > > I do not look at lower-constraints, but I do look at lower-bounds in the > requirements.txt file (from which lower-constraints are generated). I > look for updates in the lower-bounds to know if a library that was > already packaged needed updating, though I do try and target the version > mentioned in upper-constraints.txt when updating. More and more I've > just made sure that the entire dependency tree for openstack matches > what is packaged. Even then though, if the minimum is not updated then > this pushes it down on users. 
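The milestone-3 "pip freeze" approach Brian describes above is roughly the following (just a sketch; the constraints URL is the usual master upper-constraints file, and candidate-minimums.txt is just a scratch file name):

    pip install -c https://releases.openstack.org/constraints/upper/master -r requirements.txt
    pip freeze > candidate-minimums.txt
    # then bump the >= lower bounds in requirements.txt (and the pins in
    # lower-constraints.txt) from that list
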
I do not have downstream packager maintenance experience but in my local or deps resolver time I do look at the lower bound in requirements.txt The challenge with that will to keep req.txt lower bound up to date as our CI will be testing with u-c. -gmann > > /user (deployer) perspective > > Why does $PROJECT not work, I'm going to report it as a bug to $distro, > $deployment and $upstream. > > What they did was not update the version of pyroute2 (or something) > because $project didn't update the lower bound to require it. > > -- > Matthew Thode > From gmann at ghanshyammann.com Mon Jan 11 22:00:54 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 11 Jan 2021 16:00:54 -0600 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> Message-ID: <176f3775a84.e963c2901078611.3413948910917368000@ghanshyammann.com> ---- On Fri, 08 Jan 2021 09:21:51 -0600 Brian Rosmaita wrote ---- > On 1/6/21 12:59 PM, Ghanshyam Mann wrote: > > Hello Everyone, > > > > You might have seen the discussion around dropping the lower constraints > > testing as it becomes more challenging than the current value of doing it. > > I think the TC needs to discuss this explicitly (at a meeting or two, > not just on the ML) and give the projects some guidance. I agree that > there's little point in maintaining the l-c if they're not actually > useful to anyone in their current state, but if their helpfulness (or > potential helpfulness) outweighs the maintenance burden, then we should > keep them. (How's that for a profound statement?) Yes, that is the plan. This ML thread is to get initial feedback and then discuss in meeting. I have added this to the next TC meeting on the 14th Jan agenda. - https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions -gmann > > Maybe someone can point me to where I can RTFM to get a clearer picture, > but my admittedly vague idea of what the l-c are for is that it has > something to do with making packaging easier. If that's the case, it > would be good for the TC to reach out to some openstack > packagers/distributors to find outline how they use l-c (if at all) and > what changes could be done to make them actually useful, and then we can > re-assess the maintenance burden. > > This whole experience with the new pip resolver has been painful, I > think, because it hit all projects and all branches at once. My > experience, however, is that if I'd been updating the minimum versions > for all the cinder deliverables in their requirements.txt and l-c.txt > files every cycle to reflect a pip freeze at Milestone-3 it would have > been a lot easier. > > What do other projects do about this? In Cinder, we've just been > updating the requirements on-demand, not proactively, and as a result > for some dependencies we claimed that foo>=0.9.0 is OK -- but except for > unit tests in the l-c job, cinder deliverables haven't been using > anything other than foo>=16.0 since rocky. So in master, I took > advantage of having to revise requirements and l-c to make some major > jumps in minimum versions. And I'm thinking of doing a pip-freeze > requirements.txt minimum version update from now on at M-3 each cycle, > which will force me to make an l-c.txt update too. (Maybe I was > supposed to be doing that all along? Or maybe it's a bad idea? I could > use some guidance here.) 
> > It would be good for the l-c to reflect reality, but on the other hand, > updating the minimum versions in requirements.txt (and hence in l-c) too > aggressively probably won't help packagers at all. (Or maybe it will, I > don't know.) On the other hand, having the l-c is useful from the > standpoint of letting you know when your minimum acceptable version in > requirements.txt will break your unit tests. But if we're updating the > minimum versions of dependencies every cycle to known good minimum > versions, an l-c failure is going to be pretty rare, so maybe it's not > worth the trouble of maintaining the l-c.txt and CI job. > > One other thing: if we do keep l-c, we need to have some guidance about > what's actually supposed to be in there. (Or I need to RTFM.) I've > noticed that as we've added new dependencies to cinder, we've included > the dependency in l-c.txt, but not its indirect dependencies. I guess > we should have been adding the indirect dependencies all along, too? > (Spoiler alert: we haven't.) > > This email has gotten too long, so I will shut up now. > > cheers, > brian > > > > > Few of the ML thread around this discussion: > > > > - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019521.html > > - http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019390.html > > > > As Oslo and many other project dropping or already dropped it, we should decide it for all > > other projects also otherwise it can be more changing than it is currently. > > > > We have not defined it in PTI or testing runtime so it is always up to projects if they still > > want to keep it but we should decide a general recommendation here. > > > > -gmann > > > > > From gmann at ghanshyammann.com Mon Jan 11 22:02:07 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 11 Jan 2021 16:02:07 -0600 Subject: [oslo][TC] Dropping lower-constraints testing In-Reply-To: References: <5755ea96-5cc7-2756-970e-dcbf5184b2a2@debian.org> Message-ID: <176f37878b6.d5a4f51b1078642.5636035119235474321@ghanshyammann.com> ---- On Mon, 11 Jan 2021 11:57:37 -0600 Herve Beraud wrote ---- > Le lun. 11 janv. 2021 à 18:11, Ben Nemec a écrit : > > > On 1/6/21 2:58 PM, Thomas Goirand wrote: > > On 12/18/20 3:54 PM, hberaud wrote: > >> In a first time I tried to fix our gates by fixing our lower-constraints > >> project by project but with around ~36 projects to maintain this is a > >> painful task, especially due to nested oslo layers inside oslo > >> himself... I saw the face of the hell of dependencies. > > > > Welcome to my world! > > > >> Thoughts? > > > > Couldn't someone address the dependency loops in Oslo? It's IMO anyway > > needed. > > AFAIK nobody is volunteering on our side, feel free to send patches to fix them.... > > Which libraries have circular dependencies? It's something we've > intentionally avoided, so if they exist I would consider it a bug. > > Concerning myself, I meant that by example to fix oslo.messaging its LC you should first address those of oslo.log and etc... > This isn't a circular dependency but an order to follow in our patches to fix all our L.C. > Also I didn't notice "real" circular dependencies, in other words I only noticed an order to apply to fix all the L.C, but over 36 projects that order is a bit annoying, that's all. I agree. I have added this topic to the next TC meeting on the 14th Jan agenda to get consensus for all the projects. 
- https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Agenda_Suggestions -gmann > > > > > Just my 2 cents, not sure if that helps... > > Cheers, > > > > Thomas Goirand (zigo) > > > > > > -- > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > From johnsomor at gmail.com Mon Jan 11 23:43:37 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 11 Jan 2021 15:43:37 -0800 Subject: [octavia] problems configuring octavia In-Reply-To: References: Message-ID: Hi Daniel, It might be helpful to use the [charms] in the subject as I think everything you mentioned here is related to the Ubuntu/juju/charms deployment tooling and not the Octavia project. I don't have any experience with doing a deployment with charms, so I can't be of much help there. That said, Octavia does not rely on DNS for any of its operations. This includes the lb-mgmt-net and for the load balancing elements. So, the lack of access to BIND for Octavia is not a problem. Michael On Fri, Jan 8, 2021 at 8:16 AM Daniel Macdonald wrote: > Happy new year Openstack users and devs! > > I have been trying on and off for several months to get octavia working > but I have yet to have it successfully create a loadbalancer. I have > deployed OS bionic-train using the Charms telemetry bundle with the octavia > overlay. Openstack is working for creating regular instances but I get > various errors when trying to create a loadbalancer. > > The first issue I feel I should mention is that I am using bind running on > our MAAS controller as a DNS server. juju doesn't work if I enable IPv6 > under bind yet the octavia charm defaults to using IPv6 for its management > network so I have tried creating a IPv4 management network but I'm still > having problems. For more details on that please see the comments of this > bug report: > > https://bugs.launchpad.net/charm-octavia/+bug/1897418 > Bug #1897418 “feature request: have option to use ipv4 when sett...” : > Bugs : OpenStack Octavia Charm > > By default, Octavia charm uses ipv6 for its lb-mgmt-subnet.[1] It would be > nice to have the option to choose an ipv4 network from the start instead of > deleting the ipv6 network and recreating the ipv4 subnet. Implementation - > possible configuration option parameter when deploying. [1] > https://opendev.org/openstack/charm-octavia/src/branch/master/src/lib/charm/openstack/api_crud.py#L560 > bugs.launchpad.net > > Another notable issue I have is that after installing the charms telemetry > bundle I have 2 projects call services. How do I know which is the correct > one to use for Octavia? 
> > > Is this following document going to be the best guide for me to follow to > complete the final steps required to get Octavia (under Train) working: > > > https://docs.openstack.org/octavia/latest/install/install-ubuntu.html#install-and-configure-components > OpenStack Docs: Install and configure for Ubuntu > > Install and configure for Ubuntu¶. This section describes how to install > and configure the Load-balancer service for Ubuntu 18.04 (LTS). > docs.openstack.org > > I'm hoping someone has already written an easy to follow guide to using > Octavia with an IPv4 management network using the Charms bundle to do most > of the installation work? > > Thanks > > > > > [image: University of Salford] > *DANIEL MACDONALD* > Specialist Technical Demonstrator > School of Computing, Science & Engineering > Room 145, Newton Building, University of Salford, Manchester M5 4WT > T: +44(0) 0161 295 5242 > D.R.MacDonald at salford.ac.uk > > */ *www.salford.ac.uk > > [image: CSE] > -------------- next part -------------- An HTML attachment was scrubbed... URL: From araragi222 at gmail.com Tue Jan 12 01:18:20 2021 From: araragi222 at gmail.com (=?UTF-8?B?5ZGC6Imv?=) Date: Tue, 12 Jan 2021 10:18:20 +0900 Subject: [tosca-parser][tacker] May I ask to push review process of patch 747516 : Support "Implementation" definition as Artifact Name Message-ID: Hi Bob, This is Liang from tacker team, in our patch: Implement SOL001 features to MgmtDriver (Ib4ad3eb9) · Gerrit Code Review (opendev.org) , we tried to implement in order to allow user use their script by defining 'interface' in vnfd file: …/vnflcm_driver.py · Gerrit Code Review (opendev.org) in the function above, changes in Support "Implementation" definition as Artifact Name (I192d3649) · Gerrit Code Review (opendev.org) , is needed for our implementation, may I ask you to help us push review for an early merge of this patch, appreciate for that! Best regards Liang Lu -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Tue Jan 12 07:40:45 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Tue, 12 Jan 2021 13:10:45 +0530 Subject: [Heat][tripleo] move os-refresh-config, os-collect-config, tripleo-ipsec to 'release-management': none In-Reply-To: References: Message-ID: On Mon, Jan 11, 2021 at 8:01 PM James Slagle wrote: > > > On Mon, Jan 11, 2021 at 7:04 AM Marios Andreou wrote: > >> For the os-refresh/collect-config I suspect the answer is NO - at least, >> we aren't using these any more in the 'normal' tripleo deployment for a >> good while now, since we switched to config download by default. We haven't >> even created an ussuri branch for these [3] and no one has shouted about >> that (or at least not loud enough I haven't heard anything). >> > > TripleO doesn't use them, but I think Heat still does for the support of > SoftwareDeployment/StructuredDeployment. I've added [Heat] to the subject > tags. It is as least still documented: > > > https://docs.openstack.org/heat/latest/template_guide/software_deployment.html#software-deployment-resources > > which shows using the os-*-config elements from tripleo-image-elements to > build an image with diskimage-builder. > > Yeah, ideally os-collect-config/os-refresh-config should now be owned by heat team along with heat-agents (cycle-with-intermediary), as long as software deployments are supported by heat (though there were some discussions to deprecate/drop it some time back). 
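For anyone not familiar with the pattern being discussed, a minimal sketch of a Heat template that still depends on those agents is below (illustrative only: the image is assumed to have been built with the os-*-config elements so that os-collect-config and friends are present in the guest, and the image/flavor names are placeholders):

    heat_template_version: rocky

    resources:
      boot_config:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config: |
            #!/bin/bash
            echo "applied by the in-guest agents" > /tmp/marker

      boot_deployment:
        type: OS::Heat::SoftwareDeployment
        properties:
          config: {get_resource: boot_config}
          server: {get_resource: server}

      server:
        type: OS::Nova::Server
        properties:
          image: agent-enabled-image    # assumption: built with the os-*-config elements
          flavor: m1.small
          user_data_format: SOFTWARE_CONFIG

Without an agent-enabled image there is nothing inside the guest to apply the config and signal completion back to Heat, which is why the fate of these repos still matters to Heat users even though TripleO itself has moved on.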
> However, that doesn't mean we need to create stable branches or releases > for these repos. The elements support installing from source, and RDO is > packaging off of master it seems. > > -- > -- James Slagle > -- > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Tue Jan 12 08:45:10 2021 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Tue, 12 Jan 2021 08:45:10 +0000 Subject: =?gb2312?B?tPC4tDogW2FsbF1JbnRyb2R1Y3Rpb24gdG8gdmVudXMgd2hpY2ggaXMgdGhl?= =?gb2312?B?IHByb2plY3Qgb2YgbG9nIG1hbmFnZW1lbnQgYW5kIGhhcyBiZWVuIGNvbnRy?= =?gb2312?Q?ibuted_to_the_OpenStack_community?= In-Reply-To: References: Message-ID: <4220f2e3ab1641b493a3c0e8fb83e510@inspur.com> >-----邮件原件----- >发件人: Thierry Carrez [mailto:thierry at openstack.org] >发送时间: 2021年1月11日 21:12 >收件人: openstack-discuss at lists.openstack.org >主题: Re: [all]Introduction to venus which is the project of log management and has been contributed to the OpenStack community >Liye Pang(逄立业) wrote: >> Hello everyone, after feedback from a large number of operations and >> maintenance personnel in InCloud OpenStack, we developed the log >> management project “Venus” for the OpenStack projects [...] > OpenStack-aware centralized log management sounds very interesting to me... > If others are interested in collaborating on producing that component, I personally think it would be a great fit for the > "operations tooling" > section of the OpenStack Map[1]. Yes, after Inspur did a 1,000-nodes OpenStack single-cluster large-scale test, I was more convinced of the benefits Venus can bring to operation and maintenance. By Venus, we can quickly locate and find problems with the OpenStack platform, which can bring great convenience to operation and maintenance. https://mp.weixin.qq.com/s/RSrjjZjVFn086StNLV1Ivg This is the article of 1000-nodes test, but it's wrote by Chinese, don't worry ^^, we will publish the English article in future. This is the demo for Venus, hope that can help you to know what it can be done: https://youtu.be/mE2MoEx3awM> https://youtu.be/mE2MoEx3awM > [1] https://www.openstack.org/software/ > -- > Thierry Carrez (ttx) Original ML: http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019748.html brinzhang From radoslaw.piliszek at gmail.com Tue Jan 12 09:56:25 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 12 Jan 2021 10:56:25 +0100 Subject: [masakari] Masakari Team Meeting - the schedule Message-ID: Hello, Folks! I have realised the Masakari Team Meeting is to run on even weeks [1]. However, anyone who created the meeting record in their calendar (including me) has likely gotten the meeting schedule in odd weeks this year (because last year finished with an odd week and obviously numbering also starts on odd: the 1). So I have run the first meeting this year the previous week but someone came for the meeting this week. :-) According to the "new wrong" schedule, the next meeting would be on Jan 19, but according to the "proper" one it would be on Jan 26. I am available both weeks the same so can run either term (or both as well, why not). The question is whether we don't want to simply move to the weekly meeting schedule. We usually don't have much to discuss but it might be less confusing and a better way to form a habit if we met every week. Please let me know your thoughts. 
[1] http://eavesdrop.openstack.org/#Masakari_Team_Meeting -yoctozepto From marios at redhat.com Tue Jan 12 10:07:14 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 12 Jan 2021 12:07:14 +0200 Subject: [tripleo] move os-refresh-config, os-collect-config, tripleo-ipsec to 'release-management': none In-Reply-To: References: Message-ID: On Mon, Jan 11, 2021 at 5:07 PM Herve Beraud wrote: > > > Le lun. 11 janv. 2021 à 15:27, Alex Schultz a > écrit : > >> On Mon, Jan 11, 2021 at 4:59 AM Marios Andreou wrote: >> > >> > Hi TripleO, >> > >> > you may have seen the thread started by Herve at [1] around the >> deadline for making a victoria release for os-refresh-config, >> os-collect-config and tripleo-ipsec. >> > >> > This message is to ask if anyone is still using these? In particular >> would anyone mind if we stopped making tagged releases, as discussed at >> [2]. Would someone mind if there was no stable/victoria branch for these >> repos? >> > >> > For the os-refresh/collect-config I suspect the answer is NO - at >> least, we aren't using these any more in the 'normal' tripleo deployment >> for a good while now, since we switched to config download by default. We >> haven't even created an ussuri branch for these [3] and no one has shouted >> about that (or at least not loud enough I haven't heard anything). >> >> Maybe switch to independent? That being said as James points out they >> are still used by Heat so maybe the ownership should be moved. >> > > I agree, moving them to the independent model could be a solution, in this > case the patch could be adapted to reflect that choice and we could ignore > these repos from victoria deadline point of view. > > Concerning the "ownership" side of the question this is more an internal > discussion between teams and eventually the TC, I don't think that that > will impact us from a release management POV. > > ack yes this makes sense thanks James, Alex, Herve and Rabi for your comments First I'll refactor https://review.opendev.org/c/openstack/releases/+/769915 to instead move them to independent (and i'll also include os-apply-config). Then I'll reach out to Heat PTL to see what they think about the transfer of ownership, thanks all > >> > >> > For tripleo-ipsec it *looks* like we're still using it in the sense >> that we carry the template and pass the parameters in >> tripleo-heat-templates [4]. However we aren't running that in any CI job as >> far as I can see, and we haven't created any branches there since Rocky. So >> is anyone using tripleo-ipsec? >> > >> >> I think tripleo-ipsec is no longer needed as we now have proper >> tls-everywhere support. We might want to revisit this and >> deprecate/remove it. >> >> > Depending on the answers here and as discussed at [2] I will move to >> make these as unreleased (release-management: none in openstack/governance >> reference/projects.yaml) and remove the release file altogether. >> > >> > For now however and given the deadline of this week for a victoria >> release I am proposing that we move forward with [2] and cut the victoria >> branch for these. >> > >> > thanks for reading and please speak up if any of the above are >> important to you! 
>> > >> > thanks, marios >> > >> > [1] >> http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019730.html >> > [2] >> https://review.opendev.org/c/openstack/releases/+/769915/1/deliverables/victoria/os-refresh-config.yaml >> > [3] https://pastebin.com/raw/KJ0JxKPx >> > [4] >> https://opendev.org/openstack/tripleo-heat-templates/src/commit/9fd709019fdd36d4c4821b2486e7151abf84bc3f/deployment/ipsec/ipsec-baremetal-ansible.yaml#L101-L106 >> > >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Tue Jan 12 10:18:52 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 12 Jan 2021 10:18:52 +0000 Subject: [kolla] Victoria releases available Message-ID: Hi, I'm pleased to announce the availability of the first Victoria releases for all Kolla deliverables: * kolla 11.0.0 (https://docs.openstack.org/releasenotes/kolla/victoria.html#relnotes-11-0-0-stable-victoria) * kolla-ansible 11.0.0 (https://docs.openstack.org/releasenotes/kolla-ansible/victoria.html#relnotes-11-0-0-stable-victoria) * kayobe 9.0.0 (https://docs.openstack.org/releasenotes/kayobe/victoria.html#relnotes-9-0-0-stable-victoria) Thanks to everyone who contributed to these releases. And now onto Wallaby! Thanks, Mark From ionut at fleio.com Tue Jan 12 11:17:37 2021 From: ionut at fleio.com (Ionut Biru) Date: Tue, 12 Jan 2021 13:17:37 +0200 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: Hi Erik, Here it is: https://paste.xinu.at/LgH8dT/ On Mon, Jan 11, 2021 at 10:45 PM Erik Olof Gunnar Andersson < eandersson at blizzard.com> wrote: > Thanks I added it to the commit. > > > > Could you share your uwsgi config as well. > > > > Best Regards, Erik Olof Gunnar Andersson > > Technical Lead, Senior Cloud Engineer > > > > *From:* Ionut Biru > *Sent:* Tuesday, January 5, 2021 1:51 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* Spyros Trigazis ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > Hi, > > > > Here is my config. maybe something is fishy. > > > > I did have around 300 messages in the queue in notification.info > > and notification.err and I purged them. 
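As a rough illustration of the knobs that usually matter when the AMQP connection count keeps climbing, a sketch with example values (not a recommendation) is below; option names follow the stock magnum / oslo.messaging and uwsgi settings, and the exact option groups can vary by release:

    # /etc/magnum/magnum.conf
    [conductor]
    workers = 1                    # fewer conductor workers -> fewer independent AMQP pools

    [oslo_messaging_notifications]
    driver = noop                  # if nothing consumes the notifications.* queues,
                                   # 'noop' stops them piling up

    # uwsgi ini for magnum-api (separate file)
    [uwsgi]
    processes = 2                  # each API process keeps its own set of connections
    threads = 1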
> > > > https://paste.xinu.at/woMt/ > > > > > > > > > On Tue, Jan 5, 2021 at 11:23 AM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Yea - tested locally as well and wasn't able to reproduce it either. I > changed the health service job to run every second and maxed out at about > 42 connections to RabbitMQ with two conductor workers. > > /etc/magnum/magnun.conf > > [conductor] > > workers = 2 > > > ------------------------------ > > *From:* Spyros Trigazis > *Sent:* Tuesday, January 5, 2021 12:59 AM > *To:* Ionut Biru > *Cc:* Erik Olof Gunnar Andersson ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > > > > > On Tue, Jan 5, 2021 at 9:36 AM Ionut Biru wrote: > > Hi, > > > I tried with process=1 and it reached 1016 connections to rabbitmq. > > lsof > > https://paste.xinu.at/jGg/ > > > > > i think it goes into error when it reaches 1024 file descriptors. > > > > I'm out of ideas of how to resolve this. I only have 3 clusters available > and it's kinda weird and It doesn't scale. > > > > No issues here with 100s of clusters. Not sure what doesn't scale. > > > > * Maybe your rabbit is flooded with notifications that are not consumed? > > * You can use way more than 1024 file descriptors, maybe 2^10? > > > > Spyros > > > > On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Sure looks like RabbitMQ. How many workers do have you configured? > > > > Could you try to changing the uwsgi configuration to workers=1 (or > processes=1) and then see if it goes beyond 30 connections to amqp. > > > > *From:* Ionut Biru > *Sent:* Monday, January 4, 2021 4:07 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* feilong ; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > Hi Erik, > > > > Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ > > > > > I have kubernetes 12.0.1 installed in env. > > > > > > On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Maybe something similar to this? > https://github.com/kubernetes-client/python/issues/1158 > > > What does lsof say? > > > > > > > > > -- > > Ionut Biru - https://fleio.com > > > > > > -- > > Ionut Biru - https://fleio.com > > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From pangliye at inspur.com Tue Jan 12 11:40:39 2021 From: pangliye at inspur.com (=?utf-8?B?TGl5ZSBQYW5nKOmAhOeri+S4mik=?=) Date: Tue, 12 Jan 2021 11:40:39 +0000 Subject: =?utf-8?B?562U5aSNOiBbYWxsXUludHJvZHVjdGlvbiB0byB2ZW51cyB3aGljaCBpcyB0?= =?utf-8?B?aGUgcHJvamVjdCBvZiBsb2cgbWFuYWdlbWVudCBhbmQgaGFzIGJlZW4gY29u?= =?utf-8?Q?tributed_to_the_OpenStack_community?= In-Reply-To: References: Message-ID: <59d0d1ee43324038af46d658a3ca243c@inspur.com> On Mon, 11 Jan 2021 at 13:13, Thierry Carrez wrote: > > Liye Pang(逄立业) wrote: > >> Hello everyone, after feedback from a large number of operations and >> > maintenance personnel in InCloud OpenStack, we developed the log > >> management project “Venus” for the OpenStack projects [...] >> OpenStack-aware centralized log management sounds very interesting to me... 
>> >> If others are interested in collaborating on producing that component, >> I personally think it would be a great fit for the "operations tooling" >> section of the OpenStack Map[1]. >> >> [1] https://www.openstack.org/software/ > >Let's not forget that Monasca has a log aggregation API [1] > > [1] https://wiki.openstack.org/wiki/Monasca/Logging The major work of monasca project about logs is indexing logs for users to retrieve. The venus project is based on indexed data and provides more functions, such as correlation analysis, error alarms,problem location , etc. >> > >-- > >Thierry Carrez (ttx) >> -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3786 bytes Desc: not available URL: From marios at redhat.com Tue Jan 12 12:41:14 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 12 Jan 2021 14:41:14 +0200 Subject: [heat][tripleo] os-apply-config os-refresh-config os-collect-config project governance Message-ID: Hello Heat team o/ as discussed in the thread at [1] TripleO is no longer using os-collect-config, os-refresh-config and os-apply-config as part of the OpenStack deployment. As such we are considering moving these repos to the independent release model. However as pointed out in [1] it seems that Heat is still using these - documented at [2]. Is the Heat community interested in taking over the ownership of these? To be clear what the proposal here is, I posted [3] - i.e. move these projects under the Heat governance. Thanks for your time, thoughts and comments on this, either here or in [3]. If you are NOT interested in owning these then we will likely move them to independent, or possibly even unmaintained/eol (in due course). regards, marios [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019751.html [2] https://docs.openstack.org/heat/latest/template_guide/software_deployment.html#custom-image-script [3] https://review.opendev.org/c/openstack/governance/+/770285 -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Tue Jan 12 12:43:30 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 12 Jan 2021 14:43:30 +0200 Subject: [tripleo] move os-refresh-config, os-collect-config, tripleo-ipsec to 'release-management': none In-Reply-To: References: Message-ID: On Tue, Jan 12, 2021 at 12:07 PM Marios Andreou wrote: > > > On Mon, Jan 11, 2021 at 5:07 PM Herve Beraud wrote: > >> >> >> Le lun. 11 janv. 2021 à 15:27, Alex Schultz a >> écrit : >> >>> On Mon, Jan 11, 2021 at 4:59 AM Marios Andreou >>> wrote: >>> > >>> > Hi TripleO, >>> > >>> > you may have seen the thread started by Herve at [1] around the >>> deadline for making a victoria release for os-refresh-config, >>> os-collect-config and tripleo-ipsec. >>> > >>> > This message is to ask if anyone is still using these? In particular >>> would anyone mind if we stopped making tagged releases, as discussed at >>> [2]. Would someone mind if there was no stable/victoria branch for these >>> repos? >>> > >>> > For the os-refresh/collect-config I suspect the answer is NO - at >>> least, we aren't using these any more in the 'normal' tripleo deployment >>> for a good while now, since we switched to config download by default. We >>> haven't even created an ussuri branch for these [3] and no one has shouted >>> about that (or at least not loud enough I haven't heard anything). >>> >>> Maybe switch to independent? 
That being said as James points out they >>> are still used by Heat so maybe the ownership should be moved. >>> >> >> I agree, moving them to the independent model could be a solution, in >> this case the patch could be adapted to reflect that choice and we could >> ignore these repos from victoria deadline point of view. >> >> Concerning the "ownership" side of the question this is more an internal >> discussion between teams and eventually the TC, I don't think that that >> will impact us from a release management POV. >> >> > ack yes this makes sense thanks James, Alex, Herve and Rabi for your > comments > First I'll refactor > https://review.opendev.org/c/openstack/releases/+/769915 to instead move > them to independent (and i'll also include os-apply-config). Then I'll > reach out to Heat PTL to see what they think about the transfer of > ownership, > > thanks all > > me again ;) my apologies but I've spent some time staring at this and have changed my mind. IMO it is best if we go ahead and create the victoria bits for these right now whilst also moving forward on the proposed governance change. To be clear, I think we should merge [1] as is to create stable/victoria in time for the deadline. We already have a stable/victoria for os-apply-config and so let's be consistent and create it for os-refresh-config and os-collect-config too. I reached out to Heat with [2] and posted [3] to illustrate the proposal of moving the governance for these under Heat. If they want them then they can decide about moving to independent or not. Otherwise I will followup next week with a move to independent. [1] https://review.opendev.org/c/openstack/releases/+/769915 [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019777.html [3] https://review.opendev.org/c/openstack/governance/+/770285 > > >> >>> > >>> > For tripleo-ipsec it *looks* like we're still using it in the sense >>> that we carry the template and pass the parameters in >>> tripleo-heat-templates [4]. However we aren't running that in any CI job as >>> far as I can see, and we haven't created any branches there since Rocky. So >>> is anyone using tripleo-ipsec? >>> > >>> >>> I think tripleo-ipsec is no longer needed as we now have proper >>> tls-everywhere support. We might want to revisit this and >>> deprecate/remove it. >>> >>> > Depending on the answers here and as discussed at [2] I will move to >>> make these as unreleased (release-management: none in openstack/governance >>> reference/projects.yaml) and remove the release file altogether. >>> > >>> > For now however and given the deadline of this week for a victoria >>> release I am proposing that we move forward with [2] and cut the victoria >>> branch for these. >>> > >>> > thanks for reading and please speak up if any of the above are >>> important to you! 
>>> > >>> > thanks, marios >>> > >>> > [1] >>> http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019730.html >>> > [2] >>> https://review.opendev.org/c/openstack/releases/+/769915/1/deliverables/victoria/os-refresh-config.yaml >>> > [3] https://pastebin.com/raw/KJ0JxKPx >>> > [4] >>> https://opendev.org/openstack/tripleo-heat-templates/src/commit/9fd709019fdd36d4c4821b2486e7151abf84bc3f/deployment/ipsec/ipsec-baremetal-ansible.yaml#L101-L106 >>> > >>> >>> >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Tue Jan 12 12:54:42 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 12 Jan 2021 13:54:42 +0100 Subject: [tripleo] move os-refresh-config, os-collect-config, tripleo-ipsec to 'release-management': none In-Reply-To: References: Message-ID: LGTM Le mar. 12 janv. 2021 à 13:43, Marios Andreou a écrit : > > > On Tue, Jan 12, 2021 at 12:07 PM Marios Andreou wrote: > >> >> >> On Mon, Jan 11, 2021 at 5:07 PM Herve Beraud wrote: >> >>> >>> >>> Le lun. 11 janv. 2021 à 15:27, Alex Schultz a >>> écrit : >>> >>>> On Mon, Jan 11, 2021 at 4:59 AM Marios Andreou >>>> wrote: >>>> > >>>> > Hi TripleO, >>>> > >>>> > you may have seen the thread started by Herve at [1] around the >>>> deadline for making a victoria release for os-refresh-config, >>>> os-collect-config and tripleo-ipsec. >>>> > >>>> > This message is to ask if anyone is still using these? In particular >>>> would anyone mind if we stopped making tagged releases, as discussed at >>>> [2]. Would someone mind if there was no stable/victoria branch for these >>>> repos? >>>> > >>>> > For the os-refresh/collect-config I suspect the answer is NO - at >>>> least, we aren't using these any more in the 'normal' tripleo deployment >>>> for a good while now, since we switched to config download by default. We >>>> haven't even created an ussuri branch for these [3] and no one has shouted >>>> about that (or at least not loud enough I haven't heard anything). >>>> >>>> Maybe switch to independent? That being said as James points out they >>>> are still used by Heat so maybe the ownership should be moved. >>>> >>> >>> I agree, moving them to the independent model could be a solution, in >>> this case the patch could be adapted to reflect that choice and we could >>> ignore these repos from victoria deadline point of view. 
>>> >>> Concerning the "ownership" side of the question this is more an internal >>> discussion between teams and eventually the TC, I don't think that that >>> will impact us from a release management POV. >>> >>> >> ack yes this makes sense thanks James, Alex, Herve and Rabi for your >> comments >> First I'll refactor >> https://review.opendev.org/c/openstack/releases/+/769915 to instead move >> them to independent (and i'll also include os-apply-config). Then I'll >> reach out to Heat PTL to see what they think about the transfer of >> ownership, >> >> thanks all >> >> > > me again ;) > > my apologies but I've spent some time staring at this and have changed my > mind. IMO it is best if we go ahead and create the victoria bits for these > right now whilst also moving forward on the proposed governance change. > > To be clear, I think we should merge [1] as is to create stable/victoria > in time for the deadline. We already have a stable/victoria for > os-apply-config and so let's be consistent and create it for > os-refresh-config and os-collect-config too. > > I reached out to Heat with [2] and posted [3] to illustrate the proposal > of moving the governance for these under Heat. If they want them then they > can decide about moving to independent or not. > > Otherwise I will followup next week with a move to independent. > > [1] https://review.opendev.org/c/openstack/releases/+/769915 > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019777.html > [3] https://review.opendev.org/c/openstack/governance/+/770285 > > > > >> >> >>> >>>> > >>>> > For tripleo-ipsec it *looks* like we're still using it in the sense >>>> that we carry the template and pass the parameters in >>>> tripleo-heat-templates [4]. However we aren't running that in any CI job as >>>> far as I can see, and we haven't created any branches there since Rocky. So >>>> is anyone using tripleo-ipsec? >>>> > >>>> >>>> I think tripleo-ipsec is no longer needed as we now have proper >>>> tls-everywhere support. We might want to revisit this and >>>> deprecate/remove it. >>>> >>>> > Depending on the answers here and as discussed at [2] I will move to >>>> make these as unreleased (release-management: none in openstack/governance >>>> reference/projects.yaml) and remove the release file altogether. >>>> > >>>> > For now however and given the deadline of this week for a victoria >>>> release I am proposing that we move forward with [2] and cut the victoria >>>> branch for these. >>>> > >>>> > thanks for reading and please speak up if any of the above are >>>> important to you! 
>>>> > >>>> > thanks, marios >>>> > >>>> > [1] >>>> http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019730.html >>>> > [2] >>>> https://review.opendev.org/c/openstack/releases/+/769915/1/deliverables/victoria/os-refresh-config.yaml >>>> > [3] https://pastebin.com/raw/KJ0JxKPx >>>> > [4] >>>> https://opendev.org/openstack/tripleo-heat-templates/src/commit/9fd709019fdd36d4c4821b2486e7151abf84bc3f/deployment/ipsec/ipsec-baremetal-ansible.yaml#L101-L106 >>>> > >>>> >>>> >>> >>> -- >>> Hervé Beraud >>> Senior Software Engineer at Red Hat >>> irc: hberaud >>> https://github.com/4383/ >>> https://twitter.com/4383hberaud >>> -----BEGIN PGP SIGNATURE----- >>> >>> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >>> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >>> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >>> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >>> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >>> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >>> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >>> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >>> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >>> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >>> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >>> v6rDpkeNksZ9fFSyoY2o >>> =ECSj >>> -----END PGP SIGNATURE----- >>> >>> -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Tue Jan 12 13:23:10 2021 From: james.slagle at gmail.com (James Slagle) Date: Tue, 12 Jan 2021 08:23:10 -0500 Subject: [heat][tripleo] os-apply-config os-refresh-config os-collect-config project governance In-Reply-To: References: Message-ID: On Tue, Jan 12, 2021 at 7:47 AM Marios Andreou wrote: > Hello Heat team o/ > > as discussed in the thread at [1] TripleO is no longer using > os-collect-config, os-refresh-config and os-apply-config as part of the > OpenStack deployment. As such we are considering moving these repos to the > independent release model. > > However as pointed out in [1] it seems that Heat is still using these - > documented at [2]. Is the Heat community interested in taking over the > ownership of these? To be clear what the proposal here is, I posted [3] - > i.e. move these projects under the Heat governance. > > Thanks for your time, thoughts and comments on this, either here or in > [3]. 
If you are NOT interested in owning these then we will likely move > them to independent, or possibly even unmaintained/eol (in due course). > I realized a possible wrinkle with moving these to Heat governance. TripleO still relies on the stable/train branches given these projects are still used in train, and there exists a possibility that TripleO cores may need to backport patch(es) to the train branch in these projects. -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Tue Jan 12 13:50:49 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 12 Jan 2021 22:50:49 +0900 Subject: [neutron] bug deputy report (week of Jan 4) Message-ID: Hi, This is a summary of neutron bugs last week. Sorry for late. It was generally a quiet week. Thanks, Akihiro # Undecided https://bugs.launchpad.net/neutron/+bug/1910474 some times rally test would fail at network scenario due to mysql data base error Undecided It was reported against the queens release and the detail log mentioned in the bug is not available. It looks related to a nested transaction in percona but we are waiting for more information from the reporter. # Unassigned gate failures https://bugs.launchpad.net/neutron/+bug/1910691 In "test_dvr_router_lifecycle_ha_with_snat_with_fips", the HA proxy PID file is not removed reported by ralonsoh. Hope someone interested in it can look into it. # Assigned or In Progress https://bugs.launchpad.net/neutron/+bug/1910213 Live migration doesn't work with OVN and DPDK assigned to jlibosva https://bugs.launchpad.net/neutron/+bug/1910717 "test_set_igmp_snooping_flood" fails because "ports_other_config" is None ralonsoh is working on the fix https://bugs.launchpad.net/bugs/1910334 [OVN]after create port forwarding, floating ip status is not running In Progress, a fix is under review From hberaud at redhat.com Tue Jan 12 14:09:49 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 12 Jan 2021 15:09:49 +0100 Subject: [heat][tripleo] os-apply-config os-refresh-config os-collect-config project governance In-Reply-To: References: Message-ID: Le mar. 12 janv. 2021 à 14:26, James Slagle a écrit : > > > On Tue, Jan 12, 2021 at 7:47 AM Marios Andreou wrote: > >> Hello Heat team o/ >> >> as discussed in the thread at [1] TripleO is no longer using >> os-collect-config, os-refresh-config and os-apply-config as part of the >> OpenStack deployment. As such we are considering moving these repos to the >> independent release model. >> >> However as pointed out in [1] it seems that Heat is still using these - >> documented at [2]. Is the Heat community interested in taking over the >> ownership of these? To be clear what the proposal here is, I posted [3] - >> i.e. move these projects under the Heat governance. >> >> Thanks for your time, thoughts and comments on this, either here or in >> [3]. If you are NOT interested in owning these then we will likely move >> them to independent, or possibly even unmaintained/eol (in due course). >> > > I realized a possible wrinkle with moving these to Heat governance. > TripleO still relies on the stable/train branches given these projects are > still used in train, and there exists a possibility that TripleO cores may > need to backport patch(es) to the train branch in these projects. > Maybe adding some tripleo cores as stable maintainers could help the both teams to converge, thoughts? 
Heat could be more the focused on the current series and tripleo cores could propose fixes and lead backports to stable branches. > -- > -- James Slagle > -- > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Tue Jan 12 14:32:00 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Tue, 12 Jan 2021 20:02:00 +0530 Subject: [heat][tripleo] os-apply-config os-refresh-config os-collect-config project governance In-Reply-To: References: Message-ID: On Tue, Jan 12, 2021 at 6:58 PM James Slagle wrote: > > > On Tue, Jan 12, 2021 at 7:47 AM Marios Andreou wrote: > >> Hello Heat team o/ >> >> as discussed in the thread at [1] TripleO is no longer using >> os-collect-config, os-refresh-config and os-apply-config as part of the >> OpenStack deployment. As such we are considering moving these repos to the >> independent release model. >> >> However as pointed out in [1] it seems that Heat is still using these - >> documented at [2]. Is the Heat community interested in taking over the >> ownership of these? To be clear what the proposal here is, I posted [3] - >> i.e. move these projects under the Heat governance. >> >> Thanks for your time, thoughts and comments on this, either here or in >> [3]. If you are NOT interested in owning these then we will likely move >> them to independent, or possibly even unmaintained/eol (in due course). >> > > I realized a possible wrinkle with moving these to Heat governance. > TripleO still relies on the stable/train branches given these projects are > still used in train, and there exists a possibility that TripleO cores may > need to backport patch(es) to the train branch in these projects. > I thought we switched to config-download by default from stable/rocky and disabled os-collect-config[1]. Not sure if anyone uses it beyond that with TripleO. Looking at the number of functional changes backported to stable/queens (AFICS only 3 of them)[2], probably it won't be a major bottleneck, assuming the heat community wants to own it. [1] https://review.opendev.org/plugins/gitiles/openstack/tripleo-puppet-elements/+/eaebed2ab92c65a1dce55a1d1945bdc47c1e5022 [2] https://bit.ly/3qeTqX2 > > -- > -- James Slagle > -- > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tzumainn at redhat.com Tue Jan 12 14:42:51 2021 From: tzumainn at redhat.com (Tzu-Mainn Chen) Date: Tue, 12 Jan 2021 09:42:51 -0500 Subject: [ironic] question regarding attributes required for a non-admin to boot from volume Message-ID: Hi, I've been looking into what it would take to allow non-admins to boot from volume. It looks like they'd need to update a node's storage_interface; and if you'd like to boot from an iSCSI volume, then you also need to update a node's capabilities property. Are these fields that we want non-admins to be able to update? If not, is there an alternative that might be possible? Thanks, Tzu-Mainn Chen -------------- next part -------------- An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Tue Jan 12 14:58:52 2021 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 12 Jan 2021 09:58:52 -0500 Subject: [baremetal-sig][ironic] Tue Jan 12, 2021, 2pm UTC: 'Multi-Tenancy in Ironic: Of Owners and Lessees' In-Reply-To: References: Message-ID: <20210112145852.i3yyh2u7lnqonqjt@barron.net> On 08/01/21 16:39 +0100, Arne Wiebalck wrote: >Dear all, > >Happy new year! > >The Bare Metal SIG will continue its monthly meetings and >start again on > >Tue Jan 12, 2021, at 2pm UTC. > >This time there will be a 10 minute "topic-of-the-day" >presentation by Tzu-Mainn Chen (tzumainn) on > >'Multi-Tenancy in Ironic: Of Owners and Lessees' > >So, if you would like to learn how this relatively recent >addition to Ironic works, you can find all the details >for this meeting on the SIG's etherpad: > >https://etherpad.opendev.org/p/bare-metal-sig > >Everyone is welcome, don't miss out! > >Cheers, > Arne > Had to miss Multi-Tenancy presentation today but look forward to any artefacts (recording, minutes, slides) if these become available. -- Tom From arne.wiebalck at cern.ch Tue Jan 12 15:12:21 2021 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Tue, 12 Jan 2021 16:12:21 +0100 Subject: [baremetal-sig][ironic] Tue Jan 12, 2021, 2pm UTC: 'Multi-Tenancy in Ironic: Of Owners and Lessees' In-Reply-To: <20210112145852.i3yyh2u7lnqonqjt@barron.net> References: <20210112145852.i3yyh2u7lnqonqjt@barron.net> Message-ID: <6a3c789f-1413-51c6-81ff-4755ccdbe26a@cern.ch> On 12.01.21 15:58, Tom Barron wrote: > On 08/01/21 16:39 +0100, Arne Wiebalck wrote: >> Dear all, >> >> Happy new year! >> >> The Bare Metal SIG will continue its monthly meetings and >> start again on >> >> Tue Jan 12, 2021, at 2pm UTC. >> >> This time there will be a 10 minute "topic-of-the-day" >> presentation by Tzu-Mainn Chen (tzumainn) on >> >> 'Multi-Tenancy in Ironic: Of Owners and Lessees' >> >> So, if you would like to learn how this relatively recent >> addition to Ironic works, you can find all the details >> for this meeting on the SIG's etherpad: >> >> https://etherpad.opendev.org/p/bare-metal-sig >> >> Everyone is welcome, don't miss out! >> >> Cheers, >> Arne >> > > Had to miss Multi-Tenancy presentation today but look forward to any > artefacts (recording, minutes, slides) if these become available. > > -- Tom > Hi Tom, We have recorded the presentation and will make it (and the ones of the previous presentations) available as soon as we find the time to edit the videos. The links will be made available on the bare metal SIG etherpad, and we will also send something to the list when they are ready! 
Cheers, Arne From gmann at ghanshyammann.com Tue Jan 12 15:36:24 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 Jan 2021 09:36:24 -0600 Subject: =?UTF-8?Q?Re:_=E7=AD=94=E5=A4=8D:_[all]Introduction_to_ve?= =?UTF-8?Q?nus_which_is_the_project_of_log?= =?UTF-8?Q?_management_and_has_been_contributed_to_the_OpenStack_community?= In-Reply-To: <4220f2e3ab1641b493a3c0e8fb83e510@inspur.com> References: <4220f2e3ab1641b493a3c0e8fb83e510@inspur.com> Message-ID: <176f73db0e0.ba0db5211128248.9112192436090806738@ghanshyammann.com> ---- On Tue, 12 Jan 2021 02:45:10 -0600 Brin Zhang(张百林) wrote ---- > >-----邮件原件----- > >发件人: Thierry Carrez [mailto:thierry at openstack.org] > >发送时间: 2021年1月11日 21:12 > >收件人: openstack-discuss at lists.openstack.org > >主题: Re: [all]Introduction to venus which is the project of log management > and has been contributed to the OpenStack community > > >Liye Pang(逄立业) wrote: > >> Hello everyone, after feedback from a large number of operations and > >> maintenance personnel in InCloud OpenStack, we developed the log > >> management project “Venus” for the OpenStack projects [...] > > OpenStack-aware centralized log management sounds very interesting to > me... > > > If others are interested in collaborating on producing that component, I > personally think it would be a great fit for the > > "operations tooling" > > section of the OpenStack Map[1]. > > Yes, after Inspur did a 1,000-nodes OpenStack single-cluster large-scale test, I was more convinced of the benefits Venus can bring to operation and maintenance. By Venus, we can quickly locate and find problems with the OpenStack platform, which can bring great convenience to operation and maintenance. > > https://mp.weixin.qq.com/s/RSrjjZjVFn086StNLV1Ivg This is the article of 1000-nodes test, but it's wrote by Chinese, don't worry ^^, we will publish the English article in future. > > This is the demo for Venus, hope that can help you to know what it can be done: https://youtu.be/mE2MoEx3awM> https://youtu.be/mE2MoEx3awM > > > [1] https://www.openstack.org/software/ Thanks Liye, Brin for the details, I also things this is a valuable project for day-to-day operation on a large-scale cloud or even small scale to automate the log to alter, etc. Just one question, can we configuration the particular error log msg string/pattern for raising alarm? and different levels of alarm (critical, high priority etc)? For example: If there is a known limitation on my cloud (due to RBAC or backend deps) and requests end up in error so I do not want to raise an alarm for those. -gmann > > > -- > > Thierry Carrez (ttx) > > Original ML: http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019748.html > > brinzhang > > From dtantsur at redhat.com Tue Jan 12 16:18:38 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 12 Jan 2021 17:18:38 +0100 Subject: [ironic] question regarding attributes required for a non-admin to boot from volume In-Reply-To: References: Message-ID: Hi, On Tue, Jan 12, 2021 at 3:45 PM Tzu-Mainn Chen wrote: > Hi, > > I've been looking into what it would take to allow non-admins to boot from > volume. It looks like they'd need to update a node's storage_interface; and > if you'd like to boot from an iSCSI volume, then you also need to update a > node's capabilities property. > FWIW we have a similar situation with the ramdisk deploy and the necessity to update deploy_interface. 
In both cases I don't feel well about updating node.*_interface itself, because such changes are permanent and there is no code to un-do them before the node becomes available again. My plan (if I ever get to it) is to accept deploy_interface (and now storage_interface) in node.instance_info. This solves both problems: instance_info is writable by non-admins and is wiped automatically on tear down. As to capabilities, I think it's fine if you set the iscsi_boot capability for every node that supports it in advance. You don't need non-admins to do that. Otherwise, most capabilities can be overridden via instance_info[capabilities] (although not this one). Dmitry > > Are these fields that we want non-admins to be able to update? If not, is > there an alternative that might be possible? > > Thanks, > Tzu-Mainn Chen > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Tue Jan 12 04:57:20 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 11 Jan 2021 23:57:20 -0500 Subject: [all]Introduction to venus which is the project of log management and has been contributed to the OpenStack community In-Reply-To: References: Message-ID: This seems really interesting. Tracing events with request-ids is something that is quite useful. What is the current state? Can it be deployed by a third party? On Sun, Jan 10, 2021 at 4:01 PM Liye Pang(逄立业) wrote: > Hello everyone, after feedback from a large number of operations and > maintenance personnel in InCloud OpenStack, we developed the log management > project “Venus” for the OpenStack projects and that has contributed to the > OpenStack community. The following is an introduction to “Venus”. If there > is interest in the community, we are interested in proposing it to become > an official OpenStack project in the future. > Background > > In the day-to-day operation and maintenance of large-scale cloud platform, > the following problems are encountered: > > l Time-consuming for log querying while the server increasing to > thousands. > > l Difficult to retrieve logs, since there are many modules in the > platform, e.g. systems service, compute, storage, network and other > platform services. > > l The large amount and dispersion of log make faults are difficult to be > discovered. > > l Because of distributed and interaction between components of the cloud > platform, and scattered logs between components, it will take more time to > locate problems. > About Venus > > According to the key requirements of OpenStack in log storage, retrieval, > analysis and so on, we introduced *Venus *project, a unified log > management module. This module can provide a one-stop solution to log > collection, cleaning, indexing, analysis, alarm, visualization, report > generation and other needs, which involves helping operator or maintainer > to quickly solve retrieve problems, grasp the operational health of the > platform, and improve the management capabilities of the cloud platform. > > Additionally, this module plans to use machine learning algorithms to > quickly locate IT failures and root causes, and improve operation and > maintenance efficiency. 
> Application scenario > > Venus played a key role in the following scenarios: > > l *Retrieval:* Provide a simple and easy-to-use way to retrieve all log > and the context. > > l *Analysis*: Realize log association, field value statistics, and > provide multi-scene and multi-dimensional visual analysis reports. > > l *Alerts*:Convert retrieval into active alerts to realize the error > finding in massive logs. > > l *Issue location*: Establish a chain relationship and knowledge graphs > to quickly locate problems. > Overall structure > > The architecture of log management system based on Venus and elastic > search is as follows: > > Diagram 0: Architecture of Venus > > *venus_api*: API module,provide API、rest-api service. > > *venus_manager*: Internal timing task module to realize the core > functions of the log system. > Current progress > > The current progress of the Venus project is as follows: > > l Collection:Develop *fluentd* collection tasks based on collectd to > read, filter, format and send plug-ins for OpenStack, operating systems, > and platform services, etc. > > l Index:Dealing with multi-dimensional index data in *elasticsearch*, > and provide more concise and comprehensive authentication interface to > return query results. > > l Analysis:Analyzing and display the related module errors, Mariadb > connection errors, and Rabbitmq connection errors. > > l Alerts:Develop alarm task code to set threshold for the number of > error logs of different modules at different times, and provides alarm > services and notification services. > > l Location:Develop the call chain analysis function based on > *global_requested* series, which can show the execution sequence, time > and error information, etc., and provide the export operation. > > l Management:Develop configuration management functions in the log > system, such as alarm threshold setting, timing task management, and log > saving time setting, etc. > Application examples > > Two examples of Venus application scenarios are as follows. > > 1. The virtual machine creation operation was performed on the > cloud platform and it was found that the virtual machine was not created > successfully. > > First, we can find the request id of the operation and jump to the virtual > machine creation call chain page. > > Then, we can query the calling process, view and download the details of > the log of the call. > > 2. In the cloud platform, the error log of each module can be > converted into alarms to remind the users. > > Further, we can retrieve the details of the error log and error log > statistics. > > Next step > > The next step of the Venus project is as follows: > > l *Collection*:In addition to fluent, other collection plugins such as > logstash will be integrated. > > l *Analysis*: Explore more operation and maintenance scenarios, and > conduct statistical analysis and alarm on key data. > > l *display*: The configuration, analysis and alarm of Venus will be > integrated into horizon in the form of plugin. > > l *location*: Form clustering log and construct knowledge map, and > integrate algorithm class library to locate the root cause of the fault. > Venus Project Registry > > *Venus library*: https://opendev.org/inspur/venus > > You can grab the source code using the following git command: > > git clone https://opendev.org/inspur/venus.git > > > Venus Demo > > *Youtu.be*: https://youtu.be/mE2MoEx3awM > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 24507 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 3184 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 8136 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 15944 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.jpg Type: image/jpeg Size: 8405 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.jpg Type: image/jpeg Size: 3046 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image007.jpg Type: image/jpeg Size: 15175 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image008.jpg Type: image/jpeg Size: 8496 bytes Desc: not available URL: From D.R.MacDonald at salford.ac.uk Tue Jan 12 11:03:22 2021 From: D.R.MacDonald at salford.ac.uk (Daniel Macdonald) Date: Tue, 12 Jan 2021 11:03:22 +0000 Subject: [octavia] problems configuring octavia In-Reply-To: References: , Message-ID: Hi Michael Thanks for your response. It is useful to know that DNS isn't causing my problems. I have a few more things I need to try, hopefully I'll get through it today then if I still have issues I will use the charms tag for my next post. ________________________________ From: Michael Johnson Sent: 11 January 2021 23:43 To: Daniel Macdonald Cc: openstack-discuss at lists.openstack.org ; Adekunbi Adewojo Subject: Re: [octavia] problems configuring octavia Hi Daniel, It might be helpful to use the [charms] in the subject as I think everything you mentioned here is related to the Ubuntu/juju/charms deployment tooling and not the Octavia project. I don't have any experience with doing a deployment with charms, so I can't be of much help there. That said, Octavia does not rely on DNS for any of its operations. This includes the lb-mgmt-net and for the load balancing elements. So, the lack of access to BIND for Octavia is not a problem. Michael On Fri, Jan 8, 2021 at 8:16 AM Daniel Macdonald > wrote: Happy new year Openstack users and devs! I have been trying on and off for several months to get octavia working but I have yet to have it successfully create a loadbalancer. I have deployed OS bionic-train using the Charms telemetry bundle with the octavia overlay. Openstack is working for creating regular instances but I get various errors when trying to create a loadbalancer. The first issue I feel I should mention is that I am using bind running on our MAAS controller as a DNS server. juju doesn't work if I enable IPv6 under bind yet the octavia charm defaults to using IPv6 for its management network so I have tried creating a IPv4 management network but I'm still having problems. 
For more details on that please see the comments of this bug report: https://bugs.launchpad.net/charm-octavia/+bug/1897418 Bug #1897418 “feature request: have option to use ipv4 when sett...” : Bugs : OpenStack Octavia Charm By default, Octavia charm uses ipv6 for its lb-mgmt-subnet.[1] It would be nice to have the option to choose an ipv4 network from the start instead of deleting the ipv6 network and recreating the ipv4 subnet. Implementation - possible configuration option parameter when deploying. [1] https://opendev.org/openstack/charm-octavia/src/branch/master/src/lib/charm/openstack/api_crud.py#L560 bugs.launchpad.net Another notable issue I have is that after installing the charms telemetry bundle I have 2 projects call services. How do I know which is the correct one to use for Octavia? Is this following document going to be the best guide for me to follow to complete the final steps required to get Octavia (under Train) working: https://docs.openstack.org/octavia/latest/install/install-ubuntu.html#install-and-configure-components OpenStack Docs: Install and configure for Ubuntu Install and configure for Ubuntu¶. This section describes how to install and configure the Load-balancer service for Ubuntu 18.04 (LTS). docs.openstack.org I'm hoping someone has already written an easy to follow guide to using Octavia with an IPv4 management network using the Charms bundle to do most of the installation work? Thanks [University of Salford] DANIEL MACDONALD Specialist Technical Demonstrator School of Computing, Science & Engineering Room 145, Newton Building, University of Salford, Manchester M5 4WT T: +44(0) 0161 295 5242 D.R.MacDonald at salford.ac.uk / www.salford.ac.uk [CSE] -------------- next part -------------- An HTML attachment was scrubbed... URL: From aditi.Dukle at ibm.com Tue Jan 12 12:11:48 2021 From: aditi.Dukle at ibm.com (aditi Dukle) Date: Tue, 12 Jan 2021 12:11:48 +0000 Subject: [nova] unit testing on ppc64le In-Reply-To: References: , <5E9MMQ.3INH7FY465VR3@est.tech> Message-ID: An HTML attachment was scrubbed... URL: From tpb at dyncloud.net Tue Jan 12 17:00:50 2021 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 12 Jan 2021 12:00:50 -0500 Subject: [baremetal-sig][ironic] Tue Jan 12, 2021, 2pm UTC: 'Multi-Tenancy in Ironic: Of Owners and Lessees' In-Reply-To: <6a3c789f-1413-51c6-81ff-4755ccdbe26a@cern.ch> References: <20210112145852.i3yyh2u7lnqonqjt@barron.net> <6a3c789f-1413-51c6-81ff-4755ccdbe26a@cern.ch> Message-ID: <20210112170050.dxgydzlcb7jfuvly@barron.net> On 12/01/21 16:12 +0100, Arne Wiebalck wrote: >On 12.01.21 15:58, Tom Barron wrote: >>On 08/01/21 16:39 +0100, Arne Wiebalck wrote: >>>Dear all, >>> >>>Happy new year! >>> >>>The Bare Metal SIG will continue its monthly meetings and >>>start again on >>> >>>Tue Jan 12, 2021, at 2pm UTC. >>> >>>This time there will be a 10 minute "topic-of-the-day" >>>presentation by Tzu-Mainn Chen (tzumainn) on >>> >>>'Multi-Tenancy in Ironic: Of Owners and Lessees' >>> >>>So, if you would like to learn how this relatively recent >>>addition to Ironic works, you can find all the details >>>for this meeting on the SIG's etherpad: >>> >>>https://etherpad.opendev.org/p/bare-metal-sig >>> >>>Everyone is welcome, don't miss out! >>> >>>Cheers, >>>Arne >>> >> >>Had to miss Multi-Tenancy presentation today but look forward to any >>artefacts (recording, minutes, slides) if these become available. 
>> 
>> -- Tom
>> 
>Hi Tom,
>
>We have recorded the presentation and will make it (and the
>ones of the previous presentations) available as soon as we
>find the time to edit the videos.
>
>The links will be made available on the bare metal SIG
>etherpad, and we will also send something to the list when
>they are ready!
>
>Cheers,
> Arne
>

Thanks, Arne!

From smooney at redhat.com Tue Jan 12 17:52:28 2021
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 12 Jan 2021 17:52:28 +0000
Subject: [nova] unit testing on ppc64le
In-Reply-To: References: , <5E9MMQ.3INH7FY465VR3@est.tech>
Message-ID: <902298832b49f91eb790e8de1e643d5223366423.camel@redhat.com>

On Tue, 2021-01-12 at 12:11 +0000, aditi Dukle wrote:
> Hi Mike,
> 
> I have created these unit test jobs - openstack-tox-py27, openstack-tox-py35, openstack-tox-py36, openstack-tox-py37, openstack-tox-py38, openstack-tox-py39 -
> by referring to the upstream CI (https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/jobs.yaml), and these jobs are triggered
> for every patchset in the OpenStack CI.
> 
> I checked the code of the old CI for Power; we didn't have any unit test jobs that ran for every nova patchset. We had one "nova-python27" job
> that was run in a periodic pipeline. So I wanted to know: do we need to run the unit test jobs on ppc for every nova patchset? And if yes,
> should these report to the OpenStack community?

The request was to have the unit tests, and ideally the functional tests, run on ppc and report back to the community for every patch, if you have the capacity to do that. If it's not per patch, then a way to manually trigger it per patch would be preferred in addition to the periodic run.

What I would suggest is running just one Python version per branch, taken from the supported Python runtimes for that branch. The supported runtimes are defined here:
https://github.com/openstack/governance/tree/master/reference/runtimes
so if you are running this on an Ubuntu image then master should ideally use Python 3.8 for both functional and unit tests.

The problem we are trying to address is that currently we have some unit/functional tests that fail when running on ppc, so we would like to use your third-party CI to catch that, the same way we use its tempest tests to tell us when we have broken compatibility. If we took a periodic approach we would still need the ability to trigger it manually, so that we could test the patches that purport to fix a breakage in compatibility.

> 
> 
> Thanks,
> Aditi Dukle
> 
> ----- Original message -----
> > From: Michael J Turek/Poughkeepsie/IBM
> > To: balazs.gibizer at est.tech, aditi Dukle/India/Contr/IBM at IBM, Sajauddin Mohammad/India/Contr/IBM at IBM
> > Cc: openstack-discuss at lists.openstack.org
> > Subject: Re: [EXTERNAL] [nova] unit testing on ppc64le
> > Date: Sat, Jan 9, 2021 12:52 AM
> > 
> > Thanks for the heads up,
> > 
> > We should have the capacity to add them. At one point I think we ran unit tests for nova, but the job may have been culled in the move to zuul v3.
> > I've CC'd the maintainers of the CI, Aditi Dukle and Sajauddin Mohammad.
> > 
> > Aditi and Sajauddin, could we add a job to pkvmci to run unit tests for nova?
> >   > > Michael Turek > > Software Engineer > > Power Cloud Department > > 1 845 433 1290 Office > > mjturek at us.ibm.com > > He/Him/His > >   > > IBM > >   > >   > >   > > > ----- Original message ----- > > > From: Balazs Gibizer > > > To: OpenStack Discuss > > > Cc: mjturek at us.ibm.com > > > Subject: [EXTERNAL] [nova] unit testing on ppc64le > > > Date: Fri, Jan 8, 2021 7:59 AM > > >   > > > Hi, > > > > > > We have a bugreport[1] showing that our unit tests are not passing on > > > ppc. In the upstream CI we don't have test capability to run our tests > > > on ppc. But we have the IBM Power KVM CI[2] that runs integration tests > > > on ppc. I'm wondering if IBM could extend the CI to run nova unit and > > > functional tests too. I've added Michael Turek (mjturek at us.ibm.com) to > > > CC. Michael is listed as the contact person for the CI. > > > > > > Cheers, > > > gibi > > > > > > [1]https://bugs.launchpad.net/nova/+bug/1909972  > > > [2]https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI  > > > > > >   > >   >   > From james.slagle at gmail.com Tue Jan 12 18:17:32 2021 From: james.slagle at gmail.com (James Slagle) Date: Tue, 12 Jan 2021 13:17:32 -0500 Subject: [heat][tripleo] os-apply-config os-refresh-config os-collect-config project governance In-Reply-To: References: Message-ID: On Tue, Jan 12, 2021 at 9:32 AM Rabi Mishra wrote: > > > On Tue, Jan 12, 2021 at 6:58 PM James Slagle > wrote: > >> >> >> On Tue, Jan 12, 2021 at 7:47 AM Marios Andreou wrote: >> >>> Hello Heat team o/ >>> >>> as discussed in the thread at [1] TripleO is no longer using >>> os-collect-config, os-refresh-config and os-apply-config as part of the >>> OpenStack deployment. As such we are considering moving these repos to the >>> independent release model. >>> >>> However as pointed out in [1] it seems that Heat is still using these - >>> documented at [2]. Is the Heat community interested in taking over the >>> ownership of these? To be clear what the proposal here is, I posted [3] - >>> i.e. move these projects under the Heat governance. >>> >>> Thanks for your time, thoughts and comments on this, either here or in >>> [3]. If you are NOT interested in owning these then we will likely move >>> them to independent, or possibly even unmaintained/eol (in due course). >>> >> >> I realized a possible wrinkle with moving these to Heat governance. >> TripleO still relies on the stable/train branches given these projects are >> still used in train, and there exists a possibility that TripleO cores may >> need to backport patch(es) to the train branch in these projects. >> > > I thought we switched to config-download by default from stable/rocky and > disabled os-collect-config[1]. Not sure if anyone uses it beyond that with > TripleO. Looking at the number of functional changes backported to > stable/queens (AFICS only 3 of them)[2], probably it won't be a major > bottleneck, assuming the heat community wants to own it. > You're right. I meant to say queens instead of train. -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
From Arkady.Kanevsky at dell.com Tue Jan 12 23:25:36 2021
From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady)
Date: Tue, 12 Jan 2021 23:25:36 +0000
Subject: [baremetal-sig][ironic] Tue Jan 12, 2021, 2pm UTC: 'Multi-Tenancy in Ironic: Of Owners and Lessees'
In-Reply-To: <6a3c789f-1413-51c6-81ff-4755ccdbe26a@cern.ch>
References: <20210112145852.i3yyh2u7lnqonqjt@barron.net> <6a3c789f-1413-51c6-81ff-4755ccdbe26a@cern.ch>
Message-ID: 
Thanks Arne. Looking forward to the recording.

-----Original Message-----
From: Arne Wiebalck
Sent: Tuesday, January 12, 2021 9:12 AM
To: Tom Barron
Cc: openstack-discuss
Subject: Re: [baremetal-sig][ironic] Tue Jan 12, 2021, 2pm UTC: 'Multi-Tenancy in Ironic: Of Owners and Lessees'

[EXTERNAL EMAIL]

On 12.01.21 15:58, Tom Barron wrote:
> On 08/01/21 16:39 +0100, Arne Wiebalck wrote:
>> Dear all,
>>
>> Happy new year!
>>
>> The Bare Metal SIG will continue its monthly meetings and start again
>> on
>>
>> Tue Jan 12, 2021, at 2pm UTC.
>>
>> This time there will be a 10 minute "topic-of-the-day"
>> presentation by Tzu-Mainn Chen (tzumainn) on
>>
>> 'Multi-Tenancy in Ironic: Of Owners and Lessees'
>>
>> So, if you would like to learn how this relatively recent addition to
>> Ironic works, you can find all the details for this meeting on the
>> SIG's etherpad:
>>
>> https://etherpad.opendev.org/p/bare-metal-sig
>>
>> Everyone is welcome, don't miss out!
>>
>> Cheers,
>>  Arne
>>
>
> Had to miss Multi-Tenancy presentation today but look forward to any
> artefacts (recording, minutes, slides) if these become available.
>
> -- Tom
>
Hi Tom,

We have recorded the presentation and will make it (and the ones of the previous presentations) available as soon as we find the time to edit the videos.

The links will be made available on the bare metal SIG etherpad, and we will also send something to the list when they are ready!

Cheers,
 Arne

From pangliye at inspur.com Wed Jan 13 02:04:07 2021
From: pangliye at inspur.com (Liye Pang (逄立业))
Date: Wed, 13 Jan 2021 02:04:07 +0000
Subject: Reply: [via lists.openstack.org] Re: Reply: [all]Introduction to venus which is the project of log management and has been contributed to the OpenStack community
In-Reply-To: <176f73db0e0.ba0db5211128248.9112192436090806738@ghanshyammann.com>
References: <09603c06ed48c77923781a3dcccbb4c6@sslemail.net> <176f73db0e0.ba0db5211128248.9112192436090806738@ghanshyammann.com>
Message-ID: <494026c44e9f46c5b3560c17f602bfd9@inspur.com>

---- On Tue, 12 Jan 2021 02:45:10 -0600 Brin Zhang(张百林) wrote ----
> >-----Original Message-----
> >From: Thierry Carrez [mailto:thierry at openstack.org]
> >Sent: 11 January 2021 21:12
> >To: openstack-discuss at lists.openstack.org
> >Subject: Re: [all]Introduction to venus which is the project of log management and has been contributed to the OpenStack community
> >
> >Liye Pang(逄立业) wrote:
> >> Hello everyone, after feedback from a large number of operations and
> >> maintenance personnel in InCloud OpenStack, we developed the log
> >> management project "Venus" for the OpenStack projects [...]
> >
> >OpenStack-aware centralized log management sounds very interesting to me...
> >
> >If others are interested in collaborating on producing that component, I personally think it would be a great fit for the "operations tooling" section of the OpenStack Map[1].
>
> Yes, after Inspur did a 1,000-node OpenStack single-cluster large-scale test, I became even more convinced of the benefits Venus can bring to operations and maintenance. With Venus, we can quickly locate and find problems on the OpenStack platform, which is a great convenience for operations and maintenance.
>
> https://mp.weixin.qq.com/s/RSrjjZjVFn086StNLV1Ivg
> This is the article about the 1,000-node test, but it's written in Chinese; don't worry ^^, we will publish an English article in the future.
>
> This is the demo for Venus; hopefully it helps show what can be done with it: https://youtu.be/mE2MoEx3awM
>
> > [1] https://www.openstack.org/software/
>
>Thanks Liye, Brin for the details. I also think this is a valuable project for day-to-day operation on a large-scale cloud, or even a small one, to automate log alerting, etc.
>Just one question: can we configure a particular error log message string/pattern for raising an alarm, and different levels of alarm (critical, high priority, etc.)?
>For example: if there is a known limitation on my cloud (due to RBAC or backend deps) and requests end up in error, I do not want to raise an alarm for those.
>-gmann

At present, the alarm task for the error log is placed in our monitoring system, and the data is obtained from Venus. Only alarms on the number of error logs and regular-expression matching of typical error logs are implemented so far, and the alarm level can be defined by yourself. In the future, we will migrate the alarm notification function to Venus, and at the same time we will comprehensively organize the matching patterns for error logs to form a configuration template. Everyone is welcome to join us.

> >
> > --
> > Thierry Carrez (ttx)
>
> Original ML: http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019748.html
>
> brinzhang
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3786 bytes
Desc: not available
URL: 

From licanwei_cn at 163.com Wed Jan 13 02:31:02 2021
From: licanwei_cn at 163.com (licanwei)
Date: Wed, 13 Jan 2021 10:31:02 +0800 (GMT+08:00)
Subject: [Watcher]stepping down as PTL
Message-ID: <50e905b1.101f.176f99506f3.Coremail.licanwei_cn@163.com>

Hi all,

For personal reasons, I no longer have time to continue the work as Watcher PTL. My colleague, Chen Ke (irc: chenke), will temporarily replace me as PTL. Thank you for your support.

thanks,
licanwei_cn
Email: licanwei_cn at 163.com (signature generated by NetEase Mail Master)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From anost1986 at gmail.com Wed Jan 13 04:17:00 2021
From: anost1986 at gmail.com (Andrii Ostapenko)
Date: Tue, 12 Jan 2021 22:17:00 -0600
Subject: [all][stackalytics] stackalytics.io
Message-ID: 
Hi all!

Since https://stackalytics.com is not operational for quite some time already, I exposed another instance on https://stackalytics.io
It shares the same configuration and is kept updated. You're welcome to use it at least until stackalytics.com is up and running (just please be gentle).

Sincerely,
Andrii Ostapenko

From smooney at redhat.com Wed Jan 13 04:34:33 2021
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 13 Jan 2021 04:34:33 +0000
Subject: [all][stackalytics] stackalytics.io
In-Reply-To: References: Message-ID: 
On Tue, 2021-01-12 at 22:17 -0600, Andrii Ostapenko wrote:
> Hi all!
> 
> Since https://stackalytics.com is not operational for quite some time
> already, I exposed another instance on https://stackalytics.io
> It shares the same configuration and is kept updated. You're welcome
> to use it at least until stackalytics.com is up and running (just
> please be gentle).

Is there any documentation on how to deploy our own instance of it, if we want to or if either instance goes offline in the future?

> 
> Sincerely,
> Andrii Ostapenko
> 

From araragi222 at gmail.com Wed Jan 13 06:05:22 2021
From: araragi222 at gmail.com (呂良 (Liang Lu))
Date: Wed, 13 Jan 2021 15:05:22 +0900
Subject: Ask for patch review
Message-ID: 
Hi, Mr. Ogawa & other members of the tacker team:

This is Liang Lu. May I invite you to review my patch?
https://review.opendev.org/c/openstack/tacker/+/764138
(it is currently re-testing, but there will be no new patch sets)

This is a tiny patch that fixes a calling parameter of kubernetes_driver.instantiate_vnf(); however, it affects the FT (which will keep failing if this is not merged), so I want to push it and get it merged soon.

best regards
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yasufum.o at gmail.com Wed Jan 13 07:04:31 2021
From: yasufum.o at gmail.com (Yasufumi Ogawa)
Date: Wed, 13 Jan 2021 16:04:31 +0900
Subject: [tacker] Ask for patch review
In-Reply-To: References: Message-ID: <922a907d-385f-4229-87e3-1b65f0a13c72@gmail.com>

Liang,

(Added "[tacker]" in the title)

Sure. I've already added myself and reviewed it once. I'd like to check it again once the tests are all green.

Thanks,
Yasufumi

On 2021/01/13 15:05, 呂良 wrote:
> Hi, Mr. Ogawa & other members of the tacker team:
> 
> This is Liang Lu. May I invite you to review my patch?
> https://review.opendev.org/c/openstack/tacker/+/764138
> 
> (it is currently re-testing, but there will be no new patch sets)
> 
> This is a tiny patch that fixes a calling parameter of
> kubernetes_driver.instantiate_vnf(),
> however it affects the FT (which will keep failing if this is not merged),
> so I want to push it and get it merged soon.
> 
> best regards

From christian.rohmann at inovex.de Wed Jan 13 09:37:56 2021
From: christian.rohmann at inovex.de (Christian Rohmann)
Date: Wed, 13 Jan 2021 10:37:56 +0100
Subject: [cinder] Review of tiny patch to add Ceph RBD fast-diff to cinder-backup
Message-ID: 
Hey everyone,

I wrote a tiny patch to add the Ceph RBD feature of fast-diff to backups created by cinder-backup:

 * https://review.opendev.org/c/openstack/cinder/+/766856/

Could someone please take a peek and let me know if this is sufficient to be merged?

Thanks and with kind regards,

Christian

From thierry at openstack.org Wed Jan 13 10:59:47 2021
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 13 Jan 2021 11:59:47 +0100
Subject: [all]Introduction to venus which is the project of log management and has been contributed to the OpenStack community
In-Reply-To: References: Message-ID: 
Laurent Dumont wrote:
> This seems really interesting. Tracing events with request-ids is
> something that is quite useful.
> 
> What is the current state? Can it be deployed by a third party?

I see code up at https://opendev.org/inspur/ but I haven't tried deploying it.

If it gathers momentum, I suspect it will be proposed as a new official OpenStack project, and if the Technical Committee approves it, it will be moved under the openstack/ namespace on opendev.org. It already follows our usual repository structure (venus, python-venusclient, venus-tempest-plugin...)
-- Thierry From rosmaita.fossdev at gmail.com Wed Jan 13 13:27:35 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 13 Jan 2021 08:27:35 -0500 Subject: [cinder] Review of tiny patch to add Ceph RBD fast-diff to cinder-backup In-Reply-To: References: Message-ID: <721b4405-b19f-5433-feff-d595442ce6e4@gmail.com> On 1/13/21 4:37 AM, Christian Rohmann wrote: > Hey everyone, > > I wrote a tiny patch to add the Ceph RDB feature of fast-diff to backups > created by cinder-backup: > >  * https://review.opendev.org/c/openstack/cinder/+/766856/ > > > Could someone please take a peek and let me know of this is sufficient > to be merged? Thanks for raising awareness of your patch. Right now, the cinder team is prioritizing new driver reviews in light of the impending merge deadline at wallaby milestone-2 next week: http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019720.html So there may be a slight delay to your patch being reviewed. If you have some time, you can always help things along by reviewing some small patches by other authors. cheers, brian > > > > Thanks and with kind regards, > > > Christian > > > From rosmaita.fossdev at gmail.com Wed Jan 13 13:41:21 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 13 Jan 2021 08:41:21 -0500 Subject: [barbican][oslo][nova][glance][cinder] cursive library status In-Reply-To: References: <35dfc43f-6613-757b-ed7b-b6530df21289@gmail.com> Message-ID: On 1/11/21 10:57 AM, Moises Guimaraes de Medeiros wrote: > Hi Brian, > > During Oslo's last weekly meeting [1] we decided that Oslo can take > cursive under its umbrella with collaboration of Barbican folks. I just > waited a bit with this confirmation as the Barbican PTL was on PTO and I > wanted to confirm with him. > > What are the next steps from here? Thanks so much for following up! I think you need to do something like these patches from Ghanshyam to move devstack-plugin-nfs from x/ to openstack/ and bring it under QA governance: https://review.opendev.org/c/openstack/project-config/+/711834 https://review.opendev.org/c/openstack/governance/+/711835 LMK if you want me to propose the patches, my intent is to get this issue solved, not to make more work for you! cheers, brian > > [1]: > http://eavesdrop.openstack.org/meetings/oslo/2021/oslo.2021-01-04-16.00.log.html#l-64 > > > Thanks, > Moisés > > On Fri, Dec 18, 2020 at 10:06 PM Douglas Mendizabal > wrote: > > On 12/16/20 12:50 PM, Ben Nemec wrote: > > > > > > On 12/16/20 12:02 PM, Brian Rosmaita wrote: > >> Hello Barbican team, > >> > >> Apologies for not including barbican in the previous thread on this > >> topic: > >> > >> > http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019430.html > > > >> > >> > >> The situation is that cursive is used by Nova, Glance, and > Cinder and > >> we'd like to move it out of the 'x' namespace into openstack > >> governance.   The question is then what team would oversee it.  It > >> seems like a good fit for Oslo, and the Oslo team seems OK with > that, > >> but since barbican-core is currently included in cursive-core, > it make > >> sense to give the Barbican team first dibs. > >> > >>  From the consuming teams' side, I don't think we have a > preference as > >> long as it's clear who we need to bother about approvals if a > bugfix > >> is posted for review. 
> >> > >> Thus my ask is that the Barbican team indicate whether they'd > like to > >> move cursive to the 'openstack' namespace under their > governance, or > >> whether they'd prefer Oslo to oversee the library. > > > > Note that this is not necessarily an either/or thing. Castellan > is under > > Oslo governance but is co-owned by the Oslo and Barbican teams. > We could > > do a similar thing with Cursive. > > > > Hi Brian and Ben, > > Sorry I missed the original thread.  Given that the end of the year is > around the corner, most of the Barbican team is out on PTO and we > haven't had a chance to discuss this in our weekly meeting. > > That said, I doubt anyone would object to moving cursive into the > openstack namespace. > > I personally do not mind the Oslo team taking over maintenace, and I am > also willing to help review patches if the Oslo team would like to > co-own this library just like we currently do for Castellan. > > - Douglas Mendizábal (redrobot) > > > > > -- > > Moisés Guimarães > > Software Engineer > > Red Hat > > > From thierry at openstack.org Wed Jan 13 15:26:38 2021 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 13 Jan 2021 16:26:38 +0100 Subject: [largescale-sig] Next meeting: January 13, 15utc In-Reply-To: References: Message-ID: We held our meeting today, with a focus on rebooting the Large Scale SIG engine for the new year. From a Scaling Journey[1] perspective, amorin will focus on stage 1, genekuo will focus on stage 2, belmoreira will focus on stage 4 and 5, while I'll focus on stewarding the group and more janitorial tasks. [1] https://wiki.openstack.org/wiki/Large_Scale_SIG/ScaleUp Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2021/large_scale_sig.2021-01-13-15.00.html Our next meeting will be Wednesday, January 27 at 15utc in #openstack-meeting-3 on Freenode IRC. Please join us if interested in helping make large scale openstack less scary! -- Thierry Carrez (ttx) From anost1986 at gmail.com Wed Jan 13 16:01:46 2021 From: anost1986 at gmail.com (Andrii Ostapenko) Date: Wed, 13 Jan 2021 10:01:46 -0600 Subject: [all][stackalytics] stackalytics.io In-Reply-To: References: Message-ID: On Tue, Jan 12, 2021 at 10:36 PM Sean Mooney wrote: > > On Tue, 2021-01-12 at 22:17 -0600, Andrii Ostapenko wrote: > > Hi all! > > > > Since https://stackalytics.com is not operational for quite some time > > already, I exposed another instance on https://stackalytics.io > > It shares the same configuration and is kept updated. You're welcome > > to use it at least until stackalytics.com is up and running (just > > please be gentle). > > is there any documentaiton on how to deploy our own instance of it if we want > too or either go offline in the future? I've started with https://wiki.openstack.org/wiki/Stackalytics/HowToRun Though addressed some blocking code issues and did a couple of enhancements. Will push them once have some time to make the code better. > > > > Sincerely, > > Andrii Ostapenko > > > > > From ankelezhang at gmail.com Wed Jan 13 05:54:08 2021 From: ankelezhang at gmail.com (Ankele zhang) Date: Wed, 13 Jan 2021 13:54:08 +0800 Subject: ironic-python-agent erase block device failed Message-ID: Hi, I have an Rocky OpenStack platform. I used ironic-python-agent-builder to build a ipa.initramfs image. I called the clean API to erase my baremetal node disk. But it was failed. 
I traced the ipa source code and found that it uses 'hdparm -I /dev/sdx' to check whether the device is frozen.
[image: image.png]
My device is frozen, so I switched to another SSD, which is not frozen, but that also failed.
[image: image.png]
I don't know whether I need to erase my bare metal nodes' block devices in a production environment. If so, how can I make it succeed?

Ankele
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 12688 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 24678 bytes
Desc: not available
URL: 

From openinfradn at gmail.com Wed Jan 13 18:32:16 2021
From: openinfradn at gmail.com (open infra)
Date: Thu, 14 Jan 2021 00:02:16 +0530
Subject: Enabling KVM on Openstack Inbox
Message-ID: 
Hi

(This mail has been sent to the community mailing list already, and I was suggested to reach out to this mailing list.)

I have enabled KVM after deploying an OpenStack single-node setup. The following entries have been added to /etc/nova/nova.conf, and then I restarted libvirtd.service and openstack-nova-compute.service.

compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type=kvm
#virt_type=qemu

# lsmod | grep kvm
kvm_intel 188740 0
kvm 637289 1 kvm_intel
irqbypass 13503 1 kvm

I have disabled qemu in the nova config and restarted the entire box (including the base OS and the VM where I deployed OpenStack). But still no luck.

$ openstack hypervisor list
+----+------------------------+-----------------+----------------+-------+
| ID | Hypervisor Hostname    | Hypervisor Type | Host IP        | State |
+----+------------------------+-----------------+----------------+-------+
|  1 | openstack.centos.local | QEMU            | 192.168.122.63 | up    |
+----+------------------------+-----------------+----------------+-------+

I can still see the Hypervisor Type as QEMU when I issue the openstack hypervisor list command.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com Wed Jan 13 19:18:44 2021
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 13 Jan 2021 19:18:44 +0000
Subject: Enabling KVM on Openstack Inbox
In-Reply-To: References: Message-ID: <5adeeecde601ab7d774d0bf4e09d50083efbaf68.camel@redhat.com>

On Thu, 2021-01-14 at 00:02 +0530, open infra wrote:
> Hi
> 
> (This mail has been sent to the community mailing list already, and I was
> suggested to reach out to this mailing list.)
> 
> I have enabled KVM after deploying an OpenStack single-node setup.
> The following entries have been added to /etc/nova/nova.conf, and then I restarted
> libvirtd.service and openstack-nova-compute.service.
> 
> compute_driver = libvirt.LibvirtDriver
> [libvirt]
> virt_type=kvm
> #virt_type=qemu
> 
> # lsmod | grep kvm
> kvm_intel 188740 0
> kvm 637289 1 kvm_intel
> irqbypass 13503 1 kvm
> 
> 
> I have disabled qemu in the nova config and restarted the entire box (including
> the base OS and the VM where I deployed OpenStack).
> But still no luck.
> 
> 
> $ openstack hypervisor list
> +----+------------------------+-----------------+----------------+-------+
> | ID | Hypervisor Hostname    | Hypervisor Type | Host IP        | State |
> +----+------------------------+-----------------+----------------+-------+
> |  1 | openstack.centos.local | QEMU            | 192.168.122.63 | up    |
> +----+------------------------+-----------------+----------------+-------+
> 
> I can still see the Hypervisor Type as QEMU when I issue the openstack hypervisor
> list command.

No, that is expected. Enabling KVM does not change the hypervisor type; it is still QEMU, it is just using KVM acceleration instead of the TCG backend. So that won't change, but your VMs should now have their domain type set to kvm.

From radoslaw.piliszek at gmail.com Wed Jan 13 19:36:06 2021
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Wed, 13 Jan 2021 20:36:06 +0100
Subject: [all][dev] Beware how fun the new pip can be
Message-ID: 
Hiya, Folks!

Sharing what I have just learnt about the new pip's solver.
pip install PROJECT no longer guarantees to install the latest version of PROJECT (or, well, giving you the ERROR that it cannot do it because something something :-) ).
In fact, it will install the latest version *matching other constraints* and do it *silently*.
Like it was recently only with Python version (i.e. py3-only would not get installed on py2 - that is cool) but now it moved into any-package territory.

As an example, I can give you [1] where we are experimenting with getting some extracurricular package into our containers, notably fluent-logger.
The only dep of fluent-logger is msgpack but the latest msgpack (as in upper constraints: 1.0.2, or any 1.x for that matter) is not compatible. However, the pin was introduced in fluent-logger in its 0.9.5 release (0.9.6 is the latest). Guess what pip does? Here is what it does:

INFO:kolla.common.utils.openstack-base:Collecting fluent-logger
INFO:kolla.common.utils.openstack-base: Downloading http://mirror-int.dfw.rax.opendev.org:8080/pypifiles/packages/1a/f5/e6c30ec7a81e9c32c652c684004334187db4cc09eccf78ae7b69e62c7b10/fluent_logger-0.9.6-py2.py3-none-any.whl (12 kB)
INFO:kolla.common.utils.openstack-base: Downloading http://mirror-int.dfw.rax.opendev.org:8080/pypifiles/packages/d5/cb/19d838561ec210321aea24c496ec61930d6fdbb2f98d3f06cebab33c1331/fluent_logger-0.9.5-py2.py3-none-any.whl (12 kB)
INFO:kolla.common.utils.openstack-base: Downloading http://mirror-int.dfw.rax.opendev.org:8080/pypifiles/packages/d1/d4/f9b3493f974cdac831bf707c3d9fec93b1a0ebd986eae4db4f101dd72378/fluent_logger-0.9.4-py2.py3-none-any.whl (12 kB)

And that's it. Pip is happy, you got your "latest" version.
In previous pip one would get the latest version AND a warning. Now just pip's view on what the "latest" version is.

I am glad we have upper-constraints which save the day here (forcing the ERROR) but beware of this "in the wild".

[1] https://review.opendev.org/c/openstack/kolla/+/759855

-yoctozepto

From anost1986 at gmail.com Wed Jan 13 19:54:24 2021
From: anost1986 at gmail.com (Andrii Ostapenko)
Date: Wed, 13 Jan 2021 13:54:24 -0600
Subject: [all][dev] Beware how fun the new pip can be
In-Reply-To: References: Message-ID: 
On Wed, Jan 13, 2021 at 1:37 PM Radosław Piliszek wrote:
>
> Hiya, Folks!
>
> Sharing what I have just learnt about the new pip's solver.
> pip install PROJECT no longer guarantees to install the latest version
> of PROJECT (or, well, giving you the ERROR that it cannot do it
> because something something :-) ).
> In fact, it will install the latest version *matching other > constraints* and do it *silently*. > Like it was recently only with Python version (i.e. py3-only would not > get installed on py2 - that is cool) but now it moved into any-package > territory. > > As an example, I can give you [1] where we are experimenting with > getting some extracurricular package into our containers, notably > fluent-logger. > The only dep of fluent-logger is msgpack but the latest msgpack (as in > upper constraints: 1.0.2, or any 1.x for that matter) is not > compatible. However, the pin was introduced in fluent-logger in its > 0.9.5 release (0.9.6 is the latest). Guess what pip does? Here is what > it does: > > INFO:kolla.common.utils.openstack-base:Collecting fluent-logger > INFO:kolla.common.utils.openstack-base: Downloading > http://mirror-int.dfw.rax.opendev.org:8080/pypifiles/packages/1a/f5/e6c30ec7a81e9c32c652c684004334187db4cc09eccf78ae7b69e62c7b10/fluent_logger-0.9.6-py2.py3-none-any.whl > (12 kB) > INFO:kolla.common.utils.openstack-base: Downloading > http://mirror-int.dfw.rax.opendev.org:8080/pypifiles/packages/d5/cb/19d838561ec210321aea24c496ec61930d6fdbb2f98d3f06cebab33c1331/fluent_logger-0.9.5-py2.py3-none-any.whl > (12 kB) > INFO:kolla.common.utils.openstack-base: Downloading > http://mirror-int.dfw.rax.opendev.org:8080/pypifiles/packages/d1/d4/f9b3493f974cdac831bf707c3d9fec93b1a0ebd986eae4db4f101dd72378/fluent_logger-0.9.4-py2.py3-none-any.whl > (12 kB) > > And that's it. Pip is happy, you got your "latest" version. > In previous pip one would get the latest version AND a warning. Now > just pip's view on what the "latest" version is. > > I am glad we have upper-constraints which save the day here (forcing > the ERROR) but beware of this "in the wild". > > [1] https://review.opendev.org/c/openstack/kolla/+/759855 > > -yoctozepto > Really big change, no surprise it's full of bugs. I had a situation with an infinite loop of 'Requirement already satisfied' just yesterday. Can only suggest to file issues https://github.com/pypa/pip/issues and fall back to 20.2, i think virtualenv==20.2.1 is the latest that comes with 20.2 pip From radoslaw.piliszek at gmail.com Wed Jan 13 20:10:27 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 13 Jan 2021 21:10:27 +0100 Subject: [all][dev] Beware how fun the new pip can be In-Reply-To: References: Message-ID: A quick addendum after discussion with Clark (to make it easier to digest for everyone). The actors from PyPI: msgpack - the latest is 1.0.2 and that is what is in the upper-constraints that are being used fluent-logger - the latest is 0.9.6 and this is being installed *unconstrained* fluent-logger sets one dep: msgpack<1.0.0 since fluent-logger 0.9.5 The result: 1) old pip msgpack==1.0.2 fluent-logger==0.9.6 and a WARNING that fluent-logger 0.9.6 wants msgpack<1.0.0 2) new pip msgpack==1.0.2 fluent-logger==0.9.4 and no WARNINGs, no ERRORs, no anything, just happy silent "I got you your package, so what if it is not the latest, I am the smart one here" i.e. controlling *dependencies* controls *dependants* And don't get me wrong, pip did what it advertised - took a list of constraints and found a solution. The outtake is simple: beware! :-) -yoctozepto On Wed, Jan 13, 2021 at 8:36 PM Radosław Piliszek wrote: > > Hiya, Folks! > > Sharing what I have just learnt about the new pip's solver. 
> pip install PROJECT no longer guarantees to install the latest version > of PROJECT (or, well, giving you the ERROR that it cannot do it > because something something :-) ). > In fact, it will install the latest version *matching other > constraints* and do it *silently*. > Like it was recently only with Python version (i.e. py3-only would not > get installed on py2 - that is cool) but now it moved into any-package > territory. > > As an example, I can give you [1] where we are experimenting with > getting some extracurricular package into our containers, notably > fluent-logger. > The only dep of fluent-logger is msgpack but the latest msgpack (as in > upper constraints: 1.0.2, or any 1.x for that matter) is not > compatible. However, the pin was introduced in fluent-logger in its > 0.9.5 release (0.9.6 is the latest). Guess what pip does? Here is what > it does: > > INFO:kolla.common.utils.openstack-base:Collecting fluent-logger > INFO:kolla.common.utils.openstack-base: Downloading > http://mirror-int.dfw.rax.opendev.org:8080/pypifiles/packages/1a/f5/e6c30ec7a81e9c32c652c684004334187db4cc09eccf78ae7b69e62c7b10/fluent_logger-0.9.6-py2.py3-none-any.whl > (12 kB) > INFO:kolla.common.utils.openstack-base: Downloading > http://mirror-int.dfw.rax.opendev.org:8080/pypifiles/packages/d5/cb/19d838561ec210321aea24c496ec61930d6fdbb2f98d3f06cebab33c1331/fluent_logger-0.9.5-py2.py3-none-any.whl > (12 kB) > INFO:kolla.common.utils.openstack-base: Downloading > http://mirror-int.dfw.rax.opendev.org:8080/pypifiles/packages/d1/d4/f9b3493f974cdac831bf707c3d9fec93b1a0ebd986eae4db4f101dd72378/fluent_logger-0.9.4-py2.py3-none-any.whl > (12 kB) > > And that's it. Pip is happy, you got your "latest" version. > In previous pip one would get the latest version AND a warning. Now > just pip's view on what the "latest" version is. > > I am glad we have upper-constraints which save the day here (forcing > the ERROR) but beware of this "in the wild". > > [1] https://review.opendev.org/c/openstack/kolla/+/759855 > > -yoctozepto From gmann at ghanshyammann.com Wed Jan 13 20:16:33 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 13 Jan 2021 14:16:33 -0600 Subject: [tc][all][ptl] Encouraging projects to apply for tag 'assert:supports-api-interoperability' In-Reply-To: <17671275da9.121a655b1251298.6157149575252344776@ghanshyammann.com> References: <17671275da9.121a655b1251298.6157149575252344776@ghanshyammann.com> Message-ID: <176fd648bd6.126a9266a1208370.2628135282566323654@ghanshyammann.com> Bumping this email in case you missed this during holiday time. There are many projects that are eligible for this tag, requesting you to start the application review to governance. -gmann ---- On Thu, 17 Dec 2020 08:42:53 -0600 Ghanshyam Mann wrote ---- > Hello Everyone, > > TC defined a tag for API interoperability (cover both stable and compatible APIs) called > 'assert:supports-api-interoperability' which assert on API won’t break any users when they > upgrade a cloud or start using their code on a new OpenStack cloud. > > Basically, Projects will not change (or remove) an API in a way that will break existing users > of an API. We have updated the tag documentation to clarify its definition and requirements. > > If your projects follow the API interoperability guidelines[1] and some API versioning mechanism > that does not need to be microversion then you should start thinking to apply for this tag. The > complete requirements can be found here[2]. 
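To illustrate the kind of guarantee the tag asks for, here is a minimal, purely illustrative Python sketch (not taken from any real project, and the version number is made up): responses only ever change additively, and new behaviour is gated behind a version the client explicitly requests, so existing callers keep getting exactly what they got before.

    def show_server(server, requested_version):
        # Fields that existing users rely on are never renamed or removed.
        body = {
            'id': server['id'],
            'name': server['name'],
        }
        # New data is only returned to clients that explicitly opt in to a
        # newer API version (the (2, 90) value here is hypothetical).
        if requested_version >= (2, 90):
            body['hostname'] = server.get('hostname')
        return body

The versioning mechanism itself can be anything (microversions, a header, a URL component), as long as an unchanged client never observes a change in behaviour.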
> > Currently, only nova has this tag but I am sure many projects are eligible for this, and TC encourage > them to apply for this. > > [1] https://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html > [2] https://governance.openstack.org/tc/reference/tags/assert_supports-api-interoperability.html > > > -gmann > From fungi at yuggoth.org Wed Jan 13 20:19:14 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 13 Jan 2021 20:19:14 +0000 Subject: [all][dev] Beware how fun the new pip can be In-Reply-To: References: Message-ID: <20210113201914.tqqgxz7s474ve2lh@yuggoth.org> On 2021-01-13 20:36:06 +0100 (+0100), Radosław Piliszek wrote: [...] > As an example, I can give you [1] where we are experimenting with > getting some extracurricular package into our containers, notably > fluent-logger. The only dep of fluent-logger is msgpack but the > latest msgpack (as in upper constraints: 1.0.2, or any 1.x for > that matter) is not compatible. However, the pin was introduced in > fluent-logger in its 0.9.5 release (0.9.6 is the latest). [...] So just to clarify, your concern is that because you've tried to install newer msgpack, pip is selecting an older version of fluent-logger which doesn't declare an incompatibility with that newer version of msgpack. This seems technically correct. I'm willing to bet if you insisted on installing fluent-logger>0.9.5 you would get the behavior you're expecting. The underlying problem is that the package ecosystem has long based dependency versioning choices on side effect behaviors of pip's (lack of coherent) dep resolution. From the user side of things, if you want to install more than one package explicitly, you need to start specifying how new you want those packages to be. However surprising it is, pip seems to be working as intended here. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Wed Jan 13 20:27:59 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 13 Jan 2021 21:27:59 +0100 Subject: [all][dev] Beware how fun the new pip can be In-Reply-To: <20210113201914.tqqgxz7s474ve2lh@yuggoth.org> References: <20210113201914.tqqgxz7s474ve2lh@yuggoth.org> Message-ID: On Wed, Jan 13, 2021 at 9:22 PM Jeremy Stanley wrote: > > On 2021-01-13 20:36:06 +0100 (+0100), Radosław Piliszek wrote: > [...] > > As an example, I can give you [1] where we are experimenting with > > getting some extracurricular package into our containers, notably > > fluent-logger. The only dep of fluent-logger is msgpack but the > > latest msgpack (as in upper constraints: 1.0.2, or any 1.x for > > that matter) is not compatible. However, the pin was introduced in > > fluent-logger in its 0.9.5 release (0.9.6 is the latest). > [...] > > So just to clarify, your concern is that because you've tried to > install newer msgpack, pip is selecting an older version of > fluent-logger which doesn't declare an incompatibility with that > newer version of msgpack. This seems technically correct. I'm > willing to bet if you insisted on installing fluent-logger>0.9.5 you > would get the behavior you're expecting. > > The underlying problem is that the package ecosystem has long based > dependency versioning choices on side effect behaviors of pip's > (lack of coherent) dep resolution. 
From the user side of things, if > you want to install more than one package explicitly, you need to > start specifying how new you want those packages to be. > > However surprising it is, pip seems to be working as intended here. Yes, it does! See my addendum as well. I will recap once more that I am not saying pip is doing anything wrong. Just BEWARE because you are most likely used to a different behaviour, just like me. Trying to use two conflicting constraints will make pip ERROR out and this is great now. I like new pip for this reason. But, as you mention, the ecosystem is not prepared. -yoctozepto From jp.methot at planethoster.info Wed Jan 13 20:54:20 2021 From: jp.methot at planethoster.info (=?utf-8?Q?Jean-Philippe_M=C3=A9thot?=) Date: Wed, 13 Jan 2021 15:54:20 -0500 Subject: [nova] Nova evacuate issue In-Reply-To: <20210107142638.3u2duxgdsrj5fe5y@lyarwood-laptop.usersys.redhat.com> References: <20210107142638.3u2duxgdsrj5fe5y@lyarwood-laptop.usersys.redhat.com> Message-ID: I was not able to find anything in the event list, possibly because the instance was recreated so its ID doesn’t exist anymore? Anyway, I did just create a bug report with as much info as I could, which is not much more than what I already posted in this mail chain. Hopefully we can get somewhere with this : https://bugs.launchpad.net/nova/+bug/1911474 Jean-Philippe Méthot Senior Openstack system administrator Administrateur système Openstack sénior PlanetHoster inc. 4414-4416 Louis B Mayer Laval, QC, H7P 0G1, Canada -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Jan 13 21:39:28 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 13 Jan 2021 15:39:28 -0600 Subject: [Watcher]stepping down as PTL In-Reply-To: <50e905b1.101f.176f99506f3.Coremail.licanwei_cn@163.com> References: <50e905b1.101f.176f99506f3.Coremail.licanwei_cn@163.com> Message-ID: <176fdb0764f.bf3138361210922.8265496414053290356@ghanshyammann.com> Thanks licanwei for your contribution to the Watcher project. Please propose this change to openstack/governance repo. Example: https://review.opendev.org/c/openstack/governance/+/770075 -gmann ---- On Tue, 12 Jan 2021 20:31:02 -0600 licanwei wrote ---- > Hi all, > For personal reason, I have no time to continue the work for Watcher PTL. My colleage, Chen Ke(irc:chenke) will temporarily replace me as PTL, thank you for your support. > > thanks, licanwei_cn > 邮箱:licanwei_cn at 163.com > 签名由 网易邮箱大师 定制 > From allison at openstack.org Wed Jan 13 23:03:02 2021 From: allison at openstack.org (Allison Price) Date: Wed, 13 Jan 2021 17:03:02 -0600 Subject: Only a few hours left: 2021 Board elections and Bylaws amendments Message-ID: <5126098E-790F-4129-8591-5C603FD176B2@openstack.org> Hi everyone, The 2021 Individual Director elections are currently open. All members who are eligible to vote should have received an email ballot earlier this week. You can review the candidates here: https://www.openstack.org/election/2021-individual-director-election/CandidateList . If you are not currently a member of the Open Infrastructure Foundation, you can join here to vote in next year’s elections: https://openinfra.dev/join This year, the election includes a set of amendments to the Foundation’s Bylaws. The amendments were approved unanimously by the Foundation Board in 2020. As a member class, the Individual Members must also approve these amendments. 
You can find more information in this mailing list post by Jonathan Bryce: http://lists.openstack.org/pipermail/foundation/2021-January/002939.html The deadline to vote is this Friday, January 15 at 1800 UTC / 12pm CST. Please let me know if you have any questions. Allison Allison Price Open Infrastructure Foundation allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From v at prokofev.me Thu Jan 14 00:39:29 2021 From: v at prokofev.me (Vladimir Prokofev) Date: Thu, 14 Jan 2021 03:39:29 +0300 Subject: [cinder] Review of tiny patch to add Ceph RBD fast-diff to cinder-backup In-Reply-To: <721b4405-b19f-5433-feff-d595442ce6e4@gmail.com> References: <721b4405-b19f-5433-feff-d595442ce6e4@gmail.com> Message-ID: I apologise if this is not the place to ask, but this question was actually bugging me for a long time now. Is there any particular reason fast-diff was disabled for backup images in the first place? Like, any actual technical limitation, i.e. "don't enable it or it will break your backups"? Or is it only because earlier versions of CEPH did not support fast-diff, so it was disabled for compatibility reasons? ср, 13 янв. 2021 г. в 16:40, Brian Rosmaita : > On 1/13/21 4:37 AM, Christian Rohmann wrote: > > Hey everyone, > > > > I wrote a tiny patch to add the Ceph RDB feature of fast-diff to backups > > created by cinder-backup: > > > > * https://review.opendev.org/c/openstack/cinder/+/766856/ > > > > > > Could someone please take a peek and let me know of this is sufficient > > to be merged? > > Thanks for raising awareness of your patch. Right now, the cinder team > is prioritizing new driver reviews in light of the impending merge > deadline at wallaby milestone-2 next week: > > > http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019720.html > > So there may be a slight delay to your patch being reviewed. If you > have some time, you can always help things along by reviewing some small > patches by other authors. > > cheers, > brian > > > > > > > > > Thanks and with kind regards, > > > > > > Christian > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eandersson at blizzard.com Thu Jan 14 04:47:31 2021 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Thu, 14 Jan 2021 04:47:31 +0000 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> , Message-ID: Thanks Ionut. If you are able could you test this patch instead. I think I better understand what the issue was now. We were not only creating a new RPC Client for each HTTP request, but also a brand-new transport for each request. https://review.opendev.org/c/openstack/magnum/+/770707 ________________________________ From: Ionut Biru Sent: Tuesday, January 12, 2021 3:17 AM To: Erik Olof Gunnar Andersson Cc: Spyros Trigazis ; feilong ; openstack-discuss Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi Erik, Here it is: https://paste.xinu.at/LgH8dT/ On Mon, Jan 11, 2021 at 10:45 PM Erik Olof Gunnar Andersson > wrote: Thanks I added it to the commit. Could you share your uwsgi config as well. 
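For readers following the fix Erik describes above (reusing one transport instead of building a brand-new one per request), a minimal sketch of that pattern with oslo.messaging looks roughly like this; the helper name, the topic, and the module-level cache are illustrative only, not the actual magnum change:

    import oslo_messaging
    from oslo_config import cfg

    _TRANSPORT = None

    def get_rpc_client(topic='magnum-conductor'):
        """Return an RPC client that shares one AMQP transport per process."""
        global _TRANSPORT
        if _TRANSPORT is None:
            # Created once and then reused; building a new transport for every
            # HTTP request or health check opens new RabbitMQ connections each time.
            _TRANSPORT = oslo_messaging.get_rpc_transport(cfg.CONF)
        target = oslo_messaging.Target(topic=topic)
        return oslo_messaging.RPCClient(_TRANSPORT, target)

With a per-process transport, each uwsgi worker keeps a single connection pool instead of growing one per request, which is consistent with the drop in connection count reported below.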
Best Regards, Erik Olof Gunnar Andersson Technical Lead, Senior Cloud Engineer From: Ionut Biru > Sent: Tuesday, January 5, 2021 1:51 AM To: Erik Olof Gunnar Andersson > Cc: Spyros Trigazis >; feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi, Here is my config. maybe something is fishy. I did have around 300 messages in the queue in notification.info and notification.err and I purged them. https://paste.xinu.at/woMt/ On Tue, Jan 5, 2021 at 11:23 AM Erik Olof Gunnar Andersson > wrote: Yea - tested locally as well and wasn't able to reproduce it either. I changed the health service job to run every second and maxed out at about 42 connections to RabbitMQ with two conductor workers. /etc/magnum/magnun.conf [conductor] workers = 2 ________________________________ From: Spyros Trigazis > Sent: Tuesday, January 5, 2021 12:59 AM To: Ionut Biru > Cc: Erik Olof Gunnar Andersson >; feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer On Tue, Jan 5, 2021 at 9:36 AM Ionut Biru > wrote: Hi, I tried with process=1 and it reached 1016 connections to rabbitmq. lsof https://paste.xinu.at/jGg/ i think it goes into error when it reaches 1024 file descriptors. I'm out of ideas of how to resolve this. I only have 3 clusters available and it's kinda weird and It doesn't scale. No issues here with 100s of clusters. Not sure what doesn't scale. * Maybe your rabbit is flooded with notifications that are not consumed? * You can use way more than 1024 file descriptors, maybe 2^10? Spyros On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson > wrote: Sure looks like RabbitMQ. How many workers do have you configured? Could you try to changing the uwsgi configuration to workers=1 (or processes=1) and then see if it goes beyond 30 connections to amqp. From: Ionut Biru > Sent: Monday, January 4, 2021 4:07 AM To: Erik Olof Gunnar Andersson > Cc: feilong >; openstack-discuss > Subject: Re: [magnum][api] Error system library fopen too many open files with magnum-auto-healer Hi Erik, Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ I have kubernetes 12.0.1 installed in env. On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson > wrote: Maybe something similar to this? https://github.com/kubernetes-client/python/issues/1158 What does lsof say? -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Jan 14 10:43:22 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 14 Jan 2021 11:43:22 +0100 Subject: [neutron] Drivers meeting agenda for 15.01.2021 Message-ID: <20210114104322.fv77hnspeckqbsnm@p1.localdomain> Hi, For tomorrow's drivers meeting we have one new RFE to discuss: * https://bugs.launchpad.net/neutron/+bug/1911126 Spec for that is proposed already: https://review.opendev.org/c/openstack/neutron-specs/+/770540 See You on the meeting tomorrow :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From ionut at fleio.com Thu Jan 14 11:08:45 2021 From: ionut at fleio.com (Ionut Biru) Date: Thu, 14 Jan 2021 13:08:45 +0200 Subject: [magnum][api] Error system library fopen too many open files with magnum-auto-healer In-Reply-To: References: <185a9715-4667-9610-0048-5434e6e2cd4e@catalyst.net.nz> Message-ID: Hi Erik, Seems that this one works better than the previous one. I have 19 connections with this patch vs 38. I'll keep it for the following days. On Thu, Jan 14, 2021 at 6:47 AM Erik Olof Gunnar Andersson < eandersson at blizzard.com> wrote: > Thanks Ionut. > > If you are able could you test this patch instead. I think I better > understand what the issue was now. We were not only creating a new RPC > Client for each HTTP request, but also a brand-new transport for each > request. > https://review.opendev.org/c/openstack/magnum/+/770707 > > ------------------------------ > *From:* Ionut Biru > *Sent:* Tuesday, January 12, 2021 3:17 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* Spyros Trigazis ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > Hi Erik, > > Here it is: https://paste.xinu.at/LgH8dT/ > > > On Mon, Jan 11, 2021 at 10:45 PM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Thanks I added it to the commit. > > > > Could you share your uwsgi config as well. > > > > Best Regards, Erik Olof Gunnar Andersson > > Technical Lead, Senior Cloud Engineer > > > > *From:* Ionut Biru > *Sent:* Tuesday, January 5, 2021 1:51 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* Spyros Trigazis ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > Hi, > > > > Here is my config. maybe something is fishy. > > > > I did have around 300 messages in the queue in notification.info > > and notification.err and I purged them. > > > > https://paste.xinu.at/woMt/ > > > > > > > > > On Tue, Jan 5, 2021 at 11:23 AM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Yea - tested locally as well and wasn't able to reproduce it either. I > changed the health service job to run every second and maxed out at about > 42 connections to RabbitMQ with two conductor workers. > > /etc/magnum/magnun.conf > > [conductor] > > workers = 2 > > > ------------------------------ > > *From:* Spyros Trigazis > *Sent:* Tuesday, January 5, 2021 12:59 AM > *To:* Ionut Biru > *Cc:* Erik Olof Gunnar Andersson ; feilong < > feilong at catalyst.net.nz>; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > > > > > On Tue, Jan 5, 2021 at 9:36 AM Ionut Biru wrote: > > Hi, > > > I tried with process=1 and it reached 1016 connections to rabbitmq. > > lsof > > https://paste.xinu.at/jGg/ > > > > > i think it goes into error when it reaches 1024 file descriptors. > > > > I'm out of ideas of how to resolve this. I only have 3 clusters available > and it's kinda weird and It doesn't scale. > > > > No issues here with 100s of clusters. Not sure what doesn't scale. > > > > * Maybe your rabbit is flooded with notifications that are not consumed? 
> > * You can use way more than 1024 file descriptors, maybe 2^10? > > > > Spyros > > > > On Mon, Jan 4, 2021 at 9:53 PM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Sure looks like RabbitMQ. How many workers do have you configured? > > > > Could you try to changing the uwsgi configuration to workers=1 (or > processes=1) and then see if it goes beyond 30 connections to amqp. > > > > *From:* Ionut Biru > *Sent:* Monday, January 4, 2021 4:07 AM > *To:* Erik Olof Gunnar Andersson > *Cc:* feilong ; openstack-discuss < > openstack-discuss at lists.openstack.org> > *Subject:* Re: [magnum][api] Error system library fopen too many open > files with magnum-auto-healer > > > > Hi Erik, > > > > Here is lsof of one uwsgi api. https://paste.xinu.at/5YUWf/ > > > > > I have kubernetes 12.0.1 installed in env. > > > > > > On Sun, Jan 3, 2021 at 3:06 AM Erik Olof Gunnar Andersson < > eandersson at blizzard.com> wrote: > > Maybe something similar to this? > https://github.com/kubernetes-client/python/issues/1158 > > > What does lsof say? > > > > > > > > > -- > > Ionut Biru - https://fleio.com > > > > > > -- > > Ionut Biru - https://fleio.com > > > > > -- > Ionut Biru - https://fleio.com > > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Jan 14 11:27:38 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 14 Jan 2021 12:27:38 +0100 Subject: [release] Status: ORANGE - pip resolver issue with publish-openstack-releasenotes-python3 In-Reply-To: References: Message-ID: Hello, @release managers: Just a heads-up to highlight projects that need to be approved carefully. I think we could improve our filtering by only considering the failing projects in the list of patched and unmerged projects: https://review.opendev.org/q/topic:%2522fix-relmgt-pip-doc%2522+(status:open) Most of these projects CI met a pip resolver issue, so if we release these projects without merging the associated patches we will fail at least with: - publish-openstack-releasenotes-python3 - publish-openstack-sphinx-doc Take a look to the following example: - https://review.opendev.org/c/openstack/os-service-types/+/769766 - https://zuul.opendev.org/t/openstack/build/53eec1ae61734bf39fb24a106920bbcf @ptl: Please ensure that your projects aren't in the list of failing projects and if so, please try to address the resolver issue in your requirements. Thanks for your reading PS: Notice that you can consider that this email is also a friendly reminder about the fact that we are still at the Orange level status :) Le jeu. 7 janv. 2021 à 17:53, Herve Beraud a écrit : > Hello everyone, > > @release managers: all impacted projects now have fixes submitted, so > before validating a patch you only have to ensure that the released > projects aren't in the list of opened patches: > > > https://review.opendev.org/q/topic:%2522fix-relmgt-pip-doc%2522+(status:open) > > I move our status to ORANGE as the situation seems improving for now and > also because we can easily monitor the state. > > @all: Notice that some projects have been ignored here because they aren't > released, here is the list: > > i18n > ideas > openstack-manuals > openstack-zuul-roles > os-apply-config > os-collect-config > os-refresh-config > ossa > pyeclib > security-analysis > security-doc > tempest-lib > tempest-stress > training-guides > workload-ref-archs > > However it could be worth it to uniformize them, but we leave it to the > teams to update them. 
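For teams fixing their repositories, the change being tracked in this thread is essentially a dedicated doc/requirements.txt that lists only what the docs and release notes builds need. A minimal illustrative example follows; the exact version bounds and the openstackdocstheme entry are assumptions, not a prescription:

    # doc/requirements.txt
    sphinx>=2.0.0,!=2.1.0  # BSD
    openstackdocstheme>=2.2.1  # Apache-2.0
    reno>=3.1.0  # Apache-2.0

Keeping these out of test-requirements.txt gives the publish jobs a small, self-consistent set of packages to resolve, which avoids tripping the new pip resolver on unrelated conflicts.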
> > Also notice that we proposed to add the capabilities to zuul to retrieve > requirements from a dedicated place: > > https://review.opendev.org/c/zuul/zuul-jobs/+/769292 > > It will help projects that haven't documentation but that produce release > notes to split their requirements more properly. > > If you've questions do not hesitate to ping us on #openstack-release > > Thanks for your reading > > Le mer. 6 janv. 2021 à 12:47, Herve Beraud a écrit : > >> @release mangaers: For now I think we can restart validating projects >> that aren't present in the previous list (c.f >> http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019612.html >> ). >> Normally they aren't impacted by this problem. >> >> I'll move to the "Orange" state when all the projects of list will be >> patched or at least when a related patch will be present in the list (c.f >> https://review.opendev.org/q/topic:%2522fix-relmgt-pip-doc%2522+(status:open+OR+status:merged)). >> For now my monitoring indicates that ~50 projects still need related >> changes. >> >> So, for now, please, ensure that the repos aren't listed here before >> validate a patch >> http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019612.html >> >> Thanks to everyone who helped here! Much appreciated! >> >> Le mar. 5 janv. 2021 à 12:05, Martin Chacon Piza >> a écrit : >> >>> Hi Herve, >>> >>> I have added this topic to the Monasca irc meeting today. >>> >>> Thank you, >>> Martin (chaconpiza) >>> >>> >>> >>> El lun, 4 de ene. de 2021 a la(s) 18:30, Herve Beraud ( >>> hberaud at redhat.com) escribió: >>> >>>> Thanks all! >>>> >>>> Here we can track our advancement: >>>> >>>> https://review.opendev.org/q/topic:%22fix-relmgt-pip-doc%22+(status:open%20OR%20status:merged) >>>> >>>> Le lun. 4 janv. 2021 à 18:02, Radosław Piliszek < >>>> radoslaw.piliszek at gmail.com> a écrit : >>>> >>>>> On Mon, Jan 4, 2021 at 4:34 PM Herve Beraud >>>>> wrote: >>>>> > >>>>> > Here is the filtered list of projects that meet the conditions >>>>> leading to the bug, and who should be fixed to completely solve our issue: >>>>> > >>>>> > ... >>>>> > etcd3gw >>>>> > ... >>>>> > python-masakariclient >>>>> > ... >>>>> > >>>>> > Notice that some of these projects aren't deliverables but if >>>>> possible it could be worth fixing them too. >>>>> > >>>>> > These projects have an incompatibility between entries in their >>>>> test-requirements.txt, and they're missing a doc/requirements.txt file. >>>>> > >>>>> > The more straightforward path to unlock our job >>>>> "publish-openstack-releasenotes-python3" is to create a >>>>> doc/requirements.txt file that only contains the needed dependencies to >>>>> reduce the possibility of pip resolver issues. I personally think that we >>>>> could use the latest allowed version of requirements (sphinx, reno, etc...). >>>>> > >>>>> > I propose to track the related advancement by using the >>>>> "fix-relmgt-pip-doc" gerrit topic, when all the projects will be fixed we >>>>> would be able to update our status. >>>>> > >>>>> > Also it could be worth fixing test-requirements.txt >>>>> incompatibilities but this task is more on the projects teams sides and >>>>> this task could be done with a follow up patch. >>>>> > >>>>> > Thoughts? >>>>> >>>>> Thanks, Hervé! >>>>> >>>>> Done for python-masakariclient in [1]. >>>>> >>>>> etcd3gw needs more love in general but I will have this split in mind. 
>>>>> >>>>> [1] >>>>> https://review.opendev.org/c/openstack/python-masakariclient/+/769163 >>>>> >>>>> -yoctozepto >>>>> >>>>> >>>> >>>> -- >>>> Hervé Beraud >>>> Senior Software Engineer at Red Hat >>>> irc: hberaud >>>> https://github.com/4383/ >>>> https://twitter.com/4383hberaud >>>> -----BEGIN PGP SIGNATURE----- >>>> >>>> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >>>> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >>>> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >>>> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >>>> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >>>> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >>>> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >>>> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >>>> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >>>> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >>>> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >>>> v6rDpkeNksZ9fFSyoY2o >>>> =ECSj >>>> -----END PGP SIGNATURE----- >>>> >>>> >>> >>> -- >>> *Martín Chacón Pizá* >>> *chacon.piza at gmail.com * >>> >> >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ 
RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Thu Jan 14 11:44:07 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 14 Jan 2021 17:14:07 +0530 Subject: Enabling KVM on Openstack Inbox In-Reply-To: <5adeeecde601ab7d774d0bf4e09d50083efbaf68.camel@redhat.com> References: <5adeeecde601ab7d774d0bf4e09d50083efbaf68.camel@redhat.com> Message-ID: Thank you Sean. On Thu, Jan 14, 2021 at 12:54 AM Sean Mooney wrote: > On Thu, 2021-01-14 at 00:02 +0530, open infra wrote: > > Hi > > > > (This mail has been sent to the community mailing list already, and I was > > suggested to reach out to this mailing list.) > > > > I have enabled KVM after deploying openstack single node setup. > > Following entries has been added to /etc/nova/nova.conf and then restart > > libvirtd.service and openstack-nova-compute.servic. > > > > compute_driver = libvirt.LibvirtDriver [libvirt] virt_type=kvm > > #virt_type=qemu > > > > # lsmod | grep kvm > > kvm_intel 188740 0 > > kvm 637289 1 kvm_intel > > irqbypass 13503 1 kvm > > > > > > I have disabled qemu in nova config and restarted the entire box > (including > > the base OS and the VM where I deployed Openstack. > > But still no luck. > > > > > > $ openstack hypervisor list > > > +----+------------------------+-----------------+----------------+-------+ > > > ID | Hypervisor Hostname | Hypervisor Type | Host IP | State > | > > > +----+------------------------+-----------------+----------------+-------+ > > > 1 | openstack.centos.local | QEMU | 192.168.122.63 | up > | > > > +----+------------------------+-----------------+----------------+-------+ > > > > I still can see Hypervisor Type as QEMU, when I issue openstack > hypervisor > > list command. > no that is expected > enableing kvm does not change the hypervior type > its still qemu its just using the kvm accleartion instead of the tsc > backend. > so that wont change but your vms should not change tehre domain type and > be using kvm. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Thu Jan 14 11:51:34 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 14 Jan 2021 11:51:34 +0000 Subject: [ops][nova][designate] Does anyone rely on fully-qualified instance names? In-Reply-To: References: Message-ID: On Mon, 2020-11-30 at 11:51 +0000, Stephen Finucane wrote: > When attaching a port to an instance, nova will check for DNS support in neutron > and set a 'dns_name' attribute if found. To populate this attribute, nova uses a > sanitised version of the instance name, stored in the instance.hostname > attribute. This sanitisation simply strips out any unicode characters and > replaces underscores and spaces with dashes, before truncating to 63 characters. 
> It does not currently replace periods and this is the cause of bug 1581977 [1], > where an instance name such as 'ubuntu20.04' will fail to schedule since neutron > identifies '04' as an invalid TLD. > > The question now is what to do to resolve this. There are two obvious paths > available to us. The first is to simply catch these invalid hostnames and > replace them with an arbitrary hostname of format 'Server-{serverUUID}'. This is > what we currently do for purely unicode instance names and is what I've proposed > at [2]. The other option is to strip all periods, or rather replace them with > hyphens, when sanitizing the instance name. This is more predictable but breaks > the ability to use the instance name as a FQDN. Such usage is something I'm told > we've never supported, but I'm concerned that there are users out there who are > relying on this all the same and I'd like to get a feel for whether this is the > case first. > > So, the question: does anyone currently rely on this inadvertent "feature"? A quick update. I've reworked the change [1] such that it will always replace periods with hyphens. From the sounds of things, there are people who name their instance using FQDNs for management purposes but there does not appear to be anyone using the name published via the metadata service for DNS integration purposes. This makes replacing the periods the least complex solution to the immediate issue. A future change can look at exposing a way to configure this information via the API when creating a new instance. We might also want to change from stripping of unicode to replacement using punycode. If anyone missed this discussion the first time around and has concerns, please raise them here or on the review. Cheers, Stephen [1] https://review.opendev.org/c/openstack/nova/+/764482 > Cheers, > Stephen > > [1] https://launchpad.net/bugs/1581977 > [2] https://review.opendev.org/c/openstack/nova/+/764482 > From muhammad.ahsan991 at gmail.com Thu Jan 14 13:23:29 2021 From: muhammad.ahsan991 at gmail.com (ahsan ashraf) Date: Thu, 14 Jan 2021 18:23:29 +0500 Subject: Getting CPU Utilization null value Message-ID: Hi Team, I have been using openstack apis and have following issue: - API used: /servers/{server_id}/diagnostics - API version: v2.1 Problem: Couldn't get utilization value of my server it is showing null even i have applied stress on the cpu API Result: { "state": "running", "driver": "libvirt", "hypervisor": "kvm", "hypervisor_os": "linux", "uptime": 10020419, "config_drive": false, "num_cpus": 2, "num_nics": 1, "num_disks": 2, "disk_details": [ { "read_bytes": 983884800, "read_requests": 26229, "write_bytes": 7373907456, "write_requests": 574537, "errors_count": -1 }, { "read_bytes": 3215872, "read_requests": 147, "write_bytes": 0, "write_requests": 0, "errors_count": -1 } ], "cpu_details": [ { "id": 0, "time": 12424380000000, "utilisation": null }, { "id": 1, "time": 12775460000000, "utilisation": null } ], "nic_details": [ { "mac_address": "fa:16:3e:0e:c7:f2", "rx_octets": 943004980, "rx_errors": 0, "rx_drop": 0, "rx_packets": 4464445, "rx_rate": null, "tx_octets": 785254710, "tx_errors": 0, "tx_drop": 0, "tx_packets": 4696786, "tx_rate": null } ], "memory_details": { "maximum": 2, "used": 2 } } Regards, Muhammad Ahsan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stephenfin at redhat.com Thu Jan 14 13:49:57 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 14 Jan 2021 13:49:57 +0000 Subject: Getting CPU Utilization null value In-Reply-To: References: Message-ID: <7dfeb6b8ad3419ff4e9168adf50015b4ccc26c47.camel@redhat.com> On Thu, 2021-01-14 at 18:23 +0500, ahsan ashraf wrote: > Hi Team, > I have been using openstack apis and have following issue: > * API used: /servers/{server_id}/diagnostics > * API version: v2.1 > Problem: Couldn't get utilization value of my server it is showing null even i > have applied stress on the cpu  This feature is specific to certain virt drivers. Only the XenAPI virt driver supported reporting this metric and this driver was removed from nova in the Victoria release, meaning there are no longer any in-tree drivers with support for this. The libvirt driver, which I suspect you're using here, reports ID and time but not utilization. Hope this helps, Stephen > API Result: > { "state": "running", "driver": "libvirt", "hypervisor": "kvm", > "hypervisor_os": "linux", "uptime": 10020419, "config_drive": false, > "num_cpus": 2, "num_nics": 1, "num_disks": 2, "disk_details": [ { > "read_bytes": 983884800, "read_requests": 26229, "write_bytes": 7373907456, > "write_requests": 574537, "errors_count": -1 }, { "read_bytes": 3215872, > "read_requests": 147, "write_bytes": 0, "write_requests": 0, "errors_count": - > 1 } ], "cpu_details": [ { "id": 0, "time": 12424380000000, "utilisation": null > }, { "id": 1, "time": 12775460000000, "utilisation": null } ], "nic_details": > [ { "mac_address": "fa:16:3e:0e:c7:f2", "rx_octets": 943004980, "rx_errors": > 0, "rx_drop": 0, "rx_packets": 4464445, "rx_rate": null, "tx_octets": > 785254710, "tx_errors": 0, "tx_drop": 0, "tx_packets": 4696786, "tx_rate": > null } ], "memory_details": { "maximum": 2, "used": 2 } } > > Regards, > Muhammad Ahsan -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoralej at redhat.com Thu Jan 14 15:43:35 2021 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Thu, 14 Jan 2021 16:43:35 +0100 Subject: [RDO][CentOS] RDO plans to move to CentOS Stream Message-ID: Hi, As you know, CentOS announced recently that they will be focusing on CentOS Stream [1] and that CentOS Linux 8 will be EOL at the end of 2021 [2]. In order to align with this change, the RDO project is adapting the roadmap for the current and upcoming releases: - RDO Wallaby (ETA is end of April 2021 [3]) will be built, tested and released only for CentOS 8 Stream. - RDO CloudSIG repos for Victoria and Ussuri will be updated and tested for both CentOS Stream and CentOS Linux 8 until end of 2021 and then continue on CentOS Stream. - In the upcoming weeks, we will create new RDO CloudSIG repos for Victoria and Ussuri for CentOS Stream 8. - RDO Trunk repositories (aka DLRN repos) will be built and tested using CentOS 8 Stream for all releases currently using CentOS Linux 8 (starting with Train). If you are interested in the details about these plans and how we plan to implement it, we've created a blog post [4]. Don't hesitate to ask us in the ML or in the #rdo channel on freenode if you have questions or suggestions about this. 
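For reference, consuming the CloudSIG repos normally goes through the release RPM; a rough sketch of the usual flow on CentOS Linux 8 today (package name shown for Victoria, and whether the CentOS Stream repos will use the exact same release RPM is an assumption here, not something stated above):

  sudo dnf install centos-release-openstack-victoria
  sudo dnf install python3-openstackclient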
We are also organizing a video AMA about RDO and CentOS Stream on Thursday Jan 21 1600 UTC in the RDO Meet room https://meet.google.com/uzo-tfkt-top On behalf of RDO, Alfredo [1] https://centos.org/distro-faq/ [2] https://blog.centos.org/2020/12/future-is-centos-stream/ [3] https://releases.openstack.org/ [4] https://blogs.rdoproject.org/2021/01/rdo-plans-to-move-to-centos-stream/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Jan 14 15:45:35 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 14 Jan 2021 15:45:35 +0000 Subject: Getting CPU Utilization null value In-Reply-To: <7dfeb6b8ad3419ff4e9168adf50015b4ccc26c47.camel@redhat.com> References: <7dfeb6b8ad3419ff4e9168adf50015b4ccc26c47.camel@redhat.com> Message-ID: On Thu, 2021-01-14 at 13:49 +0000, Stephen Finucane wrote: > On Thu, 2021-01-14 at 18:23 +0500, ahsan ashraf wrote: > > Hi Team, > > I have been using openstack apis and have following issue: > > * API used: /servers/{server_id}/diagnostics > > * API version: v2.1 > > Problem: Couldn't get utilization value of my server it is showing null even i > > have applied stress on the cpu  > This feature is specific to certain virt drivers. Only the XenAPI virt driver > supported reporting this metric and this driver was removed from nova in the > Victoria release, meaning there are no longer any in-tree drivers with support > for this. The libvirt driver, which I suspect you're using here, reports ID and > time but not utilization. so technically we have the ablity to plug in metric monitors for this. we have a cpu monitor avaihble https://github.com/openstack/nova/blob/master/nova/compute/monitors/cpu/virt_driver.py but that is based on the host cpu utilisation which is why we dont hook that up to this api. there was also a numa mem bandwith one proposed at one point https://review.opendev.org/c/openstack/nova/+/270344 they can be use with the metrics filter and since this is a stevadore entroy point you can write your own. since that data is nto hooked up to the diagnostics endpoint we dont have that info in the api responce. i belive we can get a per instace view from libvirt too so the libvirt driver could provide some of this info but it was never implemeted. there is a performace overhad to collecting that info however so we support disabling the PMU in the guest to reduce that. that normally only important for realtime instances. when its disabled there is no way for libvirt to get this info form qemu as far as i know. in anycase i agree with stephn that htis is not expected to work for libvirt currently. 
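For completeness, the host-level CPU monitor mentioned above is enabled through nova.conf rather than through the diagnostics API, and an out-of-tree monitor plugs in via a stevedore entry point. A minimal sketch (the plugin name and module below are invented for illustration):

  # nova.conf on the compute node
  [DEFAULT]
  compute_monitors = cpu.virt_driver

  # setup.cfg of a hypothetical out-of-tree plugin
  [entry_points]
  nova.compute.monitors.cpu =
      my_cpu_monitor = my_plugin.monitors:Monitor

The data these monitors publish feeds the scheduler's metrics filter/weigher, not the server diagnostics endpoint, which is why that API still returns null for utilisation with the libvirt driver.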
> > Hope this helps, > Stephen > > > API Result: > > { "state": "running", "driver": "libvirt", "hypervisor": "kvm", > > "hypervisor_os": "linux", "uptime": 10020419, "config_drive": false, > > "num_cpus": 2, "num_nics": 1, "num_disks": 2, "disk_details": [ { > > "read_bytes": 983884800, "read_requests": 26229, "write_bytes": 7373907456, > > "write_requests": 574537, "errors_count": -1 }, { "read_bytes": 3215872, > > "read_requests": 147, "write_bytes": 0, "write_requests": 0, "errors_count": - > > 1 } ], "cpu_details": [ { "id": 0, "time": 12424380000000, "utilisation": null > > }, { "id": 1, "time": 12775460000000, "utilisation": null } ], "nic_details": > > [ { "mac_address": "fa:16:3e:0e:c7:f2", "rx_octets": 943004980, "rx_errors": > > 0, "rx_drop": 0, "rx_packets": 4464445, "rx_rate": null, "tx_octets": > > 785254710, "tx_errors": 0, "tx_drop": 0, "tx_packets": 4696786, "tx_rate": > > null } ], "memory_details": { "maximum": 2, "used": 2 } } > > > > Regards, > > Muhammad Ahsan > From juliaashleykreger at gmail.com Thu Jan 14 16:08:25 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 14 Jan 2021 08:08:25 -0800 Subject: [ironic] virtual midcycle Message-ID: Greetings awesome humans, We in the lands of baremetal and irony (err... Ironic, but you get the idea) have been discussing having a virtual midcycle in the next few weeks to help sort through some items and enable us to prioritize the remainder of the development cycle. I've created an etherpad[0], and a doodle poll[1]. Please keep in mind we have active contributors across the globe at this time, so if anyone feels like we need more time windows, please let me know. I'm hoping to schedule the midcycle by early next week. Additional topics are welcome on the etherpad. We will use that to identify how much time we should try and schedule. Thanks everyone! [0] https://etherpad.opendev.org/p/ironic-wallaby-midcycle [1] https://doodle.com/poll/y9afrz6hhq7s23km?utm_source=poll&utm_medium=link From rodrigo.barbieri2010 at gmail.com Thu Jan 14 16:50:03 2021 From: rodrigo.barbieri2010 at gmail.com (Rodrigo Barbieri) Date: Thu, 14 Jan 2021 13:50:03 -0300 Subject: [nova] workarounds and operator experience around bug 1522307/1908133 In-Reply-To: <98fb643dc6ec0d0201c8e4aea114cd07cf46fef7.camel@redhat.com> References: <7b4f6f10e682698dfaed22a86397f5b174fed7e8.camel@redhat.com> <20210106134822.Horde.dGo06NkPF_G8_AogeK9L2i7@webmail.nde.ag> <98fb643dc6ec0d0201c8e4aea114cd07cf46fef7.camel@redhat.com> Message-ID: Hello there, Thanks Sean for the suggestions. I've tested them and reported my findings in https://bugs.launchpad.net/nova/+bug/1908133 Your links helped me a lot of figuring out that my placement aggregates were set up incorrectly, and the fake reservation worked slightly better than the reserved_host_disk_mb (more details on that in the bug update). And it works very well on Rocky+, so that's very good. This problem is now much more manageable, thanks for the suggestions! Regards, On Fri, Jan 8, 2021 at 7:13 PM Sean Mooney wrote: > On Fri, 2021-01-08 at 18:27 -0300, Rodrigo Barbieri wrote: > > Thanks for the responses Eugen and Sean! > > > > The placement.yaml approach sounds good if it can prevent the compute > host > > from reporting local_gb repeatedly, and then as you suggested use > Placement > > Aggregates I can perhaps make that work for a subset of use cases. Too > bad > > it is only available on Victoria+. I was looking for something that could > > work, even if partially, on Queens and Stein. 
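As a concrete illustration of the reserved-disk workaround referenced in this thread (available well before Victoria), the value is simply set per compute node in nova.conf; the figure below is a placeholder for whatever your pool size is:

  [DEFAULT]
  # e.g. a 10 TiB ceph pool expressed in MiB (10 * 1024 * 1024)
  reserved_host_disk_mb = 10485760

Setting it to roughly the pool capacity effectively cancels out the per-node DISK_GB inventory, which is the point of the workaround when the real capacity is tracked elsewhere (for example via a shared provider or external monitoring).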
> > > > The cron job updating the reservation, I'm not sure if it will clash with > > the host updates (being overriden, as I've described in the LP bug), but > > you actually gave me another idea. I may be able to create a fake > > allocation in the nodes to cancel out their reported values, and then > rely > > only on the shared value through placement. > well actully you could use the host reserved disk space config value to do > that on older releases > just set it equal to the pool size. > > https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.reserved_host_disk_mb > not sure why that is in MB it really should be GB but anyway if you set > that then it will set the placement value. > > > > > Monitoring Ceph is only part of the problem. The second part, if you end > up > > needing it (and you may if you're not very conservative in the monitoring > > parameters and have unpredictable workload) is to prevent new instances > > from being created, thus new data from being stored, to prevent it from > > filling up before you can react to it (think of an accidental DoS attack > by > > running a certain storage-heavy workloads). > > > > @Eugen, yes. I was actually looking for more reliable ways to prevent it > > from happening. > > > > Overall, the shared placement + fake allocation sounded like the cleanest > > workaround for me. I will try that and report back. > > if i get time in the next week or two im hoping ot try and tweak our ceph > ci job to test > that toplogy in the upstream ci. but just looking a the placemnt > funcitonal tests it should work. > > This covers the use of sharing resouce providers > > https://github.com/openstack/placement/blob/master/placement/tests/functional/gabbits/shared-resources.yaml > > the final section thes the allocation candiate endpoint and asserts we > getan allocation for both providres > > https://github.com/openstack/placement/blob/master/placement/tests/functional/gabbits/shared-resources.yaml#L135-L143 > > its relitivly simple to read this file top to bottom and its only 143 > lines long but it basically step > through and constucte the topolgoy i was descifbing or at least a similar > ones and shows step by step what > the different behavior will be as the rps are created and aggreates are > created exctra. > > the main issue with this approch is we dont really have a good way to > upgrade existing deployments to this toplogy beyond > live migrating everything one node at a time so that there allcoation will > get reshaped as a side effect of the move operation. > > looking a tthe history of this file it was added 3 years ago > https://github.com/openstack/placement/commit/caeae7a41ed41535195640dfa6c5bb58a7999a9b > around stien although it may also have worked before thatim not sure when > we added sharing providers. > > > > > Thanks for the help! > > > > On Wed, Jan 6, 2021 at 10:57 AM Eugen Block wrote: > > > > > Hi, > > > > > > we're using OpenStack with Ceph in production and also have customers > > > doing that. > > > From my point of view fixing nova to be able to deal with shared > > > storage of course would improve many things, but it doesn't liberate > > > you from monitoring your systems. Filling up a ceph cluster should be > > > avoided and therefore proper monitoring is required. > > > > > > I assume you were able to resolve the frozen instances? 
> > > > > > Regards, > > > Eugen > > > > > > > > > Zitat von Sean Mooney : > > > > > > > On Tue, 2021-01-05 at 14:17 -0300, Rodrigo Barbieri wrote: > > > > > Hi Nova folks and OpenStack operators! > > > > > > > > > > I have had some trouble recently where while using the > "images_type = > > > rbd" > > > > > libvirt option my ceph cluster got filled up without I noticing and > > > froze > > > > > all my nova services and instances. > > > > > > > > > > I started digging and investigating why and how I could prevent or > > > > > workaround this issue, but I didn't find a very reliable clean way. > > > > > > > > > > I documented all my steps and investigation in bug 1908133 [0]. It > has > > > been > > > > > marked as a duplicate of 1522307 [1] which has been around for > quite > > > some > > > > > time, so I am wondering if any operators have been using nova + > ceph in > > > > > production with "images_type = rbd" config set and how you have > been > > > > > handling/working around the issue. > > > > > > > > this is indeed a know issue and the long term plan to fix it was to > > > > track shared storae > > > > as a sharing resouce provide in plamcent. that never happend so > > > > there si currenlty no mechanium > > > > available to prevent this explcitly in nova. > > > > > > > > the disk filter which is nolonger used could prevnet the boot of a > > > > vm that would fill the ceph pool but > > > > it could not protect against two concurrent request form filling the > > > pool. > > > > > > > > placement can protect against that due to the transational nature of > > > > allocations which serialise > > > > all resouce useage however since each host reports the total size of > > > > the ceph pool as its local storage that wont work out of the box. > > > > > > > > as a quick hack what you can do is set the > > > > [DEFAULT]/disk_allocation_ratio=(1/number of compute nodes) > > > > > > > > https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.disk_allocation_ratio > > > > on each of your compute agents configs. > > > > > > > > > > > > that will prevent over subscription however it has other negitve > > > sidefects. > > > > mainly that you will fail to scudle instance that could boot if a > > > > host exced its 1/n usage > > > > so unless you have perfectly blanced consumtion this is not a good > > > approch. > > > > > > > > a better appoch but one that requires external scripting is to have > > > > a chron job that will update the resrved > > > > usaage of each of the disk_gb inventores to the actull amount of of > > > > stoarge allocated form the pool. > > > > > > > > the real fix however is for nova to tack its shared usage in > > > > placment correctly as a sharing resouce provide. > > > > > > > > its possible you might be able to do that via the porvider.yaml file > > > > > > > > by overriding the local disk_gb to 0 on all comupte nodes > > > > then creating a singel haring resouce provider of disk_gb that > > > > models the ceph pool. > > > > > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/ussuri/approved/provider-config-file.html > > > > currently that does not support the addtion of providers to placment > > > > agggreate so while it could be used to 0 out the comptue node > > > > disk inventoies and to create a sharing provider it with the > > > > MISC_SHARES_VIA_AGGREGATE trait it cant do the final step of mapping > > > > which compute nodes can consume form sharing provider via the > > > > agggrate but you could do that form. 
> > > > that assume that "sharing resouce provdiers" actully work. > > > > > > > > > > > > bacialy what it comes down to today is you need to monitor the > > > > avaiable resouce yourslef externally and ensure you never run out of > > > > space. > > > > that sucks but untill we proably track things in plamcent there is > > > > nothign we can really do. > > > > the two approch i suggested above might work for a subset of > > > > usecasue but really this is a feature that need native suport in > > > > nova to adress properly. > > > > > > > > > > > > > > Thanks in advance! > > > > > > > > > > [0] https://bugs.launchpad.net/nova/+bug/1908133 > > > > > [1] https://bugs.launchpad.net/nova/+bug/1522307 > > > > > > > > > > > > > > > > > > > > > > > > > > -- Rodrigo Barbieri MSc Computer Scientist OpenStack Manila Core Contributor Federal University of São Carlos -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Jan 14 17:15:38 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 14 Jan 2021 17:15:38 +0000 Subject: [nova] workarounds and operator experience around bug 1522307/1908133 In-Reply-To: References: <7b4f6f10e682698dfaed22a86397f5b174fed7e8.camel@redhat.com> <20210106134822.Horde.dGo06NkPF_G8_AogeK9L2i7@webmail.nde.ag> <98fb643dc6ec0d0201c8e4aea114cd07cf46fef7.camel@redhat.com> Message-ID: <83cd20278b68a8e6ec25f2d99bb33e96a34e6d8d.camel@redhat.com> On Thu, 2021-01-14 at 13:50 -0300, Rodrigo Barbieri wrote: > Hello there, > > Thanks Sean for the suggestions. I've tested them and reported my findings > in https://bugs.launchpad.net/nova/+bug/1908133 > > Your links helped me a lot of figuring out that my placement aggregates > were set up incorrectly, and the fake reservation worked slightly better > than the reserved_host_disk_mb (more details on that in the bug update). > And it works very well on Rocky+, so that's very good. > > This problem is now much more manageable, thanks for the suggestions! im glad to hear it worked. im still hoping to see if i can configure our ceph multi node job to replciate this shared provider configuretion in our ci and test it but i likely wont get to that untill after feature freeze at m3. assuming i can get it to work there too we can docudment a procedure for how to do this and next cycle we can consider if there is a clean way to automate the process. thanks for updateing the bug with your findings :) > > Regards, > > On Fri, Jan 8, 2021 at 7:13 PM Sean Mooney wrote: > > > On Fri, 2021-01-08 at 18:27 -0300, Rodrigo Barbieri wrote: > > > Thanks for the responses Eugen and Sean! > > > > > > The placement.yaml approach sounds good if it can prevent the compute > > host > > > from reporting local_gb repeatedly, and then as you suggested use > > Placement > > > Aggregates I can perhaps make that work for a subset of use cases. Too > > bad > > > it is only available on Victoria+. I was looking for something that could > > > work, even if partially, on Queens and Stein. > > > > > > The cron job updating the reservation, I'm not sure if it will clash with > > > the host updates (being overriden, as I've described in the LP bug), but > > > you actually gave me another idea. I may be able to create a fake > > > allocation in the nodes to cancel out their reported values, and then > > rely > > > only on the shared value through placement. > > well actully you could use the host reserved disk space config value to do > > that on older releases > > just set it equal to the pool size. 
> > > > https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.reserved_host_disk_mb > > not sure why that is in MB it really should be GB but anyway if you set > > that then it will set the placement value. > > > > > > > > Monitoring Ceph is only part of the problem. The second part, if you end > > up > > > needing it (and you may if you're not very conservative in the monitoring > > > parameters and have unpredictable workload) is to prevent new instances > > > from being created, thus new data from being stored, to prevent it from > > > filling up before you can react to it (think of an accidental DoS attack > > by > > > running a certain storage-heavy workloads). > > > > > > @Eugen, yes. I was actually looking for more reliable ways to prevent it > > > from happening. > > > > > > Overall, the shared placement + fake allocation sounded like the cleanest > > > workaround for me. I will try that and report back. > > > > if i get time in the next week or two im hoping ot try and tweak our ceph > > ci job to test > > that toplogy in the upstream ci. but just looking a the placemnt > > funcitonal tests it should work. > > > > This covers the use of sharing resouce providers > > > > https://github.com/openstack/placement/blob/master/placement/tests/functional/gabbits/shared-resources.yaml > > > > the final section thes the allocation candiate endpoint and asserts we > > getan allocation for both providres > > > > https://github.com/openstack/placement/blob/master/placement/tests/functional/gabbits/shared-resources.yaml#L135-L143 > > > > its relitivly simple to read this file top to bottom and its only 143 > > lines long but it basically step > > through and constucte the topolgoy i was descifbing or at least a similar > > ones and shows step by step what > > the different behavior will be as the rps are created and aggreates are > > created exctra. > > > > the main issue with this approch is we dont really have a good way to > > upgrade existing deployments to this toplogy beyond > > live migrating everything one node at a time so that there allcoation will > > get reshaped as a side effect of the move operation. > > > > looking a tthe history of this file it was added 3 years ago > > https://github.com/openstack/placement/commit/caeae7a41ed41535195640dfa6c5bb58a7999a9b > > around stien although it may also have worked before thatim not sure when > > we added sharing providers. > > > > > > > > Thanks for the help! > > > > > > On Wed, Jan 6, 2021 at 10:57 AM Eugen Block wrote: > > > > > > > Hi, > > > > > > > > we're using OpenStack with Ceph in production and also have customers > > > > doing that. > > > >  From my point of view fixing nova to be able to deal with shared > > > > storage of course would improve many things, but it doesn't liberate > > > > you from monitoring your systems. Filling up a ceph cluster should be > > > > avoided and therefore proper monitoring is required. > > > > > > > > I assume you were able to resolve the frozen instances? > > > > > > > > Regards, > > > > Eugen > > > > > > > > > > > > Zitat von Sean Mooney : > > > > > > > > > On Tue, 2021-01-05 at 14:17 -0300, Rodrigo Barbieri wrote: > > > > > > Hi Nova folks and OpenStack operators! > > > > > > > > > > > > I have had some trouble recently where while using the > > "images_type = > > > > rbd" > > > > > > libvirt option my ceph cluster got filled up without I noticing and > > > > froze > > > > > > all my nova services and instances. 
> > > > > > > > > > > > I started digging and investigating why and how I could prevent or > > > > > > workaround this issue, but I didn't find a very reliable clean way. > > > > > > > > > > > > I documented all my steps and investigation in bug 1908133 [0]. It > > has > > > > been > > > > > > marked as a duplicate of 1522307 [1] which has been around for > > quite > > > > some > > > > > > time, so I am wondering if any operators have been using nova + > > ceph in > > > > > > production with "images_type = rbd" config set and how you have > > been > > > > > > handling/working around the issue. > > > > > > > > > > this is indeed a know issue and the long term plan to fix it was to > > > > > track shared storae > > > > > as a sharing resouce provide in plamcent. that never happend so > > > > > there si currenlty no mechanium > > > > > available to prevent this explcitly in nova. > > > > > > > > > > the disk filter which is nolonger used could prevnet the boot of a > > > > > vm that would fill the ceph pool but > > > > > it could not protect against two concurrent request form filling the > > > > pool. > > > > > > > > > > placement can protect against that due to the transational nature of > > > > > allocations which serialise > > > > > all resouce useage however since each host reports the total size of > > > > > the ceph pool as its local storage that wont work out of the box. > > > > > > > > > > as a quick hack what you can do is set the > > > > > [DEFAULT]/disk_allocation_ratio=(1/number of compute nodes) > > > > > > > > > > > https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.disk_allocation_ratio > > > > > on each of your compute agents configs. > > > > > > > > > > > > > > > that will prevent over subscription however it has other negitve > > > > sidefects. > > > > > mainly that you will fail to scudle instance that could boot if a > > > > > host exced its 1/n usage > > > > > so unless you have perfectly blanced consumtion this is not a good > > > > approch. > > > > > > > > > > a better appoch but one that requires external scripting is to have > > > > > a chron job that will update the resrved > > > > >  usaage of each of the disk_gb inventores to the actull amount of of > > > > > stoarge allocated form the pool. > > > > > > > > > > the real fix however is for nova to tack its shared usage in > > > > > placment correctly as a sharing resouce provide. > > > > > > > > > > its possible you might be able to do that via the porvider.yaml file > > > > > > > > > > by overriding the local disk_gb to 0 on all comupte nodes > > > > > then creating a singel haring resouce provider of disk_gb that > > > > > models the ceph pool. > > > > > > > > > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/ussuri/approved/provider-config-file.html > > > > > currently that does not support the addtion of providers to placment > > > > > agggreate so while it could be used to 0 out the comptue node > > > > > disk inventoies and to create a sharing provider it with the > > > > > MISC_SHARES_VIA_AGGREGATE trait it cant do the final step of mapping > > > > > which compute nodes can consume form sharing provider via the > > > > > agggrate but you could do that form. > > > > > that assume that "sharing resouce provdiers" actully work. > > > > > > > > > > > > > > > bacialy what it comes down to today is you need to monitor the > > > > > avaiable resouce yourslef externally and ensure you never run out of > > > > > space. 
> > > > > that sucks but untill we proably track things in plamcent there is > > > > > nothign we can really do. > > > > > the two approch i suggested above might work for a subset of > > > > > usecasue but really this is a feature that need native suport in > > > > > nova to adress properly. > > > > > > > > > > > > > > > > > Thanks in advance! > > > > > > > > > > > > [0] https://bugs.launchpad.net/nova/+bug/1908133 > > > > > > [1] https://bugs.launchpad.net/nova/+bug/1522307 > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From ryan at messagecloud.com Thu Jan 14 17:17:27 2021 From: ryan at messagecloud.com (Ryan Price-King) Date: Thu, 14 Jan 2021 17:17:27 +0000 Subject: Cannot login to Built trovestack image In-Reply-To: References: Message-ID: Hi, Sorry for the delay, I had time off and had to research what the issues were. Your doc really helped to get the SSH working on the instance, by creating a keypair in the services project and referencing it. With that I was able to diagnose many issues I faced. The image I initially built was not working properly for a start. Now I can successfully build a trove DB on OpenStack. Yay me lol. I do have other issues though that the trove database is not showing the floating IP and I need to get it working with designate properly, but I am on the right track now. This seemed like the best way for me to get a working image: https://www.server-world.info/en/note?os=CentOS_8&p=openstack_victoria3&f=15 Thank-you very much for your help :) Regards, Ryan On Fri, 8 Jan 2021 at 02:25, Lingxian Kong wrote: > In this case, we need to check the trove-guestagent log. For how to ssh > into the guest instance, > https://docs.openstack.org/trove/latest/admin/troubleshooting.html > > You can also jump into #openstack-trove IRC channel we could have a chat > there. > > --- > Lingxian Kong > Senior Software Engineer > Catalyst Cloud > www.catalystcloud.nz > > > On Fri, Jan 8, 2021 at 1:34 PM Ryan Price-King > wrote: > >> Hi, >> >> Sorry I should have clarified. The build step I am stuck on is while >> spinning up the trove database on openstack horizon. I have built the qcow >> image fine. Also, I can view the login prompt of the trove instance that >> was created when creating a trove database. So it seems the agent is not >> running in the instance properly. >> >> I have read that document lots and need to login to the image to see the >> files and Ubuntu Ubuntu doesnt work. >> >> Regards, >> Ryan >> >> On 8 Jan 2021 at 00:16, Lingxian Kong wrote: >> >> Hi, >> >> Have you read >> https://docs.openstack.org/trove/latest/admin/building_guest_images.html? >> >> --- >> Lingxian Kong >> Senior Software Engineer >> Catalyst Cloud >> www.catalystcloud.nz >> >> >> On Fri, Jan 8, 2021 at 10:09 AM Ryan Price-King >> wrote: >> >>> Hi, >>> >>> Sorry I meant the instance is being deployed in nova correctly and >>> entering running state and console gives me login screen. >>> >>> Regards, >>> Ryan >>> >>> On Thu, 7 Jan 2021 at 17:14, Ryan Price-King >>> wrote: >>> >>>> Hi, >>>> >>>> I am having problems with the image being deployed correctly with nova, >>>> but the communication to the guestagent is timing out and i am stuck in >>>> build stage. >>>> >>>> ./trovestack build-image ubuntu bionic true ubuntu >>>> >>>> >>>> Also with that, I dont know which mysql version it is building. >>>> >>>> I am assuming it is 5.7.29. >>>> >>>> I cannot diagnose as i cannot login to the guest image instance. 
>>>> >>>> I assume the user is ubuntu, but i cannot login with any password that >>>> i have tried. >>>> >>>> Can you tell me what username/password to login to the instance by >>>> console in openstack please. >>>> >>>> Regards, >>>> Ryan >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From shubjero at gmail.com Thu Jan 14 19:59:16 2021 From: shubjero at gmail.com (shubjero) Date: Thu, 14 Jan 2021 14:59:16 -0500 Subject: Trying to resolve 2 deprecation warnings Message-ID: Hi all, I am trying to resolve two deprecation warnings which I've not been able to locate exactly what package or file is causing them. We are running OpenStack Train on Ubuntu 18.04 The log entries are found in /etc/apache2/cinder_error.log : 2021-01-06 15:54:05.332604 /usr/lib/python3/dist-packages/oslo_context/context.py:108: DeprecationWarning: Policy enforcement is depending on the value of is_admin. This key is deprecated. Please update your policy file to use the standard policy values. 2021-01-06 15:54:20.147590 /usr/lib/python3/dist-packages/webob/acceptparse.py:1051: DeprecationWarning: The behavior of AcceptValidHeader.best_match is currently being maintained for backward compatibility, but it will be deprecated in the future, as it does not conform to the RFC. Any help on what needs to be changed/removed/updated/upgraded would be appreciated. Thanks! Jared Ontario Institute for Cancer Research From rlandy at redhat.com Thu Jan 14 21:25:41 2021 From: rlandy at redhat.com (Ronelle Landy) Date: Thu, 14 Jan 2021 16:25:41 -0500 Subject: [RDO][CentOS] RDO plans to move to CentOS Stream In-Reply-To: References: Message-ID: On Thu, Jan 14, 2021 at 10:45 AM Alfredo Moralejo Alonso < amoralej at redhat.com> wrote: > Hi, > Alfredo, thanks for posting this CentOS 8 stream related release info. > > As you know, CentOS announced recently that they will be focusing on > CentOS Stream [1] and that CentOS Linux 8 will be EOL at the end of 2021 > [2]. > > In order to align with this change, the RDO project is adapting the > roadmap for the current and upcoming releases: > > - RDO Wallaby (ETA is end of April 2021 [3]) will be built, tested and > released only for CentOS 8 Stream. > - RDO CloudSIG repos for Victoria and Ussuri will be updated and tested > for both CentOS Stream and CentOS Linux 8 until end of 2021 and then > continue on CentOS Stream. > - In the upcoming weeks, we will create new RDO CloudSIG repos for > Victoria and Ussuri for CentOS Stream 8. > - RDO Trunk repositories (aka DLRN repos) will be built and tested using > CentOS 8 Stream for all releases currently using CentOS Linux 8 (starting > with Train). > > If you are interested in the details about these plans and how we plan to > implement it, we've created a blog post [4]. > > Don't hesitate to ask us in the ML or in the #rdo channel on freenode if > you have questions or suggestions about this. > > We are also organizing a video AMA about RDO and CentOS Stream on > Thursday Jan 21 1600 UTC in the RDO Meet room > https://meet.google.com/uzo-tfkt-top > We are doing some initial CentOS 8 stream testing (on Train -> Wallaby releases) as tracked on http://dashboard-ci.tripleo.org/d/jwDYSidGz/rpm-dependency-pipeline?orgId=1. However, since the majority of the testing for RDO is done in upstream openstack, is there an estimate on when an upstream CentOS-8-Stream nodepool node will be available for check/gate testing? Thanks! 
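As a point of reference, once a label exists in nodepool a Zuul job can target it directly; a minimal sketch (the job and node names here are invented):

  - job:
      name: my-centos-stream-job
      parent: base
      nodeset:
        nodes:
          - name: primary
            label: centos-8-stream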
> > On behalf of RDO, > > Alfredo > > [1] https://centos.org/distro-faq/ > [2] https://blog.centos.org/2020/12/future-is-centos-stream/ > [3] https://releases.openstack.org/ > [4] > https://blogs.rdoproject.org/2021/01/rdo-plans-to-move-to-centos-stream/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Jan 14 22:44:42 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 14 Jan 2021 22:44:42 +0000 Subject: [RDO][CentOS] RDO plans to move to CentOS Stream In-Reply-To: References: Message-ID: <20210114224104.pja5atja53dmsjt6@yuggoth.org> On 2021-01-14 16:25:41 -0500 (-0500), Ronelle Landy wrote: [...] > is there an estimate on when an upstream CentOS-8-Stream nodepool > node will be available for check/gate testing? [...] We expect to add it in June of last year. ;) https://review.opendev.org/734791 According to codesearch, looks like it's already being used for jobs in diskimage-builder, bifrost, openstacksdk... -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Thu Jan 14 22:55:48 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 14 Jan 2021 22:55:48 +0000 Subject: [RDO][CentOS] RDO plans to move to CentOS Stream In-Reply-To: <20210114224104.pja5atja53dmsjt6@yuggoth.org> References: <20210114224104.pja5atja53dmsjt6@yuggoth.org> Message-ID: <20210114225548.dbma2gkgnxdi3isr@yuggoth.org> On 2021-01-14 22:44:42 +0000 (+0000), Jeremy Stanley wrote: > On 2021-01-14 16:25:41 -0500 (-0500), Ronelle Landy wrote: > [...] > > is there an estimate on when an upstream CentOS-8-Stream nodepool > > node will be available for check/gate testing? > [...] > > We expect to add it in June of last year. ;) > > https://review.opendev.org/734791 > > According to codesearch, looks like it's already being used for jobs > in diskimage-builder, bifrost, openstacksdk... Oh sorry, I was looking at the wrong date, we added it in October. We also have package mirroring set up as best as I can tell (centos/8-stream on our mirror sites). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From hberaud at redhat.com Fri Jan 15 07:07:13 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 15 Jan 2021 08:07:13 +0100 Subject: [release] Release countdown for week R-12 Jan 18 - Jan 22 Message-ID: Greetings, Development Focus ----------------- The Wallaby-2 milestone is next week, on 21 January, 2021! Wallaby-related specs should now be finalized so that teams can move to implementation ASAP. Some teams observe specific deadlines on the second milestone (mostly spec freezes): please refer to https://releases.openstack.org/wallaby/schedule.html for details. General Information ------------------- Libraries need to be released at least once per milestone period. Next week, the release team will propose releases for any library that has not been otherwise released since milestone 1. PTL's and release liaisons, please watch for these and give a +1 to acknowledge them. If there is some reason to hold off on a release, let us know that as well. A +1 would be appreciated, but if we do not hear anything at all by the end of the week, we will assume things are OK to proceed. 
Remember that non-library deliverables that follow the cycle-with-intermediary release model should have an intermediary release before milestone-2. Those who haven't will be proposed to switch to the cycle-with-rc model, which is more suited to deliverables that are released only once per cycle. Next week is also the deadline to freeze the contents of the final release. All new 'Wallaby' deliverables need to have a deliverable file in https://opendev.org/openstack/releases/src/branch/master/deliverables and need to have done a release by milestone-2. Changes proposing those deliverables for inclusion in Wallaby have been posted, please update them with an actual release request before the milestone-2 deadline if you plan on including that deliverable in Wallaby, or -1 if you need one more cycle to be ready. Upcoming Deadlines & Dates -------------------------- Wallaby-2 Milestone: 21 January, 2021 -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Jan 15 07:55:32 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 15 Jan 2021 08:55:32 +0100 Subject: [oslo][release] etcd3gw release model - series vs independent Message-ID: Greetings Osloers, etcd3gw was moved under the Oslo team's scope a few months ago [1][2]. This project hasn't yet been released by us and we should choose to release it either during Wallaby or to make it an independent [3] project. I didn't see significant changes merged since the project transition, so I think we could directly adopt the independent model for etcd3gw. However I want first to discuss this with all the core maintainers before and then I'll act accordingly. Notice that Wallaby Milestone 2 is next week, so it could be a good time for us to decide to continue with a coordinated series or to adopt the independent model directly. Let me know what you think about that. 
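For anyone wondering what the corresponding file in openstack/releases looks like, here is a rough sketch of an independent-model deliverable (the version and hash are placeholders, not a real release):

  # deliverables/_independent/etcd3gw.yaml (sketch only)
  team: oslo
  type: library
  release-model: independent
  repository-settings:
    openstack/etcd3gw: {}
  releases:
    - version: X.Y.Z
      projects:
        - repo: openstack/etcd3gw
          hash: <commit sha to tag>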
Thanks, [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016457.html [2] https://governance.openstack.org/tc/reference/projects/oslo.html#etcd3gw [3] https://releases.openstack.org/reference/release_models.html#independent -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From roshananvekar at gmail.com Fri Jan 15 07:58:35 2021 From: roshananvekar at gmail.com (roshan anvekar) Date: Fri, 15 Jan 2021 13:28:35 +0530 Subject: Openstack stein TLS configuration with combined method of interfaces Message-ID: Hello, Openstack version: stein Deployment method: kolla-ansible I am trying to set up TLS for Openstack endpoint. I have chosen combined method of vip address where I supply only kolla_internal_vip_address and network_interface details. I do not enable external kolla vip address. After this I set up kolla_enable_tls_external: 'yes' and pass the kolla_external_fqdn_cert certificates. The installation is successful but I see that http link opens but https:// endpoint does not open at all. Is as good as not available. Any reason for this? Regards, Roshan -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Jan 15 08:42:01 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 15 Jan 2021 09:42:01 +0100 Subject: [rpm-packaging]release] Update our deliverables accordingly to the SIG governance model Message-ID: Dear rpm team fellows, I take off my release PTL cap to discuss with you as a teammate about some inconsistencies in our deliverables. Indeed, last week we noticed some inconsistencies between the rpm-packaging team's deliverables described in the governance’s reference and deliverables defined in the openstack/releases repo. Since rpm-packaging became a SIG our deliverables [1][2][3] aren't managed by the release team anymore, so we should act accordingly to reflect that. Two choices are available to us: 1) mark these project as "release-model: abandoned" in the governance repo [4]; 2) remove the corresponding files from openstack/releases [1][2][3]. Each choice has pros and cons: - by choosing 1) it will reflect that our projects are abandoned, which is wrong, our projects are still active but not released by the coordinated releases anymore; - by choosing 2) it will reflect that we have always been a SIG, which is wrong too. I personally prefer 2) to allow us to avoid misleading someone by having an abandoned status, indeed, reflecting a SIG is less misleading than reflecting an abandoned project. 
Here is the list of our deliverables that should be addressed: - renderspec [1] - pymod2pkg [2] - rpm-packaging [3] Any opinion? For further details please take a look to our previous meeting discussion [5]. Thanks for reading, [1] https://opendev.org/openstack/releases/src/branch/master/deliverables/_independent/renderspec.yaml [2] https://opendev.org/openstack/releases/src/branch/master/deliverables/_independent/pymod2pkg.yaml [3] https://opendev.org/openstack/releases/src/branch/master/deliverables/wallaby/rpm-packaging.yaml [4] https://opendev.org/openstack/governance/src/branch/master/reference/sigs-repos.yaml#L24-L28 [5] http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:12:27 -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoralej at redhat.com Fri Jan 15 08:45:13 2021 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Fri, 15 Jan 2021 09:45:13 +0100 Subject: [RDO][CentOS] RDO plans to move to CentOS Stream In-Reply-To: <20210114225548.dbma2gkgnxdi3isr@yuggoth.org> References: <20210114224104.pja5atja53dmsjt6@yuggoth.org> <20210114225548.dbma2gkgnxdi3isr@yuggoth.org> Message-ID: On Thu, Jan 14, 2021 at 11:59 PM Jeremy Stanley wrote: > On 2021-01-14 22:44:42 +0000 (+0000), Jeremy Stanley wrote: > > On 2021-01-14 16:25:41 -0500 (-0500), Ronelle Landy wrote: > > [...] > > > is there an estimate on when an upstream CentOS-8-Stream nodepool > > > node will be available for check/gate testing? > > [...] > > > > We expect to add it in June of last year. ;) > > > > https://review.opendev.org/734791 > > > > According to codesearch, looks like it's already being used for jobs > > in diskimage-builder, bifrost, openstacksdk... > > Yes, I see centos-8-stream images ready for testing. I'm also adding jobs for packstack in https://review.opendev.org/c/x/packstack/+/770771 > Oh sorry, I was looking at the wrong date, we added it in October. > We also have package mirroring set up as best as I can tell > (centos/8-stream on our mirror sites). > BTW, we need to merge https://review.opendev.org/c/zuul/zuul-jobs/+/770815 to get the proper 8-stream repos configured by configure-mirrors role > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark at stackhpc.com Fri Jan 15 08:50:47 2021 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 15 Jan 2021 08:50:47 +0000 Subject: Openstack stein TLS configuration with combined method of interfaces In-Reply-To: References: Message-ID: On Fri, 15 Jan 2021 at 07:59, roshan anvekar wrote: > > Hello, > > Openstack version: stein > Deployment method: kolla-ansible > > I am trying to set up TLS for Openstack endpoint. > > I have chosen combined method of vip address where I supply only kolla_internal_vip_address and network_interface details. I do not enable external kolla vip address. > > After this I set up kolla_enable_tls_external: 'yes' and pass the kolla_external_fqdn_cert certificates. > > The installation is successful but I see that http link opens but https:// endpoint does not open at all. Is as good as not available. > > Any reason for this? Hi. From the Stein documentation [1]: "The kolla_internal_vip_address and kolla_external_vip_address must be different to enable TLS on the external network." >From the Train release it is possible to enable TLS on the internal VIP, although Ussuri is typically necessary to make it work if you have a private CA. [1] https://docs.openstack.org/kolla-ansible/stein/admin/advanced-configuration.html#tls-configuration > > Regards, > Roshan From hberaud at redhat.com Fri Jan 15 08:57:38 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 15 Jan 2021 09:57:38 +0100 Subject: [barbican][release] Barbican deliverables questions Message-ID: Hi Barbican Team, The release team noticed some inconsistencies between the Barbican team's deliverables described in the governance’s reference and deliverables defined in the openstack/releases repo (c.f our related meeting topic [1]). First, we noticed that ansible-role-atos-hsm and ansible-role-thales-hsm are new barbican deliverables, and we would appreciate to know if we should plan to release them for Wallaby. The second thing is that barbican-ui (added in Oct 2019) was never released yet and was not ready yet for ussuri and victoria. maybe we should abandon this instead of waiting? Notice that Wallaby's milestone 2 is next week so it could be the good time to update all these things. Let us know your thoughts, we are waiting for your replies. Thanks for reading, [1] http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hberaud at redhat.com Fri Jan 15 09:07:12 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 15 Jan 2021 10:07:12 +0100 Subject: [qa][release] Abandoning openstack-tempest-skiplist? Message-ID: Dear QA team, The release team noticed an inconsistency between the QA team's deliverables described in the governance’s reference and deliverables defined in the openstack/releases repo (c.f our related meeting topic [1]). Indeed, openstack-tempest-skiplist (added Mar 20, 2020) was never released yet, was not ready yet for ussuri and victoria. maybe we should abandon this instead of waiting? Notice that Wallaby's milestone 2 is next week so maybe it could be a good time to update this. Let us know your thoughts, we are waiting for your replies. Thanks for reading, [1] http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Jan 15 09:11:15 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 15 Jan 2021 10:11:15 +0100 Subject: [OpenStackSDK][release] Abandoning js-openstack-lib? Message-ID: Dear OpenStackSDK team, The release team noticed an inconsistency between the OpenStackSDK team's deliverables described in the governance’s reference and deliverables defined in the openstack/releases repo (c.f our related meeting topic [1]). Indeed, js-openstack-lib (added January 9, 2020) was never released yet and was not ready for ussuri or victoria. maybe we should abandon this instead of waiting? Notice that Wallaby's milestone 2 is next week so maybe it could be a good time to update this. Let us know your thoughts, we are waiting for your replies. 
Thanks for reading, [1] http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Jan 15 09:15:11 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 15 Jan 2021 10:15:11 +0100 Subject: [monasca][release] Abandoning monasca-ceilometer and monasca-log-api? Message-ID: Dear Monasca team, The release team noticed an inconsistency between the Monasca team's deliverables described in the governance’s reference and deliverables defined in the openstack/releases repo (c.f our related meeting topic [1]). Indeed, monasca-ceilometer and monasca-log-api were released in train but not released in ussuri nor victoria. Do you think that they should be deprecated (abandoned) in governance? Notice that Wallaby's milestone 2 is next week so maybe it could be a good time to update this. Let us know your thoughts, we are waiting for your replies. Thanks for reading, [1] http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Fri Jan 15 09:37:23 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Fri, 15 Jan 2021 09:37:23 +0000 Subject: [nova] MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION bumps in Wallaby Message-ID: <20210115093723.q2uqabf2emedxc2z@lyarwood-laptop.usersys.redhat.com> Hello all, We are once again looking to increase the minimum and next minimum versions of libvirt and QEMU in the libvirt virt driver. 
These were previously increased late in the Victoria cycle: MIN_LIBVIRT_VERSION = (5, 0, 0) MIN_QEMU_VERSION = (4, 0, 0) NEXT_MIN_LIBVIRT_VERSION = (6, 0, 0) NEXT_MIN_QEMU_VERSION = (4, 2, 0) libvirt: Bump MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION https://review.opendev.org/c/openstack/nova/+/746981 I am now proposing the following updates to these versions in Wallaby: MIN_LIBVIRT_VERSION = (6, 0, 0) MIN_QEMU_VERSION = (4, 2, 0) NEXT_MIN_LIBVIRT_VERSION = (7, 0, 0) NEXT_MIN_QEMU_VERSION = (5, 2, 0) libvirt: Bump MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION https://review.opendev.org/c/openstack/nova/+/754700 libvirt v6.0.0 and QEMU 4.2.0 being supported by the three LTS releases supported by the Wallaby release of OpenStack [1][2]. libvirt v7.0.0 and QEMU 5.2.0 being slightly aspirational for the next minimum versions at this point but given current releases for both projects they seem appropriate for the time being. Please let me know ideally in the review if there are any issues with this increase. Regards, [1] https://governance.openstack.org/tc/reference/runtimes/wallaby.html [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From geguileo at redhat.com Fri Jan 15 10:01:23 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Fri, 15 Jan 2021 11:01:23 +0100 Subject: Trying to resolve 2 deprecation warnings In-Reply-To: References: Message-ID: <20210115100123.ezptbmoripuzbmvh@localhost> On 14/01, shubjero wrote: > Hi all, > > I am trying to resolve two deprecation warnings which I've not been > able to locate exactly what package or file is causing them. > > We are running OpenStack Train on Ubuntu 18.04 > > The log entries are found in /etc/apache2/cinder_error.log : > 2021-01-06 15:54:05.332604 > /usr/lib/python3/dist-packages/oslo_context/context.py:108: > DeprecationWarning: Policy enforcement is depending on the value of > is_admin. This key is deprecated. Please update your policy file to > use the standard policy values. Hi, Maybe it is caused by one of these 2 options: https://github.com/openstack/cinder/blob/60a73610910257abf5bdd5eae7606f5bd012ae5d/cinder/policies/base.py#L25-L32 > 2021-01-06 15:54:20.147590 > /usr/lib/python3/dist-packages/webob/acceptparse.py:1051: > DeprecationWarning: The behavior of AcceptValidHeader.best_match is > currently being maintained for backward compatibility, but it will be > deprecated in the future, as it does not conform to the RFC. > This could be one of these: https://github.com/openstack/cinder/blob/60a73610910257abf5bdd5eae7606f5bd012ae5d/cinder/api/openstack/wsgi.py#L253 https://github.com/openstack/cinder/blob/60a73610910257abf5bdd5eae7606f5bd012ae5d/cinder/api/openstack/wsgi.py#L840 Cheers, Gorka. > Any help on what needs to be changed/removed/updated/upgraded would be > appreciated. > > Thanks! 
> > Jared > Ontario Institute for Cancer Research > From kchamart at redhat.com Fri Jan 15 10:40:13 2021 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 15 Jan 2021 11:40:13 +0100 Subject: [nova] MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION bumps in Wallaby In-Reply-To: <20210115093723.q2uqabf2emedxc2z@lyarwood-laptop.usersys.redhat.com> References: <20210115093723.q2uqabf2emedxc2z@lyarwood-laptop.usersys.redhat.com> Message-ID: <20210115104013.GA484402@paraplu.home> On Fri, Jan 15, 2021 at 09:37:23AM +0000, Lee Yarwood wrote: > > Hello all, Hey, > We are once again looking to increase the minimum and next minimum > versions of libvirt and QEMU in the libvirt virt driver. > > These were previously increased late in the Victoria cycle: > > MIN_LIBVIRT_VERSION = (5, 0, 0) > MIN_QEMU_VERSION = (4, 0, 0) > > NEXT_MIN_LIBVIRT_VERSION = (6, 0, 0) > NEXT_MIN_QEMU_VERSION = (4, 2, 0) > > libvirt: Bump MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION > https://review.opendev.org/c/openstack/nova/+/746981 > > I am now proposing the following updates to these versions in Wallaby: > > MIN_LIBVIRT_VERSION = (6, 0, 0) > MIN_QEMU_VERSION = (4, 2, 0) > > NEXT_MIN_LIBVIRT_VERSION = (7, 0, 0) > NEXT_MIN_QEMU_VERSION = (5, 2, 0) > > libvirt: Bump MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION > https://review.opendev.org/c/openstack/nova/+/754700 > > libvirt v6.0.0 and QEMU 4.2.0 being supported by the three LTS releases > supported by the Wallaby release of OpenStack [1][2]. > > libvirt v7.0.0 and QEMU 5.2.0 being slightly aspirational for the next > minimum versions at this point but given current releases for both > projects they seem appropriate for the time being. Yeah, v7.0.0. is fine — it's only one year further (released: Jan 2021) from v6.0.0 (released: Jan 2020). > Please let me know ideally in the review if there are any issues with > this increase. Looks fine to me. Thanks! > Regards, > > [1] https://governance.openstack.org/tc/reference/runtimes/wallaby.html > [2] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix > -- > Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -- /kashyap From jegor at greenedge.cloud Fri Jan 15 11:39:41 2021 From: jegor at greenedge.cloud (Jegor van Opdorp) Date: Fri, 15 Jan 2021 11:39:41 +0000 Subject: [masakari] Masakari Team Meeting - the schedule In-Reply-To: References: Message-ID: <271344d8-61d1-4fa9-b40c-3bb4550a290c@email.android.com> I really doesn't matter to me, choose any :) On Jan 12, 2021 10:56, Radosław Piliszek wrote: Hello, Folks! I have realised the Masakari Team Meeting is to run on even weeks [1]. However, anyone who created the meeting record in their calendar (including me) has likely gotten the meeting schedule in odd weeks this year (because last year finished with an odd week and obviously numbering also starts on odd: the 1). So I have run the first meeting this year the previous week but someone came for the meeting this week. :-) According to the "new wrong" schedule, the next meeting would be on Jan 19, but according to the "proper" one it would be on Jan 26. I am available both weeks the same so can run either term (or both as well, why not). The question is whether we don't want to simply move to the weekly meeting schedule. We usually don't have much to discuss but it might be less confusing and a better way to form a habit if we met every week. Please let me know your thoughts. 
[1] http://eavesdrop.openstack.org/#Masakari_Team_Meeting -yoctozepto -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Fri Jan 15 13:05:22 2021 From: radoslaw.piliszek at gmail.com (Radosław Piliszek) Date: Fri, 15 Jan 2021 14:05:22 +0100 Subject: [OpenStackSDK][release] Abandoning js-openstack-lib? In-Reply-To: References: Message-ID: On Fri, Jan 15, 2021 at 10:11 AM Herve Beraud wrote: > > Dear OpenStackSDK team, > > The release team noticed an inconsistency between the OpenStackSDK team's deliverables described in the governance’s reference and deliverables defined in the openstack/releases repo (c.f our related meeting topic [1]). > > Indeed, js-openstack-lib (added January 9, 2020) was never released yet and was not ready for ussuri or victoria. maybe we should abandon this instead of waiting? Probably. It was me who saved it from demise, but I did not get time to really work on it and am not planning to any time soon. It has been established that the project more-or-less requires a serious redesign and rewrite to keep up with current technology and the practices already in place in the other, established OpenStackSDK deliverables. -yoctozepto From radoslaw.piliszek at gmail.com Fri Jan 15 13:55:18 2021 From: radoslaw.piliszek at gmail.com (Radosław Piliszek) Date: Fri, 15 Jan 2021 14:55:18 +0100 Subject: [oslo][release] etcd3gw release model - series vs independent In-Reply-To: References: Message-ID: On Fri, Jan 15, 2021 at 8:56 AM Herve Beraud wrote: > > Greetings Osloers, > > etcd3gw was moved under the Oslo team's scope a few months ago [1][2]. > > This project hasn't yet been released by us and we should choose to release it either during Wallaby or to make it an independent [3] project. > > I didn't see significant changes merged since the project transition, so I think we could directly adopt the independent model for etcd3gw. Based on the current status, I would say the independent release model would suit it best. -yoctozepto From gmann at ghanshyammann.com Fri Jan 15 14:54:07 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 15 Jan 2021 08:54:07 -0600 Subject: [qa][release][tripleo] Abandoning openstack-tempest-skiplist? In-Reply-To: References: Message-ID: <177068a105d.aea199651318020.1189554580760474526@ghanshyammann.com> ---- On Fri, 15 Jan 2021 03:07:12 -0600 Herve Beraud wrote ---- > Dear QA team, > The release team noticed an inconsistency between the QA team's deliverables described in the governance’s reference and deliverables defined in the openstack/releases repo (c.f our related meeting topic [1]). > Indeed, openstack-tempest-skiplist (added Mar 20, 2020) was never released yet, was not ready yet for ussuri and victoria. maybe we should abandon this instead of waiting? > Notice that Wallaby's milestone 2 is next week so maybe it could be a good time to update this. > Let us know your thoughts, we are waiting for your replies. > Thanks for reading, Thanks hberaud for bringing it up, I did not know about this repo until it was discussed in yesterday's release meeting. As per the governance project.yaml 'openstack-tempest-skiplist' is under the TripleO project, not QA [1] (tagging tripleo in the subject). This repo is to maintain the test skiplist, so I am not sure it needs a release, but marios or Chandan can decide. 
[1] https://github.com/openstack/governance/blob/2bdd9cff00fb40b2f95b66cad47ae1cfd14a2f1b/reference/projects.yaml#L3069 -gmann > [1] http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 > -- > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > From lei12zhang12 at gmail.com Fri Jan 15 03:26:28 2021 From: lei12zhang12 at gmail.com (Lei Zhang) Date: Fri, 15 Jan 2021 11:26:28 +0800 Subject: [all]Introduction to venus which is the project of log management and has been contributed to the OpenStack community In-Reply-To: References: Message-ID: This looks cool. One question about the Venus api, does it support full Elasticsearch DSL or just a subset of queries On Mon, Jan 11, 2021 at 4:59 AM Liye Pang(逄立业) wrote: > Hello everyone, after feedback from a large number of operations and > maintenance personnel in InCloud OpenStack, we developed the log management > project “Venus” for the OpenStack projects and that has contributed to the > OpenStack community. The following is an introduction to “Venus”. If there > is interest in the community, we are interested in proposing it to become > an official OpenStack project in the future. > Background > > In the day-to-day operation and maintenance of large-scale cloud platform, > the following problems are encountered: > > l Time-consuming for log querying while the server increasing to > thousands. > > l Difficult to retrieve logs, since there are many modules in the > platform, e.g. systems service, compute, storage, network and other > platform services. > > l The large amount and dispersion of log make faults are difficult to be > discovered. > > l Because of distributed and interaction between components of the cloud > platform, and scattered logs between components, it will take more time to > locate problems. > About Venus > > According to the key requirements of OpenStack in log storage, retrieval, > analysis and so on, we introduced *Venus *project, a unified log > management module. This module can provide a one-stop solution to log > collection, cleaning, indexing, analysis, alarm, visualization, report > generation and other needs, which involves helping operator or maintainer > to quickly solve retrieve problems, grasp the operational health of the > platform, and improve the management capabilities of the cloud platform. > > Additionally, this module plans to use machine learning algorithms to > quickly locate IT failures and root causes, and improve operation and > maintenance efficiency. 
> Application scenario > > Venus played a key role in the following scenarios: > > l *Retrieval:* Provide a simple and easy-to-use way to retrieve all log > and the context. > > l *Analysis*: Realize log association, field value statistics, and > provide multi-scene and multi-dimensional visual analysis reports. > > l *Alerts*:Convert retrieval into active alerts to realize the error > finding in massive logs. > > l *Issue location*: Establish a chain relationship and knowledge graphs > to quickly locate problems. > Overall structure > > The architecture of log management system based on Venus and elastic > search is as follows: > > Diagram 0: Architecture of Venus > > *venus_api*: API module,provide API、rest-api service. > > *venus_manager*: Internal timing task module to realize the core > functions of the log system. > Current progress > > The current progress of the Venus project is as follows: > > l Collection:Develop *fluentd* collection tasks based on collectd to > read, filter, format and send plug-ins for OpenStack, operating systems, > and platform services, etc. > > l Index:Dealing with multi-dimensional index data in *elasticsearch*, > and provide more concise and comprehensive authentication interface to > return query results. > > l Analysis:Analyzing and display the related module errors, Mariadb > connection errors, and Rabbitmq connection errors. > > l Alerts:Develop alarm task code to set threshold for the number of > error logs of different modules at different times, and provides alarm > services and notification services. > > l Location:Develop the call chain analysis function based on > *global_requested* series, which can show the execution sequence, time > and error information, etc., and provide the export operation. > > l Management:Develop configuration management functions in the log > system, such as alarm threshold setting, timing task management, and log > saving time setting, etc. > Application examples > > Two examples of Venus application scenarios are as follows. > > 1. The virtual machine creation operation was performed on the > cloud platform and it was found that the virtual machine was not created > successfully. > > First, we can find the request id of the operation and jump to the virtual > machine creation call chain page. > > Then, we can query the calling process, view and download the details of > the log of the call. > > 2. In the cloud platform, the error log of each module can be > converted into alarms to remind the users. > > Further, we can retrieve the details of the error log and error log > statistics. > > Next step > > The next step of the Venus project is as follows: > > l *Collection*:In addition to fluent, other collection plugins such as > logstash will be integrated. > > l *Analysis*: Explore more operation and maintenance scenarios, and > conduct statistical analysis and alarm on key data. > > l *display*: The configuration, analysis and alarm of Venus will be > integrated into horizon in the form of plugin. > > l *location*: Form clustering log and construct knowledge map, and > integrate algorithm class library to locate the root cause of the fault. > Venus Project Registry > > *Venus library*: https://opendev.org/inspur/venus > > You can grab the source code using the following git command: > > git clone https://opendev.org/inspur/venus.git > > > Venus Demo > > *Youtu.be*: https://youtu.be/mE2MoEx3awM > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- [eight inline JPEG images (image001.jpg through image008.jpg) from the original HTML message were scrubbed and are not available] From pangliye at inspur.com Fri Jan 15 04:20:17 2021 From: pangliye at inspur.com (Liye Pang(逄立业)) Date: Fri, 15 Jan 2021 04:20:17 +0000 Subject: Re: [all]Introduction to venus which is the project of log management and has been contributed to the OpenStack community In-Reply-To: References: Message-ID: <308c7cdf6f0d4ea2806a9a736df71b11@inspur.com> Not all of the ES DSL. Some Venus APIs will be directly converted to ES API calls, some will query ES data and return the result after calculation, and some will query MySQL data, such as alarms. (A rough illustration of such a directly converted query is included further down this message.) From: Lei Zhang Sent: January 15, 2021 11:26 To: Liye Pang(逄立业) Cc: openstack-discuss at lists.openstack.org Subject: Re: [all]Introduction to venus which is the project of log management and has been contributed to the OpenStack community This looks cool. One question about the Venus api, does it support full Elasticsearch DSL or just a subset of queries? On Mon, Jan 11, 2021 at 4:59 AM Liye Pang(逄立业) wrote: Hello everyone, after feedback from a large number of operations and maintenance personnel in InCloud OpenStack, we developed the log management project “Venus” for the OpenStack projects and that has contributed to the OpenStack community. The following is an introduction to “Venus”. If there is interest in the community, we are interested in proposing it to become an official OpenStack project in the future. Background In the day-to-day operation and maintenance of large-scale cloud platform, the following problems are encountered: l Time-consuming for log querying while the server increasing to thousands. l Difficult to retrieve logs, since there are many modules in the platform, e.g. systems service, compute, storage, network and other platform services. l The large amount and dispersion of log make faults are difficult to be discovered. l Because of distributed and interaction between components of the cloud platform, and scattered logs between components, it will take more time to locate problems. About Venus According to the key requirements of OpenStack in log storage, retrieval, analysis and so on, we introduced Venus project, a unified log management module. 
This module can provide a one-stop solution to log collection, cleaning, indexing, analysis, alarm, visualization, report generation and other needs, which involves helping operator or maintainer to quickly solve retrieve problems, grasp the operational health of the platform, and improve the management capabilities of the cloud platform. Additionally, this module plans to use machine learning algorithms to quickly locate IT failures and root causes, and improve operation and maintenance efficiency. Application scenario Venus played a key role in the following scenarios: l Retrieval: Provide a simple and easy-to-use way to retrieve all log and the context. l Analysis: Realize log association, field value statistics, and provide multi-scene and multi-dimensional visual analysis reports. l Alerts:Convert retrieval into active alerts to realize the error finding in massive logs. l Issue location: Establish a chain relationship and knowledge graphs to quickly locate problems. Overall structure The architecture of log management system based on Venus and elastic search is as follows: Diagram 0: Architecture of Venus venus_api: API module,provide API、rest-api service. venus_manager: Internal timing task module to realize the core functions of the log system. Current progress The current progress of the Venus project is as follows: l Collection:Develop fluentd collection tasks based on collectd to read, filter, format and send plug-ins for OpenStack, operating systems, and platform services, etc. l Index:Dealing with multi-dimensional index data in elasticsearch, and provide more concise and comprehensive authentication interface to return query results. l Analysis:Analyzing and display the related module errors, Mariadb connection errors, and Rabbitmq connection errors. l Alerts:Develop alarm task code to set threshold for the number of error logs of different modules at different times, and provides alarm services and notification services. l Location:Develop the call chain analysis function based on global_requested series, which can show the execution sequence, time and error information, etc., and provide the export operation. l Management:Develop configuration management functions in the log system, such as alarm threshold setting, timing task management, and log saving time setting, etc. Application examples Two examples of Venus application scenarios are as follows. 1. The virtual machine creation operation was performed on the cloud platform and it was found that the virtual machine was not created successfully. First, we can find the request id of the operation and jump to the virtual machine creation call chain page. Then, we can query the calling process, view and download the details of the log of the call. 2. In the cloud platform, the error log of each module can be converted into alarms to remind the users. Further, we can retrieve the details of the error log and error log statistics. Next step The next step of the Venus project is as follows: * Collection:In addition to fluent, other collection plugins such as logstash will be integrated. * Analysis: Explore more operation and maintenance scenarios, and conduct statistical analysis and alarm on key data. * display: The configuration, analysis and alarm of Venus will be integrated into horizon in the form of plugin. * location: Form clustering log and construct knowledge map, and integrate algorithm class library to locate the root cause of the fault. 
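On the Elasticsearch DSL question raised earlier in this thread: a retrieval call like the ones described above typically boils down to a small, fixed subset of the query DSL that the API layer builds itself. The sketch below is purely illustrative — the index pattern and field names (module, log_level, @timestamp) are assumptions, not taken from the Venus code:

  POST /log-*/_search
  {
    "query": {
      "bool": {
        "must": [
          { "match": { "module": "nova-compute" } },
          { "match": { "log_level": "ERROR" } }
        ],
        "filter": [
          { "range": { "@timestamp": { "gte": "now-1h", "lte": "now" } } }
        ]
      }
    },
    "sort": [ { "@timestamp": { "order": "desc" } } ],
    "size": 100
  }

Exposing only canned shapes like this (keyword, module, level, time range) rather than arbitrary DSL is a common design choice for a log API, since it keeps queries bounded and predictable — which fits the answer above that only some calls are converted directly into ES requests.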
Venus Project Registry Venus library: https://opendev.org/inspur/venus You can grab the source code using the following git command: git clone https://opendev.org/inspur/venus.git Venus Demo Youtu.be: https://youtu.be/mE2MoEx3awM -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 24507 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 3184 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 8136 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 15944 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.jpg Type: image/jpeg Size: 8405 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.jpg Type: image/jpeg Size: 3046 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image007.jpg Type: image/jpeg Size: 15175 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image008.jpg Type: image/jpeg Size: 8496 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3786 bytes Desc: not available URL: From bshewale at redhat.com Fri Jan 15 13:47:09 2021 From: bshewale at redhat.com (Bhagyashri Shewale) Date: Fri, 15 Jan 2021 19:17:09 +0530 Subject: [tripleo] TripleO CI Summary: Unified Sprint 37 and 38 Message-ID: Greetings, The TripleO CI team has just completed **Unified Sprint 36 and 37** (Dec 04 to Jan 13). The following is a summary of completed work during this sprint cycle: - Component standalone-upgrade added to tripleo component: 1. 
https://review.rdoproject.org/zuul/builds?job_name=periodic-tripleo-ci-centos-8-standalone-upgrade-tripleo-ussuri&job_name=periodic-tripleo-ci-centos-8-standalone-upgrade-tripleo-victoria&job_name=periodic-tripleo-ci-centos-8-standalone-upgrade-tripleo-master - Docs https://review.opendev.org/c/openstack/tripleo-docs/+/767375 https://docs.openstack.org/tripleo-docs/latest/ci/stages-overview.html - Reduce content-providers in upgrade job layouts: https://review.opendev.org/q/topic:reduce-content-providers - Improve tripleo-upgrade repo job coverage: https://review.opendev.org/q/topic:tripleo-upgrade-provider-jobs-coverage - (content providers) remove registry_ip_address: https://bugs.launchpad.net/tripleo/+bug/1904565 https://review.opendev.org/c/openstack/tripleo-ci/+/764359 - Build report for container builds and content provider jobs: https://review.opendev.org/q/topic:%2522build-report%2522+(status:merged) - Added a build report for the container builds and content provider jobs: https://review.opendev.org/q/topic:%2522build-report%2522+(status:merged) - Promoter configuration fix: https://review.rdoproject.org/r/#/c/28014/ - rpm diff comparison script added to dependency jobs logs: https://github.com/rdo-infra/ci-config/commit/bf829ed2b7bda23d85d5128907147403e8f1e1b4#diff-c946e763e9253e2ac6aa5b8a237dc956edd2dc583fa6578ad74ce53a363c0446 - centos 8 stream: - rpm comparison skip list was updated with release delivery's help: https://github.com/rdo-infra/ci-config/commit/ca144362ee520dbd6e89f1e23dcb8cb02979d09c - branch jobs were added: victoria, ussuri, train - dependency lines now run with a controller job for rpm comparisons - working to get the stream jobs running on stream nodes https://bugs.launchpad.net/tripleo/+bug/1910791 https://review.rdoproject.org/zuul/builds?pipeline=openstack-dependencies-centos8stream - Provisioning via metalsmith in 16.2 and created a new job for it in 16.2 component pipeline https://review.opendev.org/c/openstack/tripleo-ci/+/769469 - Stable version of scenario manager. Apart from that, https://review.opendev.org/c/openstack/tempest/+/766472 https://review.opendev.org/c/openstack/tempest/+/766015 - Writing unit test cases for the cockpit (telegraf container and moving scipts from py2 to py3 directory). https://review.rdoproject.org/r/#/c/30492/ - Done https://review.rdoproject.org/r/#/c/31442/ - Work in progress - Working on Promoter Stuff (Get familiar with the new promoter code). https://review.rdoproject.org/r/#/c/31318/ - Done - Released 1.2.1 version of Openstack Ansible collections: https://opendev.org/openstack/ansible-collections-openstack - Released 1.4.0 and 1.4.1 versions of Podman Ansible collections: https://github.com/containers/ansible-podman-collections - Created OSP 13 and OSP 16.2 jobs in CI that run Openstack Ansible collections tests to ensure collections work on these OSP versions. 
- Enable dracut list installed modules https://review.opendev.org/c/openstack/diskimage-builder/+/766232 - Collect dnf module related infos - https://review.opendev.org/c/openstack/ansible-role-collect-logs/+/768595 - still working on finishing copy-quay role: https://review.rdoproject.org/r/#/c/31395/ - Bugs related to os_tempest that is affecting upgrade jobs: https://bugs.launchpad.net/tripleo/+bug/1911020 - add test command on tempest-skip list and documentation https://review.opendev.org/c/openstack/openstack-tempest-skiplist/+/754994 Upstream promotions [1] NEXT SPRINT =========== - Bring the new promoter code online - Dependency pipeline - tripleo repos (As a design discussion for this sprint) - openstack Health for TripleO - Ruck and Rover - Akahat|rover - sshnaidm|ruck Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. [1]: http://dashboard-ci.tripleo.org/d/HkOLImOMk/upstream-and-rdo-promotions?orgId=1 Sprint 37 and 38 ruck/rover notes: https://hackmd.io/R0kCgz_7SHSix_cNgoC9pg Thanks and Regards, Bhagyashri Shewale -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri Jan 15 15:23:12 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 15 Jan 2021 17:23:12 +0200 Subject: [tripleo] next irc meeting Tuesday Jan 19 @ 1400 UTC in #tripleo Message-ID: Reminder that the next TripleO irc meeting is: ** Tuesday 19th January at 1400 UTC in #tripleo ** ** https://wiki.openstack.org/wiki/Meetings/TripleO ** ** https://etherpad.opendev.org/p/tripleo-meeting-items ** Please add anything you want to hilight at https://etherpad.opendev.org/p/tripleo-meeting-items This can include recently completed things, ongoing review requests, blocking issues, or anything else tripleo you'd like to share. Our last meeting was on Jan 05 - you can find the logs there http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-01-05-14.00.html Hope you can make it on Tuesday, thanks, marios -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri Jan 15 15:52:21 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 15 Jan 2021 17:52:21 +0200 Subject: [TripleO] moving stable/rocky for tripleo repos to unmaintained (+ then EOL) OK? In-Reply-To: References: Message-ID: On Thu, Dec 10, 2020 at 7:01 PM Marios Andreou wrote: > Hello TripleO > > I would like to propose that we move all tripleo stable/rocky repos [1] to > "unmaintained", with a view to tagging as end-of-life in due course. > > This will allow us to focus our efforts on keeping the check and gate > queues green and continue to deliver weekly promotions for the more recent > and active stable/* branches train ussuri victoria and master. > > The stable/rocky repos have not had much action in the last few months - I > collected some info at [2] about the most recent stable/rocky commits for > each of the tripleo repos. For many of those there are no commits in the > last 6 months and for some even longer. > > The tripleo stable/rocky repos were tagged as "extended maintenance" > (rocky-em) [2] in April 2020 with [3]. > > We have already reduced our CI commitment for rocky - these [4] are the > current check/gate jobs and these [5] are the jobs that run for promotion > to current-tripleo. However maintaining this doesn’t make sense if we are > not even using it e.g. merging things into tripleo-* stable/rocky. 
> > Please raise your objections or any other comments or thoughts about this. > Unless there are any blockers raised here, the plan is to put this into > motion early in January. > > One still unanswered question I have is that since there is no > ‘unmaintained’ tag, in the same way as we have the -em or > for extended maintenance and end-of-life, do we simply > _declare_ that the repos are unmaintained? Then after a period of “0 to 6 > months” per [6] we can tag the tripleo repos with rocky-eol. If any one > reading this knows please tell us! > > o/ hello ! replying to bump the thread - this was sent ~1 month ago now and there hasn't been any comment thus far. ping @Herve please do you know the answer to that question in the last paragraph above about 'declaring unmaintained' ? please thank you ;) As discussed at the last tripleo bi-weekly we can consider moving forward with this so I think it's prudent to give folks more chance to comment if they object for _reason_ thanks, marios > Thanks for reading! > > regards, marios > > > [1] https://releases.openstack.org/teams/tripleo.html#rocky > > [2] http://paste.openstack.org/raw/800464/ > > [3] https://review.opendev.org/c/openstack/releases/+/709912 > > [4] > http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&var-influxdb_filter=branch%7C%3D%7Cstable%2Frocky > > [5] > http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&fullscreen&panelId=9&var-influxdb_filter=type%7C%3D%7Crdo&var-influxdb_filter=job_name%7C%3D~%7C%2Fperiodic.*-rocky%2F > > [6] > https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Jan 15 16:06:12 2021 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 15 Jan 2021 10:06:12 -0600 Subject: [oslo][release] etcd3gw release model - series vs independent In-Reply-To: References: Message-ID: <20210115160612.GA2966915@sm-workstation> On Fri, Jan 15, 2021 at 02:55:18PM +0100, Radosław Piliszek wrote: > On Fri, Jan 15, 2021 at 8:56 AM Herve Beraud wrote: > > > > Greetings Osloers, > > > > etcd3gw was moved under the Oslo team's scope a few months ago [1][2]. > > > > This project hasn't yet been released by us and we should choose to release it either during Wallaby or to make it an independent [3] project. > > > > I didn't see significant changes merged since the project transition, so I think we could directly adopt the independent model for etcd3gw. > > Based on the current status, I would say the independent release model > would suit it best. > > -yoctozepto > +1 for independent. That makes sense for this type of library. From hberaud at redhat.com Fri Jan 15 17:01:24 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 15 Jan 2021 18:01:24 +0100 Subject: [TripleO] moving stable/rocky for tripleo repos to unmaintained (+ then EOL) OK? In-Reply-To: References: Message-ID: Hello, Sorry for my late reply, and thanks for the heads up. Can't we move directly to EOL [1]? I don't see reason to keep an unmaintained repo open, and if the repos remain open and patches merged then it's not really unmaintained repos. The goal of the extended maintenance was to offer more chances to downstream maintainers to get/share patches and fixes, if you decide to not maintain them anymore then I would suggest you to move to "EOL" directly, it would be less misleading. 
Notice that if you move rocky to eol all the corresponding branches will be dropped in your repos. Also notice that last week we proposed a new kind of tag (-last) [2][3] for Tempest's needs, but because tempest is branchless... Maybe we could extend this notion (-last) to allow the project to reflect the last step... It could reflect that it will be your last release, and that the project is near from the end. But if you don't plan to merge patches, or if you don't have patches to merge anymore, then I would really suggest to you to move directly to EOL, else it means that you're not really "unmaintained". Hopefully it will help you to find the solution that fits your needs. Let me know if you have more questions. [1] https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life [2] https://review.opendev.org/c/openstack/releases/+/770265 [3] https://review.opendev.org/c/openstack/project-team-guide/+/769821 Le ven. 15 janv. 2021 à 16:52, Marios Andreou a écrit : > > On Thu, Dec 10, 2020 at 7:01 PM Marios Andreou wrote: > >> Hello TripleO >> >> I would like to propose that we move all tripleo stable/rocky repos [1] >> to "unmaintained", with a view to tagging as end-of-life in due course. >> >> This will allow us to focus our efforts on keeping the check and gate >> queues green and continue to deliver weekly promotions for the more recent >> and active stable/* branches train ussuri victoria and master. >> >> The stable/rocky repos have not had much action in the last few months - >> I collected some info at [2] about the most recent stable/rocky commits for >> each of the tripleo repos. For many of those there are no commits in the >> last 6 months and for some even longer. >> >> The tripleo stable/rocky repos were tagged as "extended maintenance" >> (rocky-em) [2] in April 2020 with [3]. >> >> We have already reduced our CI commitment for rocky - these [4] are the >> current check/gate jobs and these [5] are the jobs that run for promotion >> to current-tripleo. However maintaining this doesn’t make sense if we are >> not even using it e.g. merging things into tripleo-* stable/rocky. >> >> Please raise your objections or any other comments or thoughts about >> this. Unless there are any blockers raised here, the plan is to put this >> into motion early in January. >> >> One still unanswered question I have is that since there is no >> ‘unmaintained’ tag, in the same way as we have the -em or >> for extended maintenance and end-of-life, do we simply >> _declare_ that the repos are unmaintained? Then after a period of “0 to 6 >> months” per [6] we can tag the tripleo repos with rocky-eol. If any one >> reading this knows please tell us! >> >> > o/ hello ! > > replying to bump the thread - this was sent ~1 month ago now and there > hasn't been any comment thus far. > > ping @Herve please do you know the answer to that question in the last > paragraph above about 'declaring unmaintained' ? please thank you ;) > > As discussed at the last tripleo bi-weekly we can consider moving forward > with this so I think it's prudent to give folks more chance to comment if > they object for _reason_ > > thanks, marios > > > > > > >> Thanks for reading! 
>> >> regards, marios >> >> >> [1] https://releases.openstack.org/teams/tripleo.html#rocky >> >> [2] http://paste.openstack.org/raw/800464/ >> >> [3] https://review.opendev.org/c/openstack/releases/+/709912 >> >> [4] >> http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&var-influxdb_filter=branch%7C%3D%7Cstable%2Frocky >> >> [5] >> http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&fullscreen&panelId=9&var-influxdb_filter=type%7C%3D%7Crdo&var-influxdb_filter=job_name%7C%3D~%7C%2Fperiodic.*-rocky%2F >> >> [6] >> https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases >> > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Fri Jan 15 19:36:30 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 15 Jan 2021 13:36:30 -0600 Subject: [all][interop] Reforming the refstack maintainers team In-Reply-To: References: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> Message-ID: <177078c987c.c394eaa31331699.3566137682643757540@ghanshyammann.com> Thanks for continuing or coming forward for help. I have done the required modification and below is the current core group to handle day to day activity for interop source code repo: Amy Marrich amy at demarco.com Ghanshyam gmann at ghanshyammann.com Goutham Pacha Ravi gouthampravi at gmail.com Luigi Toscano ltoscano at redhat.com Martin Kopec mkopec at redhat.com Thierry Carrez thierry at openstack.org (ttx mentioned not having enough time for core roles but I kept him as it will be helpful for issue reporting and merging the things faster) Vida Haririan vhariria at redhat.com Chandan Kumar chkumar at redhat.com Also added a project-config change to combine the core group acl file - https://review.opendev.org/c/openstack/project-config/+/771066 -gmann ---- On Tue, 15 Dec 2020 08:53:54 -0600 Amy Marrich wrote ---- > Let me know if I can help. > Thanks, > Amy (spotz) > > On Fri, Dec 11, 2020 at 1:25 PM Ghanshyam Mann wrote: > Hello Everyone, > > As Goutham mentioned in a separate ML thread[2] that there is no active maintainer for refstack repo > which we discussed in today's interop meeting[1]. We had a few volunteers who can help to maintain the > refstack and other interop repo which is good news. > > I would like to call for more volunteers (new or existing ones), if you are interested to help please do reply > to this email. The role is to maintain the source code of the below repos. I will propose the ACL changes in infra sometime > next Friday (18th dec) or so. 
> > For easy maintenance, we thought of merging the below repo core group into a single group called 'refstack-core' > > - openstack/python-tempestconf > - openstack/refstack > - openstack/refstack-client > - x/ansible-role-refstack-client (moving to osf/ via https://review.opendev.org/765787) > > Current Volunteers: > - martin (mkopec at redhat.com) > - gouthamr (gouthampravi at gmail.com) > - gmann (gmann at ghanshyammann.com) > - Vida (vhariria at redhat.com) > > - interop-core (we will add this group also which has interop WG chairs so that it will be easy to maintain in the future changes) > > NOTE: there is no change in the 'interop' repo group which has interop guidelines and doc etc. > > [1] https://etherpad.opendev.org/p/interop > [2] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019263.html > > -gmann > > From Arkady.Kanevsky at dell.com Fri Jan 15 20:45:40 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 15 Jan 2021 20:45:40 +0000 Subject: [all] https://github.com/OpenStackweb/openstack-org Message-ID: Looking at out current website code I wonder if we should 1. Reword to be OpenInfra not just OpenStack or even move to new repo for OpenInfraWeb? 2. Keep it to be dedicated to OpenStack and create a parent Web for OpenInfra. That parent page will point to all projects under OpenInftastructure umbrella? In related question, should we also create openinfra-discuss mail list? Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell EMC office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri Jan 15 20:48:17 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 15 Jan 2021 20:48:17 +0000 Subject: [all][interop] Reforming the refstack maintainers team In-Reply-To: <177078c987c.c394eaa31331699.3566137682643757540@ghanshyammann.com> References: <1765341f106.111a3f243109782.5942668683123760803@ghanshyammann.com> <177078c987c.c394eaa31331699.3566137682643757540@ghanshyammann.com> Message-ID: Thanks for driving it Ghanshyam. I will help where I can. Arkady -----Original Message----- From: Ghanshyam Mann Sent: Friday, January 15, 2021 1:37 PM To: Amy Marrich Cc: openstack-discuss; foundation-board at lists.openstack.org Subject: Re: [all][interop] Reforming the refstack maintainers team [EXTERNAL EMAIL] Thanks for continuing or coming forward for help. I have done the required modification and below is the current core group to handle day to day activity for interop source code repo: Amy Marrich amy at demarco.com Ghanshyam gmann at ghanshyammann.com Goutham Pacha Ravi gouthampravi at gmail.com Luigi Toscano ltoscano at redhat.com Martin Kopec mkopec at redhat.com Thierry Carrez thierry at openstack.org (ttx mentioned not having enough time for core roles but I kept him as it will be helpful for issue reporting and merging the things faster) Vida Haririan vhariria at redhat.com Chandan Kumar chkumar at redhat.com Also added a project-config change to combine the core group acl file - https://review.opendev.org/c/openstack/project-config/+/771066 -gmann ---- On Tue, 15 Dec 2020 08:53:54 -0600 Amy Marrich wrote ---- > Let me know if I can help. 
> Thanks, > Amy (spotz) > > On Fri, Dec 11, 2020 at 1:25 PM Ghanshyam Mann wrote: > Hello Everyone, > > As Goutham mentioned in a separate ML thread[2] that there is no active maintainer for refstack repo > which we discussed in today's interop meeting[1]. We had a few volunteers who can help to maintain the > refstack and other interop repo which is good news. > > I would like to call for more volunteers (new or existing ones), if you are interested to help please do reply > to this email. The role is to maintain the source code of the below repos. I will propose the ACL changes in infra sometime > next Friday (18th dec) or so. > > For easy maintenance, we thought of merging the below repo core group into a single group called 'refstack-core' > > - openstack/python-tempestconf > - openstack/refstack > - openstack/refstack-client > - x/ansible-role-refstack-client (moving to osf/ via https://review.opendev.org/765787) > > Current Volunteers: > - martin (mkopec at redhat.com) > - gouthamr (gouthampravi at gmail.com) > - gmann (gmann at ghanshyammann.com) > - Vida (vhariria at redhat.com) > > - interop-core (we will add this group also which has interop WG chairs so that it will be easy to maintain in the future changes) > > NOTE: there is no change in the 'interop' repo group which has interop guidelines and doc etc. > > [1] https://etherpad.opendev.org/p/interop > [2] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019263.html > > -gmann > > From jimmy at openstack.org Fri Jan 15 23:00:08 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 15 Jan 2021 17:00:08 -0600 Subject: [all] https://github.com/OpenStackweb/openstack-org In-Reply-To: References: Message-ID: <4DC37310-4C2A-4502-8C1F-42622768585B@getmailspring.com> Hi Arkady, Renaming the repo is on the list. We have quite a few pieces we're still working to clean up (repos, legal, etc...) as we transfer to the new Foundation name. When you say "parent web", I think you're referring to a parent repo with the name openinfra, which would be taken care of by the above. But we do already have https://openinfra.dev which points to all of our projects and such. Re: the ml - we already have a Foundation and Foundation Board ml, so I'm not sure any renaming or new ml is necessary on that front. If you feel those won't cover it, can you let us know what you have in mind? Cheers, Jimmy On Jan 15 2021, at 2:45 pm, Kanevsky, Arkady wrote: > > Looking at out current website code I wonder if we should > > Reword to be OpenInfra not just OpenStack or even move to new repo for OpenInfraWeb? > Keep it to be dedicated to OpenStack and create a parent Web for OpenInfra. That parent page will point to all projects under OpenInftastructure umbrella? > > > > > In related question, should we also create openinfra-discuss mail list? > > Thanks, > Arkady Kanevsky, Ph.D. > SP Chief Technologist & DE > Dell EMC office of CTO > Dell Inc. One Dell Way, MS PS2-91 > Round Rock, TX 78682, USA > Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Jan 16 16:41:17 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 16 Jan 2021 16:41:17 +0000 Subject: bindep, pacman, and PyPI release In-Reply-To: References: Message-ID: <20210116164116.gqn6stp7kyvli4ao@yuggoth.org> On 2020-12-13 11:43:35 +0100 (+0100), Jakob Lykke Andersen wrote: > I'm trying to use bindep on Arch but hitting a problem with the > output handling. 
It seems that 'pacman -Q' may output warning > lines if you have a file with the same name as the package. E.g.,: > > $ mkdir cmake > $ pacman -Q cmake > error: package 'cmake' was not found > warning: 'cmake' is a file, you might want to use -p/--file. > > The current detection assumes the "was not found" is the last part > of the output. Applying the patch below seems to fix it. Apologies for taking this long to reply, I've proposed the patch you submitted here and listed you as co-author: https://review.opendev.org/771108 > After applying this patch, or whichever change you deem reasonable > to fix the issue, it would be great if you could make a new > release on PyPI. [...] Yes, once that patch is approved by other reviewers for bindep, I'll make sure to tag a new release so it will appear on PyPI. Thanks for providing this fix! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From zigo at debian.org Sat Jan 16 21:06:09 2021 From: zigo at debian.org (Thomas Goirand) Date: Sat, 16 Jan 2021 22:06:09 +0100 Subject: [nova] MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION bumps in Wallaby In-Reply-To: <20210115093723.q2uqabf2emedxc2z@lyarwood-laptop.usersys.redhat.com> References: <20210115093723.q2uqabf2emedxc2z@lyarwood-laptop.usersys.redhat.com> Message-ID: On 1/15/21 10:37 AM, Lee Yarwood wrote: > > Hello all, > > We are once again looking to increase the minimum and next minimum > versions of libvirt and QEMU in the libvirt virt driver. > > These were previously increased late in the Victoria cycle: > > MIN_LIBVIRT_VERSION = (5, 0, 0) > MIN_QEMU_VERSION = (4, 0, 0) > > NEXT_MIN_LIBVIRT_VERSION = (6, 0, 0) > NEXT_MIN_QEMU_VERSION = (4, 2, 0) > > libvirt: Bump MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION > https://review.opendev.org/c/openstack/nova/+/746981 > > I am now proposing the following updates to these versions in Wallaby: > > MIN_LIBVIRT_VERSION = (6, 0, 0) > MIN_QEMU_VERSION = (4, 2, 0) > > NEXT_MIN_LIBVIRT_VERSION = (7, 0, 0) > NEXT_MIN_QEMU_VERSION = (5, 2, 0) > > libvirt: Bump MIN_{LIBVIRT,QEMU}_VERSION and NEXT_MIN_{LIBVIRT,QEMU}_VERSION > https://review.opendev.org/c/openstack/nova/+/754700 > > libvirt v6.0.0 and QEMU 4.2.0 being supported by the three LTS releases > supported by the Wallaby release of OpenStack [1][2]. > > libvirt v7.0.0 and QEMU 5.2.0 being slightly aspirational for the next > minimum versions at this point but given current releases for both > projects they seem appropriate for the time being. > > Please let me know ideally in the review if there are any issues with > this increase. > > Regards, Hi, Wallaby will be on Debian Bullseye, for which the freeze has started, and which already has libvirt 6.9.0 and qemu 5.2, so no worries for me. Libvirt 7 would be a problem though. Cheers, Thomas Goirand (zigo) From mailakkina at gmail.com Sun Jan 17 10:51:56 2021 From: mailakkina at gmail.com (Nagaraj Akkina) Date: Sun, 17 Jan 2021 11:51:56 +0100 Subject: [$nova] [$horizon] openstack host show statistics Message-ID: Hello, Openstack host show compute-host, shows more memory allocated to instances than actual physical memory, although the memory allocation ratio is 1.0 in nova.conf, the same is also shown in horizon dashboard hypervisor statistics. We can see this in many compute nodes in the cluster. Also these statistics does not match with free -g or cat /proc/meminfo. 
For Example : Memory_mb | compute-node23 | (total) | 257915 | | compute-node23 | (used_now) | 271360 | | compute-node23 | (used_max) | 267264 | How to avoid misleading statistics, also which statistics are actually correct? versions : openstack : Stein nova : 2:19.3.0-0ubuntu1~cloud0 Regards, Akkina -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Mon Jan 18 09:21:43 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 18 Jan 2021 11:21:43 +0200 Subject: [TripleO] moving stable/rocky for tripleo repos to unmaintained (+ then EOL) OK? In-Reply-To: References: Message-ID: On Fri, Jan 15, 2021 at 7:01 PM Herve Beraud wrote: > Hello, > > Sorry for my late reply, and thanks for the heads up. > > Can't we move directly to EOL [1]? > I don't see reason to keep an unmaintained repo open, and if the repos > remain open and patches merged then it's not really unmaintained repos. > > The goal of the extended maintenance was to offer more chances to > downstream maintainers to get/share patches and fixes, if you decide to not > maintain them anymore then I would suggest you to move to "EOL" directly, > it would be less misleading. > > Notice that if you move rocky to eol all the corresponding branches will > be dropped in your repos. > > Also notice that last week we proposed a new kind of tag (-last) > [2][3] for Tempest's needs, but because tempest is branchless... > > Maybe we could extend this notion (-last) to allow the project to reflect > the last step... > It could reflect that it will be your last release, and that the project > is near from the end. > > But if you don't plan to merge patches, or if you don't have patches to > merge anymore, then I would really suggest to you to move directly to EOL, > else it means that you're not really "unmaintained". > OK thanks very much Herve as always for your time and thoughts here. I am not against the EOL I just thought it was more of a requirement to declare it 'unmaintained' first. The advantage is it is a softer path to completely closing it off for any future submissions. Possibly the '-last' tag fits this need but if I have understood correctly it might need some adjustment to the definition of that tag ('we could extend this notion') and honestly I don't know if it is necessary. More likely straight to EOL is what we want here. I will bring this up again in tomorrow's tripleo irc meeting http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019859.html and point to this thread. Let's see what other opinions there are around EOL vs -last for stable/rocky thank you, marios > > Hopefully it will help you to find the solution that fits your needs. > > Let me know if you have more questions. > > [1] > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life > [2] https://review.opendev.org/c/openstack/releases/+/770265 > [3] https://review.opendev.org/c/openstack/project-team-guide/+/769821 > > Le ven. 15 janv. 2021 à 16:52, Marios Andreou a > écrit : > >> >> On Thu, Dec 10, 2020 at 7:01 PM Marios Andreou wrote: >> >>> Hello TripleO >>> >>> I would like to propose that we move all tripleo stable/rocky repos [1] >>> to "unmaintained", with a view to tagging as end-of-life in due course. >>> >>> This will allow us to focus our efforts on keeping the check and gate >>> queues green and continue to deliver weekly promotions for the more recent >>> and active stable/* branches train ussuri victoria and master. 
>>> >>> The stable/rocky repos have not had much action in the last few months - >>> I collected some info at [2] about the most recent stable/rocky commits for >>> each of the tripleo repos. For many of those there are no commits in the >>> last 6 months and for some even longer. >>> >>> The tripleo stable/rocky repos were tagged as "extended maintenance" >>> (rocky-em) [2] in April 2020 with [3]. >>> >>> We have already reduced our CI commitment for rocky - these [4] are the >>> current check/gate jobs and these [5] are the jobs that run for promotion >>> to current-tripleo. However maintaining this doesn’t make sense if we are >>> not even using it e.g. merging things into tripleo-* stable/rocky. >>> >>> Please raise your objections or any other comments or thoughts about >>> this. Unless there are any blockers raised here, the plan is to put this >>> into motion early in January. >>> >>> One still unanswered question I have is that since there is no >>> ‘unmaintained’ tag, in the same way as we have the -em or >>> for extended maintenance and end-of-life, do we simply >>> _declare_ that the repos are unmaintained? Then after a period of “0 to 6 >>> months” per [6] we can tag the tripleo repos with rocky-eol. If any one >>> reading this knows please tell us! >>> >>> >> o/ hello ! >> >> replying to bump the thread - this was sent ~1 month ago now and there >> hasn't been any comment thus far. >> >> ping @Herve please do you know the answer to that question in the last >> paragraph above about 'declaring unmaintained' ? please thank you ;) >> >> As discussed at the last tripleo bi-weekly we can consider moving forward >> with this so I think it's prudent to give folks more chance to comment if >> they object for _reason_ >> >> thanks, marios >> >> >> >> >> >> >>> Thanks for reading! >>> >>> regards, marios >>> >>> >>> [1] https://releases.openstack.org/teams/tripleo.html#rocky >>> >>> [2] http://paste.openstack.org/raw/800464/ >>> >>> [3] https://review.opendev.org/c/openstack/releases/+/709912 >>> >>> [4] >>> http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&var-influxdb_filter=branch%7C%3D%7Cstable%2Frocky >>> >>> [5] >>> http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&fullscreen&panelId=9&var-influxdb_filter=type%7C%3D%7Crdo&var-influxdb_filter=job_name%7C%3D~%7C%2Fperiodic.*-rocky%2F >>> >>> [6] >>> https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases >>> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hberaud at redhat.com Mon Jan 18 10:44:32 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 18 Jan 2021 11:44:32 +0100 Subject: [TripleO] moving stable/rocky for tripleo repos to unmaintained (+ then EOL) OK? In-Reply-To: References: Message-ID: Hello Le lun. 18 janv. 2021 à 10:22, Marios Andreou a écrit : > > > On Fri, Jan 15, 2021 at 7:01 PM Herve Beraud wrote: > >> Hello, >> >> Sorry for my late reply, and thanks for the heads up. >> >> Can't we move directly to EOL [1]? >> I don't see reason to keep an unmaintained repo open, and if the repos >> remain open and patches merged then it's not really unmaintained repos. >> >> The goal of the extended maintenance was to offer more chances to >> downstream maintainers to get/share patches and fixes, if you decide to not >> maintain them anymore then I would suggest you to move to "EOL" directly, >> it would be less misleading. >> >> Notice that if you move rocky to eol all the corresponding branches will >> be dropped in your repos. >> >> Also notice that last week we proposed a new kind of tag (-last) >> [2][3] for Tempest's needs, but because tempest is branchless... >> >> Maybe we could extend this notion (-last) to allow the project to reflect >> the last step... >> It could reflect that it will be your last release, and that the project >> is near from the end. >> >> But if you don't plan to merge patches, or if you don't have patches to >> merge anymore, then I would really suggest to you to move directly to EOL, >> else it means that you're not really "unmaintained". >> > > OK thanks very much Herve as always for your time and thoughts here. I am > not against the EOL I just thought it was more of a requirement to declare > it 'unmaintained' first. The advantage is it is a softer path to completely > closing it off for any future submissions. Possibly the '-last' tag fits > this need but if I have understood correctly it might need some adjustment > to the definition of that tag ('we could extend this notion') and honestly > I don't know if it is necessary. More likely straight to EOL is what we > want here. > You're welcome, Do not hesitate to ping us. Concerning the "-last", I said that we surely need to extend this kind of tag because it is referenced for tempest's usages. I think EOL fits our needs and is the shortest path. > I will bring this up again in tomorrow's tripleo irc meeting > http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019859.html > and point to this thread. Let's see what other opinions there are around > EOL vs -last for stable/rocky > Same thing on my side, I added this topic to our next relmgt irc meeting (thursday) to see opinions of the team. > thank you, marios > Thank you > > > >> >> Hopefully it will help you to find the solution that fits your needs. >> >> Let me know if you have more questions. >> >> [1] >> https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life >> [2] https://review.opendev.org/c/openstack/releases/+/770265 >> [3] https://review.opendev.org/c/openstack/project-team-guide/+/769821 >> >> Le ven. 15 janv. 2021 à 16:52, Marios Andreou a >> écrit : >> >>> >>> On Thu, Dec 10, 2020 at 7:01 PM Marios Andreou >>> wrote: >>> >>>> Hello TripleO >>>> >>>> I would like to propose that we move all tripleo stable/rocky repos [1] >>>> to "unmaintained", with a view to tagging as end-of-life in due course. 
>>>> >>>> This will allow us to focus our efforts on keeping the check and gate >>>> queues green and continue to deliver weekly promotions for the more recent >>>> and active stable/* branches train ussuri victoria and master. >>>> >>>> The stable/rocky repos have not had much action in the last few months >>>> - I collected some info at [2] about the most recent stable/rocky commits >>>> for each of the tripleo repos. For many of those there are no commits in >>>> the last 6 months and for some even longer. >>>> >>>> The tripleo stable/rocky repos were tagged as "extended maintenance" >>>> (rocky-em) [2] in April 2020 with [3]. >>>> >>>> We have already reduced our CI commitment for rocky - these [4] are the >>>> current check/gate jobs and these [5] are the jobs that run for promotion >>>> to current-tripleo. However maintaining this doesn’t make sense if we are >>>> not even using it e.g. merging things into tripleo-* stable/rocky. >>>> >>>> Please raise your objections or any other comments or thoughts about >>>> this. Unless there are any blockers raised here, the plan is to put this >>>> into motion early in January. >>>> >>>> One still unanswered question I have is that since there is no >>>> ‘unmaintained’ tag, in the same way as we have the -em or >>>> for extended maintenance and end-of-life, do we simply >>>> _declare_ that the repos are unmaintained? Then after a period of “0 to 6 >>>> months” per [6] we can tag the tripleo repos with rocky-eol. If any one >>>> reading this knows please tell us! >>>> >>>> >>> o/ hello ! >>> >>> replying to bump the thread - this was sent ~1 month ago now and there >>> hasn't been any comment thus far. >>> >>> ping @Herve please do you know the answer to that question in the last >>> paragraph above about 'declaring unmaintained' ? please thank you ;) >>> >>> As discussed at the last tripleo bi-weekly we can consider moving >>> forward with this so I think it's prudent to give folks more chance to >>> comment if they object for _reason_ >>> >>> thanks, marios >>> >>> >>> >>> >>> >>> >>>> Thanks for reading! 
>>>> >>>> regards, marios >>>> >>>> >>>> [1] https://releases.openstack.org/teams/tripleo.html#rocky >>>> >>>> [2] http://paste.openstack.org/raw/800464/ >>>> >>>> [3] https://review.opendev.org/c/openstack/releases/+/709912 >>>> >>>> [4] >>>> http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&var-influxdb_filter=branch%7C%3D%7Cstable%2Frocky >>>> >>>> [5] >>>> http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&fullscreen&panelId=9&var-influxdb_filter=type%7C%3D%7Crdo&var-influxdb_filter=job_name%7C%3D~%7C%2Fperiodic.*-rocky%2F >>>> >>>> [6] >>>> https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases >>>> >>> >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Mon Jan 18 14:15:46 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Mon, 18 Jan 2021 14:15:46 +0000 Subject: [$nova] [$horizon] openstack host show statistics In-Reply-To: References: Message-ID: On Sun, 2021-01-17 at 11:51 +0100, Nagaraj Akkina wrote: > Hello, > > Openstack host show compute-host, shows more memory allocated to instances > than actual physical memory, although the memory allocation ratio is 1.0 in > nova.conf, the same is also shown in horizon dashboard hypervisor statistics. > We can see this in many compute nodes in the cluster. Also these statistics > does not match with free -g or cat /proc/meminfo. 
> > For Example :                                  Memory_mb > | compute-node23 | (total)                     |  257915 |     > | compute-node23 | (used_now)           |  271360 |     > | compute-node23 | (used_max)           |  267264 |     > > > How to avoid misleading statistics, also which statistics are actually > correct? > > versions : >  openstack : Stein >  nova : 2:19.3.0-0ubuntu1~cloud0 > > Regards, > Akkina  As you've noted, these statistics are generally misleading or outright incorrect. They're so often wrong, in fact, that we have removed them from the API starting with the 2.88 API microversion, which is being introduced in the next release (Wallaby). There are two suggestions. Firstly, Placement provides far better information as to the state of all the things nova knows about and should be used instead. You can find examples of how to replace this using the following link: https://specs.openstack.org/openstack/nova-specs/specs/wallaby/approved/modernize-os-hypervisors-api.html Secondly, the fact that you have overcommit enabled and are still seeing this happen suggests you don't have enough memory reserved for non-VM processes and other overhead. This is configured using the '[DEFAULT] reserved_host_memory_mb', config option, which defaults to a mere 512MB. You may need to increase this, basing your values on the average memory consumption of the host with and without load. For reference, the way that 'memory_mb_used' is calculated is by subtracting the total amount free memory reported via '/proc/meminfo' from the total amount of memory reported by libvirt. Hope this helps, Stephen -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Mon Jan 18 16:24:27 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Mon, 18 Jan 2021 17:24:27 +0100 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion Message-ID: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> Hello Openstack-Discuss, I was wondering what the right approach to clean up all resources of a project was in order for it to be deleted? 1) There is ospurge (https://opendev.org/x/ospurge) which works, but seems kind of abandoned looking at the last commits and pending issues (https://bugs.launchpad.net/ospurge). Will there still be any further development of this tool? 2) Then there is the project purge command (https://docs.openstack.org/python-openstackclient/victoria/cli/command-objects/project-purge.html) introduced with the Pike release (https://docs.openstack.org/releasenotes/python-openstackclient/pike.html#relnotes-3-12-0-stable-pike // https://bugs.launchpad.net/python-openstackclient/+bug/1584596 // https://review.opendev.org/c/openstack/python-openstackclient/+/367673/). While it seems only natural to have the cleanup / purge functionality built into the openstack client, I was wondering  .. . a) if all potentially existing resources types are covered and this can fully replace ospurge to allow any project to be cleaned up and deleted? What about i.e. Networks, Containers or objects? 
b) Also ospurge seems to still have some extended options to target or exclude certain resources by using the following options: > [--resource > {Backups,Snapshots,Volumes,Zones,Images,Stacks,FloatingIPs,RouterInterfaces,Routers,Ports,Networks,SecurityGroups,Servers,LoadBalancers,Receivers,Policies,Clusters,Profiles,Objects,Containers}] > [--exclude-resource > {Backups,Snapshots,Volumes,Zones,Images,Stacks,FloatingIPs,RouterInterfaces,Routers,Ports,Networks,SecurityGroups,Servers,LoadBalancers,Receivers,Policies,Clusters,Profiles,Objects,Containers}] those options are not (yet) available for the "project purge" command of the openstack client. With regards, Christian From artem.goncharov at gmail.com Mon Jan 18 17:10:49 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Mon, 18 Jan 2021 18:10:49 +0100 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion In-Reply-To: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> Message-ID: Hi, Please check https://review.opendev.org/c/openstack/python-openstackclient/+/734485 out. This is a followup of the ospurge, which bases on the project cleanup functionality built into OpenStackSDK. Idea is to replace “openstack project purge” functionality to completely rely on SDK only. Please also leave comments whether this works for you or not. Regards, Artem > On 18. Jan 2021, at 17:24, Christian Rohmann wrote: > > Hello Openstack-Discuss, > > > I was wondering what the right approach to clean up all resources of a project was in order for it to be deleted? > > > 1) There is ospurge (https://opendev.org/x/ospurge) which works, but seems kind of abandoned looking at the last commits and pending issues (https://bugs.launchpad.net/ospurge). > Will there still be any further development of this tool? > > > 2) Then there is the project purge command (https://docs.openstack.org/python-openstackclient/victoria/cli/command-objects/project-purge.html) introduced with the > Pike release (https://docs.openstack.org/releasenotes/python-openstackclient/pike.html#relnotes-3-12-0-stable-pike // https://bugs.launchpad.net/python-openstackclient/+bug/1584596 // https://review.opendev.org/c/openstack/python-openstackclient/+/367673/). > > > While it seems only natural to have the cleanup / purge functionality built into the openstack client, I was wondering .. . > > a) if all potentially existing resources types are covered and this can fully replace ospurge to allow any project to be cleaned up and deleted? What about i.e. Networks, Containers or objects? > > b) Also ospurge seems to still have some extended options to target or exclude certain resources by using the following options: > >> [--resource {Backups,Snapshots,Volumes,Zones,Images,Stacks,FloatingIPs,RouterInterfaces,Routers,Ports,Networks,SecurityGroups,Servers,LoadBalancers,Receivers,Policies,Clusters,Profiles,Objects,Containers}] >> [--exclude-resource {Backups,Snapshots,Volumes,Zones,Images,Stacks,FloatingIPs,RouterInterfaces,Routers,Ports,Networks,SecurityGroups,Servers,LoadBalancers,Receivers,Policies,Clusters,Profiles,Objects,Containers}] > > those options are not (yet) available for the "project purge" command of the openstack client. > > > > With regards, > > > Christian > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zigo at debian.org Mon Jan 18 17:30:33 2021 From: zigo at debian.org (Thomas Goirand) Date: Mon, 18 Jan 2021 18:30:33 +0100 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion In-Reply-To: References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> Message-ID: On 1/18/21 6:10 PM, Artem Goncharov wrote: > Hi, > > Please > check https://review.opendev.org/c/openstack/python-openstackclient/+/734485 >  out. > This is a followup of the ospurge, which bases on the project cleanup > functionality built into OpenStackSDK. Idea is to replace “openstack > project purge” functionality to completely rely on SDK only. Please also > leave comments whether this works for you or not. > > Regards, > Artem Hi, While all of this looks like a good idea, it's not implementing anything at all, just a few command line options. So it's currently of no use for operators. I'm still saluting the initiative, but in 6 months, it's kind of slow moving, IMO. Cheers, Thomas Goirand (zigo) From artem.goncharov at gmail.com Mon Jan 18 17:56:42 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Mon, 18 Jan 2021 18:56:42 +0100 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion In-Reply-To: References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> Message-ID: <0CC2DFF7-5721-4106-A06B-6FC2970AC07B@gmail.com> What do you mean it doesn’t implement anything at all? It does clean up compute, network, block_storage, orchestrate resources. Moreover it gives you possibility to clean “old” resources (created before or last updated before). > On 18. Jan 2021, at 18:30, Thomas Goirand wrote: > > On 1/18/21 6:10 PM, Artem Goncharov wrote: >> Hi, >> >> Please >> check https://review.opendev.org/c/openstack/python-openstackclient/+/734485 >> out. >> This is a followup of the ospurge, which bases on the project cleanup >> functionality built into OpenStackSDK. Idea is to replace “openstack >> project purge” functionality to completely rely on SDK only. Please also >> leave comments whether this works for you or not. >> >> Regards, >> Artem > Hi, > > While all of this looks like a good idea, it's not implementing anything > at all, just a few command line options. So it's currently of no use for > operators. > > I'm still saluting the initiative, but in 6 months, it's kind of slow > moving, IMO. > > Cheers, > > Thomas Goirand (zigo) > From gmann at ghanshyammann.com Mon Jan 18 18:23:11 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 18 Jan 2021 12:23:11 -0600 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> Message-ID: <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> ---- On Wed, 06 Jan 2021 15:04:34 -0600 Thomas Goirand wrote ---- > Hi, > > On 1/6/21 6:59 PM, Ghanshyam Mann wrote: > > Hello Everyone, > > > > You might have seen the discussion around dropping the lower constraints > > testing as it becomes more challenging than the current value of doing it. > > As a downstream distribution package maintainer, I see this as a major > regression of the code quality that upstream is shipping. Without l-c > tests, there's no assurance of the reality of a lower-bound dependency. 
> > So then we're back to 5 years ago, when OpenStack just artificially was > setting very high lower bound because we just didn't know... Hi Zigo, We discussed the usage of l-c file among different packagers in 14th Jan TC meeting[1], can you confirm if Debian directly depends on l-c file and use them OR it is good for code quality if project maintains it? Below packagers does not use l-c file instead use u-c - RDO - openstack-charms - ubuntu [1] http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-01-14-15.00.log.html#l-105 -gmann > > Please don't do it. > > Cheers, > > Thomas Goirand (zigo) > > From miguel at mlavalle.com Mon Jan 18 18:33:44 2021 From: miguel at mlavalle.com (Miguel Lavalle) Date: Mon, 18 Jan 2021 12:33:44 -0600 Subject: [neutron] bug deputy report January 11th to 17th Message-ID: Hi, Here is this week's bugs deputy report: Critical ====== https://bugs.launchpad.net/neutron/+bug/1911128 Neutron with ovn driver failed to start on Fedora. Needs owner Medium ====== https://bugs.launchpad.net/neutron/+bug/1911153 [FT] DB migration "test_walk_versions" failing frequently. Owner ralonsoh https://bugs.launchpad.net/neutron/+bug/1911214 Scenario test test_multiple_ports_secgroup_inheritance fails in ovn scenario job. Needs owner https://bugs.launchpad.net/neutron/+bug/1910946 ovs is dead but ovs agent is up. Owner Norman Shen. Patch https://review.opendev.org/c/openstack/neutron/+/770058 https://bugs.launchpad.net/neutron/+bug/1911462 Optimize "PortForwarding" OVO loading. Owner Rodolofo Alonso. Patch: https://review.opendev.org/c/openstack/neutron/+/770654 https://bugs.launchpad.net/neutron/+bug/1911925 [FT] "test_read_queue_change_state" failing to initialize the keepalived-state-change monitor. Owner Rodolfo Alonso. Patch https://review.opendev.org/c/openstack/neutron/+/770950 https://bugs.launchpad.net/neutron/+bug/1911927 [FT] "test_add_and_remove_multiple_ips" failing while decoding the JSON file returned by "ip_monitor". Owner Rodolfo Alonso. Patch https://review.opendev.org/c/openstack/neutron/+/770983 https://bugs.launchpad.net/neutron/+bug/1911132 OVN mech driver - can't find Logical_Router errors. Needs owner RFE === https://bugs.launchpad.net/neutron/+bug/1911126 [RFE][L3] add ability to control router SNAT more granularly. Under discussion https://bugs.launchpad.net/neutron/+bug/1911864 [DHCP] AgentBinding for network will be created no matter the state -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon Jan 18 18:45:38 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 18 Jan 2021 19:45:38 +0100 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> Message-ID: On Mon, Jan 18, 2021 at 7:24 PM Ghanshyam Mann wrote: > > ---- On Wed, 06 Jan 2021 15:04:34 -0600 Thomas Goirand wrote ---- > > Hi, > > > > On 1/6/21 6:59 PM, Ghanshyam Mann wrote: > > > Hello Everyone, > > > > > > You might have seen the discussion around dropping the lower constraints > > > testing as it becomes more challenging than the current value of doing it. > > > > As a downstream distribution package maintainer, I see this as a major > > regression of the code quality that upstream is shipping. 
Without l-c > > tests, there's no assurance of the reality of a lower-bound dependency. > > > > So then we're back to 5 years ago, when OpenStack just artificially was > > setting very high lower bound because we just didn't know... > > Hi Zigo, > > We discussed the usage of l-c file among different packagers in 14th Jan TC meeting[1], > > can you confirm if Debian directly depends on l-c file and use them OR it is good for > code quality if project maintains it? > > Below packagers does not use l-c file instead use u-c > - RDO > - openstack-charms > - ubuntu > FWIW, if we are including openstack-charms here, then I can also confirm the same for Kolla - we just use u-c. -yoctozepto From zigo at debian.org Mon Jan 18 18:52:06 2021 From: zigo at debian.org (Thomas Goirand) Date: Mon, 18 Jan 2021 19:52:06 +0100 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion In-Reply-To: <0CC2DFF7-5721-4106-A06B-6FC2970AC07B@gmail.com> References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> <0CC2DFF7-5721-4106-A06B-6FC2970AC07B@gmail.com> Message-ID: <7237beb7-a68a-0398-f779-aef76fbc0e82@debian.org> On 1/18/21 6:56 PM, Artem Goncharov wrote: > What do you mean it doesn’t implement anything at all? It does clean up compute, network, block_storage, orchestrate resources. Moreover it gives you possibility to clean “old” resources (created before or last updated before). Oh really? With that few lines of code? I'll re-read the patch then, sorry for my bad assumptions. Can you point at the part that's actually deleting the resources? Thomas Goirand (zigo) From artem.goncharov at gmail.com Mon Jan 18 19:14:37 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Mon, 18 Jan 2021 20:14:37 +0100 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion In-Reply-To: <7237beb7-a68a-0398-f779-aef76fbc0e82@debian.org> References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> <0CC2DFF7-5721-4106-A06B-6FC2970AC07B@gmail.com> <7237beb7-a68a-0398-f779-aef76fbc0e82@debian.org> Message-ID: <10C08D43-B4E6-4423-B561-183A4336C488@gmail.com> Ha, thats exactly the case, the whole logic sits in sdk and is spread across the supported services: - https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/compute/v2/_proxy.py#L1798 - for compute. KeyPairs not dropped, since they belong to user, and not to the “project”; - https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/block_storage/v3/_proxy.py#L547 - block storage; - https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/orchestration/v1/_proxy.py#L490 - https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/network/v2/_proxy.py#L4130 - the most complex one in order to give possibility to clean “old” resource without destroying everything else Adding image is few lines of code (never had enough time to add it), identity is a bit tricky, since also here mostly resources does not belong to Project. DNS would be also easy to do. OSC here is only providing I/F, while the logic sits in SDK and can be very easy extended for other services. P.S. I use it this on an hourly basis since more than a year already (not a complete cleanup, but with update_after filter in project where the cloud is monitored). Regards, Artem > On 18. Jan 2021, at 19:52, Thomas Goirand wrote: > > On 1/18/21 6:56 PM, Artem Goncharov wrote: >> What do you mean it doesn’t implement anything at all? 
It does clean up compute, network, block_storage, orchestrate resources. Moreover it gives you possibility to clean “old” resources (created before or last updated before). > > Oh really? With that few lines of code? I'll re-read the patch then, > sorry for my bad assumptions. > > Can you point at the part that's actually deleting the resources? > > Thomas Goirand (zigo) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Mon Jan 18 19:59:03 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 18 Jan 2021 14:59:03 -0500 Subject: [tc] weekly update Message-ID: Hi everyone, Here’s an update on what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. # Patches ## Open Reviews - Setting Ke Chen as Watcher's PTL https://review.opendev.org/c/openstack/governance/+/770913 - Drop openSUSE from commonly tested distro list https://review.opendev.org/c/openstack/governance/+/770855 - [manila] add assert:supports-api-interoperability https://review.opendev.org/c/openstack/governance/+/770859 - Add Resolution of TC stance on the OpenStackClient https://review.opendev.org/c/openstack/governance/+/759904 - Define Xena release testing runtime https://review.opendev.org/c/openstack/governance/+/770860 - Cool-down cycle goal https://review.opendev.org/c/openstack/governance/+/770616 - Remove Karbor project team https://review.opendev.org/c/openstack/governance/+/767056 - WIP NO MERGE Move os-*-config to Heat project governance https://review.opendev.org/c/openstack/governance/+/770285 ## Project Updates - Setting Xinran Wang as Cyborg's PTL https://review.opendev.org/c/openstack/governance/+/770075 - Add glance-tempest-plugin to Glance https://review.opendev.org/c/openstack/governance/+/767666 - molteniron does not make releases https://review.opendev.org/c/openstack/governance/+/769805 ## General Changes - Add link to Xena announcement https://review.opendev.org/c/openstack/governance/+/769620 - Add doc/requirements https://review.opendev.org/c/openstack/governance/+/769696 # Other Reminders - Our next [TC] Weekly meeting is scheduled for January 21st at 1500 UTC. If you would like to add topics for discussion, please go to https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting and fill out your suggestions by Wednesday, January 20th, at 2100 UTC. Thanks for reading! Mohammed & Kendall -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From m64m64m64ali at outlook.com Mon Jan 18 13:32:01 2021 From: m64m64m64ali at outlook.com (ali ali) Date: Mon, 18 Jan 2021 13:32:01 +0000 Subject: Openstack integration with NSX-T Message-ID: Hi. I use Openstack train version and I want to integrate openstack with nsx-t. I only found the following documentation. https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.5/nsxt-openstack-plugin-installation/GUID-FA2371AE-6B1D-4C81-9B69-C877ED138B17.html and at the start of integration, I install, following packages apt install python-vmware-nsxlib openstack-vmware-nsx python-neutron-fwaas outputs: E: Unable to locate package openstack-vmware-nsx The following packages have unmet dependencies: python-neutron-fwaas : Depends: python-neutron (>= 2:12.0.0~) but it is not going to be installed E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution). 
when I checked there are no python-neutron packages but there was a python3-neutron package. I don't know what to do with this problem. I think if I delete python3-neutron and install python-neutron, everything will crash and mess up. is it basicly possible to integrate openstack train or ussuri or victoryia with nsx-t? -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Jan 19 00:34:58 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 19 Jan 2021 00:34:58 +0000 Subject: Openstack integration with NSX-T In-Reply-To: References: Message-ID: <20210119003458.xmq22gbiiaept6hr@yuggoth.org> On 2021-01-18 13:32:01 +0000 (+0000), ali ali wrote: > I use Openstack train version and I want to integrate openstack > with nsx-t. I only found the following documentation. > https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.5/nsxt-openstack-plugin-installation/GUID-FA2371AE-6B1D-4C81-9B69-C877ED138B17.html [...] > is it basicly possible to integrate openstack train or ussuri or > victoryia with nsx-t? According to that documentation provided by vmWare: "The NSX-T Data Center Plugin for OpenStack has the following specific requirements[...] Stein" It looks like they'll need to provide updated packages which support newer OpenStack releases. In particular, they seem to want OpenStack packages run with the Python v2 interpreter, while modern releases of OpenStack only support Python v3 (hence the different package names you found in your distro). If you have a support contract with vmWare, you may want to reach out to your service representative and ask them what their plans are for NSX-T and OpenStack. I see the same site has documentation for a "VMware NSX-T Data Center Plugin 3.1" and mentions OpenStack Train, so maybe that's closer to what you need? Still doesn't take care of Ussuri for you though. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From iwienand at redhat.com Tue Jan 19 05:58:30 2021 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 19 Jan 2021 16:58:30 +1100 Subject: [all] CI test result table in the new gerrit review UI In-Reply-To: References: Message-ID: <20210119055830.GB3137911@fedora19.localdomain> On Thu, Nov 26, 2020 at 01:39:13PM +0100, Balázs Gibizer wrote: > I understand that adapting the old CI test result table to the new gerrit > review UI is not a simple task. We got there in the end :) Change [1] enabled the zuul-summary-results plugin, which is available from [2]. I just restarted opendev gerrit with it, and it seems to be working. Look for the new "Zuul Summary" tab next to "Files". I would consider it a 0.1 release and welcome any contributions to make it better. If you want to make changes, you should be able to submit a change to system-config with a Depends-On: and trigger the system-config-run-review test; in the results returned there are screenshot artifacts that will show the results (expanding this testing also welcome!). We can also a put a node on hold for you to work on the plugin if you have interest. It's also fairly easy to run the container locally, so there's plenty of options. 
Thanks, -i [1] https://review.opendev.org/c/opendev/system-config/+/767079 [2] https://gerrit-review.googlesource.com/admin/repos/plugins/zuul-results-summary From anost1986 at gmail.com Tue Jan 19 06:48:53 2021 From: anost1986 at gmail.com (Andrii Ostapenko) Date: Tue, 19 Jan 2021 00:48:53 -0600 Subject: [all] CI test result table in the new gerrit review UI In-Reply-To: <20210119055830.GB3137911@fedora19.localdomain> References: <20210119055830.GB3137911@fedora19.localdomain> Message-ID: One click more than before, but looks great! Thank you! On Tue, Jan 19, 2021 at 12:00 AM Ian Wienand wrote: > > On Thu, Nov 26, 2020 at 01:39:13PM +0100, Balázs Gibizer wrote: > > I understand that adapting the old CI test result table to the new gerrit > > review UI is not a simple task. > > We got there in the end :) Change [1] enabled the zuul-summary-results > plugin, which is available from [2]. I just restarted opendev gerrit > with it, and it seems to be working. Look for the new "Zuul Summary" > tab next to "Files". I would consider it a 0.1 release and welcome > any contributions to make it better. > > If you want to make changes, you should be able to submit a change to > system-config with a Depends-On: and trigger the > system-config-run-review test; in the results returned there are > screenshot artifacts that will show the results (expanding this > testing also welcome!). We can also a put a node on hold for you to > work on the plugin if you have interest. It's also fairly easy to run > the container locally, so there's plenty of options. > > Thanks, > > -i > > [1] https://review.opendev.org/c/opendev/system-config/+/767079 > [2] https://gerrit-review.googlesource.com/admin/repos/plugins/zuul-results-summary > > From zigo at debian.org Tue Jan 19 07:03:25 2021 From: zigo at debian.org (Thomas Goirand) Date: Tue, 19 Jan 2021 08:03:25 +0100 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> Message-ID: On 1/18/21 7:23 PM, Ghanshyam Mann wrote: > ---- On Wed, 06 Jan 2021 15:04:34 -0600 Thomas Goirand wrote ---- > > Hi, > > > > On 1/6/21 6:59 PM, Ghanshyam Mann wrote: > > > Hello Everyone, > > > > > > You might have seen the discussion around dropping the lower constraints > > > testing as it becomes more challenging than the current value of doing it. > > > > As a downstream distribution package maintainer, I see this as a major > > regression of the code quality that upstream is shipping. Without l-c > > tests, there's no assurance of the reality of a lower-bound dependency. > > > > So then we're back to 5 years ago, when OpenStack just artificially was > > setting very high lower bound because we just didn't know... > > Hi Zigo, > > We discussed the usage of l-c file among different packagers in 14th Jan TC meeting[1], > > can you confirm if Debian directly depends on l-c file and use them OR it is good for > code quality if project maintains it? 
> > Below packagers does not use l-c file instead use u-c > - RDO > - openstack-charms > - ubuntu > > > [1] http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-01-14-15.00.log.html#l-105 > > -gmann Hi, Of course, I'm using upper-constraints too, to try to package them as much as possible, however, the dependencies are expressed according to lower-constraints. Let's say we have dependency a that has ===2.1 in u-c, but 1.9 in l-c. I'll write: Depends: a (>= 1.9) but will try to get 2.1 in Debian. At the end, if 1.9 is in Debian stable backports, I may attempt to not do the backporting job for 2.1, as the project is telling me 1.9 works ok and that it's useless work. Does this make sense now? Cheers, Thomas Goirand (zigo) From marios at redhat.com Tue Jan 19 07:04:07 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 19 Jan 2021 09:04:07 +0200 Subject: [all] CI test result table in the new gerrit review UI In-Reply-To: References: <20210119055830.GB3137911@fedora19.localdomain> Message-ID: On Tue, Jan 19, 2021 at 8:52 AM Andrii Ostapenko wrote: > One click more than before, but looks great! Thank you! > > +1 it really does look great very clear and easy to see the failure (and bonus it's even clearer than it was in the 'old' zuul) thanks very much Balazs for bringing this up and big thanks to Ian and anyone else who worked on that On Tue, Jan 19, 2021 at 12:00 AM Ian Wienand wrote: > > > > On Thu, Nov 26, 2020 at 01:39:13PM +0100, Balázs Gibizer wrote: > > > I understand that adapting the old CI test result table to the new > gerrit > > > review UI is not a simple task. > > > > We got there in the end :) Change [1] enabled the zuul-summary-results > > plugin, which is available from [2]. I just restarted opendev gerrit > > with it, and it seems to be working. Look for the new "Zuul Summary" > > tab next to "Files". I would consider it a 0.1 release and welcome > > any contributions to make it better. > > > > If you want to make changes, you should be able to submit a change to > > system-config with a Depends-On: and trigger the > > system-config-run-review test; in the results returned there are > > screenshot artifacts that will show the results (expanding this > > testing also welcome!). We can also a put a node on hold for you to > > work on the plugin if you have interest. It's also fairly easy to run > > the container locally, so there's plenty of options. > > > > Thanks, > > > > -i > > > > [1] https://review.opendev.org/c/opendev/system-config/+/767079 > > [2] > https://gerrit-review.googlesource.com/admin/repos/plugins/zuul-results-summary > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Jan 19 07:52:16 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 19 Jan 2021 08:52:16 +0100 Subject: Openstack integration with NSX-T In-Reply-To: References: Message-ID: <20210119075216.typhykuhpkog6ylk@p1.localdomain> Hi, On Mon, Jan 18, 2021 at 01:32:01PM +0000, ali ali wrote: > Hi. > I use Openstack train version and I want to integrate openstack with nsx-t. I only found the following documentation. 
> https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.5/nsxt-openstack-plugin-installation/GUID-FA2371AE-6B1D-4C81-9B69-C877ED138B17.html > > > and at the start of integration, I install, following packages > apt install python-vmware-nsxlib openstack-vmware-nsx python-neutron-fwaas > > outputs: > E: Unable to locate package openstack-vmware-nsx > The following packages have unmet dependencies: > python-neutron-fwaas : Depends: python-neutron (>= 2:12.0.0~) but it is not going to be installed > E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution). Just FYI, please remember that neutron-fwaas project was deprecated as neutron stadium in Ussuri cycle and Ussuri is last stable release of it. There is no newer releases and this project is not maintained anymore. > > when I checked there are no python-neutron packages but there was a python3-neutron package. > > I don't know what to do with this problem. > I think if I delete python3-neutron and install python-neutron, everything will crash and mess up. > > is it basicly possible to integrate openstack train or ussuri or victoryia with nsx-t? > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From skaplons at redhat.com Tue Jan 19 07:56:02 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 19 Jan 2021 08:56:02 +0100 Subject: [all] CI test result table in the new gerrit review UI In-Reply-To: References: <20210119055830.GB3137911@fedora19.localdomain> Message-ID: <20210119075602.eimellc6icvkhdbf@p1.localdomain> Hi, On Tue, Jan 19, 2021 at 09:04:07AM +0200, Marios Andreou wrote: > On Tue, Jan 19, 2021 at 8:52 AM Andrii Ostapenko > wrote: > > > One click more than before, but looks great! Thank you! > > > > > +1 it really does look great very clear and easy to see the failure (and > bonus it's even clearer than it was in the 'old' zuul) > > thanks very much Balazs for bringing this up and big thanks to Ian and > anyone else who worked on that +1, thx for that improvement :) > > > > On Tue, Jan 19, 2021 at 12:00 AM Ian Wienand wrote: > > > > > > On Thu, Nov 26, 2020 at 01:39:13PM +0100, Balázs Gibizer wrote: > > > > I understand that adapting the old CI test result table to the new > > gerrit > > > > review UI is not a simple task. > > > > > > We got there in the end :) Change [1] enabled the zuul-summary-results > > > plugin, which is available from [2]. I just restarted opendev gerrit > > > with it, and it seems to be working. Look for the new "Zuul Summary" > > > tab next to "Files". I would consider it a 0.1 release and welcome > > > any contributions to make it better. > > > > > > If you want to make changes, you should be able to submit a change to > > > system-config with a Depends-On: and trigger the > > > system-config-run-review test; in the results returned there are > > > screenshot artifacts that will show the results (expanding this > > > testing also welcome!). We can also a put a node on hold for you to > > > work on the plugin if you have interest. It's also fairly easy to run > > > the container locally, so there's plenty of options. 
> > > > > > Thanks, > > > > > > -i > > > > > > [1] https://review.opendev.org/c/opendev/system-config/+/767079 > > > [2] > > https://gerrit-review.googlesource.com/admin/repos/plugins/zuul-results-summary > > > > > > > > > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From hberaud at redhat.com Tue Jan 19 08:39:02 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 19 Jan 2021 09:39:02 +0100 Subject: [Release-job-failures] Release of openstack/monasca-tempest-plugin for ref refs/tags/2.2.0 failed In-Reply-To: References: Message-ID: Hello monasca team, FYI a release failure happened on monasca-tempest-plugin within the job publish-monasca-tempest-plugin-docker-image. This build was triggered by https://review.opendev.org/c/openstack/releases/+/768551 (the Wallaby part). A new incompatibility in requirements was found by pip's new resolver: > pykafka 2.8.0 has requirement kazoo==2.5.0, but you'll have kazoo 2.8.0 which is incompatible. (c.f https://zuul.opendev.org/t/openstack/build/aba1acc623e74cf08e82ffc4d73134aa/log/job-output.txt#1157 ) I didn't find a trace of these requirements on your repo so I think they are pulled/resolved from/for underlying libraries. After that the job fails to push the "latest" docker tag because the tag is not found, I think it's a side effect of the previous error. (c.f https://zuul.opendev.org/t/openstack/build/aba1acc623e74cf08e82ffc4d73134aa/log/job-output.txt#1443 ) Let us know if we can help you, The Release Team. Le lun. 18 janv. 2021 à 23:31, a écrit : > Build failed. > > - openstack-upload-github-mirror > https://zuul.opendev.org/t/openstack/build/8dd9a4fce3bc4ae2a32134b7d7fec5b5 > : SUCCESS in 52s > - release-openstack-python > https://zuul.opendev.org/t/openstack/build/ff7c8136563444df9c565f07f618c559 > : SUCCESS in 3m 44s > - announce-release > https://zuul.opendev.org/t/openstack/build/78731c6e0948490d82e1e2d14eb67857 > : SUCCESS in 3m 42s > - propose-update-constraints > https://zuul.opendev.org/t/openstack/build/3eab7a3209a84a33b8d7b69e41e185cb > : SUCCESS in 2m 49s > - publish-monasca-tempest-plugin-docker-image > https://zuul.opendev.org/t/openstack/build/aba1acc623e74cf08e82ffc4d73134aa > : POST_FAILURE in 8m 19s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part 
-------------- An HTML attachment was scrubbed... URL: From doug at stackhpc.com Tue Jan 19 09:36:14 2021 From: doug at stackhpc.com (Doug) Date: Tue, 19 Jan 2021 09:36:14 +0000 Subject: [monasca][release] Abandoning monasca-ceilometer and monasca-log-api? In-Reply-To: References: Message-ID: <72283d02-7f75-1589-e5e1-4f5ac0d91334@stackhpc.com> On 15/01/2021 09:15, Herve Beraud wrote: > Dear Monasca team, Hi Herve > > The release team noticed an inconsistency between the Monasca team's > deliverables described in the governance’s reference and deliverables > defined in the openstack/releases repo (c.f our related meeting topic > [1]). > > Indeed, monasca-ceilometer and monasca-log-api were released in train > but not released in ussuri nor victoria. Do you think that they should > be deprecated (abandoned) in governance? Both of these services have been deprecated (see details below). Please proceed. https://review.opendev.org/c/openstack/monasca-ceilometer/+/720319 https://review.opendev.org/c/openstack/monasca-log-api/+/704519 > > Notice that Wallaby's milestone 2 is next week so maybe it could be a > good time to update this. > > Let us know your thoughts, we are waiting for your replies. > > Thanks for reading, Thanks! > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Tue Jan 19 10:17:39 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 19 Jan 2021 11:17:39 +0100 Subject: [monasca][release] Abandoning monasca-ceilometer and monasca-log-api? In-Reply-To: <72283d02-7f75-1589-e5e1-4f5ac0d91334@stackhpc.com> References: <72283d02-7f75-1589-e5e1-4f5ac0d91334@stackhpc.com> Message-ID: Thanks Doug for your response. I think that more steps are needed to deprecate your repos properly: https://docs.openstack.org/project-team-guide/repository.html#deprecating-a-repository The release part is the last step of the deprecation. Also I think that it could be better to let a monasca team member submit these patches to inform the governance team and project-config team of your choices concerning these repos. With all these patches we will have a whole consistent state. Let me know what you think. Thanks, Le mar. 19 janv. 
2021 à 10:36, Doug a écrit : > > On 15/01/2021 09:15, Herve Beraud wrote: > > Dear Monasca team, > > Hi Herve > > > The release team noticed an inconsistency between the Monasca team's > deliverables described in the governance’s reference and deliverables > defined in the openstack/releases repo (c.f our related meeting topic [1]). > > Indeed, monasca-ceilometer and monasca-log-api were released in train but > not released in ussuri nor victoria. Do you think that they should be > deprecated (abandoned) in governance? > > Both of these services have been deprecated (see details below). Please > proceed. > > https://review.opendev.org/c/openstack/monasca-ceilometer/+/720319 > > https://review.opendev.org/c/openstack/monasca-log-api/+/704519 > > > Notice that Wallaby's milestone 2 is next week so maybe it could be a good > time to update this. > > Let us know your thoughts, we are waiting for your replies. > > Thanks for reading, > > Thanks! > > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dtantsur at redhat.com Tue Jan 19 11:37:33 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 19 Jan 2021 12:37:33 +0100 Subject: [all] CI test result table in the new gerrit review UI In-Reply-To: <20210119055830.GB3137911@fedora19.localdomain> References: <20210119055830.GB3137911@fedora19.localdomain> Message-ID: On Tue, Jan 19, 2021 at 7:01 AM Ian Wienand wrote: > On Thu, Nov 26, 2020 at 01:39:13PM +0100, Balázs Gibizer wrote: > > I understand that adapting the old CI test result table to the new gerrit > > review UI is not a simple task. > > We got there in the end :) Change [1] enabled the zuul-summary-results > plugin, which is available from [2]. I just restarted opendev gerrit > with it, and it seems to be working. Look for the new "Zuul Summary" > tab next to "Files". I would consider it a 0.1 release and welcome > any contributions to make it better. > Thank you, this is amazing! I wonder if we could also run the plugin that shows the live progress (it was mentioned somewhere in the thread). Dmitry > > If you want to make changes, you should be able to submit a change to > system-config with a Depends-On: and trigger the > system-config-run-review test; in the results returned there are > screenshot artifacts that will show the results (expanding this > testing also welcome!). We can also a put a node on hold for you to > work on the plugin if you have interest. It's also fairly easy to run > the container locally, so there's plenty of options. > > Thanks, > > -i > > [1] https://review.opendev.org/c/opendev/system-config/+/767079 > [2] > https://gerrit-review.googlesource.com/admin/repos/plugins/zuul-results-summary > > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Tue Jan 19 11:44:09 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 19 Jan 2021 12:44:09 +0100 Subject: [all] CI test result table in the new gerrit review UI In-Reply-To: <20210119055830.GB3137911@fedora19.localdomain> References: <20210119055830.GB3137911@fedora19.localdomain> Message-ID: On Tue, Jan 19, 2021 at 16:58, Ian Wienand wrote: > On Thu, Nov 26, 2020 at 01:39:13PM +0100, Balázs Gibizer wrote: >> I understand that adapting the old CI test result table to the new >> gerrit >> review UI is not a simple task. > > We got there in the end :) Change [1] enabled the zuul-summary-results > plugin, which is available from [2]. I just restarted opendev gerrit > with it, and it seems to be working. Look for the new "Zuul Summary" > tab next to "Files". I would consider it a 0.1 release and welcome > any contributions to make it better. > > If you want to make changes, you should be able to submit a change to > system-config with a Depends-On: and trigger the > system-config-run-review test; in the results returned there are > screenshot artifacts that will show the results (expanding this > testing also welcome!). We can also a put a node on hold for you to > work on the plugin if you have interest. It's also fairly easy to run > the container locally, so there's plenty of options. Awesome. Thank you Ian! 
Cheers, gibi > > Thanks, > > -i > > [1] https://review.opendev.org/c/opendev/system-config/+/767079 > [2] > https://gerrit-review.googlesource.com/admin/repos/plugins/zuul-results-summary > From szabolcs at szabolcstoth.eu Tue Jan 19 11:49:19 2021 From: szabolcs at szabolcstoth.eu (=?iso-8859-1?Q?Szabolcs_T=F3th?=) Date: Tue, 19 Jan 2021 11:49:19 +0000 Subject: [tempest] extending python-tempestconf In-Reply-To: References: Message-ID: Hej! The official tool named python-tempestconf has a parameter named --create, which allows to create the following resources: * CirrOS image (uploads the image based on the location defined with --image parameter), * Flavors (based on default values - DEFAULT_FLAVOR_RAM, DEFAULT_FLAVOR_RAM_ALT, DEFAULT_FLAVOR_DISK - which can be changed with --flavor-min-mem and --flavor-min-disk). In order to verify our specific installation with Tempest, we need to create the basic resources as * Flavors (with extra-spec parameters like hw:mem_page_size). * Networks (one for fixed_network_name and one for floating_network_name). * python-tempestconf is able to find an already existing network created with router:external flag and set it as value for floating_network_name. * Router and port (for routing traffic between internal and external networks). I would like to ask the following: * Is there any particular reason why the basic resource create functionality is limited to the image and flavor? * Are there any plans to extend the basic resource create functionality? * Ability to set extra parameters for the flavors. * Creating networks, routers and ports (based on a user inputs, which can be separate parameters or a specific file). Would the community accept contributions extending python-tempestconf into this direction? -- BR, Szabolcs -------------- next part -------------- An HTML attachment was scrubbed... URL: From ltoscano at redhat.com Tue Jan 19 12:14:32 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Tue, 19 Jan 2021 13:14:32 +0100 Subject: [tempest] extending python-tempestconf In-Reply-To: References: Message-ID: <10167377.aFP6jjVeTY@whitebase.usersys.redhat.com> On Tuesday, 19 January 2021 12:49:19 CET Szabolcs Tóth wrote: > Hej! > > The official tool named python-tempestconf has a parameter named --create, > which allows to create the following resources: > > * CirrOS image (uploads the image based on the location defined with > --image parameter), * Flavors (based on default values - > DEFAULT_FLAVOR_RAM, DEFAULT_FLAVOR_RAM_ALT, DEFAULT_FLAVOR_DISK - which can > be changed with --flavor-min-mem and --flavor-min-disk). > > In order to verify our specific installation with Tempest, we need to create > the basic resources as > > * Flavors (with extra-spec parameters like hw:mem_page_size). > * Networks (one for fixed_network_name and one for > floating_network_name). > > * python-tempestconf is able to find an already existing network > created with router:external flag and set it as value for > floating_network_name. > > * Router and port (for routing traffic between internal and external > networks). > > I would like to ask the following: > > * Is there any particular reason why the basic resource create > functionality is limited to the image and flavor? > * Are there any plans > to extend the basic resource create functionality? The aim of python-tempestconf (which is not part of the QA/tempest project, but of the refstack project) is described as "for automatic generation of tempest configuration based on user’s cloud." 
This means that any resource creation is limited to what is needed for running "the basics" of tempest. >From an historical point of view, it is not meant to be able to discover everything, but to be used as starting point for your tempest settings, which means that tests may work with the output of tempestconf, but tuning may be needed and it is expected. > > * Ability to set extra parameters for the flavors. > * Creating networks, routers and ports (based on a user inputs, which > can be separate parameters or a specific file). > > Would the community accept contributions extending python-tempestconf into > this direction? I'd leave space to other python-tempestconf people, but IMHO this will stretch the scope of the project. -- Luigi From ralonsoh at redhat.com Tue Jan 19 12:21:53 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 19 Jan 2021 13:21:53 +0100 Subject: [all] CI test result table in the new gerrit review UI In-Reply-To: References: <20210119055830.GB3137911@fedora19.localdomain> Message-ID: Thanks Ian! Very useful. On Tue, Jan 19, 2021 at 12:48 PM Balazs Gibizer wrote: > > > On Tue, Jan 19, 2021 at 16:58, Ian Wienand wrote: > > On Thu, Nov 26, 2020 at 01:39:13PM +0100, Balázs Gibizer wrote: > >> I understand that adapting the old CI test result table to the new > >> gerrit > >> review UI is not a simple task. > > > > We got there in the end :) Change [1] enabled the zuul-summary-results > > plugin, which is available from [2]. I just restarted opendev gerrit > > with it, and it seems to be working. Look for the new "Zuul Summary" > > tab next to "Files". I would consider it a 0.1 release and welcome > > any contributions to make it better. > > > > If you want to make changes, you should be able to submit a change to > > system-config with a Depends-On: and trigger the > > system-config-run-review test; in the results returned there are > > screenshot artifacts that will show the results (expanding this > > testing also welcome!). We can also a put a node on hold for you to > > work on the plugin if you have interest. It's also fairly easy to run > > the container locally, so there's plenty of options. > > Awesome. Thank you Ian! > > Cheers, > gibi > > > > > Thanks, > > > > -i > > > > [1] https://review.opendev.org/c/opendev/system-config/+/767079 > > [2] > > > https://gerrit-review.googlesource.com/admin/repos/plugins/zuul-results-summary > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From christian.rohmann at inovex.de Tue Jan 19 12:33:57 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Tue, 19 Jan 2021 13:33:57 +0100 Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion In-Reply-To: <10C08D43-B4E6-4423-B561-183A4336C488@gmail.com> References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> <0CC2DFF7-5721-4106-A06B-6FC2970AC07B@gmail.com> <7237beb7-a68a-0398-f779-aef76fbc0e82@debian.org> <10C08D43-B4E6-4423-B561-183A4336C488@gmail.com> Message-ID: <9f408ffe-4046-76e0-bbdf-57ee94191738@inovex.de> Hey Artem, thank you very much for your quick reply and pointer to the patchset you work on! On 18/01/2021 20:14, Artem Goncharov wrote: > Ha, thats exactly the case, the whole logic sits in sdk and is spread > across the supported services: > - > https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/compute/v2/_proxy.py#L1798 >  - > for compute. 
KeyPairs not dropped, since they belong to user, and not > to the “project”; > - > https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/block_storage/v3/_proxy.py#L547 >  - > block storage; > - > https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/orchestration/v1/_proxy.py#L490 > > - > https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/network/v2/_proxy.py#L4130 >  - > the most complex one in order to give possibility to clean “old” > resource without destroying everything else > > Adding image is few lines of code (never had enough time to add it), > identity is a bit tricky, since also here mostly resources does not > belong to Project. DNS would be also easy to do. OSC here is only > providing I/F, while the logic sits in SDK and can be very easy > extended for other services. >> On 18. Jan 2021, at 19:52, Thomas Goirand > > wrote: >> >> On 1/18/21 6:56 PM, Artem Goncharov wrote: >>> What do you mean it doesn’t implement anything at all? It does clean >>> up compute, network, block_storage, orchestrate resources. Moreover >>> it gives you possibility to clean “old” resources (created before or >>> last updated before). >> >> Oh really? With that few lines of code? I'll re-read the patch then, >> sorry for my bad assumptions. >> >> Can you point at the part that's actually deleting the resources? If I understood correctly, the cleanup relies on the SDK functionality / requirement for each resource type to provide a corresponding function( https://github.com/openstack/openstacksdk/blob/master/openstack/cloud/openstackcloud.py#L762) ? Reading through the (SDK) code this even covers depending resources, nice! I certainly will leave some feedback and comments in your change (https://review.opendev.org/c/openstack/python-openstackclient/+/734485). But what are your immediate plans moving forward on with this then, Artem? There is a little todo list in the description on your change .. is there anything you yourself know that is still missing before taking this to a full review and finally merging it? Only code that is shipped and then actively used will improve further and people will notice other required functionality or switches for later iterations. With the current state of having a somewhat working but unmaintained ospurge and a non feature complete "project purge"  (supports only Block Storage v1, v2; Compute v2; Image v1, v2) this will only cause people to start hacking away on the ospurge codebase or worse building their own tools and scripts to implement project cleanup for their environments over and over again. Regards, Christian -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue Jan 19 12:56:41 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 19 Jan 2021 12:56:41 +0000 Subject: [all] CI test result table in the new gerrit review UI In-Reply-To: References: <20210119055830.GB3137911@fedora19.localdomain> Message-ID: On Tue, 2021-01-19 at 12:37 +0100, Dmitry Tantsur wrote: > On Tue, Jan 19, 2021 at 7:01 AM Ian Wienand wrote: > > > On Thu, Nov 26, 2020 at 01:39:13PM +0100, Balázs Gibizer wrote: > > > I understand that adapting the old CI test result table to the new gerrit > > > review UI is not a simple task. > > > > We got there in the end :) Change [1] enabled the zuul-summary-results > > plugin, which is available from [2]. I just restarted opendev gerrit > > with it, and it seems to be working. Look for the new "Zuul Summary" > > tab next to "Files". 
I would consider it a 0.1 release and welcome
> > any contributions to make it better.
> >
> Thank you, this is amazing! I wonder if we could also run the plugin that
> shows the live progress (it was mentioned somewhere in the thread).
>
> Dmitry

I believe showing the live progress of the jobs is effectively a DDoS vector. Infra have asked in the past that we not use JavaScript to poll the live status of jobs in our browsers. Providing a link to the running Zuul jobs that does not actively poll might be nice, at least for the first-party CI, and there might be a way for third-party CIs to post a link to their builds or something for their results. My concern would still remain, though, with anything that does active polling DDoSing the Zuul API.

I know that we previously tried embedding the Zuul job status directly into Gerrit a few years ago and that had to be quickly reverted, as it does not take many developers leaving reviews open in a tab to make that unworkable. I know I for one often leave a review open overnight if I'm pinged to review something shortly before I finish for the day, so that it's open on my screen when I log in the next day. If that tab were actively polling CI jobs, that would be an issue if many people did it. If it were just a static link to the job build, on the other hand, it would not have the same downside.

So I guess it depends on whether you want live updates of the running jobs, or just a one-off server-side generated static link to the running job that you can click through to look at its console output. The latter might be nice, but I am not sure the former is a good idea for the reasons I mentioned.

> > If you want to make changes, you should be able to submit a change to
> > system-config with a Depends-On: and trigger the
> > system-config-run-review test; in the results returned there are
> > screenshot artifacts that will show the results (expanding this
> > testing also welcome!). We can also a put a node on hold for you to
> > work on the plugin if you have interest. It's also fairly easy to run
> > the container locally, so there's plenty of options.
> >
> > Thanks,
> >
> > -i
> >
> > [1] https://review.opendev.org/c/opendev/system-config/+/767079
> > [2] https://gerrit-review.googlesource.com/admin/repos/plugins/zuul-results-summary

From artem.goncharov at gmail.com Tue Jan 19 13:16:28 2021
From: artem.goncharov at gmail.com (Artem Goncharov)
Date: Tue, 19 Jan 2021 14:16:28 +0100
Subject: Ospurge or "project purge" - What's the right approach to cleanup projects prior to deletion
In-Reply-To: <9f408ffe-4046-76e0-bbdf-57ee94191738@inovex.de>
References: <76498a8c-c8a5-9488-0223-3f47ac4486df@inovex.de> <0CC2DFF7-5721-4106-A06B-6FC2970AC07B@gmail.com> <7237beb7-a68a-0398-f779-aef76fbc0e82@debian.org> <10C08D43-B4E6-4423-B561-183A4336C488@gmail.com> <9f408ffe-4046-76e0-bbdf-57ee94191738@inovex.de>
Message-ID: <5C651C9C-0D00-4CB8-9992-4AC23D92FE38@gmail.com>

Hi Christian.

Actually the patch is stuck due to lack of reviewers. The idea here was not to replace "openstack project purge", but to add a totally new implementation (hopefully later dropping project purge as such). From my POV, at the moment there is nothing else I consider mandatory to implement on the OSC side (sure, in the future I would like to add the possibility to limit the services to clean up, to add cleanup of key pairs, etc.). The SDK part is completely independent of that. Here we definitely need to add dropping of private images. Also on the DNS side we can do cleanup. Other services are tricky (while swift we can still implement relatively easily).
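To make that concrete, here is a rough sketch of driving the SDK-side cleanup directly (the project_cleanup call and its keyword arguments follow the openstackcloud.py code linked earlier in the thread, so treat the exact names as indicative rather than final):

    import queue

    import openstack

    conn = openstack.connect(cloud='mycloud')

    # The per-service cleanup implementations linked above report the
    # resources they (would) delete onto this queue.
    status = queue.Queue()

    # dry_run=True only reports what would be removed; filters can limit
    # the cleanup to "old" resources, e.g. anything not updated since a date.
    conn.project_cleanup(
        dry_run=True,
        status_queue=status,
        filters={'updated_at': '2020-06-01'})

    while not status.empty():
        print(status.get_nowait())

Running it again with dry_run=False would then actually delete the reported resources for the project the connection is scoped to.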
Other services are tricky (while swift we can still implement relatively easy). All in all - we can merge the PR in it’s current form (assuming we get some positive reviews). BG, Artem > On 19. Jan 2021, at 13:33, Christian Rohmann wrote: > > Hey Artem, > > thank you very much for your quick reply and pointer to the patchset you work on! > > > > On 18/01/2021 20:14, Artem Goncharov wrote: >> Ha, thats exactly the case, the whole logic sits in sdk and is spread across the supported services: >> - https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/compute/v2/_proxy.py#L1798 - for compute. KeyPairs not dropped, since they belong to user, and not to the “project”; >> - https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/block_storage/v3/_proxy.py#L547 - block storage; >> - https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/orchestration/v1/_proxy.py#L490 >> - https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/network/v2/_proxy.py#L4130 - the most complex one in order to give possibility to clean “old” resource without destroying everything else >> >> Adding image is few lines of code (never had enough time to add it), identity is a bit tricky, since also here mostly resources does not belong to Project. DNS would be also easy to do. OSC here is only providing I/F, while the logic sits in SDK and can be very easy extended for other services. > >>> On 18. Jan 2021, at 19:52, Thomas Goirand > wrote: >>> >>> On 1/18/21 6:56 PM, Artem Goncharov wrote: >>>> What do you mean it doesn’t implement anything at all? It does clean up compute, network, block_storage, orchestrate resources. Moreover it gives you possibility to clean “old” resources (created before or last updated before). >>> >>> Oh really? With that few lines of code? I'll re-read the patch then, >>> sorry for my bad assumptions. >>> >>> Can you point at the part that's actually deleting the resources? > If I understood correctly, the cleanup relies on the SDK functionality / requirement for each resource type to provide a corresponding function( https://github.com/openstack/openstacksdk/blob/master/openstack/cloud/openstackcloud.py#L762 ) ? > > Reading through the (SDK) code this even covers depending resources, nice! > > > > I certainly will leave some feedback and comments in your change (https://review.opendev.org/c/openstack/python-openstackclient/+/734485 ). > But what are your immediate plans moving forward on with this then, Artem? > > There is a little todo list in the description on your change .. is there anything you yourself know that is still missing before taking this to a full review and finally merging it? > > Only code that is shipped and then actively used will improve further and people will notice other required functionality or switches for later iterations. With the current state of having a somewhat working but unmaintained ospurge and a non feature complete "project purge" (supports only Block Storage v1, v2; Compute v2; Image v1, v2) this will only cause people to start hacking away on the ospurge codebase or worse building their own tools and scripts to implement project cleanup for their environments over and over again. > > > > > > Regards, > > > > Christian > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Tue Jan 19 15:17:22 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 19 Jan 2021 15:17:22 +0000 Subject: [all][infra] CI test result table in the new gerrit review UI In-Reply-To: References: <20210119055830.GB3137911@fedora19.localdomain> Message-ID: <20210119151722.gguyp53vn7oa6vtc@yuggoth.org> On 2021-01-19 12:56:41 +0000 (+0000), Sean Mooney wrote: > On Tue, 2021-01-19 at 12:37 +0100, Dmitry Tantsur wrote: [...] > > I wonder if we could also run the plugin that shows the live > > progress (it was mentioned somewhere in the thread). > > i belive showing the live progress of the jobs is effectivly a > ddos vector. infra have ask that we not use javascript to pool the > the live status of the jobs in our browser in the past. [...] > i know that we previously tried enbeding the zuul job status > directly into gerrit a few years ago and that had to be qickly > reverted as it does not take many developers leave review open in > a tab to quickly make that unworkable. i know i for one often > leave review open over night if im pinged to review something > shortly before i finish for the day so that its open on my screen > when i log in the next day. [...] I think it's probably worth trying again. The previous attempts hit a wall because of several challenges: 1. The available Zuul status API returned data on all enqueued refs (a *very* large JSON blob when the system is under heavy use) 2. Calls to the API were handled by a thread of the scheduler daemon, so often blocked or were blocked by other things going on, especially when Zuul was already under significant load 3. Browsers at the time continued running Javascript payloads in "background" tabs so the volume of queries was multiplied not just by the number of users but also by the average number of review tabs they had open Over time we added a ref-scoped status method so callers could request the status of a specific change. The Zuul REST API is now served by a separate zuul-web daemon, which we can move to a different server entirely if load demands that (and can even scale horizontally with more than one instance of it, I think?). Browser tech has also improved, and popular ones these days suspend Javascript stacks when tabs are not exposed. We may also be caching status API responses more aggressively than we used to do. All of these factors combined could make live status info in a Gerrit plug-in entirely tractable, we'll just need someone with time to try it and see... and be prepared for multiple Gerrit service restarts to enable/disable it, so probably not when utilization is as high as it has been the past couple of weeks. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Tue Jan 19 15:32:02 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 19 Jan 2021 15:32:02 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> Message-ID: <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> On 2021-01-19 08:03:25 +0100 (+0100), Thomas Goirand wrote: [...] 
> Of course, I'm using upper-constraints too, to try to package them > as much as possible, however, the dependencies are expressed > according to lower-constraints. [...] The same lower bounds would also typically be expressed in the requirements.txt file. Presumably you looked there before projects added lower-constraints.txt files? Noting that lower bounds testing isn't feasible and the jobs we were running weren't actually correctly testing minimum versions of everything, these have always been a "best effort" assertion anyway. I gather you run Tempest tests against your OpenStack packages on Debian already, so if a dependency there is too low you'll find out and can let the project maintainers know that their minimum version for that in requirements.txt isn't correct. Hopefully that doesn't come up very often, but for things we can't realistically test, getting notified by downstream distributors and users is the best feedback mechanism we can hope for. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From juliaashleykreger at gmail.com Tue Jan 19 15:48:19 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 19 Jan 2021 07:48:19 -0800 Subject: Ironic Midcycle - Three time windows next week Message-ID: Greetings everyone, Given we have a global community, I think the statement "Scheduling is hard" is definitely one of those things that we've all had to learn over the years. Luckily we have some time slots where many of the participants have indicated they will be able to attend. From that, it appears that the following times seem in terms of trying to cover time zones as well as including participants that have expressed interest. Monday January 25th @ 5PM - 6PM UTC / 9AM - 10AM US Pacific Tuesday January 26th @ 8 PM - 9PM UTC / 12:00 PM - 1 PM Pacific Wednesday January 27th @ 3PM - 4PM UTC / 7AM - 8AM US Pacific We will meet in MeetPad[0], and the etherpad is still open if more topics wish to be added, although we cannot guarantee we will get to them. I've also tried to spread them up across days in order to try and make efficient use of the time. Thanks everyone, and see you there. -Julia [0]: https://meetpad.opendev.org/ironic [1]: https://etherpad.opendev.org/p/ironic-wallaby-midcycle From elod.illes at est.tech Tue Jan 19 16:20:47 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Tue, 19 Jan 2021 17:20:47 +0100 Subject: [TripleO] moving stable/rocky for tripleo repos to unmaintained (+ then EOL) OK? In-Reply-To: References: Message-ID: <9d339305-e637-b27a-2b68-7ec3eb4b6555@est.tech> Hi, just to add some clarification: - 'Unmaintained' is rather a state, where a stable/branch is not maintained (meaning, no one pushes CI fixes, bugfix backports), you don't need to transition to 'unmaintained' - a team can decide whether they want to wait for the 6 months to move the branch to EOL, or (if no one steps up as maintainer) start the EOL process [1] after the warning is sent to the mailing list - the '-last' tag is really created to support tempest's special case, so in TripleO's case EOL is the right choice I hope this helps. And sorry for the late response. Thanks, Előd [1] https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life On 2021. 01. 18. 11:44, Herve Beraud wrote: > Hello > > Le lun. 18 janv. 
2021 à 10:22, Marios Andreou > a écrit : > > > > On Fri, Jan 15, 2021 at 7:01 PM Herve Beraud > wrote: > > Hello, > > Sorry for my late reply, and thanks for the heads up. > > Can't we move directly to EOL [1]? > I don't see reason to keep an unmaintained repo open, and if > the repos remain open and patches merged then it's not really > unmaintained repos. > > The goal of the extended maintenance was to offer more chances > to downstream maintainers to get/share patches and fixes, if > you decide to not maintain them anymore then I would suggest > you to move to "EOL" directly, it would be less misleading. > > Notice that if you move rocky to eol all the corresponding > branches will be dropped in your repos. > > Also notice that last week we proposed a new kind of tag > (-last) [2][3] for Tempest's needs, but because > tempest is branchless... > > Maybe we could extend this notion (-last) to allow the project > to reflect the last step... > It could reflect that it will be your last release, and that > the project is near from the end. > > But if you don't plan to merge patches, or if you don't have > patches to merge anymore, then I would really suggest to you > to move directly to EOL, else it means that you're not really > "unmaintained". > > > OK thanks very much Herve as always for your time and thoughts > here. I am not against the EOL I just thought it was more of a > requirement to declare it 'unmaintained' first. The advantage is > it is a softer path to completely closing it off for any future > submissions. Possibly the '-last' tag fits this need but if I have > understood correctly it might need some adjustment to the > definition of that tag ('we could extend this notion') and > honestly I don't know if it is necessary. More likely straight to > EOL is what we want here. > > > You're welcome, Do not hesitate to ping us. > Concerning the "-last", I said that we surely need to extend this kind > of tag because it is referenced for tempest's usages. > I think EOL fits our needs and is the shortest path. > > > I will bring this up again in tomorrow's tripleo irc meeting > http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019859.html > and point to this thread. Let's see what other opinions there are > around EOL vs -last for stable/rocky > > Same thing on my side, I added this topic to our next relmgt irc > meeting (thursday) to see opinions of the team. > > > thank you, marios > > > Thank you > > > > Hopefully it will help you to find the solution that fits your > needs. > > Let me know if you have more questions. > > [1] > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life > [2] https://review.opendev.org/c/openstack/releases/+/770265 > [3] > https://review.opendev.org/c/openstack/project-team-guide/+/769821 > > Le ven. 15 janv. 2021 à 16:52, Marios Andreou > > a écrit : > > > On Thu, Dec 10, 2020 at 7:01 PM Marios Andreou > > wrote: > > Hello TripleO > > > I would like to propose that we move all tripleo > stable/rocky repos [1] to "unmaintained", with a view > to tagging as end-of-life in due course. > > > This will allow us to focus our efforts on keeping the > check and gate queues green and continue to deliver > weekly promotions for the more recent and active > stable/* branches train ussuri victoria and master. > > > The stable/rocky repos have not had much action in the > last few months - I collected some info at [2] about > the most recent stable/rocky commits for each of the > tripleo repos. 
For many of those there are no commits > in the last 6 months and for some even longer. > > > The tripleo stable/rocky repos were tagged as > "extended maintenance" (rocky-em) [2] in April 2020 > with [3]. > > > We have already reduced our CI commitment for rocky - > these [4] are the current check/gate jobs and these > [5] are the jobs that run for promotion to > current-tripleo. However maintaining this doesn’t make > sense if we are not even using it e.g. merging things > into tripleo-* stable/rocky. > > > Please raise your objections or any other comments or > thoughts about this. Unless there are any blockers > raised here, the plan is to put this into motion early > in January. > > > One still unanswered question I have is that since > there is no ‘unmaintained’ tag, in the same way as we > have the -em or for extended > maintenance and end-of-life, do we simply _declare_ > that the repos are unmaintained? Then after a period > of “0 to 6 months” per [6] we can tag the tripleo > repos with rocky-eol. If any one reading this knows > please tell us! > > > > o/ hello ! > > replying to bump the thread - this was sent ~1 month ago > now and there hasn't been any comment thus far. > > ping @Herve please do you know the answer to that question > in the last paragraph above about 'declaring unmaintained' > ? please thank you ;) > > As discussed at the last tripleo bi-weekly we can consider > moving forward with this so I think it's prudent to give > folks more chance to comment if they object for _reason_ > > thanks, marios > > > > > Thanks for reading! > > regards, marios > > > > [1] > https://releases.openstack.org/teams/tripleo.html#rocky > > [2] http://paste.openstack.org/raw/800464/ > > [3] > https://review.opendev.org/c/openstack/releases/+/709912 > > [4] > http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&var-influxdb_filter=branch%7C%3D%7Cstable%2Frocky > > [5] > http://dashboard-ci.tripleo.org/d/3-DYSmOGk/jobs-exploration?orgId=1&fullscreen&panelId=9&var-influxdb_filter=type%7C%3D%7Crdo&var-influxdb_filter=job_name%7C%3D~%7C%2Fperiodic.*-rocky%2F > > [6] > https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > 
F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Tue Jan 19 16:34:20 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 19 Jan 2021 18:34:20 +0200 Subject: [qa][release][tripleo] Abandoning openstack-tempest-skiplist? In-Reply-To: <177068a105d.aea199651318020.1189554580760474526@ghanshyammann.com> References: <177068a105d.aea199651318020.1189554580760474526@ghanshyammann.com> Message-ID: On Fri, Jan 15, 2021 at 4:56 PM Ghanshyam Mann wrote: > ---- On Fri, 15 Jan 2021 03:07:12 -0600 Herve Beraud > wrote ---- > > Dear QA team, > > The release team noticed an inconsistency between the QA team's > deliverables described in the governance’s reference and deliverables > defined in the openstack/releases repo (c.f our related meeting topic [1]). > > Indeed, openstack-tempest-skiplist (added Mar 20, 2020) was never > released yet, was not ready yet for ussuri and victoria. maybe we should > abandon this instead of waiting? > > Notice that Wallaby's milestone 2 is next week so maybe it could be a > good time to update this. > > Let us know your thoughts, we are waiting for your replies. > > Thanks for reading, > > Thanks hberaud for brining it, > > I did not know about this repo until it was discussed in yesterday's > release meeting. > > As per the governance project.yaml 'openstack-tempest-skiplist' is under > the TripleO project, not QA [1] (tagging tripleo in sub). > > This repo is to maintain the test skiplist so not sure if it needed > release but marios or Chandan can decide. > > [1] > https://github.com/openstack/governance/blob/2bdd9cff00fb40b2f95b66cad47ae1cfd14a2f1b/reference/projects.yaml#L3069 > > -gmann > Thank you Herve and Ghanshyam (and thanks Chandan for pointing me to this thread!) apologies for the late response but I initially missed this. I just discussed a bit with Arx (adding him into cc) - he agrees this isn't something we will want or need to make releases for. It is used by CI tooling and not to do with TripleO deploying OpenStack. The repo itself is branchless so if we *want* to make releases for it then we can consider adding a release file under deliverables/_independent in the releases repo. However since we don't then I think we should just mark it so in governance? I posted that just now there https://review.opendev.org/c/openstack/governance/+/771488 @Herve is that correct? 
regards, marios > > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 > > -- > > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps:// > github.com/4383/https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Tue Jan 19 17:02:44 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Tue, 19 Jan 2021 19:02:44 +0200 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-16 Update In-Reply-To: <176afca89ab.1012b6043628566.4875338583213848682@ghanshyammann.com> References: <1769c0ad573.cf348131542704.5195004248291055264@ghanshyammann.com> <884141609228423@mail.yandex.ru> <176afca89ab.1012b6043628566.4875338583213848682@ghanshyammann.com> Message-ID: <6138511611075251@mail.yandex.ru> An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Tue Jan 19 17:54:08 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Tue, 19 Jan 2021 19:54:08 +0200 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-16 Update In-Reply-To: <6138511611075251@mail.yandex.ru> References: <1769c0ad573.cf348131542704.5195004248291055264@ghanshyammann.com> <884141609228423@mail.yandex.ru> <176afca89ab.1012b6043628566.4875338583213848682@ghanshyammann.com> <6138511611075251@mail.yandex.ru> Message-ID: <1671511611078715@mail.yandex.ru> An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Jan 19 18:16:14 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 19 Jan 2021 12:16:14 -0600 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> Message-ID: <1771bdc8d1e.d3cb624075720.6344507521187621605@ghanshyammann.com> ---- On Tue, 19 Jan 2021 09:32:02 -0600 Jeremy Stanley wrote ---- > On 2021-01-19 08:03:25 +0100 (+0100), Thomas Goirand wrote: > [...] > > Of course, I'm using upper-constraints too, to try to package them > > as much as possible, however, the dependencies are expressed > > according to lower-constraints. > [...] > > The same lower bounds would also typically be expressed in the > requirements.txt file. Presumably you looked there before projects > added lower-constraints.txt files? 
Noting that lower bounds testing > isn't feasible and the jobs we were running weren't actually > correctly testing minimum versions of everything, these have always > been a "best effort" assertion anyway. > > I gather you run Tempest tests against your OpenStack packages on > Debian already, so if a dependency there is too low you'll find out > and can let the project maintainers know that their minimum version > for that in requirements.txt isn't correct. Hopefully that doesn't > come up very often, but for things we can't realistically test, > getting notified by downstream distributors and users is the best > feedback mechanism we can hope for. Yeah, in requirments.txt we always have a lower bound of deps and we do not update it or sync it with u-c. Yes, we will not be testing those as such but as Jeremy mentioned if there is some wrong lower bound then we can fix it quickly. Usually, on every new feature or interface deps, we do bump that lower bound in requirements.txt. We usually check if anything new we are using that is being updated in this file or not -gmann > -- > Jeremy Stanley > From gmann at ghanshyammann.com Tue Jan 19 18:39:39 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 19 Jan 2021 12:39:39 -0600 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-16 Update In-Reply-To: <6138511611075251@mail.yandex.ru> References: <1769c0ad573.cf348131542704.5195004248291055264@ghanshyammann.com> <884141609228423@mail.yandex.ru> <176afca89ab.1012b6043628566.4875338583213848682@ghanshyammann.com> <6138511611075251@mail.yandex.ru> Message-ID: <1771bf1fc95.eb108f1f76821.5001860786267000124@ghanshyammann.com> ---- On Tue, 19 Jan 2021 11:02:44 -0600 Dmitriy Rabotyagov wrote ---- > Hi! I have some follow up questions. On oslo.policy side it looks like it's better to explicitly set policy.yaml path > in config and not rely if services have already moved to using yaml files. Or in case policy.json does not exist, oslo > will try to load yaml instead? This was first thought but we can not do that as this will break the existing deployment relying on policy.json. That is why we need to wait for all services to do 1. change the default value of CONF.policy_file to policy.yaml 2. officially deprecate the JSON format policy file support. And once that is done in all openstack services and the operator has moved to policy.yaml then we can change it in oslo.policy safely. Overall what we are trying to achieve is "Convey the JSON->YAML policy file migration properly to the operator and then switch the flag" so that we do not introduce any breaking change and migrate it smoothly. >Another question is more general one and very basic :( I have a feeling that policies are applied without >related service reload which means that they're loaded from disk for each incoming request? Is that >assumption right? I'm just thinking about the best way to do upgrade and at what point we should be >dropping old policy.json and if we can do this before placing policy.yaml or not. My plan is 1. finish the service side deprecation and default file change in Wallaby 2. give Xena cycle as a buffer for the operator to notice these changes 3. In the Y cycle, we remove the JSON format and file support completly. -gmann 29.12.2020, 20:37, "Ghanshyam Mann" : ---- On Tue, 29 Dec 2020 01:56:22 -0600 Dmitriy Rabotyagov wrote ---- > > Hi! Regarding OpenStack-Ansible I was planning to land patches early January. 
We eventually need to patch every role to change "dest" and "config_type" for placing template, ie. [1] Also we will need to think through removal of old json file for ppl that will perform upgrade, to avoid any possible conflicts and confusions because of the prescence of both files. [1] https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/tasks/glance_post_install.yml#L78-L82 > > Thanks, Dmitriy, do let me know if you need help this is a large number of changes. I will be able to push changes for this. > > On point of the presence of both files, yes this is a good point. From the service side default value change, I am taking care of > this on oslo.policy side[1]. If both files exist and deployment rely on the default value (config option is not overridden ) then > oslo policy will pick up the 'policy.json'. With this, we make sure we do not break any upgrade for deployment relying on this > default value. In the future, when we decide to remove the support of policy.json then we can remove this fallback logic. > > -gmann > > [1] https://github.com/openstack/oslo.policy/blob/0a228dea2ee96ec3eabed3361ca22502d0bbd4a1/oslo_policy/policy.py#L363 > > > > 26.12.2020, 00:41, "Ghanshyam Mann" :Hello Everyone, > > > > Please find the week's R-16 updates on 'Migrate RBAC Policy Format from JSON to YAML' community-wide goals. > > > > Tracking: https://etherpad.opendev.org/p/migrate-policy-format-from-json-to-yaml > > > > Gerrit Topic: https://review.opendev.org/q/topic:%22policy-json-to-yaml%22+(status:open%20OR%20status:merged) > > > > Progress: > > ======= > > * Projects completed: 5 > > * Projects left to merge the patches: 25 > > * Projects left to push the patches: 2 (horizon and Openstackansible) > > * Projects do not need any work: 17 > > > > Updates: > > ======= > > * I have pushed the patches for all the required service projects. > > > > ** Because of many services gate is already broken for lower constraints job, these patches might not be green in the > > test results. I request projects to fix the gate so that we can merge this goal work before m-2. > > > > ** There are many project tests where CONF object was not fully initialized before the policy is init. This was working till now > > as policy init did not use the CONF object but oslo_policy 3.6.0 onwards it needs fully initialized CONF object during init only. > > > > ** Aodh work for this goal is blocked because it needs oslo_policy 3.6.0 but gnocchi is capped for oslo_policy 3.4.0 [1] > > - https://review.opendev.org/c/openstack/aodh/+/768499 > > > > * Horizon and Openstackansible work is pending to use/deploy the YAML formatted policy file. I will start exploring this > > next week or so. > > > > [1] https://github.com/gnocchixyz/gnocchi/blob/e19fda590c7f7f07f1df0ba93177df07d9802300/setup.cfg#L33 > > > > Merry Christmas and Happy Holidays! > > > > -gmann > > > > -- > > Kind Regards,Dmitriy Rabotyagov > -- > Kind Regards,Dmitriy Rabotyagov From gmann at ghanshyammann.com Tue Jan 19 18:44:21 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 19 Jan 2021 12:44:21 -0600 Subject: [qa][release][tripleo] Abandoning openstack-tempest-skiplist? 
In-Reply-To: References: <177068a105d.aea199651318020.1189554580760474526@ghanshyammann.com> Message-ID: <1771bf6499a.e47b8f1076996.4876840442407425263@ghanshyammann.com> ---- On Tue, 19 Jan 2021 10:34:20 -0600 Marios Andreou wrote ---- > > > On Fri, Jan 15, 2021 at 4:56 PM Ghanshyam Mann wrote: > ---- On Fri, 15 Jan 2021 03:07:12 -0600 Herve Beraud wrote ---- > > Dear QA team, > > The release team noticed an inconsistency between the QA team's deliverables described in the governance’s reference and deliverables defined in the openstack/releases repo (c.f our related meeting topic [1]). > > Indeed, openstack-tempest-skiplist (added Mar 20, 2020) was never released yet, was not ready yet for ussuri and victoria. maybe we should abandon this instead of waiting? > > Notice that Wallaby's milestone 2 is next week so maybe it could be a good time to update this. > > Let us know your thoughts, we are waiting for your replies. > > Thanks for reading, > > Thanks hberaud for brining it, > > I did not know about this repo until it was discussed in yesterday's release meeting. > > As per the governance project.yaml 'openstack-tempest-skiplist' is under the TripleO project, not QA [1] (tagging tripleo in sub). > > This repo is to maintain the test skiplist so not sure if it needed release but marios or Chandan can decide. > > [1] https://github.com/openstack/governance/blob/2bdd9cff00fb40b2f95b66cad47ae1cfd14a2f1b/reference/projects.yaml#L3069 > > -gmann > > > Thank you Herve and Ghanshyam (and thanks Chandan for pointing me to this thread!) > apologies for the late response but I initially missed this. > I just discussed a bit with Arx (adding him into cc) - he agrees this isn't something we will want or need to make releases for. It is used by CI tooling and not to do with TripleO deploying OpenStack. > The repo itself is branchless so if we *want* to make releases for it then we can consider adding a release file under deliverables/_independent in the releases repo. > However since we don't then I think we should just mark it so in governance? I posted that just now there https://review.opendev.org/c/openstack/governance/+/771488 at Herve is that correct? Thanks, marios for your response and updates. I also agree that this skip list repo does not need release as such. 
-gmann > > regards, marios > > > > [1] http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 > > -- > > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > > From openstack at nemebean.com Tue Jan 19 20:04:07 2021 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 19 Jan 2021 14:04:07 -0600 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-16 Update In-Reply-To: <1771bf1fc95.eb108f1f76821.5001860786267000124@ghanshyammann.com> References: <1769c0ad573.cf348131542704.5195004248291055264@ghanshyammann.com> <884141609228423@mail.yandex.ru> <176afca89ab.1012b6043628566.4875338583213848682@ghanshyammann.com> <6138511611075251@mail.yandex.ru> <1771bf1fc95.eb108f1f76821.5001860786267000124@ghanshyammann.com> Message-ID: <1df1b294-fd76-c598-1b57-6b298675123a@nemebean.com> On 1/19/21 12:39 PM, Ghanshyam Mann wrote: > ---- On Tue, 19 Jan 2021 11:02:44 -0600 Dmitriy Rabotyagov wrote ---- > > Hi! I have some follow up questions. On oslo.policy side it looks like it's better to explicitly set policy.yaml path >> in config and not rely if services have already moved to using yaml files. Or in case policy.json does not exist, oslo >> will try to load yaml instead? > > This was first thought but we can not do that as this will break the existing deployment relying on policy.json. > That is why we need to wait for all services to do 1. change the default value of CONF.policy_file to policy.yaml > 2. officially deprecate the JSON format policy file support. And once that is done in all openstack services and > the operator has moved to policy.yaml then we can change it in oslo.policy safely. Overall what we are trying to > achieve is "Convey the JSON->YAML policy file migration properly to the operator and then switch the flag" so > that we do not introduce any breaking change and migrate it smoothly. There was also a security concern with potentially having multiple policy files and it not being clear which was in use. If someone converted their JSON policy to YAML, but left the JSON one in place, it could result in oslo.policy using the wrong one (or not the one they expect). We decided it was better for each project to make a clean switchover, which allows for things like upgrade checks that oslo.policy couldn't have itself, than to try to handle it all in oslo.policy. 
From aditi.Dukle at ibm.com Tue Jan 19 11:36:20 2021 From: aditi.Dukle at ibm.com (aditi Dukle) Date: Tue, 19 Jan 2021 11:36:20 +0000 Subject: [nova] unit testing on ppc64le In-Reply-To: References: , , <5E9MMQ.3INH7FY465VR3@est.tech> Message-ID: An HTML attachment was scrubbed... URL: From ankelezhang at gmail.com Tue Jan 19 12:37:53 2021 From: ankelezhang at gmail.com (Ankele zhang) Date: Tue, 19 Jan 2021 20:37:53 +0800 Subject: docs about ironic-inspector Message-ID: Hi, I have an rocky OpenStack platform, and I need openstack-ironic-inspector to inspect my baremetal node, but the documentation on the websit https://docs.openstack.org/ironic/rocky/admin/inspection.html is too concise, without a detailed configuration process. I need a link to documentation for detailed ironic-inspector configuration and usage. Looking forward to your help. Ankele -------------- next part -------------- An HTML attachment was scrubbed... URL: From narjes.bessghaier.1 at ens.etsmtl.ca Tue Jan 19 21:29:32 2021 From: narjes.bessghaier.1 at ens.etsmtl.ca (Bessghaier, Narjes) Date: Tue, 19 Jan 2021 21:29:32 +0000 Subject: Requesting help with OpenStack configuration files Message-ID: Dear OpenStack team, I’m a Ph.D. student in software engineering at the ETS Montreal, University of Quebec working on the quality and configuration of web-based software systems. I’m particularly interested in analyzing configuration files from different OpenStack files. One of the main challenges I am currently facing is the proper identification of configuration files. I’m mostly confused between the python files used for production and the python files used for configuration. I am kindly requesting your precious help with the following questions: 1- How to distinguish between python files used for configuration and python files used for production? It will be very helpful if there are some configuration-based patterns (eg, textual patterns or expressions) that we can find in python files to help us distinguish between source code and configuration files? 2- Certain python files use the oslo_config to access and define configuration options. Could "all" these python files be considered as configuration files? For example, the following python file of the keystone project: keystone/keystone/conf/auth.py, is it considered a source code or configuration file? 3- Why are there different source code and configuration repositories for OpenStack projects (eg, nova and puppet-nova)? For instance, does the OpenStack-nova service have some configuration files in its repository and have the puppet-nova as a separate configuration repository as well? Thank you very much in advance for your time and your help! Kind regards, Narjes Bessghaier Narjes Bessghaier Ph.D student in Software Engineering École de Technologie Supérieure (ETS)| University of Quebec Montreal, Canada narjes.bessghaier.1 at ens.etsmtl.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Tue Jan 19 22:28:03 2021 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 19 Jan 2021 16:28:03 -0600 Subject: Secure RBAC work In-Reply-To: <1764f5dc30b.ce68476360070.2800550259733297462@ghanshyammann.com> References: <1764f5dc30b.ce68476360070.2800550259733297462@ghanshyammann.com> Message-ID: Hey all, I want to follow up on this thread because there's been some discussion and questions (some of which are in reviews) as services work through the proposed changes [0]. 
TL;DR - OpenStack services implementing secure RBAC should update default policies with the `reader` role in a consistent manner, where it is not meant to protect sensitive information. In the process of reviewing changes for various resources, some folks raised concerns about the `reader` role definition. One of the intended use-cases for implementing a `reader` role was to use it for auditing, as noted in the keystone definitions for each role and persona [1]. Another key point of that document, and the underlying design of secure RBAC, is that the default roles have role implications built between them (e.g., reader implies member, and member implies admin). This detail serves two important functions. First, it reduces duplication in check strings because keystone expands role implications in token response bodies. For example, someone with the `admin` role on a project will have `member` and `reader` roles in their token body when they authenticate for a token or validate a token. This reduces the complexity of our check strings by writing the policy to the highest level of authorization required to access an API or resource. Users with anything above that level will work through the role implications feature. Second, it reduces the need for extra role assignments. If you grant someone the `admin` role on a project you don't need to also give them `reader` and `member` role assignments. This is true regardless of how services implement check strings. Ultimately, the hierarchical role structure in keystone and role expansion in token responses give us shorter check strings and less role assignments. But, one thing we're aware of now is that we need to be careful how we expose certain information to users via the `reader` role, since it is the least-privileged role in the hierarchy. For example, one concern was exposing license key information in images to anyone with the `reader` role on the system. Some deployments, depending on their security posture or auditing targets, might not allow sensitive information to be implicitly exposed. Instead, they may require deployments to explicitly grant access to sensitive information [2]. So what do we do moving forward? I think it's clear that there are APIs and resources in OpenStack that fall into a special category where we shouldn't expose certain information to the lowest level of the role hierarchy, regardless of the scope. But, the role implication functionality served a purpose initially to deliver a least-privileged role used only for read operations within a given scope. I think breaking that implication now is confusing considering we implemented the implication in Rocky [3], but I think future work for an elevated read-only role is a good path forward. Eventually, keystone can consider implementing support for a new default role, which implies `reader`, making all the work we do today still useful. At that time, we can update relevant policies to expose sensitive information with the elevated read-only role. I suspect this will be a much smaller set of APIs and policies. I think this approach strikes a balance between what we have today, and a way to move forward that still protects sensitive data. I proposed an update to the documentation in keystone to clarify this point [4]. It also doesn't assume all audits are the same. Instead, it phrases the ability to use `reader` roles for auditing in a way that leaves that up to the deployer and auditor. 
I think that's an important detail since different deployments have different security requirements. Instead of assuming everyone can use `reader` for auditing, we can give them a list of APIs they can interact with as a `reader` (or have them generate those policies themselves, especially if they have custom policy) and let them determine if that access is sufficient for their audit. If it isn't, deployers aren't in a worse position today, but it emphasizes the importance of expanding the default roles to include another tier for elevated read-only permissions. Given where we are in the release cycle for Wallaby, I don't expect keystone to implement a new default role this late in the release [5]. Perhaps Xena is a better target, but I'll talk with Kristi about it next week during the keystone meeting. I hope this helps clarify some of the confusion around the secure RBAC patches. If you have additional comments or questions about this topic, let me know. We can obviously iterate here, or use the policy pop up time slot which is in a couple of days [6]. Thanks, Lance [0] https://review.opendev.org/q/topic:secure-rbac [1] https://docs.openstack.org/keystone/latest/admin/service-api-protection.html [2] FedRAMP control AC -06 (01) is an example of this - *The organization explicitly authorizes access to [Assignment: organization-defined security functions (deployed in hardware, software, and firmware) and security-relevant information].* [3] https://docs.openstack.org/releasenotes/keystone/rocky.html#new-features [4] https://review.opendev.org/c/openstack/keystone/+/771509 [5] https://releases.openstack.org/wallaby/schedule.html [6] https://etherpad.opendev.org/p/default-policy-meeting-agenda On Thu, Dec 10, 2020 at 7:15 PM Ghanshyam Mann wrote: > ---- On Wed, 09 Dec 2020 14:04:57 -0600 Lance Bragstad < > lbragstad at gmail.com> wrote ---- > > Hey everyone, > > > > I wanted to take an opportunity to clarify some work we have been doing > upstream, specifically modifying the default policies across projects. > > > > These changes are the next phase of an initiative that’s been underway > since Queens to fix some long-standing security concerns in OpenStack [0]. > For context, we have been gradually improving policy enforcement for years. > We started by improving policy formats, registering default policies into > code [1], providing better documentation for policy writers, implementing > necessary identity concepts in keystone [2], developing support for those > concepts in libraries [3][4][5][6][7][8], and consuming all of those > changes to provide secure default policies in a way operators can consume > and roll out to their users [9][10]. > > > > All of this work is in line with some high-level documentation we > started writing about three years ago [11][12][13]. > > > > There are a handful of services that have implemented the goals that > define secure RBAC by default, but a community-wide goal is still > out-of-reach. To help with that, the community formed a pop-up team with a > focused objective and disbanding criteria [14]. > > > > The work we currently have in progress [15] is an attempt to start > applying what we have learned from existing implementations to other > projects. The hope is that we can complete the work for even more projects > in Wallaby. Most deployers looking for this functionality won't be able to > use it effectively until all services in their deployment support it. > > Thanks, Lance for pushing this work forwards. 
I completely agree and that > is what we get feedback in > forum sessions also that we should implement this in all the services > first before we ask operators to > move their cloud to the new RBAC. > > We discussed these in today's policy-popup meeting also and encourage > every project to help in those > patches to add tests and review. This will help to finish the work on > priority and we can provide better > RBAC experience to the deployer. > > -gmann > > > > > > > I hope this helps clarify or explain the patches being proposed. > > > > > > As always, I'm happy to elaborate on specific concerns if folks have > them. > > > > > > Thanks, > > > > > > Lance > > > > > > [0] https://bugs.launchpad.net/keystone/+bug/968696/ > > [1] > https://governance.openstack.org/tc/goals/selected/queens/policy-in-code.html > > [2] > https://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html > > [3] https://review.opendev.org/c/openstack/keystoneauth/+/529665 > > [4] > https://review.opendev.org/c/openstack/python-keystoneclient/+/524415 > > [5] https://review.opendev.org/c/openstack/oslo.context/+/530509 > > [6] https://review.opendev.org/c/openstack/keystonemiddleware/+/564072 > > [7] https://review.opendev.org/c/openstack/oslo.policy/+/578995 > > [8] > https://review.opendev.org/q/topic:%22system-scope%22+(status:open%20OR%20status:merged) > > [9] > https://review.opendev.org/q/status:merged+topic:bp/policy-defaults-refresh+branch:master > > [10] > https://review.opendev.org/q/topic:%22implement-default-roles%22+(status:open%20OR%20status:merged) > > [11] > https://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/policy-goals-and-roadmap.html > > [12] > https://docs.openstack.org/keystone/latest/admin/service-api-protection.html > > [13] > https://docs.openstack.org/keystone/latest/contributor/services.html#authorization-scopes > > [14] > https://governance.openstack.org/tc/reference/popup-teams.html#secure-default-policies > > [15] > https://review.opendev.org/q/topic:%2522secure-rbac%2522+status:open > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Tue Jan 19 22:31:23 2021 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 19 Jan 2021 16:31:23 -0600 Subject: Secure RBAC work In-Reply-To: References: <1764f5dc30b.ce68476360070.2800550259733297462@ghanshyammann.com> Message-ID: On Tue, Jan 19, 2021 at 4:28 PM Lance Bragstad wrote: > Hey all, > > I want to follow up on this thread because there's been some discussion > and questions (some of which are in reviews) as services work through the > proposed changes [0]. > > TL;DR - OpenStack services implementing secure RBAC should update default > policies with the `reader` role in a consistent manner, where it is not > meant to protect sensitive information. > > In the process of reviewing changes for various resources, some folks > raised concerns about the `reader` role definition. > > One of the intended use-cases for implementing a `reader` role was to use > it for auditing, as noted in the keystone definitions for each role and > persona [1]. Another key point of that document, and the underlying design > of secure RBAC, is that the default roles have role implications built > between them (e.g., reader implies member, and member implies admin). This > detail serves two important functions. > Correction. The admin role implies the member role and the member role implies the reader role. 
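(A brief aside to make the direction of that chain concrete; this is an illustration only, with a made-up rule name, not something taken from a real service:)

    # Because admin implies member and member implies reader, keystone puts
    # all three roles into the token of a project admin, so a read-only API
    # can be written once at the "reader" level.
    from oslo_policy import policy

    rule = policy.DocumentedRuleDefault(
        name='example:get_widget',  # hypothetical rule name
        check_str='role:reader and project_id:%(project_id)s',
        description='Show a widget (illustration only).',
        operations=[{'path': '/v1/widgets/{widget_id}', 'method': 'GET'}],
    )
    # A user holding only the admin role on the project still passes this
    # check, because the implied member and reader roles are expanded into
    # the token's role list when the token is validated.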
> First, it reduces duplication in check strings because keystone expands > role implications in token response bodies. For example, someone with the > `admin` role on a project will have `member` and `reader` roles in their > token body when they authenticate for a token or validate a token. This > reduces the complexity of our check strings by writing the policy to the > highest level of authorization required to access an API or resource. Users > with anything above that level will work through the role implications > feature. > > Second, it reduces the need for extra role assignments. If you grant > someone the `admin` role on a project you don't need to also give them > `reader` and `member` role assignments. This is true regardless of how > services implement check strings. > > Ultimately, the hierarchical role structure in keystone and role expansion > in token responses give us shorter check strings and less role assignments. > But, one thing we're aware of now is that we need to be careful how we > expose certain information to users via the `reader` role, since it is the > least-privileged role in the hierarchy. For example, one concern was > exposing license key information in images to anyone with the `reader` role > on the system. Some deployments, depending on their security posture or > auditing targets, might not allow sensitive information to be implicitly > exposed. Instead, they may require deployments to explicitly grant access > to sensitive information [2]. > > So what do we do moving forward? > > I think it's clear that there are APIs and resources in OpenStack that > fall into a special category where we shouldn't expose certain information > to the lowest level of the role hierarchy, regardless of the scope. But, > the role implication functionality served a purpose initially to deliver a > least-privileged role used only for read operations within a given scope. I > think breaking that implication now is confusing considering we implemented > the implication in Rocky [3], but I think future work for an elevated > read-only role is a good path forward. Eventually, keystone can consider > implementing support for a new default role, which implies `reader`, making > all the work we do today still useful. At that time, we can update relevant > policies to expose sensitive information with the elevated read-only role. > I suspect this will be a much smaller set of APIs and policies. I think > this approach strikes a balance between what we have today, and a way to > move forward that still protects sensitive data. > > I proposed an update to the documentation in keystone to clarify this > point [4]. It also doesn't assume all audits are the same. Instead, it > phrases the ability to use `reader` roles for auditing in a way that leaves > that up to the deployer and auditor. I think that's an important detail > since different deployments have different security requirements. Instead > of assuming everyone can use `reader` for auditing, we can give them a list > of APIs they can interact with as a `reader` (or have them generate those > policies themselves, especially if they have custom policy) and let them > determine if that access is sufficient for their audit. If it isn't, > deployers aren't in a worse position today, but it emphasizes the > importance of expanding the default roles to include another tier for > elevated read-only permissions. 
Given where we are in the release cycle for > Wallaby, I don't expect keystone to implement a new default role this late > in the release [5]. Perhaps Xena is a better target, but I'll talk with > Kristi about it next week during the keystone meeting. > > I hope this helps clarify some of the confusion around the secure RBAC > patches. If you have additional comments or questions about this topic, let > me know. We can obviously iterate here, or use the policy pop up time slot > which is in a couple of days [6]. > > Thanks, > > Lance > > [0] https://review.opendev.org/q/topic:secure-rbac > [1] > https://docs.openstack.org/keystone/latest/admin/service-api-protection.html > [2] FedRAMP control AC -06 (01) is an example of this - *The organization > explicitly authorizes access to [Assignment: organization-defined security > functions (deployed in hardware, software, and firmware) and > security-relevant information].* > [3] > https://docs.openstack.org/releasenotes/keystone/rocky.html#new-features > [4] https://review.opendev.org/c/openstack/keystone/+/771509 > [5] https://releases.openstack.org/wallaby/schedule.html > [6] https://etherpad.opendev.org/p/default-policy-meeting-agenda > > On Thu, Dec 10, 2020 at 7:15 PM Ghanshyam Mann > wrote: > >> ---- On Wed, 09 Dec 2020 14:04:57 -0600 Lance Bragstad < >> lbragstad at gmail.com> wrote ---- >> > Hey everyone, >> > >> > I wanted to take an opportunity to clarify some work we have been >> doing upstream, specifically modifying the default policies across projects. >> > >> > These changes are the next phase of an initiative that’s been underway >> since Queens to fix some long-standing security concerns in OpenStack [0]. >> For context, we have been gradually improving policy enforcement for years. >> We started by improving policy formats, registering default policies into >> code [1], providing better documentation for policy writers, implementing >> necessary identity concepts in keystone [2], developing support for those >> concepts in libraries [3][4][5][6][7][8], and consuming all of those >> changes to provide secure default policies in a way operators can consume >> and roll out to their users [9][10]. >> > >> > All of this work is in line with some high-level documentation we >> started writing about three years ago [11][12][13]. >> > >> > There are a handful of services that have implemented the goals that >> define secure RBAC by default, but a community-wide goal is still >> out-of-reach. To help with that, the community formed a pop-up team with a >> focused objective and disbanding criteria [14]. >> > >> > The work we currently have in progress [15] is an attempt to start >> applying what we have learned from existing implementations to other >> projects. The hope is that we can complete the work for even more projects >> in Wallaby. Most deployers looking for this functionality won't be able to >> use it effectively until all services in their deployment support it. >> >> Thanks, Lance for pushing this work forwards. I completely agree and that >> is what we get feedback in >> forum sessions also that we should implement this in all the services >> first before we ask operators to >> move their cloud to the new RBAC. >> >> We discussed these in today's policy-popup meeting also and encourage >> every project to help in those >> patches to add tests and review. This will help to finish the work on >> priority and we can provide better >> RBAC experience to the deployer. 
>> >> -gmann >> >> > >> > >> > I hope this helps clarify or explain the patches being proposed. >> > >> > >> > As always, I'm happy to elaborate on specific concerns if folks have >> them. >> > >> > >> > Thanks, >> > >> > >> > Lance >> > >> > >> > [0] https://bugs.launchpad.net/keystone/+bug/968696/ >> > [1] >> https://governance.openstack.org/tc/goals/selected/queens/policy-in-code.html >> > [2] >> https://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html >> > [3] https://review.opendev.org/c/openstack/keystoneauth/+/529665 >> > [4] >> https://review.opendev.org/c/openstack/python-keystoneclient/+/524415 >> > [5] https://review.opendev.org/c/openstack/oslo.context/+/530509 >> > [6] https://review.opendev.org/c/openstack/keystonemiddleware/+/564072 >> > [7] https://review.opendev.org/c/openstack/oslo.policy/+/578995 >> > [8] >> https://review.opendev.org/q/topic:%22system-scope%22+(status:open%20OR%20status:merged) >> > [9] >> https://review.opendev.org/q/status:merged+topic:bp/policy-defaults-refresh+branch:master >> > [10] >> https://review.opendev.org/q/topic:%22implement-default-roles%22+(status:open%20OR%20status:merged) >> > [11] >> https://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/policy-goals-and-roadmap.html >> > [12] >> https://docs.openstack.org/keystone/latest/admin/service-api-protection.html >> > [13] >> https://docs.openstack.org/keystone/latest/contributor/services.html#authorization-scopes >> > [14] >> https://governance.openstack.org/tc/reference/popup-teams.html#secure-default-policies >> > [15] >> https://review.opendev.org/q/topic:%2522secure-rbac%2522+status:open >> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Tue Jan 19 23:09:39 2021 From: zigo at debian.org (Thomas Goirand) Date: Wed, 20 Jan 2021 00:09:39 +0100 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> Message-ID: <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> Hi Jeremy, Thanks for your reply. On 1/19/21 4:32 PM, Jeremy Stanley wrote: > On 2021-01-19 08:03:25 +0100 (+0100), Thomas Goirand wrote: > [...] >> Of course, I'm using upper-constraints too, to try to package them >> as much as possible, however, the dependencies are expressed >> according to lower-constraints. > [...] > > The same lower bounds would also typically be expressed in the > requirements.txt file. Presumably you looked there before projects > added lower-constraints.txt files? I should rephrase. Yes, I'm looking into requirements.txt to translate this into the dependencies in debian/control. The exact workflow, is to compare what's in requirements.txt and what's in the current Debian Stable. If the version is satisfied in Debian Stable, I just don't express any lower bound. If the dependency in Debian Stable isn't high enough, I do write whatever upstream wrote as minimum version in debian/control, meaning that it will need a backported version to Debian Stable to run, or the version in Debian Testing/Unstable. I do expect it to be correct in requirements.txt, and as we always say in the OpenStack world "if it's not tested it's broken"... Which is what bothers me here. 
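(For what it's worth, that comparison step is mechanical enough to script; a rough, hypothetical sketch using the packaging library, with the Debian stable versions hard-coded purely for illustration:)

    # Hypothetical helper: flag requirements.txt lower bounds that the
    # version shipped in Debian stable does not satisfy.
    from packaging.requirements import Requirement

    debian_stable = {'oslo.config': '8.0.2', 'pbr': '5.5.0'}  # made-up data

    with open('requirements.txt') as reqs:
        for line in reqs:
            line = line.split('#')[0].strip()
            if not line:
                continue
            req = Requirement(line)
            shipped = debian_stable.get(req.name)
            if shipped and not req.specifier.contains(shipped, prereleases=True):
                print('%s: Debian stable has %s, requirements.txt wants %s'
                      % (req.name, shipped, req.specifier))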
> Noting that lower bounds testing > isn't feasible and the jobs we were running weren't actually > correctly testing minimum versions of everything, these have always > been a "best effort" assertion anyway. Correct, though $topic for this thread is "let's throw away the baby with the bath water"^W^W^W^W "Let's stop testing from all projects". :) If $topic was "let's relax testing on l-c", or "can we find a solution" you'd have my full acceptance, as I can only agree that there's only so much one can do in a day of work, and that our head count is shrinking. Though at the same time, I have a hard time understanding a general call for removing useful tests. At the end, I'm not the person that will be maintaining these tests. That's not my role, and I simply wouldn't be able to do that upstream work on the 500+ packages that I maintain (nearly) alone. Though it's my duty to warn the community about the consequences it may have downstream. > I gather you run Tempest tests against your OpenStack packages on > Debian already I'm trying to find the time to do it, but until I have a full CI up and running (which isn't fully the case yet for my installer), it's still a manual, and painful process. :/ I'm close to having such a CI up and running, though with my current setup, it demands a lot of resources (ie: a machine with 256GB of RAM). Contributions would be very much appreciated. > so if a dependency there is too low you'll find out > and can let the project maintainers know that their minimum version > for that in requirements.txt isn't correct. It happened numerous times that I did such bug report. Which proves 2 things: - that it's not tested (enough) - that testing would be useful > Hopefully that doesn't come up very often It used to be that it was about 5 or 6 times per release. People working on each project for long enough can probably remember me asking about failed unit tests, and being told to upgrade this or that. This type of trouble may mean spending 2 or 3 days not understanding what's happening, until someone on IRC, knowing the project well enough, just finds out in 5 minutes. Since the tested l-c, I can't remember finding out such a problem that wasn't my fault, so it was a huge improvement. One also has to keep in mind that, if on a single release, I can find 5 or 6 times a wrong lower-bound in a requirements.txt, this probably means 10 times more wrong lower bounds really being there. I don't see most problems, because I don't test lower-bound myself, and try to package as much as possible what's in u-c. So I just happen to bump into something I forgot to upgrade *AND* upstream has a wrong lower-bound. > but for things we can't realistically test, > getting notified by downstream distributors and users is the best > feedback mechanism we can hope for. Something I don't understand: why can't we use an older version of pip, if the problem is the newer pip resolver? Or can't the current pip be patched to fix things? It's not as if there was no prior art... Maybe I'm missing the big picture? On 1/19/21 7:16 PM, Ghanshyam Mann wrote: > Yeah, in requirments.txt we always have a lower bound of deps and we > do not update it or sync it with u-c. Yes, we will not be testing > those as such but as Jeremy mentioned if there is some wrong lower > bound then we can fix it quickly. > > Usually, on every new feature or interface deps, we do bump that lower > bound in requirements.txt. 
We usually check if anything new we are > using that is being updated in this file or not From a downstream distribution package maintainer, having an upstream to do that work, is just super nice and rare. Though the manual process that you describe above is far from trivial, and very error-prone, unfortunately. And this isn't specific to OpenStack of course. Cheers, Thomas Goirand (zigo) From zigo at debian.org Tue Jan 19 23:17:45 2021 From: zigo at debian.org (Thomas Goirand) Date: Wed, 20 Jan 2021 00:17:45 +0100 Subject: Requesting help with OpenStack configuration files In-Reply-To: References: Message-ID: On 1/19/21 10:29 PM, Bessghaier, Narjes wrote: > Dear OpenStack team, > > > I’m a Ph.D. student in software engineering at the ETS Montreal, > University of Quebec working on the quality and configuration of > web-based software systems. > I’m particularly interested in analyzing configuration files from > different OpenStack files. One of the main challenges I am currently > facing is the proper identification of configuration files. I’m mostly > confused between the python files used for production and the python > files used for configuration. I am kindly requesting your precious help > with the following questions: > >   > > 1- How to distinguish between python files used for configuration and > python files used for production? It will be very helpful if there > are some configuration-based patterns (eg, textual patterns or > expressions) that we can find in python files to help us distinguish > between source code and configuration files? Hi! Apart from Horizon (that reads a "local_settings.py" as configuration file), all projects are using the .ini file format for configuration, and read them using the oslo.config library. So there's no such thing as "python files used for configuration". Also, from a downstream distribution, it's very easy to identify configuration files: they are all stored in /etc. If not, then this is a bug which you may file against the package: https://www.debian.org/Bugs/Reporting > 2- Certain python files use the oslo_config to access and define > configuration options. Could "all" these python files be considered as > configuration files? None of them are. > For example, the following python file of the > keystone project: keystone/keystone/conf/auth.py, is it considered a > source code or configuration file? Definitely source code. > 3- Why are there different source code and configuration repositories > for OpenStack projects (eg, nova and puppet-nova)? These are 2 different projects. "nova" is the upstream code for the nova service, and "puppet-nova" is a repository for the puppet configuration management module for Nova. > For instance, does > the OpenStack-nova service have some configuration files in its > repository The nova repository doesn't contain any configuration file by itself, these are generated by oslo-config-generator using the code of Nova. > and have the puppet-nova as a separate configuration > repository as well? puppet-nova isn't holding any configuration for Nova. It is a module for puppet that will configure (ie: tweak the configuration of) Nova. You probably should read about what puppet is and what it does.
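(To make that distinction concrete, here is a minimal, self-contained sketch rather than actual keystone code: a module like keystone/conf/auth.py only registers options in Python, while the values come from an .ini style file that oslo.config parses at startup.)

    # Sketch of what an oslo.config "conf" module does. The declarations
    # below are source code; the operator's settings live in an .ini file
    # such as /etc/keystone/keystone.conf, not in this Python file.
    from oslo_config import cfg

    opts = [
        cfg.ListOpt('methods', default=['password', 'token'],
                    help='Allowed authentication methods.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(opts, group='auth')

    # Values are read from the configuration file(s) passed at startup.
    CONF(['--config-file', '/etc/keystone/keystone.conf'])
    print(CONF.auth.methods)

The sample configuration files shipped by packages are then produced from these option registrations with oslo-config-generator, as mentioned above for Nova.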
I hope this helps, Cheers, Thomas Goirand (zigo) From zigo at debian.org Tue Jan 19 23:25:46 2021 From: zigo at debian.org (Thomas Goirand) Date: Wed, 20 Jan 2021 00:25:46 +0100 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-16 Update In-Reply-To: <1df1b294-fd76-c598-1b57-6b298675123a@nemebean.com> References: <1769c0ad573.cf348131542704.5195004248291055264@ghanshyammann.com> <884141609228423@mail.yandex.ru> <176afca89ab.1012b6043628566.4875338583213848682@ghanshyammann.com> <6138511611075251@mail.yandex.ru> <1771bf1fc95.eb108f1f76821.5001860786267000124@ghanshyammann.com> <1df1b294-fd76-c598-1b57-6b298675123a@nemebean.com> Message-ID: On 1/19/21 9:04 PM, Ben Nemec wrote: > There was also a security concern with potentially having multiple > policy files and it not being clear which was in use. If someone > converted their JSON policy to YAML, but left the JSON one in place, it > could result in oslo.policy using the wrong one (or not the one they > expect). We decided it was better for each project to make a clean > switchover, which allows for things like upgrade checks that oslo.policy > couldn't have itself, than to try to handle it all in oslo.policy. IMO, that's a downstream distro thing. What I did in Debian (and for Victoria already) was having the postinst of each package to rename any existing policy.json into a disabled version. Here's an example with Cinder:

if [ -r /etc/cinder/policy.json ] ; then
    mv /etc/cinder/policy.json /etc/cinder/disabled.policy.json.old
fi

and then package the yaml file as (example from Nova):

/etc/nova/policy.d/00_default_policy.yaml

and then setting-up this:

policy_dirs = /etc/nova/policy.d

The reason I'm doing this way, is that I'm expecting upstream to generate a commented-only yaml file, and final users to drop non-default supplementary files without touching the package default file. So, someone upgrading to Victoria with a non-default policy.json will see its manual tweaks go away, but not completely gone (ie: recoverable from disabled.policy.json.old). Does this seem to be a correct approach? Cheers, Thomas Goirand (zigo) From fungi at yuggoth.org Tue Jan 19 23:51:49 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 19 Jan 2021 23:51:49 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> Message-ID: <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> On 2021-01-20 00:09:39 +0100 (+0100), Thomas Goirand wrote: [...] > Something I don't understand: why can't we use an older version of > pip, if the problem is the newer pip resolver? Or can't the > current pip be patched to fix things? It's not as if there was no > prior art... Maybe I'm missing the big picture? [...] To get to the heart of the matter, when using older versions of pip it was just quietly installing different versions of packages than we asked it to, and versions of transitive dependencies which directly conflicted with the versions other dependencies said they required. When pip finally (very recently) implemented a coherent dependency solver, it started alerting us directly to this fact.
We could certainly find a way to hide our heads in the sand and go back to testing with old pip and pretending we knew what was being tested there, but the question is whether what we were actually testing that way was worthwhile enough to try to continue doing it, now that we have proof it wasn't what we were wanting to test. The challenge with actually testing what we wanted has always been that there's many hundreds of packages we depend on and, short of writing one ourselves, no tool available to find a coherent set of versions of them which satisfy the collective lower bounds. The way pip works, it wants to always solve for the newest possible versions which satisfy an aggregate set of version ranges, and what we'd want for lower bounds checking is the inverse of that. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From lpetrut at cloudbasesolutions.com Wed Jan 20 07:26:05 2021 From: lpetrut at cloudbasesolutions.com (Lucian Petrut) Date: Wed, 20 Jan 2021 07:26:05 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org>, <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> Message-ID: Hi, For Windows related projects such as os-win and networking-hyperv, we decided to keep the lower constraints job but remove indirect dependencies from the lower-constraints.txt file. This made it much easier to maintain and it allows us to at least cover direct dependencies. I suggest considering this approach instead of completely dropping the lower constraints job, whenever possible. Another option might be to make it non-voting while it’s getting fixed. Lucian Petrut From: Jeremy Stanley Sent: Wednesday, January 20, 2021 1:52 AM To: openstack-discuss at lists.openstack.org Subject: Re: [all][tc] Dropping lower-constraints testing from all projects On 2021-01-20 00:09:39 +0100 (+0100), Thomas Goirand wrote: [...] > Something I don't understand: why can't we use an older version of > pip, if the problem is the newer pip resolver? Or can't the > current pip be patched to fix things? It's not as if there was no > prior art... Maybe I'm missing the big picture? [...] To get to the heart of the matter, when using older versions of pip it was just quietly installing different versions of packages than we asked it to, and versions of transitive dependencies which directly conflicted with the versions other dependencies said they required. When pip finally (very recently) implemented a coherent dependency solver, it started alerting us directly to this fact. We could certainly find a way to hide our heads in the sand and go back to testing with old pip and pretending we knew what was being tested there, but the question is whether what we were actually testing that way was worthwhile enough to try to continue doing it, now that we have proof it wasn't what we were wanting to test. 
The challenge with actually testing what we wanted has always been that there's many hundreds of packages we depend on and, short of writing one ourselves, no tool available to find a coherent set of versions of them which satisfy the collective lower bounds. The way pip works, it wants to always solve for the newest possible versions which satisfy an aggregate set of version ranges, and what we'd want for lower bounds checking is the inverse of that. -- Jeremy Stanley -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Jan 20 08:53:45 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 20 Jan 2021 08:53:45 +0000 Subject: [all][infra] CI test result table in the new gerrit review UI In-Reply-To: <20210119151722.gguyp53vn7oa6vtc@yuggoth.org> References: <20210119055830.GB3137911@fedora19.localdomain> <20210119151722.gguyp53vn7oa6vtc@yuggoth.org> Message-ID: On Tue, 19 Jan 2021 at 15:18, Jeremy Stanley wrote: > > On 2021-01-19 12:56:41 +0000 (+0000), Sean Mooney wrote: > > On Tue, 2021-01-19 at 12:37 +0100, Dmitry Tantsur wrote: > [...] > > > I wonder if we could also run the plugin that shows the live > > > progress (it was mentioned somewhere in the thread). > > > > i belive showing the live progress of the jobs is effectivly a > > ddos vector. infra have ask that we not use javascript to pool the > > the live status of the jobs in our browser in the past. > [...] > > i know that we previously tried enbeding the zuul job status > > directly into gerrit a few years ago and that had to be qickly > > reverted as it does not take many developers leave review open in > > a tab to quickly make that unworkable. i know i for one often > > leave review open over night if im pinged to review something > > shortly before i finish for the day so that its open on my screen > > when i log in the next day. > [...] > > I think it's probably worth trying again. The previous attempts hit > a wall because of several challenges: > > 1. The available Zuul status API returned data on all enqueued refs > (a *very* large JSON blob when the system is under heavy use) > > 2. Calls to the API were handled by a thread of the scheduler > daemon, so often blocked or were blocked by other things going on, > especially when Zuul was already under significant load > > 3. Browsers at the time continued running Javascript payloads in > "background" tabs so the volume of queries was multiplied not just > by the number of users but also by the average number of review tabs > they had open > > Over time we added a ref-scoped status method so callers could > request the status of a specific change. The Zuul REST API is now > served by a separate zuul-web daemon, which we can move to a > different server entirely if load demands that (and can even scale > horizontally with more than one instance of it, I think?). Browser > tech has also improved, and popular ones these days suspend > Javascript stacks when tabs are not exposed. We may also be caching > status API responses more aggressively than we used to do. All of > these factors combined could make live status info in a Gerrit > plug-in entirely tractable, we'll just need someone with time to try > it and see... and be prepared for multiple Gerrit service restarts > to enable/disable it, so probably not when utilization is as high as > it has been the past couple of weeks. A refresh button to update live results on demand could be a good compromise between UX and unnecessary polling. 
> -- > Jeremy Stanley From mark at stackhpc.com Wed Jan 20 08:57:16 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 20 Jan 2021 08:57:16 +0000 Subject: docs about ironic-inspector In-Reply-To: References: Message-ID: On Tue, 19 Jan 2021 at 20:23, Ankele zhang wrote: > > Hi, > I have an rocky OpenStack platform, and I need openstack-ironic-inspector to inspect my baremetal node, but the documentation on the websit https://docs.openstack.org/ironic/rocky/admin/inspection.html is too concise, without a detailed configuration process. > I need a link to documentation for detailed ironic-inspector configuration and usage. > Looking forward to your help. > > Ankele Hi Ankele, See https://docs.openstack.org/ironic-inspector/rocky/. Mark From hberaud at redhat.com Wed Jan 20 09:07:54 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 20 Jan 2021 10:07:54 +0100 Subject: [qa][release][tripleo] Abandoning openstack-tempest-skiplist? In-Reply-To: <1771bf6499a.e47b8f1076996.4876840442407425263@ghanshyammann.com> References: <177068a105d.aea199651318020.1189554580760474526@ghanshyammann.com> <1771bf6499a.e47b8f1076996.4876840442407425263@ghanshyammann.com> Message-ID: @Mario: Patch LGTM I think you can stop holding it, and I think we can propose to delete the release file on openstack/release with a depends-on https://review.opendev.org/c/openstack/governance/+/771488 . @Ghanshyam: Thanks for the heads-up about Tripleo. Le mar. 19 janv. 2021 à 19:44, Ghanshyam Mann a écrit : > ---- On Tue, 19 Jan 2021 10:34:20 -0600 Marios Andreou > wrote ---- > > > > > > On Fri, Jan 15, 2021 at 4:56 PM Ghanshyam Mann > wrote: > > ---- On Fri, 15 Jan 2021 03:07:12 -0600 Herve Beraud < > hberaud at redhat.com> wrote ---- > > > Dear QA team, > > > The release team noticed an inconsistency between the QA team's > deliverables described in the governance’s reference and deliverables > defined in the openstack/releases repo (c.f our related meeting topic [1]). > > > Indeed, openstack-tempest-skiplist (added Mar 20, 2020) was never > released yet, was not ready yet for ussuri and victoria. maybe we should > abandon this instead of waiting? > > > Notice that Wallaby's milestone 2 is next week so maybe it could be > a good time to update this. > > > Let us know your thoughts, we are waiting for your replies. > > > Thanks for reading, > > > > Thanks hberaud for brining it, > > > > I did not know about this repo until it was discussed in yesterday's > release meeting. > > > > As per the governance project.yaml 'openstack-tempest-skiplist' is > under the TripleO project, not QA [1] (tagging tripleo in sub). > > > > This repo is to maintain the test skiplist so not sure if it needed > release but marios or Chandan can decide. > > > > [1] > https://github.com/openstack/governance/blob/2bdd9cff00fb40b2f95b66cad47ae1cfd14a2f1b/reference/projects.yaml#L3069 > > > > -gmann > > > > > > Thank you Herve and Ghanshyam (and thanks Chandan for pointing me to > this thread!) > > apologies for the late response but I initially missed this. > > I just discussed a bit with Arx (adding him into cc) - he agrees this > isn't something we will want or need to make releases for. It is used by CI > tooling and not to do with TripleO deploying OpenStack. > > The repo itself is branchless so if we *want* to make releases for it > then we can consider adding a release file under deliverables/_independent > in the releases repo. > > However since we don't then I think we should just mark it so in > governance? 
I posted that just now there > https://review.opendev.org/c/openstack/governance/+/771488 at Herve is that > correct? > > Thanks, marios for your response and updates. I also agree that this skip > list repo does not need release as such. > > -gmann > > > > > regards, marios > > > > > > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 > > > -- > > > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps:// > github.com/4383/https://twitter.com/4383hberaud > > > -----BEGIN PGP SIGNATURE----- > > > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > > v6rDpkeNksZ9fFSyoY2o > > > =ECSj > > > -----END PGP SIGNATURE----- > > > > > > > > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Wed Jan 20 09:22:04 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 20 Jan 2021 11:22:04 +0200 Subject: [qa][release][tripleo] Abandoning openstack-tempest-skiplist? In-Reply-To: References: <177068a105d.aea199651318020.1189554580760474526@ghanshyammann.com> <1771bf6499a.e47b8f1076996.4876840442407425263@ghanshyammann.com> Message-ID: On Wed, Jan 20, 2021 at 11:08 AM Herve Beraud wrote: > @Mario: Patch LGTM I think you can stop holding it, and I think we can > propose to delete the release file on openstack/release with a depends-on > https://review.opendev.org/c/openstack/governance/+/771488 > . > OK I will remove the -workflow at https://review.opendev.org/c/openstack/governance/+/771488 but which release file? I can't find openstack-tempest-skiplist in https://opendev.org/openstack/releases/src/branch/master/ I suspect we never created one (I think that was the original issue that created this thread right? 
There is divergence between governance repo, which has openstack-tempest-skiplist under tripleo release management and the releases repo where we don't have the release file?). Am I missing it somewhere? thanks, marios > > @Ghanshyam: Thanks for the heads-up about Tripleo. > > > Le mar. 19 janv. 2021 à 19:44, Ghanshyam Mann a > écrit : > >> ---- On Tue, 19 Jan 2021 10:34:20 -0600 Marios Andreou < >> marios at redhat.com> wrote ---- >> > >> > >> > On Fri, Jan 15, 2021 at 4:56 PM Ghanshyam Mann < >> gmann at ghanshyammann.com> wrote: >> > ---- On Fri, 15 Jan 2021 03:07:12 -0600 Herve Beraud < >> hberaud at redhat.com> wrote ---- >> > > Dear QA team, >> > > The release team noticed an inconsistency between the QA team's >> deliverables described in the governance’s reference and deliverables >> defined in the openstack/releases repo (c.f our related meeting topic [1]). >> > > Indeed, openstack-tempest-skiplist (added Mar 20, 2020) was never >> released yet, was not ready yet for ussuri and victoria. maybe we should >> abandon this instead of waiting? >> > > Notice that Wallaby's milestone 2 is next week so maybe it could be >> a good time to update this. >> > > Let us know your thoughts, we are waiting for your replies. >> > > Thanks for reading, >> > >> > Thanks hberaud for brining it, >> > >> > I did not know about this repo until it was discussed in yesterday's >> release meeting. >> > >> > As per the governance project.yaml 'openstack-tempest-skiplist' is >> under the TripleO project, not QA [1] (tagging tripleo in sub). >> > >> > This repo is to maintain the test skiplist so not sure if it needed >> release but marios or Chandan can decide. >> > >> > [1] >> https://github.com/openstack/governance/blob/2bdd9cff00fb40b2f95b66cad47ae1cfd14a2f1b/reference/projects.yaml#L3069 >> > >> > -gmann >> > >> > >> > Thank you Herve and Ghanshyam (and thanks Chandan for pointing me to >> this thread!) >> > apologies for the late response but I initially missed this. >> > I just discussed a bit with Arx (adding him into cc) - he agrees this >> isn't something we will want or need to make releases for. It is used by CI >> tooling and not to do with TripleO deploying OpenStack. >> > The repo itself is branchless so if we *want* to make releases for it >> then we can consider adding a release file under deliverables/_independent >> in the releases repo. >> > However since we don't then I think we should just mark it so in >> governance? I posted that just now there >> https://review.opendev.org/c/openstack/governance/+/771488 at Herve is that >> correct? >> >> Thanks, marios for your response and updates. I also agree that this skip >> list repo does not need release as such. 
>> >> -gmann >> >> > >> > regards, marios >> > >> > >> > > [1] >> http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 >> > > -- >> > > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps:// >> github.com/4383/https://twitter.com/4383hberaud >> > > -----BEGIN PGP SIGNATURE----- >> > > >> > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> > > v6rDpkeNksZ9fFSyoY2o >> > > =ECSj >> > > -----END PGP SIGNATURE----- >> > > >> > > >> > >> > >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Jan 20 09:49:48 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 20 Jan 2021 10:49:48 +0100 Subject: [qa][release][tripleo] Abandoning openstack-tempest-skiplist? In-Reply-To: References: <177068a105d.aea199651318020.1189554580760474526@ghanshyammann.com> <1771bf6499a.e47b8f1076996.4876840442407425263@ghanshyammann.com> Message-ID: Ah yes sorry, my bad, the file is missing and it was the origin of our inconsistency, please ignore my previously proposed removing step on openstack/release. Thanks Le mer. 20 janv. 2021 à 10:22, Marios Andreou a écrit : > > > On Wed, Jan 20, 2021 at 11:08 AM Herve Beraud wrote: > >> @Mario: Patch LGTM I think you can stop holding it, and I think we can >> propose to delete the release file on openstack/release with a depends-on >> https://review.opendev.org/c/openstack/governance/+/771488 >> . >> > > > OK I will remove the -workflow at > https://review.opendev.org/c/openstack/governance/+/771488 > > but which release file? I can't find openstack-tempest-skiplist in > https://opendev.org/openstack/releases/src/branch/master/ I suspect we > never created one (I think that was the original issue that created this > thread right? 
There is divergence between governance repo, which has > openstack-tempest-skiplist under tripleo release management and the > releases repo where we don't have the release file?). Am I missing it > somewhere? > > thanks, marios > > > >> >> @Ghanshyam: Thanks for the heads-up about Tripleo. >> >> >> Le mar. 19 janv. 2021 à 19:44, Ghanshyam Mann >> a écrit : >> >>> ---- On Tue, 19 Jan 2021 10:34:20 -0600 Marios Andreou < >>> marios at redhat.com> wrote ---- >>> > >>> > >>> > On Fri, Jan 15, 2021 at 4:56 PM Ghanshyam Mann < >>> gmann at ghanshyammann.com> wrote: >>> > ---- On Fri, 15 Jan 2021 03:07:12 -0600 Herve Beraud < >>> hberaud at redhat.com> wrote ---- >>> > > Dear QA team, >>> > > The release team noticed an inconsistency between the QA team's >>> deliverables described in the governance’s reference and deliverables >>> defined in the openstack/releases repo (c.f our related meeting topic [1]). >>> > > Indeed, openstack-tempest-skiplist (added Mar 20, 2020) was never >>> released yet, was not ready yet for ussuri and victoria. maybe we should >>> abandon this instead of waiting? >>> > > Notice that Wallaby's milestone 2 is next week so maybe it could >>> be a good time to update this. >>> > > Let us know your thoughts, we are waiting for your replies. >>> > > Thanks for reading, >>> > >>> > Thanks hberaud for brining it, >>> > >>> > I did not know about this repo until it was discussed in yesterday's >>> release meeting. >>> > >>> > As per the governance project.yaml 'openstack-tempest-skiplist' is >>> under the TripleO project, not QA [1] (tagging tripleo in sub). >>> > >>> > This repo is to maintain the test skiplist so not sure if it needed >>> release but marios or Chandan can decide. >>> > >>> > [1] >>> https://github.com/openstack/governance/blob/2bdd9cff00fb40b2f95b66cad47ae1cfd14a2f1b/reference/projects.yaml#L3069 >>> > >>> > -gmann >>> > >>> > >>> > Thank you Herve and Ghanshyam (and thanks Chandan for pointing me to >>> this thread!) >>> > apologies for the late response but I initially missed this. >>> > I just discussed a bit with Arx (adding him into cc) - he agrees this >>> isn't something we will want or need to make releases for. It is used by CI >>> tooling and not to do with TripleO deploying OpenStack. >>> > The repo itself is branchless so if we *want* to make releases for it >>> then we can consider adding a release file under deliverables/_independent >>> in the releases repo. >>> > However since we don't then I think we should just mark it so in >>> governance? I posted that just now there >>> https://review.opendev.org/c/openstack/governance/+/771488 at Herve is >>> that correct? >>> >>> Thanks, marios for your response and updates. I also agree that this >>> skip list repo does not need release as such. 
>>> >>> -gmann >>> >>> > >>> > regards, marios >>> > >>> > > [1] >>> http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 >>> > > -- >>> > > Hervé Beraud, Senior Software Engineer at Red Hat, irc: hberaud >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chacon.piza at gmail.com Wed Jan 20 10:00:47 2021 From: chacon.piza at gmail.com (Martin Chacon Piza) Date: Wed, 20 Jan 2021 11:00:47 +0100 Subject: [Release-job-failures] Release of openstack/monasca-tempest-plugin for ref refs/tags/2.2.0 failed In-Reply-To: References: Message-ID: Hi Hervé, Thanks for your note. This change will fix the problem https://review.opendev.org/c/openstack/monasca-tempest-plugin/+/771523 Now the Monasca-tempest images are published properly as you can see here: https://zuul.opendev.org/t/openstack/builds?job_name=publish-monasca-tempest-plugin-docker-image We detected this issue recently too, which affects the rest of Monasca Repositories. It can be tracked with this topic: https://review.opendev.org/q/(projects:openstack/monasca+OR+project:openstack/python-monascaclient)+AND+fix-zuul-publish Could you help us please to restart the monasca-tempest-plugin release? Thanks in advance, Martin (chaconpiza) P.S. Thanks Adrian Czarnecki for the original fix! El mar, 19 de ene. de 2021 a la(s) 09:42, Herve Beraud (hberaud at redhat.com) escribió: > Hello monasca team, > > FYI a release failure happened on monasca-tempest-plugin within the job > publish-monasca-tempest-plugin-docker-image. > > This build was triggered by > https://review.opendev.org/c/openstack/releases/+/768551 (the Wallaby > part). > > A new incompatibility in requirements was found by pip's new resolver: > > pykafka 2.8.0 has requirement kazoo==2.5.0, but you'll have kazoo 2.8.0 > which is incompatible. > (c.f > https://zuul.opendev.org/t/openstack/build/aba1acc623e74cf08e82ffc4d73134aa/log/job-output.txt#1157 > ) > > I didn't find a trace of these requirements on your repo so I think they > are pulled/resolved from/for underlying libraries. > > After that the job fails to push the "latest" docker tag because the tag > is not found, I think it's a side effect of the previous error. > > (c.f > https://zuul.opendev.org/t/openstack/build/aba1acc623e74cf08e82ffc4d73134aa/log/job-output.txt#1443 > ) > > Let us know if we can help you, > > The Release Team. > > > Le lun. 18 janv. 2021 à 23:31, a écrit : > >> Build failed. 
>> >> - openstack-upload-github-mirror >> https://zuul.opendev.org/t/openstack/build/8dd9a4fce3bc4ae2a32134b7d7fec5b5 >> : SUCCESS in 52s >> - release-openstack-python >> https://zuul.opendev.org/t/openstack/build/ff7c8136563444df9c565f07f618c559 >> : SUCCESS in 3m 44s >> - announce-release >> https://zuul.opendev.org/t/openstack/build/78731c6e0948490d82e1e2d14eb67857 >> : SUCCESS in 3m 42s >> - propose-update-constraints >> https://zuul.opendev.org/t/openstack/build/3eab7a3209a84a33b8d7b69e41e185cb >> : SUCCESS in 2m 49s >> - publish-monasca-tempest-plugin-docker-image >> https://zuul.opendev.org/t/openstack/build/aba1acc623e74cf08e82ffc4d73134aa >> : POST_FAILURE in 8m 19s >> >> _______________________________________________ >> Release-job-failures mailing list >> Release-job-failures at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- *Martín Chacón Pizá* *chacon.piza at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Jan 20 10:18:56 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 20 Jan 2021 11:18:56 +0100 Subject: [Release-job-failures] Release of openstack/monasca-tempest-plugin for ref refs/tags/2.2.0 failed In-Reply-To: References: Message-ID: Le mer. 20 janv. 2021 à 11:01, Martin Chacon Piza a écrit : > Hi Hervé, > > Thanks for your note. This change will fix the problem > https://review.opendev.org/c/openstack/monasca-tempest-plugin/+/771523 > You're welcome > > Now the Monasca-tempest images are published properly as you can see here: > > https://zuul.opendev.org/t/openstack/builds?job_name=publish-monasca-tempest-plugin-docker-image > > We detected this issue recently too, which affects the rest of Monasca > Repositories. It can be tracked with this topic: > > https://review.opendev.org/q/(projects:openstack/monasca+OR+project:openstack/python-monascaclient)+AND+fix-zuul-publish > > Could you help us please to restart the monasca-tempest-plugin release? > Sure, I asked the infra team to reenqueue the job, let's wait for that, do not hesitate to join #openstack-infra to join the discussion. > Thanks in advance, > Martin (chaconpiza) > > P.S. Thanks Adrian Czarnecki for the original fix! > > > > El mar, 19 de ene. de 2021 a la(s) 09:42, Herve Beraud (hberaud at redhat.com) > escribió: > >> Hello monasca team, >> >> FYI a release failure happened on monasca-tempest-plugin within the job >> publish-monasca-tempest-plugin-docker-image. 
>> >> This build was triggered by >> https://review.opendev.org/c/openstack/releases/+/768551 (the Wallaby >> part). >> >> A new incompatibility in requirements was found by pip's new resolver: >> > pykafka 2.8.0 has requirement kazoo==2.5.0, but you'll have kazoo 2.8.0 >> which is incompatible. >> (c.f >> https://zuul.opendev.org/t/openstack/build/aba1acc623e74cf08e82ffc4d73134aa/log/job-output.txt#1157 >> ) >> >> I didn't find a trace of these requirements on your repo so I think they >> are pulled/resolved from/for underlying libraries. >> >> After that the job fails to push the "latest" docker tag because the tag >> is not found, I think it's a side effect of the previous error. >> >> (c.f >> https://zuul.opendev.org/t/openstack/build/aba1acc623e74cf08e82ffc4d73134aa/log/job-output.txt#1443 >> ) >> >> Let us know if we can help you, >> >> The Release Team. >> >> >> Le lun. 18 janv. 2021 à 23:31, a écrit : >> >>> Build failed. >>> >>> - openstack-upload-github-mirror >>> https://zuul.opendev.org/t/openstack/build/8dd9a4fce3bc4ae2a32134b7d7fec5b5 >>> : SUCCESS in 52s >>> - release-openstack-python >>> https://zuul.opendev.org/t/openstack/build/ff7c8136563444df9c565f07f618c559 >>> : SUCCESS in 3m 44s >>> - announce-release >>> https://zuul.opendev.org/t/openstack/build/78731c6e0948490d82e1e2d14eb67857 >>> : SUCCESS in 3m 42s >>> - propose-update-constraints >>> https://zuul.opendev.org/t/openstack/build/3eab7a3209a84a33b8d7b69e41e185cb >>> : SUCCESS in 2m 49s >>> - publish-monasca-tempest-plugin-docker-image >>> https://zuul.opendev.org/t/openstack/build/aba1acc623e74cf08e82ffc4d73134aa >>> : POST_FAILURE in 8m 19s >>> >>> _______________________________________________ >>> Release-job-failures mailing list >>> Release-job-failures at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures >>> >> >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> > > -- > *Martín Chacón Pizá* > *chacon.piza at gmail.com * > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ 
hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Wed Jan 20 10:31:58 2021 From: zigo at debian.org (Thomas Goirand) Date: Wed, 20 Jan 2021 11:31:58 +0100 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> Message-ID: <006ebc01-9372-448b-9889-09af69e19c9d@debian.org> On 1/20/21 8:26 AM, Lucian Petrut wrote: > Hi, > > > For Windows related projects such as os-win and networking-hyperv, > we decided to keep the lower constraints job but remove indirect > dependencies from the lower-constraints.txt file. > > > This made it much easier to maintain and it allows us to at least cover > direct dependencies. I suggest considering this approach instead of > completely dropping the lower constraints job, whenever possible. > Another option might be to make it non-voting while it’s getting fixed. > > > Lucian Petrut Hi, If this could be done, it'd be very nice already. Cheers, Thomas Goirand (zigo) From stephenfin at redhat.com Wed Jan 20 11:13:39 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 20 Jan 2021 11:13:39 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> , <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> Message-ID: On Wed, 2021-01-20 at 07:26 +0000, Lucian Petrut wrote: > Hi, >   > For Windows related projects such as os-win and networking-hyperv, > we decided to keep the lower constraints job but remove indirect > dependencies from the lower-constraints.txt file. >   > This made it much easier to maintain and it allows us to at least cover > direct dependencies. I suggest considering this approach instead of > completely dropping the lower constraints job, whenever possible. > Another option might be to make it non-voting while it’s getting fixed. >   > Lucian Petrut Yes, I've looked into doing this elsewhere (as promised) and it seems to do the job quite nicely. It's not perfect but it does seem to be "good enough" and captures basic things like "I depend on this function found in oslo.foo vX.Y and forgot to bump my minimum version to reflect this". I think these jobs probably offer _more_ value now than they did in the past, given pip is now finally honouring the explicit constraints we express in these files, so I would be in favour of this approach rather than dropping l-c entirely. I do realize that there is some degree of effort here in getting e.g. all the oslo projects fixed, but I'm happy to help out with and have already fixed quite a few projects. 
I also wouldn't be opposed to dropping l-c on *stable* branches so long as we maintained for master, on the basis that they were already broken so nothing is really changing. Sticking to older, admittedly broken versions of pip for stable branches is another option and might help us avoid a deluge of "remove/fix l-c" patches for stable branches, but I don't know how practical that is? Stephen > From: Jeremy Stanley > Sent: Wednesday, January 20, 2021 1:52 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: [all][tc] Dropping lower-constraints testing from all projects >   > On 2021-01-20 00:09:39 +0100 (+0100), Thomas Goirand wrote: > [...] > > Something I don't understand: why can't we use an older version of > > pip, if the problem is the newer pip resolver? Or can't the > > current pip be patched to fix things? It's not as if there was no > > prior art... Maybe I'm missing the big picture? > [...] >   > To get to the heart of the matter, when using older versions of pip > it was just quietly installing different versions of packages than > we asked it to, and versions of transitive dependencies which > directly conflicted with the versions other dependencies said they > required. When pip finally (very recently) implemented a coherent > dependency solver, it started alerting us directly to this fact. We > could certainly find a way to hide our heads in the sand and go back > to testing with old pip and pretending we knew what was being tested > there, but the question is whether what we were actually testing > that way was worthwhile enough to try to continue doing it, now that > we have proof it wasn't what we were wanting to test. >   > The challenge with actually testing what we wanted has always been > that there's many hundreds of packages we depend on and, short of > writing one ourselves, no tool available to find a coherent set of > versions of them which satisfy the collective lower bounds. The way > pip works, it wants to always solve for the newest possible > versions which satisfy an aggregate set of version ranges, and what > we'd want for lower bounds checking is the inverse of that. > -- > Jeremy Stanley >   From radoslaw.piliszek at gmail.com Wed Jan 20 11:29:44 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 20 Jan 2021 12:29:44 +0100 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> Message-ID: On Wed, Jan 20, 2021 at 12:14 PM Stephen Finucane wrote: > > On Wed, 2021-01-20 at 07:26 +0000, Lucian Petrut wrote: > > Hi, > > > > For Windows related projects such as os-win and networking-hyperv, > > we decided to keep the lower constraints job but remove indirect > > dependencies from the lower-constraints.txt file. > > > > This made it much easier to maintain and it allows us to at least cover > > direct dependencies. I suggest considering this approach instead of > > completely dropping the lower constraints job, whenever possible. > > Another option might be to make it non-voting while it’s getting fixed. > > > > Lucian Petrut > > Yes, I've looked into doing this elsewhere (as promised) and it seems to do the > job quite nicely. 
It's not perfect but it does seem to be "good enough" and > captures basic things like "I depend on this function found in oslo.foo vX.Y and > forgot to bump my minimum version to reflect this". I think these jobs probably > offer _more_ value now than they did in the past, given pip is now finally > honouring the explicit constraints we express in these files, so I would be in > favour of this approach rather than dropping l-c entirely. I do realize that > there is some degree of effort here in getting e.g. all the oslo projects fixed, > but I'm happy to help out with and have already fixed quite a few projects. I > also wouldn't be opposed to dropping l-c on *stable* branches so long as we > maintained for master, on the basis that they were already broken so nothing is > really changing. Sticking to older, admittedly broken versions of pip for stable > branches is another option and might help us avoid a deluge of "remove/fix l-c" > patches for stable branches, but I don't know how practical that is? > > Stephen Hmm, I agree with this approach. Sounds quite sane. I have a related question - do you have a tool to recommend that would check whether all modules used directly by the project are in requirements.txt already? I.e. that there are no directly-used modules that are actually pulled in as indirect dependencies? That would improve the proposed approach as well as general requirements condition. -yoctozepto From smooney at redhat.com Wed Jan 20 13:01:11 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 20 Jan 2021 13:01:11 +0000 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-16 Update In-Reply-To: References: <1769c0ad573.cf348131542704.5195004248291055264@ghanshyammann.com> <884141609228423@mail.yandex.ru> <176afca89ab.1012b6043628566.4875338583213848682@ghanshyammann.com> <6138511611075251@mail.yandex.ru> <1771bf1fc95.eb108f1f76821.5001860786267000124@ghanshyammann.com> <1df1b294-fd76-c598-1b57-6b298675123a@nemebean.com> Message-ID: <85c0b95b6027461bdc6096668b68e23d748f7563.camel@redhat.com> On Wed, 2021-01-20 at 00:25 +0100, Thomas Goirand wrote: > On 1/19/21 9:04 PM, Ben Nemec wrote: > > There was also a security concern with potentially having multiple > > policy files and it not being clear which was in use. If someone > > converted their JSON policy to YAML, but left the JSON one in place, it > > could result in oslo.policy using the wrong one (or not the one they > > expect). We decided it was better for each project to make a clean > > switchover, which allows for things like upgrade checks that oslo.policy > > couldn't have itself, than to try to handle it all in oslo.policy. > > IMO, that's a downstream distro thing. > > What I did in Debian (and for Victoria already) was having the postinst > of each package to rename any existing policy.json into a disabled > version. Here's an example with Cinder: > > if [ -r /etc/cinder/policy.json ] ; then >     mv /etc/cinder/policy.json /etc/cinder/disabled.policy.json.old > fi > > and then package the yaml file as (example from Nova): > /etc/nova/policy.d/00_default_policy.yaml > > and then setting-up this: > policy_dirs = /etc/nova/policy.d > > The reason I'm doing this way, is that I'm expecting upstream to > generate a commented-only yaml file, and final users to drop non-default > supplementary files without touching the package default file. 
> > So, someone upgrading to Victoria with a non-default policy.json will > see its manual tweaks go away, but not completely gone (ie: recoverable > from disabled.policy.json.old). > > Does this seem to be a correct approach? That seems kind of dangerous to me, or at least unexpected: if I have custom policy and I upgrade, I don't expect that to revert back to the default policy. If the file does not exist, auto-generating a default policy.yaml with everything commented could work, but I'm not sure that is better than leaving it blank and instead generating one in /usr/share//policy.yaml.example. I don't have a strong opinion on this approach beyond that really; I just find it unintuitive. What you describe would, I guess, make sense if you had the popup where it asks "do you want to use the package maintainer's version"; if you do that implicitly without asking the user, I would be concerned it will either break their production system if they had relaxed the policy constraints or added new roles, or worse, expose capabilities they had locked down. So whatever you do, make sure it is telegraphed well. > > Cheers, > > Thomas Goirand (zigo) > From zigo at debian.org Wed Jan 20 13:10:30 2021 From: zigo at debian.org (Thomas Goirand) Date: Wed, 20 Jan 2021 14:10:30 +0100 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> Message-ID: <5a6f69e4-4cfe-617d-ba7d-d368897db994@debian.org> On 1/20/21 12:13 PM, Stephen Finucane wrote: > On Wed, 2021-01-20 at 07:26 +0000, Lucian Petrut wrote: >> Hi, >> >> For Windows related projects such as os-win and networking-hyperv, >> we decided to keep the lower constraints job but remove indirect >> dependencies from the lower-constraints.txt file. >> >> This made it much easier to maintain and it allows us to at least cover >> direct dependencies. I suggest considering this approach instead of >> completely dropping the lower constraints job, whenever possible. >> Another option might be to make it non-voting while it’s getting fixed. >> >> Lucian Petrut > > Yes, I've looked into doing this elsewhere (as promised) and it seems to do the > job quite nicely. It's not perfect but it does seem to be "good enough" and > captures basic things like "I depend on this function found in oslo.foo vX.Y and > forgot to bump my minimum version to reflect this". I think these jobs probably > offer _more_ value now than they did in the past, given pip is now finally > honouring the explicit constraints we express in these files, so I would be in > favour of this approach rather than dropping l-c entirely. I do realize that > there is some degree of effort here in getting e.g. all the oslo projects fixed, > but I'm happy to help out with and have already fixed quite a few projects. I > also wouldn't be opposed to dropping l-c on *stable* branches +1 I don't really mind for things already released, and already proven to work, though please everyone: take care when backporting patches. Thanks a lot for your proposal.
:) Cheers, Thomas Goirand (zigo) From mailakkina at gmail.com Wed Jan 20 13:15:58 2021 From: mailakkina at gmail.com (Nagaraj Akkina) Date: Wed, 20 Jan 2021 14:15:58 +0100 Subject: [$nova] [$horizon] [$nova-scheduler] [$heat] Message-ID: I think we have a big problem, scheduler is able to allocate more memory than actual physical memory in the compute node, the same statistics are also there in database. For example : if we have a physical memory of 125GB in a node, the total memory allocated to all the instances(virsh dommemstat of all vms) in that host is more than physical memory say around 147GB, being ram_allocation_ratio defined as 1 and reserve_host_memory_mb defined as required. How can nova allocates more memory than actual? Statistics from database +-----------+----------------+-------------+----------------------+ | memory_mb | memory_mb_used | free_ram_mb | ram_allocation_ratio | +-----------+----------------+-------------+----------------------+ | 128897 | 154624 | -25727 | 1 | +-----------+----------------+-------------+----------------------+ we are using openstack stein Regards, Akkina -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Wed Jan 20 13:36:34 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 20 Jan 2021 13:36:34 +0000 Subject: [$nova] [$horizon] [$nova-scheduler] [$heat] In-Reply-To: References: Message-ID: <1c00100bda5a1c15bdc8bd861135f4ee0b1f7d9c.camel@redhat.com> On Wed, 2021-01-20 at 14:15 +0100, Nagaraj Akkina wrote: > > I think we have a big problem, scheduler is able to allocate more memory than > actual physical memory in the compute node, the same statistics are also there > in database. > For example : if we have a physical memory of 125GB in a node, the total > memory allocated to all the instances(virsh dommemstat of all vms) in that > host is more than physical memory say around 147GB, being ram_allocation_ratio > defined as 1 and reserve_host_memory_mb defined as required. > > How can nova allocates more memory than actual? > > Statistics from database > > +-----------+----------------+-------------+----------------------+ > | memory_mb | memory_mb_used | free_ram_mb | ram_allocation_ratio | > +-----------+----------------+-------------+----------------------+ > |    128897 |         154624 |      -25727 |                    1 | > +-----------+----------------+-------------+----------------------+ > we are using openstack stein You're probably losing memory due to huge pages or reserve_host_memory_mb still isn't configured correctly. What does placement report for total inventory and usage? You can find the UUID of the resource provider like so: $ openstack resource provider list Once you have that, you can view available inventory like so: $ openstack resource provider inventory list $RP_UUID You should ensure that allocation_ratio, max_unit and reserved values match what you'd expect, namely, 1.0, the total amount of memory on the host, and the value configured for reserved memory on the host. Once you've verified those, you can view usage like so: $ openstack resource provider usage show $RP_UUID This will summarize the consumption of usage by all allocations against this resource provider. 
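For reference, the reservation and overcommit knobs discussed in this thread are set in nova.conf on each compute node; note that the option is spelled reserved_host_memory_mb. A minimal sketch with illustrative values only (not taken from this deployment):

[DEFAULT]
# Memory held back for the host OS and services, in MiB (example value only).
reserved_host_memory_mb = 4096
# 1.0 means no RAM oversubscription.
ram_allocation_ratio = 1.0

With those in place, the "reserved" and "allocation_ratio" values reported by the placement commands above should line up with what nova is configured to use.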
Finally, you can view the individual breakdown of resources by inspecting the inventory consumed for each instance on the host, like so: $ openstack server list --host devstack-1 -c ID # for each server $ openstack resource provider allocation show $SERVER_UUID Hope this helps, Stephen > Regards, > Akkina > From mnaser at vexxhost.com Wed Jan 20 13:53:24 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 20 Jan 2021 08:53:24 -0500 Subject: [tc] weekly meeting summary (Jan 14) Message-ID: Hi everyone, Here’s a summary of what happened in our TC weekly meeting last Thursday, Jan 14th. # ATTENDEES (LINES SAID) 1. mnaser (123) 2. gmann (56) 3. fungi (31) 4. rosmaita (19) 5. ricolin (11) 6. dansmith (9) 7. jungleboyj (8) 8. knikolla (6) 9. clarkb (3) 10. apevec (2) 11. slaweq (2) 12. belmoreira (1) # MEETING SUMMARY 1. Rollcall 2. Follow up on past action items - DONE: mnaser submit a patch to officially list no community goals for X cycle - DONE: diablo_rojo update resolution for tc stance on osc - IN PROGRESS: diablo_rojo complete retirement of karbor - ALMOST DONE - TBD: diablo_rojo reach out to SIGs/ML and start auditing states of SIGs - TBD: gmann continue to audit tags + outreach to community to apply for them 3. Write a proposed goal for X about stabilization/cooldown This topic mostly continues to live inside gerrit, so ti will be removed from the agenda. 4. Audit SIG list and chairs (diablo_rojo) No progress update, we'll keep an action item to keep up with it. 5. Annual report suggestions (diablo_rojo) Topic to be removed from the agenda as this is getting drafted already. 6. Add Resolution of TC stance on the OpenStackClient (diablo_rojo) dansmith will update osc change to include ci/docs commentary. 7. Audit and clean-up tags (gmann) gmann bumped the email with PTL tag for API interoperability tag, we’ll see how many projects start applying it. 8. infra-core additions (was Farewell Andreas) (diablo_rojo) The reposition is quiet for the moment and there is no backlog yet. We are bringing this up at the OpenDev infrastructure meeting on Tuesday the 19th, mnaser will add openstack/infra-core as a discussion topic in the OpenDev meeting. 9. Dropping lower-constraints testing from all projects (gmann) This is something projects asked TC for consensus and direction for all projects. l-c testing is not part of PTI but there should exist some direction for consistency testing and providing a lower bound to package maintainers. A few projects already started dropping l-c job, and it seems that our current l-c were not exactly testing in a valid way. Also, most testing (except for the l-c job) is being done with versions close to the upper-constraints. We will keep this an open topic for next week. 10. Decide on OpenSUSE in testing runtime (gmann) OpenSUSE distro job is broken for a month and DevStack team is approaching removing it as there is no investment from the company or the community. It was discussed to make the governance change and if that goes through the DevStack one, it will be merged. We’ll have an ML before and after moving testing/governance change, gmann will update supported distros to drop OpenSUSE. # ACTION ITEMS 1. diablo_rojo complete retirement of karbor 2. mnaser to remove proposed goal topic from agenda 3. diablo_rojo reach out to SIGs/ML and start auditing states of SIGs 4. mnaser remove annual report suggestions from agenda 5. dansmith update osc change to include ci/docs commentary 6. 
mnaser add openstack/infra-core as discussion topic in opendev meeting 7. gmann follow-up with zigo if debian uses l-c 8. gmann update supported distros to drop opensuse To read the full logs of the meeting, please refer to http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-01-14-15.00.log.html -- Mohammed Naser VEXXHOST, Inc. From agarwalnisha1980 at gmail.com Wed Jan 20 15:06:03 2021 From: agarwalnisha1980 at gmail.com (Nisha Agarwal) Date: Wed, 20 Jan 2021 20:36:03 +0530 Subject: [ironic] [infra] Making Glean work with IPA for static IP assignment In-Reply-To: <20201126011956.GB522326@fedora19.localdomain> References: <20201125020901.GA522326@fedora19.localdomain> <20201126011956.GB522326@fedora19.localdomain> Message-ID: Hi Ian, We were trying the same thing and the deploy fails when we use CentOS8 or Ubuntu ramdisk. Glean is able to modify the network scripts but looks like Networking/NetworkManager is not restarted after that and ip is not assigned to the interface. Manually I just did "systemctl restart NetworkManager" on the CentOS8 system and after that the deploy succeeded. Is there a bug for this? and is there any plan to fix the issue ? If there is no bug existing for the issue, I am planning to raise one. The image is built using following command: disk-image-create -o centos_deploy_image ironic-python-agent-ramdisk centos simple-init devuser selinux-permissive As a side note, the centos7 image created using above works fine for us and the dhcpless deploy works end-to-end using ironic. Regards Nisha On Thu, Nov 26, 2020 at 6:51 AM Ian Wienand wrote: > On Wed, Nov 25, 2020 at 11:54:13AM +0100, Dmitry Tantsur wrote: > > > > # systemd-analyze critical-chain > > multi-user.target @2min 6.301s > > └─tuned.service @1min 32.273s +34.024s > > └─network.target @1min 31.590s > > └─network-pre.target @1min 31.579s > > └─glean at enp1s0.service @36.594s +54.952s > > └─system-glean.slice @36.493s > > └─system.slice @4.083s > > └─-.slice @4.080s > > > > # systemd-analyze critical-chain NetworkManager.service > > NetworkManager.service +9.287s > > └─network-pre.target @1min 31.579s > > └─glean at enp1s0.service @36.594s +54.952s > > └─system-glean.slice @36.493s > > └─system.slice @4.083s > > └─-.slice @4.080s > > > It seems that the ordering is correct and the interface service is > > executed, but the IP address is nonetheless wrong. > > I agree, this seems to say to me that NetworkManager should run after > network.pre-target, and glean at enp1s0 should be running before it. > > The glean at enp1s0.service is set as oneshot [1] which should prevent > network-pre.target being reached until it exits: > > oneshot ... [the] service manager will consider the unit up after the > main process exits. It will then start follow-up units. > > To the best of my knowledge the dependencies are correct; but if you > go through the "git log" of the project you can find some history of > us thinking ordering was correct and finding issues. > > > Can it be related to how long glean takes to run in my case (54 seconds > vs > > 1 second in your case)? > > The glean script doesn't run asynchronously in any way (at least not > on purpose!). I can't see any way it could exit before the ifcfg file > is written out. > > > # cat /etc/sysconfig/network-scripts/ifcfg-enp1s0 > ... > > The way NM support works is writing out this file which is read by the > NM ifcfg-rh plugin [2]. 
AFAIK that's built-in to NM so would not be > missing, and I think you'd have to go to effort to manually edit > /etc/NetworkManager/conf.d/99-main-plugins.conf to have it ignored. > > I'm afraid that's overall not much help. Are you sure there isn't an > errant dhclient running somehow that grabs a different address? Does > it get the correct address on reboot; implying the ifcfg- file is read > correctly but somehow isn't in place before NetworkManager starts? > > -i > > [1] > https://opendev.org/opendev/glean/src/branch/master/glean/init/glean-nm at .service#L13 > [2] > https://developer.gnome.org/NetworkManager/stable/nm-settings-ifcfg-rh.html > > > -- The Secret Of Success is learning how to use pain and pleasure, instead of having pain and pleasure use you. If You do that you are in control of your life. If you don't life controls you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From raubvogel at gmail.com Wed Jan 20 15:08:36 2021 From: raubvogel at gmail.com (Mauricio Tavares) Date: Wed, 20 Jan 2021 10:08:36 -0500 Subject: [nova] Selecting PCI devices by their addresses Message-ID: Stupid question of the day: if I am not mistaken, I can ask nova[1] which pci/pcie devices it knows and where they are. And it will reply with the PCI address for each device it knows of. But, when I want to create flavours so I can create (libvirt-based) instances using them, I cannot tie an alias to a specific pci address; all I can do is say "this pci_alias is for all the pci devices with this vendor_id and product_id that I whitelisted already"[2]. Since libvirt allows me to feed a vm guest the exact pci device I want, could anyone point out which obvious step I am missing here? [1] https://wiki.openstack.org/wiki/Pci-api-support [2] https://docs.openstack.org/nova/latest/admin/pci-passthrough.html#configure-nova-compute-compute From laurentfdumont at gmail.com Wed Jan 20 15:34:05 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Wed, 20 Jan 2021 10:34:05 -0500 Subject: Requesting help with OpenStack configuration files In-Reply-To: References: Message-ID: Bonjour fellow Montrealer! As Thomas mentioned, I dont think there is a fixed standard for config files vs code. That said, most configuration file that are used by Openstack services (nova, cinder, glance, neutron) are usually ending in .conf and have the following contents. [section_name_blocks] key=value #comments If the file looks like that, it's usually a good indication that it's a configuration file and not code. On Tue, Jan 19, 2021 at 4:39 PM Bessghaier, Narjes < narjes.bessghaier.1 at ens.etsmtl.ca> wrote: > Dear OpenStack team, > > > I’m a Ph.D. student in software engineering at the ETS Montreal, > University of Quebec working on the quality and configuration of > web-based software systems. > I’m particularly interested in analyzing configuration files from > different OpenStack files. One of the main challenges I am currently facing > is the proper identification of configuration files. I’m mostly confused > between the python files used for production and the python files used for > configuration. I am kindly requesting your precious help with the following > questions: > > > > 1- How to distinguish between python files used for configuration and > python files used for production? 
It will be very helpful if there > are some configuration-based patterns (eg, textual patterns or expressions) > that we can find in python files to help us distinguish between source code > and configuration files? > > > > > 2- Certain python files use the oslo_config to access and define > configuration options. Could "all" these python files be considered as > configuration files? For example, the following python file of the keystone > project: keystone/keystone/conf/auth.py, is it considered a source code or > configuration file? > > > > > 3- Why are there different source code and configuration repositories for > OpenStack projects (eg, nova and puppet-nova)? For instance, does the > OpenStack-nova service have some configuration files in its repository and > have the puppet-nova as a separate configuration repository as well? > > > > Thank you very much in advance for your time and your help! > > Kind regards, > Narjes Bessghaier > > > > > > Narjes Bessghaier > > Ph.D student in Software Engineering > > École de Technologie Supérieure (ETS)| University of Quebec > > Montreal, Canada > > narjes.bessghaier.1 at ens.etsmtl.ca > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jan 20 15:39:39 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 20 Jan 2021 15:39:39 +0000 Subject: [nova] Selecting PCI devices by their addresses In-Reply-To: References: Message-ID: On Wed, 2021-01-20 at 10:08 -0500, Mauricio Tavares wrote: > Stupid question of the day: if I am not mistaken, I can ask nova[1] > which pci/pcie devices it knows and where they are. > Not via any API, no. Nova tracks PCI devices that have been whitelisted in an internal database, but as an end user you cannot query this information via a public API currently. > And it will reply > with the PCI address for each device it knows of. But, when I want to > create flavours so I can create (libvirt-based) instances using them, > I cannot tie an alias to a specific pci address; > Partly correct, in that you cannot create an alias that uses a specific PCI address. The alias must be the same on all compute nodes and on the nova-api hosts, so it cannot refer to a specific card. > all I can do is say > "this pci_alias is for all the pci devices with this vendor_id and > product_id that I whitelisted already"[2]. Since libvirt allows me to > feed a vm guest the exact pci device I want, could anyone point out > which obvious step I am missing here? OpenStack is a cloud platform, and in that platform nova acts as a compute resource orchestrator. As part of that role it provides a layer of indirection between the available resources and the user's request. As a result it is not within the scope of nova to provide an API to place a device in a specific host with a specific set of resources chosen by an external user. As a standard tenant you should not be aware of the devices on a given hypervisor, and technically you should not know which hypervisor is used behind the nova API; e.g. you should not know via a public non-admin API whether OpenStack is using libvirt or VMware. So you are not missing anything obvious in the features we support: we intentionally do not support the use case you are requesting. To select a device on a libvirt host for a given VM, an enterprise virtualisation software stack such as oVirt, Proxmox or ESXi without vSphere is more suitable. Can you describe the actual use case that you are trying to solve with direct selection of PCI devices?
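(For reference, the vendor/product-based matching described above is configured on the nova side roughly as follows; this is a minimal sketch with made-up device IDs and names, not a recommendation for any particular hardware or deployment.)

# nova.conf on each compute node: whitelist what may be passed through and
# name an alias that matches on vendor_id/product_id, not on a PCI address.
[pci]
passthrough_whitelist = { "vendor_id": "10de", "product_id": "1eb8" }
alias = { "vendor_id": "10de", "product_id": "1eb8", "device_type": "type-PCI", "name": "example-dev" }

# The same alias must also be defined in nova.conf on the nova-api hosts.
# The flavor then requests a count of matching devices rather than one
# specific card:
openstack flavor set pci.large --property "pci_passthrough:alias"="example-dev:1"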
there may be a cloud native way we support to fultil that usecase but the solution your are tryign to implemnt is not supported by openstack by design. > > [1] https://wiki.openstack.org/wiki/Pci-api-support > [2] https://docs.openstack.org/nova/latest/admin/pci-passthrough.html#configure-nova-compute-compute > From radoslaw.piliszek at gmail.com Wed Jan 20 16:20:59 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 20 Jan 2021 17:20:59 +0100 Subject: [masakari] Masakari Team Meeting - the schedule In-Reply-To: References: Message-ID: Having received no negative feedback about this, I have moved the meeting to the weekly schedule. [1] [1] https://review.opendev.org/c/opendev/irc-meetings/+/771642 -yoctozepto On Tue, Jan 12, 2021 at 10:56 AM Radosław Piliszek wrote: > > Hello, Folks! > > I have realised the Masakari Team Meeting is to run on even weeks [1]. > However, anyone who created the meeting record in their calendar > (including me) has likely gotten the meeting schedule in odd weeks > this year (because last year finished with an odd week and obviously > numbering also starts on odd: the 1). > So I have run the first meeting this year the previous week but > someone came for the meeting this week. :-) > > According to the "new wrong" schedule, the next meeting would be on > Jan 19, but according to the "proper" one it would be on Jan 26. > I am available both weeks the same so can run either term (or both as > well, why not). > > The question is whether we don't want to simply move to the weekly > meeting schedule. > We usually don't have much to discuss but it might be less confusing > and a better way to form a habit if we met every week. > Please let me know your thoughts. > > [1] http://eavesdrop.openstack.org/#Masakari_Team_Meeting > > -yoctozepto From laurentfdumont at gmail.com Wed Jan 20 17:05:32 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Wed, 20 Jan 2021 12:05:32 -0500 Subject: Requesting help with OpenStack configuration files In-Reply-To: References: Message-ID: That depends on your definition of system I guess. Modern deployments of Openstack are using containers to "containerize" the Openstack functions outside of the OS itself. Deprecated options are usually removed between Openstack releases (with some leeway) so that users can adapt their own configuration files. I don't think they impact the overall system itself, just the Openstack component behavior. On Wed, Jan 20, 2021 at 10:56 AM Bessghaier, Narjes < narjes.bessghaier.1 at ens.etsmtl.ca> wrote: > Thank you for your reply > > Regarding python files that are deprecating some options. Are these files > considered to impact the configuration of the system? > > > Thank you > > Get Outlook for Android > > ------------------------------ > *From:* Laurent Dumont > *Sent:* Wednesday, January 20, 2021 10:34:05 AM > *To:* Bessghaier, Narjes > *Cc:* openstack-discuss at lists.openstack.org < > openstack-discuss at lists.openstack.org>; Ouni, Ali > *Subject:* Re: Requesting help with OpenStack configuration files > > Bonjour fellow Montrealer! > > As Thomas mentioned, I dont think there is a fixed standard for config > files vs code. > > That said, most configuration file that are used by Openstack services > (nova, cinder, glance, neutron) are usually ending in .conf and have the > following contents. > > [section_name_blocks] > key=value > #comments > > If the file looks like that, it's usually a good indication that it's a > configuration file and not code. 
> > On Tue, Jan 19, 2021 at 4:39 PM Bessghaier, Narjes < > narjes.bessghaier.1 at ens.etsmtl.ca> wrote: > > Dear OpenStack team, > > > I’m a Ph.D. student in software engineering at the ETS Montreal, > University of Quebec working on the quality and configuration of > web-based software systems. > I’m particularly interested in analyzing configuration files from > different OpenStack files. One of the main challenges I am currently facing > is the proper identification of configuration files. I’m mostly confused > between the python files used for production and the python files used for > configuration. I am kindly requesting your precious help with the following > questions: > > > > 1- How to distinguish between python files used for configuration and > python files used for production? It will be very helpful if there > are some configuration-based patterns (eg, textual patterns or expressions) > that we can find in python files to help us distinguish between source code > and configuration files? > > > > > 2- Certain python files use the oslo_config to access and define > configuration options. Could "all" these python files be considered as > configuration files? For example, the following python file of the keystone > project: keystone/keystone/conf/auth.py, is it considered a source code or > configuration file? > > > > > 3- Why are there different source code and configuration repositories for > OpenStack projects (eg, nova and puppet-nova)? For instance, does the > OpenStack-nova service have some configuration files in its repository and > have the puppet-nova as a separate configuration repository as well? > > > > Thank you very much in advance for your time and your help! > > Kind regards, > Narjes Bessghaier > > > > > > Narjes Bessghaier > > Ph.D student in Software Engineering > > École de Technologie Supérieure (ETS)| University of Quebec > > Montreal, Canada > > narjes.bessghaier.1 at ens.etsmtl.ca > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Jan 20 17:42:20 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 20 Jan 2021 12:42:20 -0500 Subject: [cinder] wallaby new driver status Message-ID: <2ae69c57-d10c-7179-808a-21b1fdf8c8cc@gmail.com> At today's Cinder meeting [0] we discussed the unmerged proposed drivers (3 volume drivers, 1 backup driver) in light of tomorrow's new driver merge deadline. Here's a summary of what we decided: 1. s3 cinder backup driver https://review.opendev.org/c/openstack/cinder/+/746561 The code looks basically OK but the driver hasn't been independently tested. There's a proposed CI job but it currently isn't passing. ==> plan: aim for two +2s on the patch before the merge deadline, but hold off on merging until we have confirmation that the driver works, hopefully but the CI working by next week (Thursday 28 January). 2. TOYOU ACS5000 driver https://review.opendev.org/c/openstack/cinder/+/767290/ Code is looking good and they've been good about quickly revising patches. The CI is running and driver is passing tempest and cinder-tempest-plugin tests, though it's not reporting reliably on patches. They've agreed to get their CI responding properly by milestone-3. ==> plan: the requested revisions haven't been major, so this is looking good for tomorrow's deadline. We anticipate that the CI situation will be corrected by M-3. 3. Ceph iSCSI driver https://review.opendev.org/c/openstack/cinder/+/662829 Code is looking good but the CI isn't passing yet. 
==> plan: this is an important community driver so we're extending the deadline two weeks to 5 February. 4. Kioxia Kumoscale driver (and brick changes) https://review.opendev.org/c/openstack/cinder/+/768574 https://review.opendev.org/c/openstack/os-brick/+/768575 https://review.opendev.org/c/openstack/os-brick/+/768576 Kioxia reports that their CI is up though it's not reporting yet. This is a large change and the cinder team review bandwidth has been low this cycle, but with the other drivers (mostly) done at this point, we should be able to get faster reviews on these. ==> plan: new driver merge extension to 8 Feb, that is, code merged and CI running and reporting by that date to be included in wallaby. cheers, brian [0] http://eavesdrop.openstack.org/meetings/cinder/2021/cinder.2021-01-20-14.00.log.html From gmann at ghanshyammann.com Wed Jan 20 17:42:21 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 20 Jan 2021 11:42:21 -0600 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> , <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> Message-ID: <17720e3e0f6.d479c210145054.613445843370759402@ghanshyammann.com> ---- On Wed, 20 Jan 2021 05:13:39 -0600 Stephen Finucane wrote ---- > On Wed, 2021-01-20 at 07:26 +0000, Lucian Petrut wrote: > > Hi, > > > > For Windows related projects such as os-win and networking-hyperv, > > we decided to keep the lower constraints job but remove indirect > > dependencies from the lower-constraints.txt file. > > > > This made it much easier to maintain and it allows us to at least cover > > direct dependencies. I suggest considering this approach instead of > > completely dropping the lower constraints job, whenever possible. > > Another option might be to make it non-voting while it’s getting fixed. > > > > Lucian Petrut > > Yes, I've looked into doing this elsewhere (as promised) and it seems to do the > job quite nicely. It's not perfect but it does seem to be "good enough" and > captures basic things like "I depend on this function found in oslo.foo vX.Y and > forgot to bump my minimum version to reflect this". I think these jobs probably > offer _more_ value now than they did in the past, given pip is now finally > honouring the explicit constraints we express in these files, so I would be in > favour of this approach rather than dropping l-c entirely. I do realize that > there is some degree of effort here in getting e.g. all the oslo projects fixed, > but I'm happy to help out with and have already fixed quite a few projects. I I thought oslo did drop that instead of fixing all failing l-c jobs? May be I am missing something or misreading it? > also wouldn't be opposed to dropping l-c on *stable* branches so long as we > maintained for master, on the basis that they were already broken so nothing is > really changing. Sticking to older, admittedly broken versions of pip for stable > branches is another option and might help us avoid a deluge of "remove/fix l-c" > patches for stable branches, but I don't know how practical that is? I agree on the point about dropping it on stable to make stable maintenance easy. But I think making/keeping n-v is very dangerous and it can easily go as 'false information'. 
The n-v concept was to keep failing/starting jobs n-v temporarily and once it is fixed/stable then make it voting. I do not think keeping any job as n-v permanently is a good approach. I am still not convinced how 'removal of indirect deps from l-c' make 'Know the lower bounds of openstack packages' better? I think it makes it less informative than it is currently. How we will know the lower bound for indirect deps? Do not packagers need those or they can go with their u-c if so then why not for direct deps? In general, my take here as an upstream maintainer is that we should ship the things completely tested/which serve the complete planned mission. We should not ship/commit anything as half baked. And we keep such things open as one of the TODO if anyone volunteers to fix it. -gmann > > Stephen > > > From: Jeremy Stanley > > Sent: Wednesday, January 20, 2021 1:52 AM > > To: openstack-discuss at lists.openstack.org > > Subject: Re: [all][tc] Dropping lower-constraints testing from all projects > > > > On 2021-01-20 00:09:39 +0100 (+0100), Thomas Goirand wrote: > > [...] > > > Something I don't understand: why can't we use an older version of > > > pip, if the problem is the newer pip resolver? Or can't the > > > current pip be patched to fix things? It's not as if there was no > > > prior art... Maybe I'm missing the big picture? > > [...] > > > > To get to the heart of the matter, when using older versions of pip > > it was just quietly installing different versions of packages than > > we asked it to, and versions of transitive dependencies which > > directly conflicted with the versions other dependencies said they > > required. When pip finally (very recently) implemented a coherent > > dependency solver, it started alerting us directly to this fact. We > > could certainly find a way to hide our heads in the sand and go back > > to testing with old pip and pretending we knew what was being tested > > there, but the question is whether what we were actually testing > > that way was worthwhile enough to try to continue doing it, now that > > we have proof it wasn't what we were wanting to test. > > > > The challenge with actually testing what we wanted has always been > > that there's many hundreds of packages we depend on and, short of > > writing one ourselves, no tool available to find a coherent set of > > versions of them which satisfy the collective lower bounds. The way > > pip works, it wants to always solve for the newest possible > > versions which satisfy an aggregate set of version ranges, and what > > we'd want for lower bounds checking is the inverse of that. > > -- > > Jeremy Stanley > > > > > > From dtantsur at redhat.com Wed Jan 20 17:54:15 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 20 Jan 2021 18:54:15 +0100 Subject: [ironic] On redfish-virtual-media vs idrac-redfish-virtual-media Message-ID: Hi all, Now that we've gained some experience with using Redfish virtual media I'd like to reopen the discussion about $subj. For the context, the idrac-redfish-virtual-media boot interface appeared because Dell machines need an additional action [1] to boot from virtual media. The initial position on hardware interfaces was that anything requiring OEM actions must go into a vendor hardware interface. I would like to propose relaxing this (likely unwritten) rule. You see, this distinction causes a lot of confusion. 
Ironic supports Redfish, ironic supports iDRAC, iDRAC supports Redfish, ironic supports virtual media, Redfish supports virtual media, iDRAC supports virtual media. BUT! You cannot use redfish-virtual-media with iDRAC. Just today I had to explain the cause of it to a few people. It required diving into how exactly Redfish works and how exactly ironic uses it, which is something we want to protect our users from. We already have a precedent [2] of adding vendor-specific handling to a generic driver. I have proposed a patch [3] to block using redfish-virtual-media for Dell hardware, but I grew to dislike this approach. It does not have precedents in the ironic code base and it won't scale well if we have to handle vendor differences for vendors that don't have ironic drivers. Based on all this I suggest relaxing the rule to the following: if a feature supported by a generic hardware interface requires additional actions or has a minor deviation from the standard, allow handling it in the generic hardware interface. Meaning, redfish-virtual-media starts handling the Dell case by checking the System manufacturer (via the recently added detect_vendor call) and loading the OEM code if it matches "Dell". After this idrac-redfish-virtual-media will stay empty (for future enhancements and to make the patch backportable). Thoughts? Dmitry [1] https://opendev.org/openstack/ironic/src/commit/6ea73bdfbb53486cf9905d21024d16cbf5829b2c/ironic/drivers/modules/drac/boot.py#L130 [2] https://review.opendev.org/c/openstack/ironic/+/757198/ [3] https://review.opendev.org/c/openstack/ironic/+/771619 -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed Jan 20 18:19:49 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 20 Jan 2021 12:19:49 -0600 Subject: [all][tc][goals] Migrate RBAC Policy Format from JSON to YAML: Week R-16 Update In-Reply-To: References: <1769c0ad573.cf348131542704.5195004248291055264@ghanshyammann.com> <884141609228423@mail.yandex.ru> <176afca89ab.1012b6043628566.4875338583213848682@ghanshyammann.com> <6138511611075251@mail.yandex.ru> <1771bf1fc95.eb108f1f76821.5001860786267000124@ghanshyammann.com> <1df1b294-fd76-c598-1b57-6b298675123a@nemebean.com> Message-ID: <17721062e04.ce4c2d85146783.6580349580803773716@ghanshyammann.com> ---- On Tue, 19 Jan 2021 17:25:46 -0600 Thomas Goirand wrote ---- > On 1/19/21 9:04 PM, Ben Nemec wrote: > > There was also a security concern with potentially having multiple > > policy files and it not being clear which was in use. If someone > > converted their JSON policy to YAML, but left the JSON one in place, it > > could result in oslo.policy using the wrong one (or not the one they > > expect). We decided it was better for each project to make a clean > > switchover, which allows for things like upgrade checks that oslo.policy > > couldn't have itself, than to try to handle it all in oslo.policy. > > IMO, that's a downstream distro thing. > > What I did in Debian (and for Victoria already) was having the postinst > of each package to rename any existing policy.json into a disabled > version. 
Here's an example with Cinder: > > if [ -r /etc/cinder/policy.json ] ; then > mv /etc/cinder/policy.json /etc/cinder/disabled.policy.json.old > fi > > and then package the yaml file as (example from Nova): > /etc/nova/policy.d/00_default_policy.yaml > > and then setting-up this: > policy_dirs = /etc/nova/policy.d > > The reason I'm doing this way, is that I'm expecting upstream to > generate a commented-only yaml file, and final users to drop non-default > supplementary files without touching the package default file. > > So, someone upgrading to Victoria with a non-default policy.json will > see its manual tweaks go away, but not completely gone (ie: recoverable > from disabled.policy.json.old). > > Does this seem to be a correct approach? >

We have two types of "default" in policy things:

1. the default value of the config option 'policy_file', which we are changing from 'policy.json' to 'policy.yaml' and which holds the custom policy rules;
2. the default values of the policy rules themselves, which can live in the default policy file or a non-default one (i.e. a file keeping all the default values of the policy rules).

I hope with 'default' '00_default_policy.yaml' you mean the former. If so, then instead of keeping the JSON version, the best approach is to use the 'oslopolicy-convert-json-to-yaml' tool to convert it to YAML format in a backward-compatible way. The newly generated YAML file will keep all the custom rules as they are and comment out the default rules.

Let me describe the recommended way for the different scenarios:

Scenario 1: You do not have any custom rule in the policy file.
-------------
Recommendation (very strong): Do not use any file, just remove it and let policy-in-code serve the purpose. OpenStack does not require a policy file, and if there is no file present or passed in the config option then it works fine with policy-in-code. Having a policy file with all default rules (uncommented) can cause issues, and this was the reason why Debian packaging broke with Nova's new policy in the Ussuri release.

Scenario 2: You have a policy file with custom rules, in JSON format.
-------------
Recommendation: This is the use case this migration is all about. Use the 'oslopolicy-convert-json-to-yaml' tool to convert it to YAML format in a backward-compatible way, or convert it manually, but do not keep the old JSON file in a configured location, otherwise oslo.policy will pick it up if it finds both JSON and YAML formatted files. Just remove the old JSON file; the same level of permission is present in the converted YAML file.

Scenario 3: You have a policy file with custom rules, in YAML format.
-------------
This is all good and no work is needed.
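For the Cinder example above, the Scenario 2 conversion is a single command; roughly (an illustrative invocation only, see the tool documentation at [1] below for the authoritative options):

    oslopolicy-convert-json-to-yaml --namespace cinder \
        --policy-file /etc/cinder/policy.json \
        --output-file /etc/cinder/policy.yaml

and then remove (or rename, as in the postinst snippet above) the old policy.json so oslo.policy cannot find it any more.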
[1] https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html -gmann > Cheers, > > Thomas Goirand (zigo) > > From zigo at debian.org Wed Jan 20 22:05:30 2021 From: zigo at debian.org (Thomas Goirand) Date: Wed, 20 Jan 2021 23:05:30 +0100 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <17720e3e0f6.d479c210145054.613445843370759402@ghanshyammann.com> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> <17720e3e0f6.d479c210145054.613445843370759402@ghanshyammann.com> Message-ID: <198ec1e7-429c-82cb-1ca4-1c201b5c1bba@debian.org> On 1/20/21 6:42 PM, Ghanshyam Mann wrote: > I am still not convinced how 'removal of indirect deps from l-c' make > 'Know the lower bounds of openstack packages' better? I think it makes it > less informative than it is currently. How we will know the lower > bound for indirect deps? I do not expect OpenStack upstream to solve that problem. > Do not packagers need those We may, but this is studied for each direct dependency one by one. I see no point trying to solve indirect dependency version bounds, as it's up to each direct dependency to test them. > In general, my take here as an upstream maintainer is that we should ship > the things completely tested/which serve the complete planned mission. > We should not ship/commit anything as half baked. And we keep such things > open as one of the TODO if anyone volunteers to fix it. What was completely wrong was, a few years from now, shipping artificially inflated lower bounds, like, expressing that each and every projected needed the very last version for all oslo library, which was obviously not the case. The lower bound testing was trying to address this. Not indirect dependencies, which IMO is completely out of scope. However, when testing a lower bound for a direct dependency, you may need a lower version of an indirect dependency, and that's where it becomes tricky. Cheers, Thomas Goirand (zigo) From fungi at yuggoth.org Wed Jan 20 22:45:19 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 20 Jan 2021 22:45:19 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> Message-ID: <20210120224518.qslnzm55l77xdrry@yuggoth.org> On 2021-01-20 07:26:05 +0000 (+0000), Lucian Petrut wrote: > For Windows related projects such as os-win and networking-hyperv, > we decided to keep the lower constraints job but remove indirect > dependencies from the lower-constraints.txt file. > > This made it much easier to maintain and it allows us to at least cover > direct dependencies. I suggest considering this approach instead of > completely dropping the lower constraints job, whenever possible. > Another option might be to make it non-voting while it’s getting fixed. [...] The fewer dependencies a project has, the easier this becomes. 
I'm not against projects continuing to do it if they can get it to work, but wouldn't want us to pressure folks to spend undue effort on it when they already have a lot more on their plates. I can understand where for projects with a very large set of direct dependencies this still has the problem that your stated minimums may conflict with (semi-circular) dependencies declared elsewhere in the transitive dependency set outside your lower-constraints.txt/requirements.txt file. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Wed Jan 20 22:52:38 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 20 Jan 2021 22:52:38 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> Message-ID: <20210120225238.dygfpshyfigo7kvj@yuggoth.org> On 2021-01-20 11:13:39 +0000 (+0000), Stephen Finucane wrote: [...] > I also wouldn't be opposed to dropping l-c on *stable* branches so > long as we maintained for master, on the basis that they were > already broken so nothing is really changing. [...] The main proposal was for dropping them from stable branches of most projects due to the complexities of cascading dependencies between stable point releases of interdependent projects with essentially frozen requirements. I also think we should be okay if some teams don't feel they have time to fix or maintain master branch testing of lower bounds. > Sticking to older, admittedly broken versions of pip for stable > branches is another option and might help us avoid a deluge of > "remove/fix l-c" patches for stable branches, but I don't know how > practical that is? [...] Most of our testing is wrapped by tox, and the version of pip used depends on what's vendored into the version of virtualenv automatically pulled in by tox. In short, tox wants to use the latest available virtualenv (and thus also pip, setuptools, wheel...). Controlling this is doable, but nontrivial and a bit fragile. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Wed Jan 20 23:03:16 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 20 Jan 2021 23:03:16 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> Message-ID: <20210120230316.ac2xovgf5oh6b7ku@yuggoth.org> On 2021-01-20 12:29:44 +0100 (+0100), Radosław Piliszek wrote: [...] > I have a related question - do you have a tool to recommend that would > check whether all modules used directly by the project are in > requirements.txt already? I.e. 
that there are no directly-used modules > that are actually pulled in as indirect dependencies? > That would improve the proposed approach as well as general > requirements condition. I worked on this problem with r1chardj0n3s at an Infra team get-together circa mid-2014, after Nova unexpectedly broke when a declared dependency dropped one of its own dependencies which Nova had at some point started directly importing from without remembering to also declare it in requirements.txt. I can't take credit, he did all the real work on it, but we ended up not getting it added as a common linter because it reused private internals of pip which later evaporated. It looks like it was actively adopted and resurrected by a new author six months ago, so may be worth revisiting: https://pypi.org/project/pip-check-reqs/ FWIW, I still think it's fundamentally a good idea. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Wed Jan 20 23:11:06 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 20 Jan 2021 23:11:06 +0000 Subject: [Release-job-failures] Release of openstack/monasca-tempest-plugin for ref refs/tags/2.2.0 failed In-Reply-To: References: Message-ID: <20210120231106.vt2hdgtdsyh2jurt@yuggoth.org> On 2021-01-20 11:18:56 +0100 (+0100), Herve Beraud wrote: > Le mer. 20 janv. 2021 à 11:01, Martin Chacon Piza a > écrit : > > Thanks for your note. This change will fix the problem > > https://review.opendev.org/c/openstack/monasca-tempest-plugin/+/771523 [...] > > Could you help us please to restart the monasca-tempest-plugin release? > > Sure, I asked the infra team to reenqueue the job, let's wait for that, do > not hesitate to join #openstack-infra to join the discussion. [...] I'm still catching up, sorry (it's been a busy day), but I have the same question which was posed by others in IRC when you initially brought it up. If the fix merged after the release request, isn't a new release request needed instead which references a later branch state containing the fix? Just trying to re-run release jobs with a ref from the old unfixed state of the repository will presumably only reproduce the same error, unless I'm misunderstanding the problem. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From narjes.bessghaier.1 at ens.etsmtl.ca Wed Jan 20 15:56:03 2021 From: narjes.bessghaier.1 at ens.etsmtl.ca (Bessghaier, Narjes) Date: Wed, 20 Jan 2021 15:56:03 +0000 Subject: Requesting help with OpenStack configuration files In-Reply-To: References: , Message-ID: Thank you for your reply Regarding python files that are deprecating some options. Are these files considered to impact the configuration of the system? Thank you Get Outlook for Android ________________________________ From: Laurent Dumont Sent: Wednesday, January 20, 2021 10:34:05 AM To: Bessghaier, Narjes Cc: openstack-discuss at lists.openstack.org ; Ouni, Ali Subject: Re: Requesting help with OpenStack configuration files Bonjour fellow Montrealer! As Thomas mentioned, I dont think there is a fixed standard for config files vs code. That said, most configuration file that are used by Openstack services (nova, cinder, glance, neutron) are usually ending in .conf and have the following contents. 
[section_name_blocks] key=value #comments If the file looks like that, it's usually a good indication that it's a configuration file and not code. On Tue, Jan 19, 2021 at 4:39 PM Bessghaier, Narjes > wrote: Dear OpenStack team, I’m a Ph.D. student in software engineering at the ETS Montreal, University of Quebec working on the quality and configuration of web-based software systems. I’m particularly interested in analyzing configuration files from different OpenStack files. One of the main challenges I am currently facing is the proper identification of configuration files. I’m mostly confused between the python files used for production and the python files used for configuration. I am kindly requesting your precious help with the following questions: 1- How to distinguish between python files used for configuration and python files used for production? It will be very helpful if there are some configuration-based patterns (eg, textual patterns or expressions) that we can find in python files to help us distinguish between source code and configuration files? 2- Certain python files use the oslo_config to access and define configuration options. Could "all" these python files be considered as configuration files? For example, the following python file of the keystone project: keystone/keystone/conf/auth.py, is it considered a source code or configuration file? 3- Why are there different source code and configuration repositories for OpenStack projects (eg, nova and puppet-nova)? For instance, does the OpenStack-nova service have some configuration files in its repository and have the puppet-nova as a separate configuration repository as well? Thank you very much in advance for your time and your help! Kind regards, Narjes Bessghaier Narjes Bessghaier Ph.D student in Software Engineering École de Technologie Supérieure (ETS)| University of Quebec Montreal, Canada narjes.bessghaier.1 at ens.etsmtl.ca -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Jan 21 06:43:52 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 21 Jan 2021 07:43:52 +0100 Subject: [Release-job-failures] Release of openstack/monasca-tempest-plugin for ref refs/tags/2.2.0 failed In-Reply-To: <20210120231106.vt2hdgtdsyh2jurt@yuggoth.org> References: <20210120231106.vt2hdgtdsyh2jurt@yuggoth.org> Message-ID: Le jeu. 21 janv. 2021 à 00:13, Jeremy Stanley a écrit : > On 2021-01-20 11:18:56 +0100 (+0100), Herve Beraud wrote: > > Le mer. 20 janv. 2021 à 11:01, Martin Chacon Piza > a > > écrit : > > > Thanks for your note. This change will fix the problem > > > https://review.opendev.org/c/openstack/monasca-tempest-plugin/+/771523 > [...] > > > Could you help us please to restart the monasca-tempest-plugin release? > > > > Sure, I asked the infra team to reenqueue the job, let's wait for that, > do > > not hesitate to join #openstack-infra to join the discussion. > [...] > > I'm still catching up, sorry (it's been a busy day), but I have the > same question which was posed by others in IRC when you initially > brought it up. If the fix merged after the release request, isn't a > new release request needed instead which references a later branch > state containing the fix? Just trying to re-run release jobs with a > ref from the old unfixed state of the repository will presumably > only reproduce the same error, unless I'm misunderstanding the > problem. > Yes you're right, a new release should be done first to release the fix, sorry. 
@Martin: Can you propose a new release, the previous one version will stay absent of the registry. -- > Jeremy Stanley > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Thu Jan 21 09:22:44 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 21 Jan 2021 09:22:44 +0000 Subject: [nova] unit testing on ppc64le In-Reply-To: References: <5E9MMQ.3INH7FY465VR3@est.tech> Message-ID: On Tue, 19 Jan 2021 at 20:28, aditi Dukle wrote: > > Hi Mike, > > I have started nova unit test jobs in the periodic pipeline(runs everyday at UTC- 12) for each openstack branch as follows: > periodic: > jobs: > - openstack-tox-py27: > branches: > - stable/ocata > - stable/pike > - stable/queens > - stable/rocky > - stable/stein > - stable/train > - openstack-tox-py36: > branches: > - stable/train > - stable/ussuri > - stable/victoria > - openstack-tox-py37: > branches: > - stable/train > - stable/ussuri > - openstack-tox-py38: > branches: > - stable/victoria > - openstack-tox-py39: > branches: > - master > > I have observed a few failures in the unit test cases mostly all related to volume drivers. Please have a look at the 14 test cases that are failing in openstack-tox-py39 job( https://oplab9.parqtec.unicamp.br/pub/ppc64el/openstack/nova/periodic/openstack-tox-py39/2021-01-19-0058-38c70ae/job-output.txt). Most of the 14 failures report these errors: > InvalidConnectorProtocol: Invalid InitiatorConnector protocol specified NVME > InvalidConnectorProtocol: Invalid InitiatorConnector protocol specified SCALEIO > > I would need some help in understanding if these connectors(NVME,SCALEIO) are supported on ppc64. Right os-brick doesn't support these connectors on ppc64 but the real issue here is with the unit tests that don't mock out calls os-brick, an out of tree lib. I've filed a bug for this below: LibvirtNVMEVolumeDriverTestCase and LibvirtScaleIOVolumeDriverTestCase unit tests fail on ppc64 https://bugs.launchpad.net/nova/+bug/1912608 I'll push a fix for this shortly. 
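For anyone following along, the shape of that fix is the usual "patch the os-brick factory" pattern, so the test never builds a real connector on the host. A rough, untested sketch of the idea (not the actual review; module and class paths are from memory and may not match the tree exactly):

    from unittest import mock

    from nova import test
    from nova.virt.libvirt.volume import nvme


    class TestNVMEVolumeDriver(test.NoDBTestCase):

        @mock.patch('os_brick.initiator.connector.InitiatorConnector.factory')
        def test_driver_init_does_not_call_os_brick(self, mock_factory):
            # With the factory patched, instantiating the driver no longer
            # depends on which connectors os-brick supports on the test
            # host, so the result is the same on x86_64 and ppc64le.
            drv = nvme.LibvirtNVMEVolumeDriver(mock.sentinel.host)
            self.assertIsNotNone(drv)
            mock_factory.assert_called_once()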
In addition to these failures I also see stable rescue tests failing, these should be fixed by the following change: libvirt: Mock get_arch during some stable rescue unit tests https://review.opendev.org/c/openstack/nova/+/769916/ Finally, in an effort to root out any further issues caused by tests looking up the arch of the test host I've pushed the following change to poison nova.objects.fields.Architecture.from_host: tests: Poison nova.objects.fields.Architecture.from_host https://review.opendev.org/c/openstack/nova/+/769920 There's a huge amount of fallout from this that I'll try to address in the coming weeks ahead of M3. Hope this helps! Lee > ----- Original message ----- > From: aditi Dukle/India/Contr/IBM > To: Michael J Turek/Poughkeepsie/IBM at IBM > Cc: Sajauddin Mohammad/India/Contr/IBM at IBM, openstack-discuss at lists.openstack.org > Subject: Re: [EXTERNAL] [nova] unit testing on ppc64le > Date: Tue, Jan 12, 2021 5:41 PM > > Hi Mike, > > I have created these unit test jobs - openstack-tox-py27, openstack-tox-py35, openstack-tox-py36, openstack-tox-py37, openstack-tox-py38, openstack-tox-py39 > by referring to the upstream CI( https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/jobs.yaml) and these jobs are triggered for every patchset in the Openstack CI. > > I checked the code for old CI for Power, we didn't have any unit test jobs that were run for every patchset for nova. We had one "nova-python27" job that was run in a periodic pipeline. So, I wanted to know if we need to run the unit test jobs on ppc for every patchset for nova? and If yes, should these be reporting to the Openstack community? > > > Thanks, > Aditi Dukle > > ----- Original message ----- > From: Michael J Turek/Poughkeepsie/IBM > To: balazs.gibizer at est.tech, aditi Dukle/India/Contr/IBM at IBM, Sajauddin Mohammad/India/Contr/IBM at IBM > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [EXTERNAL] [nova] unit testing on ppc64le > Date: Sat, Jan 9, 2021 12:52 AM > > Thanks for the heads up, > > We should have the capacity to add them. At one point I think we ran unit tests for nova but the job may have been culled in the move to zuul v3. I've CC'd the maintainers of the CI, Aditi Dukle and Sajauddin Mohammad. > > Aditi and Sajauddin, could we add a job to pkvmci to run unit tests for nova? > > Michael Turek > Software Engineer > Power Cloud Department > 1 845 433 1290 Office > mjturek at us.ibm.com > He/Him/His > > IBM > > > > > ----- Original message ----- > From: Balazs Gibizer > To: OpenStack Discuss > Cc: mjturek at us.ibm.com > Subject: [EXTERNAL] [nova] unit testing on ppc64le > Date: Fri, Jan 8, 2021 7:59 AM > > Hi, > > We have a bugreport[1] showing that our unit tests are not passing on > ppc. In the upstream CI we don't have test capability to run our tests > on ppc. But we have the IBM Power KVM CI[2] that runs integration tests > on ppc. I'm wondering if IBM could extend the CI to run nova unit and > functional tests too. I've added Michael Turek (mjturek at us.ibm.com) to > CC. Michael is listed as the contact person for the CI. 
> > Cheers, > gibi > > [1]https://bugs.launchpad.net/nova/+bug/1909972 > [2]https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI > > > > > > > > > From stephenfin at redhat.com Thu Jan 21 09:30:19 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 21 Jan 2021 09:30:19 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <17720e3e0f6.d479c210145054.613445843370759402@ghanshyammann.com> References: <176d8da769b.b6edb13b874337.4809906168220534198@ghanshyammann.com> <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> , <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> <17720e3e0f6.d479c210145054.613445843370759402@ghanshyammann.com> Message-ID: <0d4bb1d274c5456a835e5c43c890b80867e1f29b.camel@redhat.com> On Wed, 2021-01-20 at 11:42 -0600, Ghanshyam Mann wrote: >  ---- On Wed, 20 Jan 2021 05:13:39 -0600 Stephen Finucane wrote ---- >  > On Wed, 2021-01-20 at 07:26 +0000, Lucian Petrut wrote: >  > > Hi, >  > > >  > > For Windows related projects such as os-win and networking-hyperv, >  > > we decided to keep the lower constraints job but remove indirect >  > > dependencies from the lower-constraints.txt file. >  > > >  > > This made it much easier to maintain and it allows us to at least cover >  > > direct dependencies. I suggest considering this approach instead of >  > > completely dropping the lower constraints job, whenever possible. >  > > Another option might be to make it non-voting while it’s getting fixed. >  > > >  > > Lucian Petrut >  > >  > Yes, I've looked into doing this elsewhere (as promised) and it seems to do the >  > job quite nicely. It's not perfect but it does seem to be "good enough" and >  > captures basic things like "I depend on this function found in oslo.foo vX.Y and >  > forgot to bump my minimum version to reflect this". I think these jobs probably >  > offer _more_ value now than they did in the past, given pip is now finally >  > honouring the explicit constraints we express in these files, so I would be in >  > favour of this approach rather than dropping l-c entirely. I do realize that >  > there is some degree of effort here in getting e.g. all the oslo projects fixed, >  > but I'm happy to help out with and have already fixed quite a few projects. I > > I thought oslo did drop that instead of fixing all failing l-c jobs? May be I am missing something or > misreading it? It's been proposed but nothing is merged, pending discussions. >  > also wouldn't be opposed to dropping l-c on *stable* branches so long as we >  > maintained for master, on the basis that they were already broken so nothing is >  > really changing. Sticking to older, admittedly broken versions of pip for stable >  > branches is another option and might help us avoid a deluge of "remove/fix l-c" >  > patches for stable branches, but I don't know how practical that is? > > I agree on the point about dropping it on stable to make stable maintenance > easy. But I think making/keeping n-v is very dangerous and it can easily go > as 'false information'. The n-v concept was to keep failing/starting jobs n-v > temporarily and once it is fixed/stable then make it voting. I do not think keeping > any job as n-v permanently is a good approach. I agree non-voting only makes sense if you plan to fix it at a later date. If not, you should remove it. 
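For reference, the trimmed file Lucian describes ends up being nothing more than the first-order requirements with their declared floors, something like this (package names and versions below are purely illustrative), with the same floors mirrored in requirements.txt:

    # lower-constraints.txt, direct dependencies only
    keystoneauth1==4.0.0
    oslo.config==6.8.0
    oslo.log==4.4.0
    # transitive pins (six, urllib3, requests, ...) are dropped and left
    # to pip's resolver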
> I am still not convinced how 'removal of indirect deps from l-c' make > 'Know the lower bounds of openstack packages' better? I think it makes it > less informative than it is currently. How we will know the lower > bound for indirect deps? Do not packagers need those or they can go > with their u-c if so then why not for direct deps? What we have doesn't work, and direct dependencies are the only things we can truly control. In the scenario you're suggesting, not only do we need to track dependencies, but we also need to track the dependencies of dependencies, and the dependencies of the dependencies of the dependencies, and the dependencies of the dependencies of the dependencies of the dependencies etc. etc. down the rabbit hole. For each of these indirect dependencies, of which there may be many, we need to figure out what the minimum version is for each of these indirect dependencies is manually, because as has been noted many times there is no standardized machinery in place in pip etc. to find (and test) the minimum dependency versions supported by a package. Put another way, if we depend on package foo, which depends on package bar, which depends on package baz, we can state our own informed minimum version for foo, but we will need to inspect foo to find a minimum version of bar that is suitable, and we will need to inspect baz to find a minimum version of baz that is suitable. An impossible ask. > In general, my take here as an upstream maintainer is that we should ship > the things completely tested/which serve the complete planned mission. > We should not ship/commit anything as half baked. And we keep such things > open as one of the TODO if anyone volunteers to fix it. Maintaining l-c for direct dependencies on all OpenStack projects would mean we can at least guarantee that these packages have been tested with their supposed minimum version. Considering that for a package like nova, at least 1/4 of the dependencies are "OpenStack-backed", this is no small deal. These jobs encourage us to ensure these minimums still make sense and to correct things if not. As noted previously, they're not perfect but they still provides a service that we won't have if we simply delete this machinery entirely. Stephen > -gmann > > >  > >  > Stephen >  > >  > > From: Jeremy Stanley >  > > Sent: Wednesday, January 20, 2021 1:52 AM >  > > To: openstack-discuss at lists.openstack.org >  > > Subject: Re: [all][tc] Dropping lower-constraints testing from all projects >  > > >  > > On 2021-01-20 00:09:39 +0100 (+0100), Thomas Goirand wrote: >  > > [...] >  > > > Something I don't understand: why can't we use an older version of >  > > > pip, if the problem is the newer pip resolver? Or can't the >  > > > current pip be patched to fix things? It's not as if there was no >  > > > prior art... Maybe I'm missing the big picture? >  > > [...] >  > > >  > > To get to the heart of the matter, when using older versions of pip >  > > it was just quietly installing different versions of packages than >  > > we asked it to, and versions of transitive dependencies which >  > > directly conflicted with the versions other dependencies said they >  > > required. When pip finally (very recently) implemented a coherent >  > > dependency solver, it started alerting us directly to this fact. 
We >  > > could certainly find a way to hide our heads in the sand and go back >  > > to testing with old pip and pretending we knew what was being tested >  > > there, but the question is whether what we were actually testing >  > > that way was worthwhile enough to try to continue doing it, now that >  > > we have proof it wasn't what we were wanting to test. >  > > >  > > The challenge with actually testing what we wanted has always been >  > > that there's many hundreds of packages we depend on and, short of >  > > writing one ourselves, no tool available to find a coherent set of >  > > versions of them which satisfy the collective lower bounds. The way >  > > pip works, it wants to always solve for the newest possible >  > > versions which satisfy an aggregate set of version ranges, and what >  > > we'd want for lower bounds checking is the inverse of that. >  > > -- >  > > Jeremy Stanley >  > > >  > >  > >  > >  > > From hberaud at redhat.com Thu Jan 21 10:43:05 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 21 Jan 2021 11:43:05 +0100 Subject: [monasca][release] Abandoning monasca-ceilometer and monasca-log-api? In-Reply-To: References: <72283d02-7f75-1589-e5e1-4f5ac0d91334@stackhpc.com> Message-ID: I proposed a governance patch, please can you validate it => https://review.opendev.org/c/openstack/governance/+/771785 Thanks Le mar. 19 janv. 2021 à 11:17, Herve Beraud a écrit : > Thanks Doug for your response. > > I think that more steps are needed to deprecate your repos properly: > > > https://docs.openstack.org/project-team-guide/repository.html#deprecating-a-repository > > The release part is the last step of the deprecation. > Also I think that it could be better to let a monasca team member submit > these patches to inform the governance team and project-config team of your > choices concerning these repos. > > With all these patches we will have a whole consistent state. > > Let me know what you think. > > Thanks, > > Le mar. 19 janv. 2021 à 10:36, Doug a écrit : > >> >> On 15/01/2021 09:15, Herve Beraud wrote: >> >> Dear Monasca team, >> >> Hi Herve >> >> >> The release team noticed an inconsistency between the Monasca team's >> deliverables described in the governance’s reference and deliverables >> defined in the openstack/releases repo (c.f our related meeting topic [1]). >> >> Indeed, monasca-ceilometer and monasca-log-api were released in train >> but not released in ussuri nor victoria. Do you think that they should be >> deprecated (abandoned) in governance? >> >> Both of these services have been deprecated (see details below). Please >> proceed. >> >> https://review.opendev.org/c/openstack/monasca-ceilometer/+/720319 >> >> https://review.opendev.org/c/openstack/monasca-log-api/+/704519 >> >> >> Notice that Wallaby's milestone 2 is next week so maybe it could be a >> good time to update this. >> >> Let us know your thoughts, we are waiting for your replies. >> >> Thanks for reading, >> >> Thanks! 
>> >> >> [1] >> http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Jan 21 10:53:00 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 21 Jan 2021 11:53:00 +0100 Subject: [OpenStackSDK][release] Abandoning js-openstack-lib? In-Reply-To: References: Message-ID: Thanks for the details, I submitted a patch to propose to ignore it from a release point of view, feel free to approve/disapprove. https://review.opendev.org/c/openstack/governance/+/771789 Thanks, Le ven. 
15 janv. 2021 à 14:05, Radosław Piliszek < radoslaw.piliszek at gmail.com> a écrit : > On Fri, Jan 15, 2021 at 10:11 AM Herve Beraud wrote: > > > > Dear OpenStackSDK team, > > > > The release team noticed an inconsistency between the OpenStackSDK > team's deliverables described in the governance’s reference and > deliverables defined in the openstack/releases repo (c.f our related > meeting topic [1]). > > > > Indeed, js-openstack-lib (added January 9, 2020) was never released yet > and was not ready for ussuri or victoria. maybe we should abandon this > instead of waiting? > > Probably. It was me who saved it from demise but I did not get time to > really work on it and not planning to any time soon nowadays. > It has been established that the project more-or-less requires a > serious redesign and rewrite to keep up with current technology and > practices established in other, established OpenStackSDK deliverables. > > -yoctozepto > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Thu Jan 21 10:58:05 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 21 Jan 2021 10:58:05 +0000 Subject: [all] CI test result table in the new gerrit review UI In-Reply-To: <20210119055830.GB3137911@fedora19.localdomain> References: <20210119055830.GB3137911@fedora19.localdomain> Message-ID: <0e8a5b91addb09d16df0872dd671ab4fc7cd81cd.camel@redhat.com> On Tue, 2021-01-19 at 16:58 +1100, Ian Wienand wrote: > On Thu, Nov 26, 2020 at 01:39:13PM +0100, Balázs Gibizer wrote: > > I understand that adapting the old CI test result table to the new gerrit > > review UI is not a simple task. > > We got there in the end :) Change [1] enabled the zuul-summary-results > plugin, which is available from [2]. I just restarted opendev gerrit > with it, and it seems to be working. Look for the new "Zuul Summary" > tab next to "Files". I would consider it a 0.1 release and welcome > any contributions to make it better. > > If you want to make changes, you should be able to submit a change to > system-config with a Depends-On: and trigger the > system-config-run-review test; in the results returned there are > screenshot artifacts that will show the results (expanding this > testing also welcome!). We can also a put a node on hold for you to > work on the plugin if you have interest. It's also fairly easy to run > the container locally, so there's plenty of options. Thanks for this. One issues I've noted is that it doesn't update as I click through a chain of patches. The results from patch N will be shown for any other patch I navigate to. 
I don't know if this is an issue with my browser, though it sounds like something is being cached and that cache should be invalidated? Stephen > Thanks, > > -i > > [1] https://review.opendev.org/c/opendev/system-config/+/767079 > [2] https://gerrit-review.googlesource.com/admin/repos/plugins/zuul-results-summary > > From hberaud at redhat.com Thu Jan 21 12:11:30 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 21 Jan 2021 13:11:30 +0100 Subject: [barbican][release] Barbican deliverables questions In-Reply-To: References: Message-ID: Hello Barbican team, Do you've any updates to share with us? Moisès made me notice that an RFE is coming in OSP 17 for barbican-ui, so I suppose we can wait for this one and allow you to release it. Thanks Moisès. Concerning your ansible roles (ansible-role-atos-hsm and ansible-role-thales-hsm) an answer would be appreciated. Thanks for your attention. Le ven. 15 janv. 2021 à 09:57, Herve Beraud a écrit : > Hi Barbican Team, > > The release team noticed some inconsistencies between the Barbican team's > deliverables described in the governance’s reference and deliverables > defined in the openstack/releases repo (c.f our related meeting topic [1]). > > First, we noticed that ansible-role-atos-hsm and ansible-role-thales-hsm > are new barbican deliverables, and we would appreciate to know if we should > plan to release them for Wallaby. > > The second thing is that barbican-ui (added in Oct 2019) was never > released yet and was not ready yet for ussuri and victoria. maybe we should > abandon this instead of waiting? > > Notice that Wallaby's milestone 2 is next week so it could be the good > time to update all these things. > > Let us know your thoughts, we are waiting for your replies. > > Thanks for reading, > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ 
hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Thu Jan 21 12:39:56 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 21 Jan 2021 07:39:56 -0500 Subject: [tc] weekly meeting Message-ID: Hi everyone, Here’s the agenda for our weekly TC meeting. It will happen tomorrow (Thursday the 21st) at 1500 UTC in #openstack-tc and I will be your chair. If you can’t attend, please put your name in the “Apologies for Absence” section https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting # ACTIVE INITIATIVES * Follow up on past action items * Audit SIG list and chairs (diablo_rojo) * Add Resolution of TC stance on the OpenStackClient (diablo_rojo) - https://review.opendev.org/c/openstack/governance/+/759904 * Gate performance and heavy job configs (dansmith) - http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Audit and clean-up tags (gmann) - 'supports-api-interoperability' tag - Manila request: https://review.opendev.org/c/openstack/governance/+/770859 * infra-core team (mnaser / diablo_rojo) - Stepping up to help with review load that will be new to everyone * Dropping lower-constraints testing from all projects (gmann) - http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019672.html * Decide on OpenSUSE in testing runtime (gmann) - https://review.opendev.org/c/openstack/devstack/+/769884 * Define 2021 upstream investment opportunities - https://review.opendev.org/c/openstack/governance/+/771707 * Open Reviews - https://review.opendev.org/q/project:openstack/governance+is:open Thank you Mohammed -- Mohammed Naser VEXXHOST, Inc. From lyarwood at redhat.com Thu Jan 21 13:12:18 2021 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 21 Jan 2021 13:12:18 +0000 Subject: [nova] unit testing on ppc64le In-Reply-To: References: <5E9MMQ.3INH7FY465VR3@est.tech> Message-ID: <20210121131218.44gumx7dvwxqkoci@lyarwood-laptop.usersys.redhat.com> On 21-01-21 09:22:44, Lee Yarwood wrote: > On Tue, 19 Jan 2021 at 20:28, aditi Dukle wrote: > > > > Hi Mike, > > > > I have started nova unit test jobs in the periodic pipeline(runs everyday at UTC- 12) for each openstack branch as follows: > > periodic: > > jobs: > > - openstack-tox-py27: > > branches: > > - stable/ocata > > - stable/pike > > - stable/queens > > - stable/rocky > > - stable/stein > > - stable/train > > - openstack-tox-py36: > > branches: > > - stable/train > > - stable/ussuri > > - stable/victoria > > - openstack-tox-py37: > > branches: > > - stable/train > > - stable/ussuri > > - openstack-tox-py38: > > branches: > > - stable/victoria > > - openstack-tox-py39: > > branches: > > - master > > > > I have observed a few failures in the unit test cases mostly all related to volume drivers. Please have a look at the 14 test cases that are failing in openstack-tox-py39 job( https://oplab9.parqtec.unicamp.br/pub/ppc64el/openstack/nova/periodic/openstack-tox-py39/2021-01-19-0058-38c70ae/job-output.txt). 
Most of the 14 failures report these errors: > > InvalidConnectorProtocol: Invalid InitiatorConnector protocol specified NVME > > InvalidConnectorProtocol: Invalid InitiatorConnector protocol specified SCALEIO > > > > I would need some help in understanding if these connectors(NVME,SCALEIO) are supported on ppc64. > > Right os-brick doesn't support these connectors on ppc64 but the real > issue here is with the unit tests that don't mock out calls os-brick, > an out of tree lib. I've filed a bug for this below: > > LibvirtNVMEVolumeDriverTestCase and LibvirtScaleIOVolumeDriverTestCase > unit tests fail on ppc64 > https://bugs.launchpad.net/nova/+bug/1912608 > > I'll push a fix for this shortly. libvirt: Stop NVMe and ScaleIO unit tests from calling os-brick https://review.opendev.org/c/openstack/nova/+/771806/ > In addition to these failures I also see stable rescue tests failing, > these should be fixed by the following change: > > libvirt: Mock get_arch during some stable rescue unit tests > https://review.opendev.org/c/openstack/nova/+/769916/ I also noticed some additional failures caused by the way in which the libvirt virt driver loads its volume drivers at startup that also attempt to load the underlying os-brick connector. As above this results in volume drivers failing to load and being dropped. We've actually had a long standing TODO in the driver to move to loading these drivers on-demand so I've proposed the following: libvirt: Load and cache volume drivers on-demand https://review.opendev.org/c/openstack/nova/+/741545/ Can you rerun your tests using the above change and I'll try to address any additional failures. > Finally, in an effort to root out any further issues caused by tests > looking up the arch of the test host I've pushed the following change > to poison nova.objects.fields.Architecture.from_host: > > tests: Poison nova.objects.fields.Architecture.from_host > https://review.opendev.org/c/openstack/nova/+/769920 > > There's a huge amount of fallout from this that I'll try to address in > the coming weeks ahead of M3. > > Hope this helps! > > Lee > > > ----- Original message ----- > > From: aditi Dukle/India/Contr/IBM > > To: Michael J Turek/Poughkeepsie/IBM at IBM > > Cc: Sajauddin Mohammad/India/Contr/IBM at IBM, openstack-discuss at lists.openstack.org > > Subject: Re: [EXTERNAL] [nova] unit testing on ppc64le > > Date: Tue, Jan 12, 2021 5:41 PM > > > > Hi Mike, > > > > I have created these unit test jobs - openstack-tox-py27, openstack-tox-py35, openstack-tox-py36, openstack-tox-py37, openstack-tox-py38, openstack-tox-py39 > > by referring to the upstream CI( https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/jobs.yaml) and these jobs are triggered for every patchset in the Openstack CI. > > > > I checked the code for old CI for Power, we didn't have any unit test jobs that were run for every patchset for nova. We had one "nova-python27" job that was run in a periodic pipeline. So, I wanted to know if we need to run the unit test jobs on ppc for every patchset for nova? and If yes, should these be reporting to the Openstack community? 
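For context, "load and cache on demand" is essentially the lazy-initialisation pattern sketched below (a simplified illustration of the idea, not the code under review; the driver mapping is just an example):

    import importlib

    VOLUME_DRIVERS = {
        'iscsi': 'nova.virt.libvirt.volume.iscsi.LibvirtISCSIVolumeDriver',
        'nvme': 'nova.virt.libvirt.volume.nvme.LibvirtNVMEVolumeDriver',
    }


    class VolumeDriverCache(object):
        """Construct volume drivers on first use and cache them."""

        def __init__(self, host):
            self._host = host
            self._drivers = {}

        def get(self, driver_type):
            # Nothing is imported or instantiated at service start-up, so a
            # driver whose os-brick connector is unsupported on this
            # architecture only fails when that volume type is actually used.
            if driver_type not in self._drivers:
                path = VOLUME_DRIVERS[driver_type]
                module_name, _, class_name = path.rpartition('.')
                cls = getattr(importlib.import_module(module_name), class_name)
                self._drivers[driver_type] = cls(self._host)
            return self._drivers[driver_type]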
> > > > > > Thanks, > > Aditi Dukle > > > > ----- Original message ----- > > From: Michael J Turek/Poughkeepsie/IBM > > To: balazs.gibizer at est.tech, aditi Dukle/India/Contr/IBM at IBM, Sajauddin Mohammad/India/Contr/IBM at IBM > > Cc: openstack-discuss at lists.openstack.org > > Subject: Re: [EXTERNAL] [nova] unit testing on ppc64le > > Date: Sat, Jan 9, 2021 12:52 AM > > > > Thanks for the heads up, > > > > We should have the capacity to add them. At one point I think we ran unit tests for nova but the job may have been culled in the move to zuul v3. I've CC'd the maintainers of the CI, Aditi Dukle and Sajauddin Mohammad. > > > > Aditi and Sajauddin, could we add a job to pkvmci to run unit tests for nova? > > > > Michael Turek > > Software Engineer > > Power Cloud Department > > 1 845 433 1290 Office > > mjturek at us.ibm.com > > He/Him/His > > > > IBM > > > > > > > > > > ----- Original message ----- > > From: Balazs Gibizer > > To: OpenStack Discuss > > Cc: mjturek at us.ibm.com > > Subject: [EXTERNAL] [nova] unit testing on ppc64le > > Date: Fri, Jan 8, 2021 7:59 AM > > > > Hi, > > > > We have a bugreport[1] showing that our unit tests are not passing on > > ppc. In the upstream CI we don't have test capability to run our tests > > on ppc. But we have the IBM Power KVM CI[2] that runs integration tests > > on ppc. I'm wondering if IBM could extend the CI to run nova unit and > > functional tests too. I've added Michael Turek (mjturek at us.ibm.com) to > > CC. Michael is listed as the contact person for the CI. > > > > Cheers, > > gibi > > > > [1]https://bugs.launchpad.net/nova/+bug/1909972 > > [2]https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI > > > > > > > > > > > > > > > > > > -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 From romain.chanu at univ-lyon1.fr Thu Jan 21 14:31:31 2021 From: romain.chanu at univ-lyon1.fr (CHANU ROMAIN) Date: Thu, 21 Jan 2021 14:31:31 +0000 Subject: [Octavia] Network issue between amphora and health manager port Message-ID: Hello, I try to install Octavia and i'm facing an issue with octavia-health- manager-listen-port interface. I use Openstack Ussuri on Ubuntu 18.04 with linuxbridge plugin I followed this procedure: https://docs.openstack.org/octavia/latest/install/install-ubuntu.html#install-and-configure-components When I try to create my loadbalancer I got this error: WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='10.244.3.243', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host',)) From my interface "o-hm0" I cannot ping 10.244.3.243 but if I try to ping from DHCP namespace it works fine. I think the problem is my "octavia-health-manager-listen-port" port appears "DOWN" when I list every ports. I figured out than Neutron didnt update iptables rules, probably because the interface is DOWN but if I add these rules: iptables -A neutron-linuxbri-FORWARD -m physdev --physdev-out o-bhm0 -- physdev-is-bridged -m comment --comment "Accept all packets when port is trusted." -j ACCEPT iptables -A neutron-linuxbri-FORWARD -m physdev --physdev-in o-bhm0 -- physdev-is-bridged -m comment --comment "Accept all packets when port is trusted." 
-j ACCEPT Amphora is able to communication with health manager port then the loadbalancer becomes UP but now I got this WARNING: 2021-01-21 15:27:21.230 2999834 WARNING octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager experienced an exception processing a heartbeat message from ('10.244.3.243', 30684). Ignoring this packet. Exception: 'NoneType' object has no attribute 'encode' I guess it's all about the port being stuck in DOWN state. Do you have any input how to configure this port? Best regards, Romain -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3217 bytes Desc: not available URL: From juliaashleykreger at gmail.com Thu Jan 21 14:44:26 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 21 Jan 2021 06:44:26 -0800 Subject: [ironic] On redfish-virtual-media vs idrac-redfish-virtual-media In-Reply-To: References: Message-ID: So, un-written and always situational flexible. When we all started down the path of creating ironic so many years ago. Wow I suddenly feel old. Anyway, back then we were a bit naive to think every vendor properly and consistently implemented vendor hardware behavior and interfaces the same way. Reality has been kind of far from it and vendors really don't like changing BMC code to become compliant to standard after the fact. I think the right thing is trying to provide guard rails to continue to do the right thing for the user since vendors are generally getting mostly compliant, the rest is fairly easy to handle if we know what... and how to handle these things. So overall, I think it is perfectly acceptable to have some conditional code in generic drivers... especially of advanced features, to go "oh, we need this $thing slightly differently" or "Oh, This driver is completely wrong, lets fix it for them magically" So implementation details aside (as it is the implementer's prerogative on the initial design) , I'm good and agree with the general ideas. Hope that makes sense. On Wed, Jan 20, 2021 at 9:56 AM Dmitry Tantsur wrote: > > Hi all, > > Now that we've gained some experience with using Redfish virtual media I'd like to reopen the discussion about $subj. For the context, the idrac-redfish-virtual-media boot interface appeared because Dell machines need an additional action [1] to boot from virtual media. The initial position on hardware interfaces was that anything requiring OEM actions must go into a vendor hardware interface. I would like to propose relaxing this (likely unwritten) rule. > > You see, this distinction causes a lot of confusion. Ironic supports Redfish, ironic supports iDRAC, iDRAC supports Redfish, ironic supports virtual media, Redfish supports virtual media, iDRAC supports virtual media. BUT! You cannot use redfish-virtual-media with iDRAC. Just today I had to explain the cause of it to a few people. It required diving into how exactly Redfish works and how exactly ironic uses it, which is something we want to protect our users from. > > We already have a precedent [2] of adding vendor-specific handling to a generic driver. I have proposed a patch [3] to block using redfish-virtual-media for Dell hardware, but I grew to dislike this approach. It does not have precedents in the ironic code base and it won't scale well if we have to handle vendor differences for vendors that don't have ironic drivers. 
> > Based on all this I suggest relaxing the rule to the following: if a feature supported by a generic hardware interface requires additional actions or has a minor deviation from the standard, allow handling it in the generic hardware interface. Meaning, redfish-virtual-media starts handling the Dell case by checking the System manufacturer (via the recently added detect_vendor call) and loading the OEM code if it matches "Dell". After this idrac-redfish-virtual-media will stay empty (for future enhancements and to make the patch backportable). > > Thoughts? > > Dmitry > > [1] https://opendev.org/openstack/ironic/src/commit/6ea73bdfbb53486cf9905d21024d16cbf5829b2c/ironic/drivers/modules/drac/boot.py#L130 > [2] https://review.opendev.org/c/openstack/ironic/+/757198/ > [3] https://review.opendev.org/c/openstack/ironic/+/771619 > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill From fungi at yuggoth.org Thu Jan 21 14:50:12 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 21 Jan 2021 14:50:12 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <0d4bb1d274c5456a835e5c43c890b80867e1f29b.camel@redhat.com> References: <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> <17720e3e0f6.d479c210145054.613445843370759402@ghanshyammann.com> <0d4bb1d274c5456a835e5c43c890b80867e1f29b.camel@redhat.com> Message-ID: <20210121145011.23pvymfzukevkqsl@yuggoth.org> On 2021-01-21 09:30:19 +0000 (+0000), Stephen Finucane wrote: [...] > What we have doesn't work, and direct dependencies are the only > things we can truly control. In the scenario you're suggesting, > not only do we need to track dependencies, but we also need to > track the dependencies of dependencies, and the dependencies of > the dependencies of the dependencies, and the dependencies of the > dependencies of the dependencies of the dependencies etc. etc. > down the rabbit hole. For each of these indirect dependencies, of > which there may be many, we need to figure out what the minimum > version is for each of these indirect dependencies is manually, > because as has been noted many times there is no standardized > machinery in place in pip etc. to find (and test) the minimum > dependency versions supported by a package. Put another way, if we > depend on package foo, which depends on package bar, which depends > on package baz, we can state our own informed minimum version for > foo, but we will need to inspect foo to find a minimum version of > bar that is suitable, and we will need to inspect baz to find a > minimum version of baz that is suitable. An impossible ask. [...] Where this begins to fall apart, as I mentioned earlier, is that the larger your transitive dependency set, the more likely it is that a direct dependency is *also* an indirect dependency (maybe many layers down). 
If a dependency of your dependency updates to a version which insists on a newer version of some other direct dependency of yours than what you've set in lower-constraints.txt, then your jobs are going to break and need lower bounds adjustments or additional indirect dependencies added to the lower-constraints.txt to roll them back to versions which worked with the others you've set. Unlike upper-constraints.txt where it's assumed that a complete transitive set of dependencies is covered, this will mean additional churn in your stable branches over time. Or is the idea that we would only every do lower bounds checking on the release under development, and then remove those jobs when we branch? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gmann at ghanshyammann.com Thu Jan 21 14:57:53 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 21 Jan 2021 08:57:53 -0600 Subject: [monasca][release] Abandoning monasca-ceilometer and monasca-log-api? In-Reply-To: References: <72283d02-7f75-1589-e5e1-4f5ac0d91334@stackhpc.com> Message-ID: <1772573a9f7.d2910476198695.6214442223032400818@ghanshyammann.com> ---- On Tue, 19 Jan 2021 04:17:39 -0600 Herve Beraud wrote ---- > Thanks Doug for your response. > I think that more steps are needed to deprecate your repos properly: > https://docs.openstack.org/project-team-guide/repository.html#deprecating-a-repository > The release part is the last step of the deprecation.Also I think that it could be better to let a monasca team member submit these patches to inform the governance team and project-config team of your choices concerning these repos. > With all these patches we will have a whole consistent state. > Let me know what you think. +1, let's deprecate it officially via steps mentioned in the link mentioned by Herve. -gmann > Thanks, > > Le mar. 19 janv. 2021 à 10:36, Doug a écrit : > > > On 15/01/2021 09:15, Herve Beraud wrote: > Dear Monasca team, Hi Herve > > The release team noticed an inconsistency between the Monasca team's deliverables described in the governance’s reference and deliverables defined in the openstack/releases repo (c.f our related meeting topic [1]). > Indeed, monasca-ceilometer and monasca-log-api were released in train but not released in ussuri nor victoria. Do you think that they should be deprecated (abandoned) in governance? Both of these services have been deprecated (see details below). Please proceed. > > https://review.opendev.org/c/openstack/monasca-ceilometer/+/720319 > https://review.opendev.org/c/openstack/monasca-log-api/+/704519 > > > Notice that Wallaby's milestone 2 is next week so maybe it could be a good time to update this. > Let us know your thoughts, we are waiting for your replies. > Thanks for reading, Thanks! 
> > [1] http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2021-01-14.log.html#t2021-01-14T17:05:23 > -- > Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > > > -- > Hervé BeraudSenior Software Engineer at Red Hatirc: hberaudhttps://github.com/4383/https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > From zigo at debian.org Thu Jan 21 15:08:02 2021 From: zigo at debian.org (Thomas Goirand) Date: Thu, 21 Jan 2021 16:08:02 +0100 Subject: [Octavia] Network issue between amphora and health manager port In-Reply-To: References: Message-ID: <900763fb-7f97-bfa8-6f34-51d2cc507c61@debian.org> On 1/21/21 3:31 PM, CHANU ROMAIN wrote: > Hello, > > I try to install Octavia and i'm facing an issue with octavia-health- > manager-listen-port interface. > > I use Openstack Ussuri on Ubuntu 18.04 with linuxbridge plugin I > followed this procedure: > https://docs.openstack.org/octavia/latest/install/install-ubuntu.html#install-and-configure-components > > When I try to create my loadbalancer I got this error: > > WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not > connect to instance. Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='10.244.3.243', port=9443): Max retries > exceeded with url: // (Caused by > NewConnectionError(' at 0x7fe1ec077080>: Failed to establish a new connection: [Errno 113] > No route to host',)) > > From my interface "o-hm0" I cannot ping 10.244.3.243 but if I try to > ping from DHCP namespace it works fine. > > I think the problem is my "octavia-health-manager-listen-port" port > appears "DOWN" when I list every ports. 
> > I figured out than Neutron didnt update iptables rules, probably > because the interface is DOWN but if I add these rules: > > iptables -A neutron-linuxbri-FORWARD -m physdev --physdev-out o-bhm0 -- > physdev-is-bridged -m comment --comment "Accept all packets when port > is trusted." -j ACCEPT > iptables -A neutron-linuxbri-FORWARD -m physdev --physdev-in o-bhm0 -- > physdev-is-bridged -m comment --comment "Accept all packets when port > is trusted." -j ACCEPT > > Amphora is able to communication with health manager port then the > loadbalancer becomes UP but now I got this WARNING: > 2021-01-21 15:27:21.230 2999834 WARNING > octavia.amphorae.drivers.health.heartbeat_udp [-] Health Manager > experienced an exception processing a heartbeat message from > ('10.244.3.243', 30684). Ignoring this packet. Exception: 'NoneType' > object has no attribute 'encode' > > I guess it's all about the port being stuck in DOWN state. Do you have > any input how to configure this port? > > Best regards, > Romain Hi Romain, Did you understand correctly that the servers containing your Octavia services must have connectivity to your load balancer management network? Also, did you configure correctly the security group of your load balancers? To setup Octavia in my CI, I use this script: https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer/-/blob/debian/victoria/utils/usr/bin/oci-octavia-amphora-secgroups-sshkey-lbrole-and-network Specifically, look where it's setting-up the lb-mgmt-sec-grp and lb-health-mgr-sec-grp security groups. Maybe this will help? Cheers, Thomas Goirand (zigo) From skaplons at redhat.com Thu Jan 21 15:22:35 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 21 Jan 2021 16:22:35 +0100 Subject: [neutron]Drivers meeting agenda - 22.01.2021 Message-ID: <20210121152235.3fn6ba4j7nexmtkk@p1.localdomain> Hi, Agenda for tomorrow's drivers meeting is at [1]. We have one RFE ready to discuss: - https://bugs.launchpad.net/neutron/+bug/1911126 We also have 2 new things which are now triaging, so please take a look at them if You have some time and ask questions if You have any: - https://bugs.launchpad.net/neutron/+bug/1912460 - https://bugs.launchpad.net/neutron/+bug/1911864 [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From dmendiza at redhat.com Thu Jan 21 16:23:47 2021 From: dmendiza at redhat.com (Douglas Mendizabal) Date: Thu, 21 Jan 2021 10:23:47 -0600 Subject: [barbican][release] Barbican deliverables questions In-Reply-To: References: Message-ID: On 1/21/21 6:11 AM, Herve Beraud wrote: > Hello Barbican team, > > Do you've any updates to share with us? > > Moisès made me notice that an RFE is coming in OSP 17 for barbican-ui, > so I suppose we can wait for this one and allow you to release it. > Thanks Moisès. Yes, we still have barbican-ui in the roadmap. We have a patch that implements basic functionality [1], but it's been on the back burner since the core review team lacks front end skills. 😅 We'd definitely appreciate reviews from folks with horizon experience. > Concerning your ansible roles (ansible-role-atos-hsm and > ansible-role-thales-hsm) an answer would be appreciated. Yes, we definitely want to release these along with the other HSM ansible role: ansible-role-lunasa-hsm. > Thanks for your attention. 
Thank you for staying on top of these things. - Douglas Mendizábal From dtantsur at redhat.com Thu Jan 21 16:25:41 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 21 Jan 2021 17:25:41 +0100 Subject: [all][infra] CI test result table in the new gerrit review UI In-Reply-To: References: <20210119055830.GB3137911@fedora19.localdomain> <20210119151722.gguyp53vn7oa6vtc@yuggoth.org> Message-ID: On Wed, Jan 20, 2021 at 9:56 AM Mark Goddard wrote: > On Tue, 19 Jan 2021 at 15:18, Jeremy Stanley wrote: > > > > On 2021-01-19 12:56:41 +0000 (+0000), Sean Mooney wrote: > > > On Tue, 2021-01-19 at 12:37 +0100, Dmitry Tantsur wrote: > > [...] > > > > I wonder if we could also run the plugin that shows the live > > > > progress (it was mentioned somewhere in the thread). > > > > > > i belive showing the live progress of the jobs is effectivly a > > > ddos vector. infra have ask that we not use javascript to pool the > > > the live status of the jobs in our browser in the past. > > [...] > > > i know that we previously tried enbeding the zuul job status > > > directly into gerrit a few years ago and that had to be qickly > > > reverted as it does not take many developers leave review open in > > > a tab to quickly make that unworkable. i know i for one often > > > leave review open over night if im pinged to review something > > > shortly before i finish for the day so that its open on my screen > > > when i log in the next day. > > [...] > > > > I think it's probably worth trying again. The previous attempts hit > > a wall because of several challenges: > > > > 1. The available Zuul status API returned data on all enqueued refs > > (a *very* large JSON blob when the system is under heavy use) > > > > 2. Calls to the API were handled by a thread of the scheduler > > daemon, so often blocked or were blocked by other things going on, > > especially when Zuul was already under significant load > > > > 3. Browsers at the time continued running Javascript payloads in > > "background" tabs so the volume of queries was multiplied not just > > by the number of users but also by the average number of review tabs > > they had open > > > > Over time we added a ref-scoped status method so callers could > > request the status of a specific change. The Zuul REST API is now > > served by a separate zuul-web daemon, which we can move to a > > different server entirely if load demands that (and can even scale > > horizontally with more than one instance of it, I think?). Browser > > tech has also improved, and popular ones these days suspend > > Javascript stacks when tabs are not exposed. We may also be caching > > status API responses more aggressively than we used to do. All of > > these factors combined could make live status info in a Gerrit > > plug-in entirely tractable, we'll just need someone with time to try > > it and see... and be prepared for multiple Gerrit service restarts > > to enable/disable it, so probably not when utilization is as high as > > it has been the past couple of weeks. > > A refresh button to update live results on demand could be a good > compromise between UX and unnecessary polling. > I'd be totally fine with a refresh button, very infrequent (once a minute?) automatic refreshes and aggressive caching. 
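For a sense of scale, the change-scoped status query Jeremy mentioned can be fetched by hand with something as small as this (the API path is quoted from memory, and the change/patchset values are just placeholders):

  curl -s https://zuul.opendev.org/api/tenant/openstack/status/change/<change>,<patchset>

so even a plain refresh button wired to a query like that should stay cheap.
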
Dmitry > > > -- > > Jeremy Stanley > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Jan 21 17:27:12 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 21 Jan 2021 17:27:12 +0000 Subject: [all][infra] CI test result table in the new gerrit review UI In-Reply-To: References: <20210119055830.GB3137911@fedora19.localdomain> <20210119151722.gguyp53vn7oa6vtc@yuggoth.org> Message-ID: On Thu, 2021-01-21 at 17:25 +0100, Dmitry Tantsur wrote: > On Wed, Jan 20, 2021 at 9:56 AM Mark Goddard wrote: > > > On Tue, 19 Jan 2021 at 15:18, Jeremy Stanley wrote: > > > > > > On 2021-01-19 12:56:41 +0000 (+0000), Sean Mooney wrote: > > > > On Tue, 2021-01-19 at 12:37 +0100, Dmitry Tantsur wrote: > > > [...] > > > > > I wonder if we could also run the plugin that shows the live > > > > > progress (it was mentioned somewhere in the thread). > > > > > > > > i belive showing the live progress of the jobs is effectivly a > > > > ddos vector. infra have ask that we not use javascript to pool the > > > > the live status of the jobs in our browser in the past. > > > [...] > > > > i know that we previously tried enbeding the zuul job status > > > > directly into gerrit a few years ago and that had to be qickly > > > > reverted as it does not take many developers leave review open in > > > > a tab to quickly make that unworkable. i know i for one often > > > > leave review open over night if im pinged to review something > > > > shortly before i finish for the day so that its open on my screen > > > > when i log in the next day. > > > [...] > > > > > > I think it's probably worth trying again. The previous attempts hit > > > a wall because of several challenges: > > > > > > 1. The available Zuul status API returned data on all enqueued refs > > > (a *very* large JSON blob when the system is under heavy use) > > > > > > 2. Calls to the API were handled by a thread of the scheduler > > > daemon, so often blocked or were blocked by other things going on, > > > especially when Zuul was already under significant load > > > > > > 3. Browsers at the time continued running Javascript payloads in > > > "background" tabs so the volume of queries was multiplied not just > > > by the number of users but also by the average number of review tabs > > > they had open > > > > > > Over time we added a ref-scoped status method so callers could > > > request the status of a specific change. The Zuul REST API is now > > > served by a separate zuul-web daemon, which we can move to a > > > different server entirely if load demands that (and can even scale > > > horizontally with more than one instance of it, I think?). Browser > > > tech has also improved, and popular ones these days suspend > > > Javascript stacks when tabs are not exposed. We may also be caching > > > status API responses more aggressively than we used to do. All of > > > these factors combined could make live status info in a Gerrit > > > plug-in entirely tractable, we'll just need someone with time to try > > > it and see... and be prepared for multiple Gerrit service restarts > > > to enable/disable it, so probably not when utilization is as high as > > > it has been the past couple of weeks. 
> > > > A refresh button to update live results on demand could be a good > > compromise between UX and unnecessary polling. > > > > I'd be totally fine with a refresh button, very infrequent (once a minute?) > automatic refreshes and aggressive caching. That is kind of counterproductive, no? If you aggressively cache there is no point in frequent polling, and I would consider anything less than 5 minutes to still be relatively frequent, by the way, so 1 minute is not very infrequent. A refresh button could work, but my browser already has one of those.... Personally, if I want to monitor a specific job I use the Zuul status page and filter it to just the job I want using the review number. That allows me to look at a whole series in one view too if I use the number of the first review in the series. I'm not really against live updates provided it does not bork things or cause undue load on the CI. > > Dmitry > > > > > > > -- > > > Jeremy Stanley > > > > > From fungi at yuggoth.org Thu Jan 21 20:53:16 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 21 Jan 2021 20:53:16 +0000 Subject: [all][infra] CI test result table in the new gerrit review UI In-Reply-To: References: <20210119055830.GB3137911@fedora19.localdomain> <20210119151722.gguyp53vn7oa6vtc@yuggoth.org> Message-ID: <20210121205316.7cvhrlv7lj5xlcop@yuggoth.org> On 2021-01-21 17:27:12 +0000 (+0000), Sean Mooney wrote: [...] > if you aggressively cache there is no point in frequent polling, and I > would consider anything less than 5 minutes to still be relatively > frequent, by the way, so 1 minute is not very infrequent. [...] When I said "aggressively" I meant we're actually caching what would normally be considered dynamic data. The Zuul API sets... Cache-Control: public, max-age=1 ...so that browsers will try to avoid requesting the file more than once a second, and we front it with Apache mod_proxy so the server will also refresh its cache roughly that often. I think frequent requests are fine, maybe not once a second, but once every five or ten would likely be okay (our Zuul dashboard refreshes its status every 5 seconds in your browser, and that's pulling the full status blob for the tenant, not just change-specific data like the Gerrit plugin would presumably do). We have capacity and a scalable architecture now, so I'd err on the side of responsiveness and user convenience unless we see that it's actually still a problem. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kennelson11 at gmail.com Thu Jan 21 21:23:25 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 21 Jan 2021 13:23:25 -0800 Subject: [All][StoryBoard] Angular.js Alternatives Message-ID: Hello Everyone! The StoryBoard team is looking at alternatives to Angular.js since its going end of life. After some research, we've boiled all the options down to two possibilities: Vue.js or React.js I am diving more deeply into researching those two options this week, but any opinions or feedback on your experiences with either of them would be helpful! Here is the etherpad with our research so far[3]. Feel free to add opinions there or in response to this thread! -Kendall Nelson (diablo_rojo) & The StoryBoard Team [1] https://vuejs.org/ [2] https://reactjs.org/ [3] https://etherpad.opendev.org/p/replace-angularjs-storyboard-research -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Arkady.Kanevsky at dell.com Thu Jan 21 22:00:10 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 21 Jan 2021 22:00:10 +0000 Subject: [All] copyright Message-ID: We have in a few places openstack copyright: # Copyright (c) 2020 OpenStack Foundation Should we use OpenInfra copyright going forward? Something like # Copyright (c) 20XX Open Infrastructure Foundation Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell EMC office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Jan 21 22:14:05 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 21 Jan 2021 22:14:05 +0000 Subject: [All] copyright In-Reply-To: References: Message-ID: <20210121221405.mwfnl7fumksagav5@yuggoth.org> On 2021-01-21 22:00:10 +0000 (+0000), Kanevsky, Arkady wrote: > We have in a few places openstack copyright: > # Copyright (c) 2020 OpenStack Foundation > > Should we use OpenInfra copyright going forward? > Something like > # Copyright (c) 20XX Open Infrastructure Foundation Interpreting from earlier guidance, things should only ever have been "Copyright OpenStack Foundation" if they were created by staff/contractors of the foundation *or* previously copyrights owned by Rackspace under the "OpenStack, LLC." moniker (2012 and prior) for which ownership was officially transferred to the foundation. As far as existing copyright statements are concerned, that would technically be up to representatives of the foundation to update, and as far as I'm aware the OpenStack Foundation still exists (with something like a "doing business as" Open Infrastructure Foundation) so there's no legal urgency to update copyright statements at this time. The foundation staff are engaged currently in inventorying legal documents which need updating, and will likely provide guidance about lower priority items such as copyright statements in the coming months. Further discussion is likely more on topic for either the foundation or openstack-legal mailing lists. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From juliaashleykreger at gmail.com Thu Jan 21 22:16:51 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 21 Jan 2021 14:16:51 -0800 Subject: [All] copyright In-Reply-To: References: Message-ID: This is likely a better question for legal-discuss. I suspect feedback on-list is going to be opinion, and I *think* somewhere in the documentation it actually says to NOT put in copyright headers in the foundation name, which would also make sense since contributors are not actually employed by the foundation and thus can't assert that copyright claim. -Julia On Thu, Jan 21, 2021 at 2:03 PM Kanevsky, Arkady wrote: > > We have in a few places openstack copyright: > > # Copyright (c) 2020 OpenStack Foundation > > > > Should we use OpenInfra copyright going forward? > > Something like > > # Copyright (c) 20XX Open Infrastructure Foundation > > > > Thanks, > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell EMC office of CTO > > Dell Inc. 
One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > From fungi at yuggoth.org Thu Jan 21 22:17:49 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 21 Jan 2021 22:17:49 +0000 Subject: [All][StoryBoard] Angular.js Alternatives In-Reply-To: References: Message-ID: <20210121221749.3p5oaw6exd5e4qw2@yuggoth.org> On 2021-01-21 13:23:25 -0800 (-0800), Kendall Nelson wrote: > The StoryBoard team is looking at alternatives to Angular.js since its > going end of life. [...] See also the minutes and log from the weekly meeting where initial discussion of these points took place: http://eavesdrop.openstack.org/meetings/storyboard/2021/storyboard.2021-01-21-18.02.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From Arkady.Kanevsky at dell.com Thu Jan 21 22:25:40 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 21 Jan 2021 22:25:40 +0000 Subject: [All] copyright In-Reply-To: <20210121221405.mwfnl7fumksagav5@yuggoth.org> References: <20210121221405.mwfnl7fumksagav5@yuggoth.org> Message-ID: Thanks Jeremy. Will wait for legal counsel to respond. The code is in refstack and had been mostly handled by foundation. Thanks, Arkady -----Original Message----- From: Jeremy Stanley Sent: Thursday, January 21, 2021 4:14 PM To: openstack-discuss at lists.openstack.org Subject: Re: [All] copyright On 2021-01-21 22:00:10 +0000 (+0000), Kanevsky, Arkady wrote: > We have in a few places openstack copyright: > # Copyright (c) 2020 OpenStack Foundation > > Should we use OpenInfra copyright going forward? > Something like > # Copyright (c) 20XX Open Infrastructure Foundation Interpreting from earlier guidance, things should only ever have been "Copyright OpenStack Foundation" if they were created by staff/contractors of the foundation *or* previously copyrights owned by Rackspace under the "OpenStack, LLC." moniker (2012 and prior) for which ownership was officially transferred to the foundation. As far as existing copyright statements are concerned, that would technically be up to representatives of the foundation to update, and as far as I'm aware the OpenStack Foundation still exists (with something like a "doing business as" Open Infrastructure Foundation) so there's no legal urgency to update copyright statements at this time. The foundation staff are engaged currently in inventorying legal documents which need updating, and will likely provide guidance about lower priority items such as copyright statements in the coming months. Further discussion is likely more on topic for either the foundation or openstack-legal mailing lists. -- Jeremy Stanley From mordred at inaugust.com Thu Jan 21 22:32:42 2021 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 21 Jan 2021 16:32:42 -0600 Subject: [All][StoryBoard] Angular.js Alternatives In-Reply-To: References: Message-ID: > On Jan 21, 2021, at 15:23, Kendall Nelson wrote: > > Hello Everyone! > > The StoryBoard team is looking at alternatives to Angular.js since its going end of life. After some research, we've boiled all the options down to two possibilities: > > Vue.js > > or > > React.js > > I am diving more deeply into researching those two options this week, but any opinions or feedback on your experiences with either of them would be helpful! > > Here is the etherpad with our research so far[3]. > > Feel free to add opinions there or in response to this thread! 
Zuul’s dashboard is React, which I see you’ve already got in there. We actually wrote the initial version of it in Angular but then migrated to React. +1 on React In a vacuum I’ve heard good things about Vue - and it’s lighter-weight. But the general adjacency of Zuul and Storyboard in OpenDev would make me suggest benefits of going React. > -Kendall Nelson (diablo_rojo) & The StoryBoard Team > > [1] https://vuejs.org/ > [2] https://reactjs.org/ > [3] https://etherpad.opendev.org/p/replace-angularjs-storyboard-research From mkopec at redhat.com Thu Jan 21 23:36:08 2021 From: mkopec at redhat.com (Martin Kopec) Date: Fri, 22 Jan 2021 00:36:08 +0100 Subject: [tempest] extending python-tempestconf In-Reply-To: <10167377.aFP6jjVeTY@whitebase.usersys.redhat.com> References: <10167377.aFP6jjVeTY@whitebase.usersys.redhat.com> Message-ID: Hi, contributions to make python-tempestconf smarter - being able to discover more relevant configuration are always welcomed. In regards of resource creation, I'd probably stick with the current resources (images and flavors) and wouldn't complicate it with more such as network ones. I think it would be better if you would create any network resources you need prior python-tempestconf execution, I expect that something like that will have more logic (code) which increases maintenance requirements - it will be better also for you if you have more control over it. Speaking about flavors, we already support ram and disk modifications so I think that we could add one more option through which a user could pass custom parameters (*like hw:mem_page_size* you mentioned) to the flavors. Btw, check out ansible-role-os_tempest, it's basically a role/wrapper around tempest and python-tempestconf. It creates basic network resources and some other stuff in order to prepare for tempest execution. https://opendev.org/openstack/openstack-ansible-os_tempest On Tue, 19 Jan 2021 at 13:23, Luigi Toscano wrote: > On Tuesday, 19 January 2021 12:49:19 CET Szabolcs Tóth wrote: > > Hej! > > > > The official tool named python-tempestconf has a parameter named > --create, > > which allows to create the following resources: > > > > * CirrOS image (uploads the image based on the location defined with > > --image parameter), * Flavors (based on default values - > > DEFAULT_FLAVOR_RAM, DEFAULT_FLAVOR_RAM_ALT, DEFAULT_FLAVOR_DISK - which > can > > be changed with --flavor-min-mem and --flavor-min-disk). > > > > In order to verify our specific installation with Tempest, we need to > create > > the basic resources as > > > > * Flavors (with extra-spec parameters like hw:mem_page_size). > > * Networks (one for fixed_network_name and one for > > floating_network_name). > > > > * python-tempestconf is able to find an already existing network > > created with router:external flag and set it as value for > > floating_network_name. > > > > * Router and port (for routing traffic between internal and external > > networks). > > > > I would like to ask the following: > > > > * Is there any particular reason why the basic resource create > > functionality is limited to the image and flavor? > > * Are there any plans > > to extend the basic resource create functionality? > > The aim of python-tempestconf (which is not part of the QA/tempest > project, > but of the refstack project) is described as "for automatic generation of > tempest configuration based on user’s cloud." > > This means that any resource creation is limited to what is needed for > running > "the basics" of tempest. 
> > From an historical point of view, it is not meant to be able to discover > everything, but to be used as starting point for your tempest settings, > which > means that tests may work with the output of tempestconf, but tuning may > be > needed and it is expected. > > > > > * Ability to set extra parameters for the flavors. > > * Creating networks, routers and ports (based on a user inputs, > which > > can be separate parameters or a specific file). > > > > Would the community accept contributions extending python-tempestconf > into > > this direction? > > I'd leave space to other python-tempestconf people, but IMHO this will > stretch > the scope of the project. > > -- > Luigi > > > > -- Martin Kopec Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From lianhao.lu at intel.com Fri Jan 22 02:10:04 2021 From: lianhao.lu at intel.com (Lu, Lianhao) Date: Fri, 22 Jan 2021 02:10:04 +0000 Subject: [infra][ci] no more 3rd party CI test result in gerrit Message-ID: Hi All, I noticed that we're no longer be able to see the 3rd party CI test results in most of the projects in openstack gerrit, such as nova, neutron. But we can still see them in the ci-sandbox project. I'm wondering how to see those test results in gerrit now? Thanks! BR, -Lianhao -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Fri Jan 22 03:00:51 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Fri, 22 Jan 2021 04:00:51 +0100 Subject: [ironic] On redfish-virtual-media vs idrac-redfish-virtual-media In-Reply-To: References: Message-ID: I think it is a good idea to be more flexible for the generic drivers. The approach to block redfish-virtual-media for Dell is something that makes sense to me, at least we can easily backport to stable branches (I see the approach to make redfish-virtual-media work for Dell on stable branches as a feature, but I may be mistaken) Em qui., 21 de jan. de 2021 às 15:46, Julia Kreger < juliaashleykreger at gmail.com> escreveu: > So, un-written and always situational flexible. > > When we all started down the path of creating ironic so many years ago. > > Wow I suddenly feel old. > > Anyway, back then we were a bit naive to think every vendor properly > and consistently implemented vendor hardware behavior and interfaces > the same way. Reality has been kind of far from it and vendors really > don't like changing BMC code to become compliant to standard after the > fact. I think the right thing is trying to provide guard rails to > continue to do the right thing for the user since vendors are > generally getting mostly compliant, the rest is fairly easy to handle > if we know what... and how to handle these things. > > So overall, I think it is perfectly acceptable to have some > conditional code in generic drivers... especially of advanced > features, to go "oh, we need this $thing slightly differently" or "Oh, > This driver is completely wrong, lets fix it for them magically" > > So implementation details aside (as it is the implementer's > prerogative on the initial design) , I'm good and agree with the > general ideas. Hope that makes sense. > > On Wed, Jan 20, 2021 at 9:56 AM Dmitry Tantsur > wrote: > > > > Hi all, > > > > Now that we've gained some experience with using Redfish virtual media > I'd like to reopen the discussion about $subj. 
For the context, the > idrac-redfish-virtual-media boot interface appeared because Dell machines > need an additional action [1] to boot from virtual media. The initial > position on hardware interfaces was that anything requiring OEM actions > must go into a vendor hardware interface. I would like to propose relaxing > this (likely unwritten) rule. > > > > You see, this distinction causes a lot of confusion. Ironic supports > Redfish, ironic supports iDRAC, iDRAC supports Redfish, ironic supports > virtual media, Redfish supports virtual media, iDRAC supports virtual > media. BUT! You cannot use redfish-virtual-media with iDRAC. Just today I > had to explain the cause of it to a few people. It required diving into how > exactly Redfish works and how exactly ironic uses it, which is something we > want to protect our users from. > > > > We already have a precedent [2] of adding vendor-specific handling to a > generic driver. I have proposed a patch [3] to block using > redfish-virtual-media for Dell hardware, but I grew to dislike this > approach. It does not have precedents in the ironic code base and it won't > scale well if we have to handle vendor differences for vendors that don't > have ironic drivers. > > > > Based on all this I suggest relaxing the rule to the following: if a > feature supported by a generic hardware interface requires additional > actions or has a minor deviation from the standard, allow handling it in > the generic hardware interface. Meaning, redfish-virtual-media starts > handling the Dell case by checking the System manufacturer (via the > recently added detect_vendor call) and loading the OEM code if it matches > "Dell". After this idrac-redfish-virtual-media will stay empty (for future > enhancements and to make the patch backportable). > > > > Thoughts? > > > > Dmitry > > > > [1] > https://opendev.org/openstack/ironic/src/commit/6ea73bdfbb53486cf9905d21024d16cbf5829b2c/ironic/drivers/modules/drac/boot.py#L130 > > [2] https://review.opendev.org/c/openstack/ironic/+/757198/ > > [3] https://review.opendev.org/c/openstack/ironic/+/771619 > > > > -- > > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > > Commercial register: Amtsgericht Muenchen, HRB 153243, > > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > > -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From kklimonda at syntaxhighlighted.com Fri Jan 22 09:03:29 2021 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Fri, 22 Jan 2021 10:03:29 +0100 Subject: [magnum][heat] Rolling system upgrades Message-ID: <14a045a4-8705-438d-942f-f11f2d0258b2@www.fastmail.com> Hi, While testing magnum, a problem of upgrades came up - while work has been done to make kubernetes upgrades without interruption, operating system upgrades seem to be handled only partially. According to the documentation, two ways of upgrading system are available: - via specifying ostree_commit or ostree_remote labels in the cluster template used for upgrade - via specifying a new image in the cluster template used for upgrade The first one is specific to Fedora Atomic (and, while probably untested, seems to be mostly working with Fedora CoreOS) but it has some drawbacks. 
Firstly, due to base image staying the same we require this image for the life of the cluster, even if OS has already been upgraded. Secondly, using this method only upgrades existing instances and new instances (spawned via scaling cluster up) will not be upgraded. Thirdly, even if that is fixed I'm worried that at some point upgrading from old base image to some future ostree snapshot will fail (there is also cost associated with diff growing with each release). The second method, of specifying a new image in the cluster template used for upgrade, comes with an ugly warning about nodes not being drained properly before server rebuild (and it actually doesn't seem to be working anyway as the new image parameter is not being passed to the heat template on upgrade). This does however seem like a more valid approach in general. I'm not that familar with Heat, and the documentation of various OS::Heat::Software* resources seems inconclusive, but is there no way of executing some code before instance is rebuilt? If not, how are other projects and users handling this in general? -- Krzysztof Klimonda kklimonda at syntaxhighlighted.com From radoslaw.piliszek at gmail.com Fri Jan 22 09:28:19 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 22 Jan 2021 10:28:19 +0100 Subject: [All][StoryBoard] Angular.js Alternatives In-Reply-To: References: Message-ID: On Thu, Jan 21, 2021 at 10:24 PM Kendall Nelson wrote: > > Hello Everyone! > > The StoryBoard team is looking at alternatives to Angular.js since its going end of life. After some research, we've boiled all the options down to two possibilities: > > Vue.js > > or > > React.js Hello, Kendall! This is likely the toughest question in the frontend universe at the moment. Both solutions are very well thought out and have solid ecosystems. Based on observed productivity both are good choices. Personally, I have done more Vue than React. I have added a few points in the etherpad. Angular is not a bad choice either but it involves much stronger bonding with the final product. The others leave more freedom of choice. As for the verdict, I am afraid the best solution would be to run voting for parties interested in Storyboard development and just stick to the poll winner. -yoctozepto > I am diving more deeply into researching those two options this week, but any opinions or feedback on your experiences with either of them would be helpful! > > Here is the etherpad with our research so far[3]. > > Feel free to add opinions there or in response to this thread! > > -Kendall Nelson (diablo_rojo) & The StoryBoard Team > > [1] https://vuejs.org/ > [2] https://reactjs.org/ > [3] https://etherpad.opendev.org/p/replace-angularjs-storyboard-research From janders at redhat.com Fri Jan 22 12:05:17 2021 From: janders at redhat.com (Jacob Anders) Date: Fri, 22 Jan 2021 22:05:17 +1000 Subject: [ironic] On redfish-virtual-media vs idrac-redfish-virtual-media In-Reply-To: References: Message-ID: I agree. As someone who transitioned from an Ironic operator to Ironic developer, I'd like to stress the importance of user experience (and as a result, user confidence). As much as I dislike non-compliance with standards (and competing standards ), I believe that directly exposing the users to these for the sake of a cleaner codebase isn't the right thing to do. It's really bad if operators have to deep dive into the inner workings of the drivers to figure out what's going on (got some past first-hand experience unfortunately). 
This often results in site-specific workarounds and fixes which may be never accepted upstream and the time spent developing these could be better spent contributing to Ironic. To cut the long story short - I'm all for relaxing the rules where reasonable and where needed, as pointed out by Dmitry. Thank you, Jacob On Fri, Jan 22, 2021 at 1:07 PM Iury Gregory wrote: > I think it is a good idea to be more flexible for the generic drivers. > The approach to block redfish-virtual-media for Dell is something that > makes sense to me, at least we can easily backport to stable branches (I > see the approach to make redfish-virtual-media work for Dell on stable > branches as a feature, but I may be mistaken) > > Em qui., 21 de jan. de 2021 às 15:46, Julia Kreger < > juliaashleykreger at gmail.com> escreveu: > >> So, un-written and always situational flexible. >> >> When we all started down the path of creating ironic so many years ago. >> >> Wow I suddenly feel old. >> >> Anyway, back then we were a bit naive to think every vendor properly >> and consistently implemented vendor hardware behavior and interfaces >> the same way. Reality has been kind of far from it and vendors really >> don't like changing BMC code to become compliant to standard after the >> fact. I think the right thing is trying to provide guard rails to >> continue to do the right thing for the user since vendors are >> generally getting mostly compliant, the rest is fairly easy to handle >> if we know what... and how to handle these things. >> >> So overall, I think it is perfectly acceptable to have some >> conditional code in generic drivers... especially of advanced >> features, to go "oh, we need this $thing slightly differently" or "Oh, >> This driver is completely wrong, lets fix it for them magically" >> >> So implementation details aside (as it is the implementer's >> prerogative on the initial design) , I'm good and agree with the >> general ideas. Hope that makes sense. >> >> On Wed, Jan 20, 2021 at 9:56 AM Dmitry Tantsur >> wrote: >> > >> > Hi all, >> > >> > Now that we've gained some experience with using Redfish virtual media >> I'd like to reopen the discussion about $subj. For the context, the >> idrac-redfish-virtual-media boot interface appeared because Dell machines >> need an additional action [1] to boot from virtual media. The initial >> position on hardware interfaces was that anything requiring OEM actions >> must go into a vendor hardware interface. I would like to propose relaxing >> this (likely unwritten) rule. >> > >> > You see, this distinction causes a lot of confusion. Ironic supports >> Redfish, ironic supports iDRAC, iDRAC supports Redfish, ironic supports >> virtual media, Redfish supports virtual media, iDRAC supports virtual >> media. BUT! You cannot use redfish-virtual-media with iDRAC. Just today I >> had to explain the cause of it to a few people. It required diving into how >> exactly Redfish works and how exactly ironic uses it, which is something we >> want to protect our users from. >> > >> > We already have a precedent [2] of adding vendor-specific handling to a >> generic driver. I have proposed a patch [3] to block using >> redfish-virtual-media for Dell hardware, but I grew to dislike this >> approach. It does not have precedents in the ironic code base and it won't >> scale well if we have to handle vendor differences for vendors that don't >> have ironic drivers. 
>> > >> > Based on all this I suggest relaxing the rule to the following: if a >> feature supported by a generic hardware interface requires additional >> actions or has a minor deviation from the standard, allow handling it in >> the generic hardware interface. Meaning, redfish-virtual-media starts >> handling the Dell case by checking the System manufacturer (via the >> recently added detect_vendor call) and loading the OEM code if it matches >> "Dell". After this idrac-redfish-virtual-media will stay empty (for future >> enhancements and to make the patch backportable). >> > >> > Thoughts? >> > >> > Dmitry >> > >> > [1] >> https://opendev.org/openstack/ironic/src/commit/6ea73bdfbb53486cf9905d21024d16cbf5829b2c/ironic/drivers/modules/drac/boot.py#L130 >> > [2] https://review.opendev.org/c/openstack/ironic/+/757198/ >> > [3] https://review.opendev.org/c/openstack/ironic/+/771619 >> > >> > -- >> > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, >> > Commercial register: Amtsgericht Muenchen, HRB 153243, >> > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael >> O'Neill >> >> > > -- > > > *Att[]'sIury Gregory Melo Ferreira * > *MSc in Computer Science at UFCG* > *Part of the puppet-manager-core team in OpenStack* > *Software Engineer at Red Hat Czech* > *Social*: https://www.linkedin.com/in/iurygregory > *E-mail: iurygregory at gmail.com * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aditi.Dukle at ibm.com Fri Jan 22 07:13:43 2021 From: aditi.Dukle at ibm.com (aditi Dukle) Date: Fri, 22 Jan 2021 07:13:43 +0000 Subject: [nova] unit testing on ppc64le In-Reply-To: <20210121131218.44gumx7dvwxqkoci@lyarwood-laptop.usersys.redhat.com> References: <20210121131218.44gumx7dvwxqkoci@lyarwood-laptop.usersys.redhat.com>, <5E9MMQ.3INH7FY465VR3@est.tech> Message-ID: An HTML attachment was scrubbed... URL: From madhuwanthapriyashan12 at gmail.com Fri Jan 22 12:50:17 2021 From: madhuwanthapriyashan12 at gmail.com (Madhuwantha priyashan) Date: Fri, 22 Jan 2021 18:20:17 +0530 Subject: [openstack-dev] [neutron][taas] Proposal of Dashboard for TaaS Message-ID: Dear sir, I am an undergraduate student and I am doing a project using OpenStack now. I need to install tap as a service in OpenStack but I could not find any guideline for this. It is great full if u can provide guidelines for this task. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From bekir.fajkovic at citynetwork.eu Fri Jan 22 13:42:45 2021 From: bekir.fajkovic at citynetwork.eu (Bekir Fajkovic) Date: Fri, 22 Jan 2021 14:42:45 +0100 Subject: Some questions regarding OpenStack Trove Victoria release Message-ID: <3a0f8fa16da56790161fca1e6a50a870@citynetwork.eu> Hello! My name is Bekir Fajkovic and i work at City Network cloud service provider mostly as a DBA but i am also involved in many other kind of activities inside the Company. We at City Network are currently in a process of deployment of OpenStack Trove project (latest official release) and are thus preparing a dedicated region inside our OpenStack hosting environment to host it as a beta version internally, to begin with, where we are going to involve some of our customers  to evaluate the service after, hopefully, successful installation and configuration. 
To be totally honest, while there is a whole lot of a certain kind of documentation provided covering many aspects, some other aspects inside the project do  not seem to be documented in depth or at all (which of course is totally understandable), so i am forced to ask some questions this way and i hope You find  them not to be too annoying to anyone. The questions: ------------------------ Having the fact that Trove Victoria release provides Docker containers as a new way of database instance provisioning, i am wondering how far the project is developed in terms of covering the different types of databases. What i can see by mainly parsing the code provided on Github, those seem to be  officially released: - MySQL - MariaDB - PostgreSQL and the rest of the planned database types are in "experimental" phase. And also, regarding certain types of databases (for example MySQL, version 5.7 and 8.0) only certain  versions of the datastores seems to be supported, but not all. On the other hand, nothing regarding datastore versions supported for MariaDB and PostgreSQL seems to be mentioned somewhere. Could someone please confirm that as well as give some more details about it? I successfully managed to create certain versions of datastores in my devstack environment, belonging to those 3 database types mentioned above (and based on trovestack-generated dev guest image that is by default delivered with devstack installation), but not without some undesirable events. For example, i am able to register  PostgreSQL datastore version 12 and instantiate a database instance of that version but not version 13 and above, where i get some hostname-related errors etc. Also, a question regarding the building of the production-ready guest image. As mentioned, Trovestack script is provided as a possible way of producing the images (by omitting dev option the Trove Guest Agent binaries are deployed into the instantiated VM). How does an image produced this way looks like? From where the base image is fetched, is it a "cloud based image" with cloud-init in it, are the automatic security and software patching features disabled in produced image, so that we do not get unexpected service  interruptions when the OS suddenly decides to start updating itself etc.. Regarding the Trove Guest Agent service - i read in some Trove books previously that there are dedicated agents for each and every database type, is it the same situation in Victoria release, or is there an "universal" Guest Agent covering all the database types nowadays? Where is the code that adapts the Agent commands towards the database  instances placed inside the project? The backups - as i can see there seem to be some kind of dedicated docker-backup images involved in each database type. Could someone explain the internals of backup mechanisms inside Trove Victoria release in more details? So, that would be all for the moment and although You probably consider my questions being as stupid as possible, i still dared to ask them :) I hope You will be able to provide the answers to at least some of them! Thanks in advance! Best Regards Bekir Fajkovic Senior DBA Mobile: +46 70 019 48 47 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Fri Jan 22 19:53:51 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 22 Jan 2021 20:53:51 +0100 Subject: [openstack-dev] [neutron][taas] Proposal of Dashboard for TaaS In-Reply-To: References: Message-ID: <4838976.LsKZaqDJbv@p1> Hi, Dnia piątek, 22 stycznia 2021 13:50:17 CET Madhuwantha priyashan pisze: > Dear sir, > I am an undergraduate student and I am doing a project using OpenStack now. > I need to install tap as a service in OpenStack but I could not find any > guideline for this. It is great full if u can provide guidelines for this > task. > > Thank you The only document which I found is [1]. There is also some Helm documentation at [2] but please keep in mind that this project isn't official OpenStack Neutron stadium project and I'm not sure how well maintained it is really. [1] https://opendev.org/x/tap-as-a-service/src/branch/master/INSTALL.rst [2] https://docs.openstack.org/openstack-helm/latest/install/plugins/deploy-tap-as-a-service-neutron-plugin.html -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Fri Jan 22 19:53:51 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 22 Jan 2021 20:53:51 +0100 Subject: [openstack-dev] [neutron][taas] Proposal of Dashboard for TaaS In-Reply-To: References: Message-ID: <4838976.LsKZaqDJbv@p1> Hi, Dnia piątek, 22 stycznia 2021 13:50:17 CET Madhuwantha priyashan pisze: > Dear sir, > I am an undergraduate student and I am doing a project using OpenStack now. > I need to install tap as a service in OpenStack but I could not find any > guideline for this. It is great full if u can provide guidelines for this > task. > > Thank you The only document which I found is [1]. There is also some Helm documentation at [2] but please keep in mind that this project isn't official OpenStack Neutron stadium project and I'm not sure how well maintained it is really. [1] https://opendev.org/x/tap-as-a-service/src/branch/master/INSTALL.rst [2] https://docs.openstack.org/openstack-helm/latest/install/plugins/deploy-tap-as-a-service-neutron-plugin.html -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From korondi.mark at gmail.com Fri Jan 22 22:03:34 2021 From: korondi.mark at gmail.com (Mark Korondi) Date: Fri, 22 Jan 2021 23:03:34 +0100 Subject: [All][StoryBoard] Angular.js Alternatives In-Reply-To: References: Message-ID: <20210122220334.6oo6vvu2eq5w3hx7@sigurd.localdomain> Hi, Since React.js was listed as a option, I'd also consider Preact[1] which is a smaller / faster react-compatible implementation with browser-native virtual DOM implementation using "htm"[2] which can even eliminate build tooling. Not affiliated, just keeping an eye on it. Cheers, Mark [1]: https://preactjs.com/ [2]: https://github.com/developit/htm On Thu, Jan 21, 2021 at 01:23:25PM -0800, Kendall Nelson wrote: >Hello Everyone! > >The StoryBoard team is looking at alternatives to Angular.js since its >going end of life. 
After some research, we've boiled all the options down >to two possibilities: > >Vue.js > >or > >React.js > >I am diving more deeply into researching those two options this week, but >any opinions or feedback on your experiences with either of them would be >helpful! > >Here is the etherpad with our research so far[3]. > >Feel free to add opinions there or in response to this thread! > >-Kendall Nelson (diablo_rojo) & The StoryBoard Team > >[1] https://vuejs.org/ >[2] https://reactjs.org/ >[3] https://etherpad.opendev.org/p/replace-angularjs-storyboard-research -- This email is signed with the PGP key: 8EA1 89A1 41E6 D1F0 Verify and send me an encrypted email: https://keybase.io/encrypt#kmarc -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ruslanas at lpic.lt Fri Jan 22 22:55:11 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Fri, 22 Jan 2021 23:55:11 +0100 Subject: [tripleo][ussuri][centos8] overcloud deploy fails with no valid host was found, GRUB2 cannot mount with regular user Message-ID: Hi all, I am trying to build some cloud again and failing with [1]. I think this is a most critical line. It fails during GRUB2 deployment on overcloud image, I believe main error message is: mount: only root can do that. I used the following steps to create image: cp -ar /etc/yum.repos.d repos # enable: repos/CentOS-HA.repo sed -i "s/enabled=0/enabled=1/g" repos/CentOS-Linux-HighAvailability.repo sed -i "s/gpgcheck=1/gpgcheck=0/g" repos/*repo export DIB_YUM_REPO_CONF="$(ls $(pwd)/repos/*repo)" export STABLE_RELEASE="ussuri" source /home/stack/stackrc mkdir /home/stack/images cd /home/stack/images openstack overcloud image build --config-file /home/stack/overcloud-images-python3.yaml openstack overcloud image upload --update-existing # openstack overcloud node configure --all-manageable overcloud-images-python3.yaml file content can be found here [2], just added additional packages, such as tcpdump, iperf, iptraf and so on... And steps from link [3] to set root pass to troubleshoot if needed. any thoughts? Maybe there were some changes recently (from 22 of December) PRevious images from there. Thank you for your thoughts and attention. [1] http://paste.openstack.org/show/GDYriAXxQniZTPQGJGbH/ [2] http://paste.openstack.org/show/2tkVFdwSZ0QMjoJKmOrm/ [3] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/partner_integration/index#qcow_setting_the_root_password -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Fri Jan 22 23:46:34 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 22 Jan 2021 15:46:34 -0800 Subject: [tripleo][ussuri][centos8] overcloud deploy fails with no valid host was found, GRUB2 cannot mount with regular user In-Reply-To: References: Message-ID: Umm, wow. That is a new one. Any chance you can post or supply the entire ramdisk log that would have been uploaded to ironic? On Fri, Jan 22, 2021 at 3:02 PM Ruslanas Gžibovskis wrote: > > Hi all, > > I am trying to build some cloud again and failing with [1]. I think this is a most critical line. It fails during GRUB2 deployment on overcloud image, I believe main error message is: mount: only root can do that. 
I used the following steps to create image: > > cp -ar /etc/yum.repos.d repos > # enable: repos/CentOS-HA.repo > sed -i "s/enabled=0/enabled=1/g" repos/CentOS-Linux-HighAvailability.repo > sed -i "s/gpgcheck=1/gpgcheck=0/g" repos/*repo > export DIB_YUM_REPO_CONF="$(ls $(pwd)/repos/*repo)" > export STABLE_RELEASE="ussuri" > source /home/stack/stackrc > mkdir /home/stack/images > cd /home/stack/images > openstack overcloud image build --config-file /home/stack/overcloud-images-python3.yaml > openstack overcloud image upload --update-existing > # openstack overcloud node configure --all-manageable > > overcloud-images-python3.yaml file content can be found here [2], just added additional packages, such as tcpdump, iperf, iptraf and so on... > And steps from link [3] to set root pass to troubleshoot if needed. > > any thoughts? Maybe there were some changes recently (from 22 of December) PRevious images from there. Thank you for your thoughts and attention. > > [1] http://paste.openstack.org/show/GDYriAXxQniZTPQGJGbH/ > [2] http://paste.openstack.org/show/2tkVFdwSZ0QMjoJKmOrm/ > [3] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/partner_integration/index#qcow_setting_the_root_password > > > -- > Ruslanas Gžibovskis > +370 6030 7030 From kennelson11 at gmail.com Sat Jan 23 00:35:33 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 22 Jan 2021 16:35:33 -0800 Subject: [SIG] [all] [TC] [Automation] [Cloud Research] [k8s] [Containers] [Multi Arch] [Packaging] [PowerVMStacker] [Public Cloud] [Resource Management] Updates Required? Message-ID: Hello Everyone! I don't know if any of you have looked at the sigs governance[0] site lately, but there are a number of SIGs that seem to have out of date statuses and/or chairs listed that are no longer active in the community. >From what I can tell, the following list of SIGs could maybe use some updating. - Automation - Cloud Research - Containers - k8s - Multi Arch - Packaging - PowerVMStacker - Public Cloud - Resource Management I have created an etherpad[1] with all the SIGs for more cohesive discussion on changes. Please make comments there! I am happy to push the updates if things need to change; I just need to know what to change it to. Tangentially related, kudos Rico for chairing the most SIGs by far! -Kendall Nelson (diablo_rojo) [0] https://governance.openstack.org/sigs/ [1] https://etherpad.opendev.org/p/2021-SIG-Updates -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Sat Jan 23 13:31:29 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Sat, 23 Jan 2021 14:31:29 +0100 Subject: [tripleo][ussuri][centos8] overcloud deploy fails with no valid host was found, GRUB2 cannot mount with regular user In-Reply-To: References: Message-ID: I have used older images, with a bit less sw, but still, with my modifications, and it works. drwxrwxr-x. 3 stack stack 27 Dec 22 15:51 ironic-python-agent.d -rw-rw-r--. 1 stack stack 514499100 Dec 22 19:07 ironic-python-agent.initramfs -rwxr-xr-x. 1 stack stack 9514120 Dec 22 15:52 ironic-python-agent.kernel -rw-rw-r--. 1 stack stack 511381 Dec 22 15:52 ironic-python-agent.log drwxrwxr-x. 3 stack stack 27 Dec 22 15:39 overcloud-full.d -rw-r--r--. 1 root root 62170282 Dec 22 15:39 overcloud-full.initrd -rw-rw-r--. 1 stack stack 810492 Dec 22 15:43 overcloud-full.log -rw-r--r--. 1 stack stack 1199177728 Dec 22 19:09 overcloud-full.qcow2 -rwxr-xr-x. 
1 root root 9514120 Dec 22 15:39 overcloud-full.vmlinuz after this date, it does not work... On Sat, 23 Jan 2021 at 11:55, Ruslanas Gžibovskis wrote: > hmmmm, I will try to catch it. > > I have setup rsyslog to send logs to my undercloud... > This is what I received from remote side. > > it's rebooting after unsuccessful boot, so I cannot get final version in > one file... > > On Sat, 23 Jan 2021 at 00:46, Julia Kreger > wrote: > >> Umm, wow. That is a new one. >> >> Any chance you can post or supply the entire ramdisk log that would >> have been uploaded to ironic? >> >> On Fri, Jan 22, 2021 at 3:02 PM Ruslanas Gžibovskis >> wrote: >> > >> > Hi all, >> > >> > I am trying to build some cloud again and failing with [1]. I think >> this is a most critical line. It fails during GRUB2 deployment on overcloud >> image, I believe main error message is: mount: only root can do that. I >> used the following steps to create image: >> > >> > cp -ar /etc/yum.repos.d repos >> > # enable: repos/CentOS-HA.repo >> > sed -i "s/enabled=0/enabled=1/g" >> repos/CentOS-Linux-HighAvailability.repo >> > sed -i "s/gpgcheck=1/gpgcheck=0/g" repos/*repo >> > export DIB_YUM_REPO_CONF="$(ls $(pwd)/repos/*repo)" >> > export STABLE_RELEASE="ussuri" >> > source /home/stack/stackrc >> > mkdir /home/stack/images >> > cd /home/stack/images >> > openstack overcloud image build --config-file >> /home/stack/overcloud-images-python3.yaml >> > openstack overcloud image upload --update-existing >> > # openstack overcloud node configure --all-manageable >> > >> > overcloud-images-python3.yaml file content can be found here [2], just >> added additional packages, such as tcpdump, iperf, iptraf and so on... >> > And steps from link [3] to set root pass to troubleshoot if needed. >> > >> > any thoughts? Maybe there were some changes recently (from 22 of >> December) PRevious images from there. Thank you for your thoughts and >> attention. >> > >> > [1] http://paste.openstack.org/show/GDYriAXxQniZTPQGJGbH/ >> > [2] http://paste.openstack.org/show/2tkVFdwSZ0QMjoJKmOrm/ >> > [3] >> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/partner_integration/index#qcow_setting_the_root_password >> > >> > >> > -- >> > Ruslanas Gžibovskis >> > +370 6030 7030 >> > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Sat Jan 23 13:32:51 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Sat, 23 Jan 2021 14:32:51 +0100 Subject: [tripleo][ussuri][centos8] overcloud deploy fails with no valid host was found, GRUB2 cannot mount with regular user In-Reply-To: References: Message-ID: I will deploy essential part of my cloud and will continue with newer image to test On Sat, 23 Jan 2021 at 14:31, Ruslanas Gžibovskis wrote: > I have used older images, with a bit less sw, but still, with my > modifications, and it works. > > drwxrwxr-x. 3 stack stack 27 Dec 22 15:51 ironic-python-agent.d > -rw-rw-r--. 1 stack stack 514499100 Dec 22 19:07 > ironic-python-agent.initramfs > -rwxr-xr-x. 1 stack stack 9514120 Dec 22 15:52 > ironic-python-agent.kernel > -rw-rw-r--. 1 stack stack 511381 Dec 22 15:52 ironic-python-agent.log > drwxrwxr-x. 3 stack stack 27 Dec 22 15:39 overcloud-full.d > -rw-r--r--. 1 root root 62170282 Dec 22 15:39 overcloud-full.initrd > -rw-rw-r--. 1 stack stack 810492 Dec 22 15:43 overcloud-full.log > -rw-r--r--. 
1 stack stack 1199177728 Dec 22 19:09 overcloud-full.qcow2 > -rwxr-xr-x. 1 root root 9514120 Dec 22 15:39 overcloud-full.vmlinuz > > after this date, it does not work... > > On Sat, 23 Jan 2021 at 11:55, Ruslanas Gžibovskis > wrote: > >> hmmmm, I will try to catch it. >> >> I have setup rsyslog to send logs to my undercloud... >> This is what I received from remote side. >> >> it's rebooting after unsuccessful boot, so I cannot get final version in >> one file... >> >> On Sat, 23 Jan 2021 at 00:46, Julia Kreger >> wrote: >> >>> Umm, wow. That is a new one. >>> >>> Any chance you can post or supply the entire ramdisk log that would >>> have been uploaded to ironic? >>> >>> On Fri, Jan 22, 2021 at 3:02 PM Ruslanas Gžibovskis >>> wrote: >>> > >>> > Hi all, >>> > >>> > I am trying to build some cloud again and failing with [1]. I think >>> this is a most critical line. It fails during GRUB2 deployment on overcloud >>> image, I believe main error message is: mount: only root can do that. I >>> used the following steps to create image: >>> > >>> > cp -ar /etc/yum.repos.d repos >>> > # enable: repos/CentOS-HA.repo >>> > sed -i "s/enabled=0/enabled=1/g" >>> repos/CentOS-Linux-HighAvailability.repo >>> > sed -i "s/gpgcheck=1/gpgcheck=0/g" repos/*repo >>> > export DIB_YUM_REPO_CONF="$(ls $(pwd)/repos/*repo)" >>> > export STABLE_RELEASE="ussuri" >>> > source /home/stack/stackrc >>> > mkdir /home/stack/images >>> > cd /home/stack/images >>> > openstack overcloud image build --config-file >>> /home/stack/overcloud-images-python3.yaml >>> > openstack overcloud image upload --update-existing >>> > # openstack overcloud node configure --all-manageable >>> > >>> > overcloud-images-python3.yaml file content can be found here [2], just >>> added additional packages, such as tcpdump, iperf, iptraf and so on... >>> > And steps from link [3] to set root pass to troubleshoot if needed. >>> > >>> > any thoughts? Maybe there were some changes recently (from 22 of >>> December) PRevious images from there. Thank you for your thoughts and >>> attention. >>> > >>> > [1] http://paste.openstack.org/show/GDYriAXxQniZTPQGJGbH/ >>> > [2] http://paste.openstack.org/show/2tkVFdwSZ0QMjoJKmOrm/ >>> > [3] >>> https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/partner_integration/index#qcow_setting_the_root_password >>> > >>> > >>> > -- >>> > Ruslanas Gžibovskis >>> > +370 6030 7030 >>> >> >> >> -- >> Ruslanas Gžibovskis >> +370 6030 7030 >> > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Sat Jan 23 15:55:41 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Sat, 23 Jan 2021 07:55:41 -0800 Subject: [tripleo][ussuri][centos8] overcloud deploy fails with no valid host was found, GRUB2 cannot mount with regular user In-Reply-To: References: Message-ID: So the logs that are needed are actually a tgz file that is uploaded from the running agent which generated the error to the conductor. The agent packages the files, and uploads a single file with it's UUID in the filename to the filesystem. Typically they would be in something like /var/log/containers/ironic/deploy_logs or something along those lines. On Sat, Jan 23, 2021 at 2:55 AM Ruslanas Gžibovskis wrote: > > hmmmm, I will try to catch it. > > I have setup rsyslog to send logs to my undercloud... > This is what I received from remote side. 
> > it's rebooting after unsuccessful boot, so I cannot get final version in one file... > > On Sat, 23 Jan 2021 at 00:46, Julia Kreger wrote: >> >> Umm, wow. That is a new one. >> >> Any chance you can post or supply the entire ramdisk log that would >> have been uploaded to ironic? >> >> On Fri, Jan 22, 2021 at 3:02 PM Ruslanas Gžibovskis wrote: >> > >> > Hi all, >> > >> > I am trying to build some cloud again and failing with [1]. I think this is a most critical line. It fails during GRUB2 deployment on overcloud image, I believe main error message is: mount: only root can do that. I used the following steps to create image: >> > >> > cp -ar /etc/yum.repos.d repos >> > # enable: repos/CentOS-HA.repo >> > sed -i "s/enabled=0/enabled=1/g" repos/CentOS-Linux-HighAvailability.repo >> > sed -i "s/gpgcheck=1/gpgcheck=0/g" repos/*repo >> > export DIB_YUM_REPO_CONF="$(ls $(pwd)/repos/*repo)" >> > export STABLE_RELEASE="ussuri" >> > source /home/stack/stackrc >> > mkdir /home/stack/images >> > cd /home/stack/images >> > openstack overcloud image build --config-file /home/stack/overcloud-images-python3.yaml >> > openstack overcloud image upload --update-existing >> > # openstack overcloud node configure --all-manageable >> > >> > overcloud-images-python3.yaml file content can be found here [2], just added additional packages, such as tcpdump, iperf, iptraf and so on... >> > And steps from link [3] to set root pass to troubleshoot if needed. >> > >> > any thoughts? Maybe there were some changes recently (from 22 of December) PRevious images from there. Thank you for your thoughts and attention. >> > >> > [1] http://paste.openstack.org/show/GDYriAXxQniZTPQGJGbH/ >> > [2] http://paste.openstack.org/show/2tkVFdwSZ0QMjoJKmOrm/ >> > [3] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/partner_integration/index#qcow_setting_the_root_password >> > >> > >> > -- >> > Ruslanas Gžibovskis >> > +370 6030 7030 > > > > -- > Ruslanas Gžibovskis > +370 6030 7030 From ruslanas at lpic.lt Sat Jan 23 10:55:00 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Sat, 23 Jan 2021 11:55:00 +0100 Subject: [tripleo][ussuri][centos8] overcloud deploy fails with no valid host was found, GRUB2 cannot mount with regular user In-Reply-To: References: Message-ID: hmmmm, I will try to catch it. I have setup rsyslog to send logs to my undercloud... This is what I received from remote side. it's rebooting after unsuccessful boot, so I cannot get final version in one file... On Sat, 23 Jan 2021 at 00:46, Julia Kreger wrote: > Umm, wow. That is a new one. > > Any chance you can post or supply the entire ramdisk log that would > have been uploaded to ironic? > > On Fri, Jan 22, 2021 at 3:02 PM Ruslanas Gžibovskis > wrote: > > > > Hi all, > > > > I am trying to build some cloud again and failing with [1]. I think this > is a most critical line. It fails during GRUB2 deployment on overcloud > image, I believe main error message is: mount: only root can do that. 
I > used the following steps to create image: > > > > cp -ar /etc/yum.repos.d repos > > # enable: repos/CentOS-HA.repo > > sed -i "s/enabled=0/enabled=1/g" repos/CentOS-Linux-HighAvailability.repo > > sed -i "s/gpgcheck=1/gpgcheck=0/g" repos/*repo > > export DIB_YUM_REPO_CONF="$(ls $(pwd)/repos/*repo)" > > export STABLE_RELEASE="ussuri" > > source /home/stack/stackrc > > mkdir /home/stack/images > > cd /home/stack/images > > openstack overcloud image build --config-file > /home/stack/overcloud-images-python3.yaml > > openstack overcloud image upload --update-existing > > # openstack overcloud node configure --all-manageable > > > > overcloud-images-python3.yaml file content can be found here [2], just > added additional packages, such as tcpdump, iperf, iptraf and so on... > > And steps from link [3] to set root pass to troubleshoot if needed. > > > > any thoughts? Maybe there were some changes recently (from 22 of > December) PRevious images from there. Thank you for your thoughts and > attention. > > > > [1] http://paste.openstack.org/show/GDYriAXxQniZTPQGJGbH/ > > [2] http://paste.openstack.org/show/2tkVFdwSZ0QMjoJKmOrm/ > > [3] > https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/partner_integration/index#qcow_setting_the_root_password > > > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: all-logs-from-rsyslog.tar.xz Type: application/x-xz Size: 31872 bytes Desc: not available URL: From feilong at catalyst.net.nz Sat Jan 23 17:55:05 2021 From: feilong at catalyst.net.nz (feilong) Date: Sun, 24 Jan 2021 06:55:05 +1300 Subject: [magnum][heat] Rolling system upgrades In-Reply-To: <14a045a4-8705-438d-942f-f11f2d0258b2@www.fastmail.com> References: <14a045a4-8705-438d-942f-f11f2d0258b2@www.fastmail.com> Message-ID: <09e337c8-5fed-40aa-6835-ea5f64d6d943@catalyst.net.nz> Hi Krzysztof, Thanks for raising this topic because I'm planning to do improvements for this area. I would like to help as the original author of this feature. Now let me explain the current situation: 1. The first method is designed to work for both Fedora Atomic and Fedora CoreOS. Though I agree after upgrade, the node image will be remain the old name and ID which will bring troubles for auto healing later. That's the problem I'm trying to fix but it's not easy. As for the new node, I think it's a bug and I think I know how to fix it. Your concern about upgrade from a very old node's OS to a quite new OS version is valid :( 2. It works under conditions. The node should be image based instead of volume based, because AFAIK, Nova still doesn't support volume based instance rebuild. Did you try this with image based nodes? As for the drain part, it's because we would like to achieve a zero-downtime upgrade (at least it's my goal for this), so each node will be drained before upgrading. However, I didn't see a way to manage the orchestration to call a k8s drain before doing the rebuild of the node, because it's out of the control of Magnum. Heat is a like a black box at this stage. Also, even if we can have a chance to call k8s drain to drain the node, it's impossible to do that if the cluster is a private cluster. Private cluster means Magnum control plane cannot reach the k8s API. Again, thank you raising this and I'm happy to help to address it. 
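For reference, the label-driven upgrade path described above is exercised by registering a new cluster template and pointing the existing cluster at it through the upgrade API. A minimal CLI sketch, where the template name, image and label values are purely illustrative and not a statement of what is supported:

# register a template carrying the new kube_tag / ostree_commit labels (values are placeholders)
openstack coe cluster template create k8s-upgrade-template \
  --coe kubernetes \
  --image fedora-coreos-32 \
  --external-network public \
  --labels kube_tag=v1.18.2,ostree_commit=<commit-hash>

# roll the existing cluster onto the new template
openstack coe cluster upgrade mycluster k8s-upgrade-template

The second method would instead change --image in the new template, which is where the rebuild and drain concerns discussed below come in.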
On 22/01/21 10:03 pm, Krzysztof Klimonda wrote: > Hi, > > While testing magnum, a problem of upgrades came up - while work has been done to make kubernetes upgrades without interruption, operating system upgrades seem to be handled only partially. > > According to the documentation, two ways of upgrading system are available: > - via specifying ostree_commit or ostree_remote labels in the cluster template used for upgrade > - via specifying a new image in the cluster template used for upgrade > > The first one is specific to Fedora Atomic (and, while probably untested, seems to be mostly working with Fedora CoreOS) but it has some drawbacks. Firstly, due to base image staying the same we require this image for the life of the cluster, even if OS has already been upgraded. Secondly, using this method only upgrades existing instances and new instances (spawned via scaling cluster up) will not be upgraded. Thirdly, even if that is fixed I'm worried that at some point upgrading from old base image to some future ostree snapshot will fail (there is also cost associated with diff growing with each release). > > The second method, of specifying a new image in the cluster template used for upgrade, comes with an ugly warning about nodes not being drained properly before server rebuild (and it actually doesn't seem to be working anyway as the new image parameter is not being passed to the heat template on upgrade). This does however seem like a more valid approach in general. > > I'm not that familar with Heat, and the documentation of various OS::Heat::Software* resources seems inconclusive, but is there no way of executing some code before instance is rebuilt? If not, how are other projects and users handling this in general? > -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ From eandersson at blizzard.com Sat Jan 23 23:21:24 2021 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Sat, 23 Jan 2021 23:21:24 +0000 Subject: [review] oslo.messaging patterns in various projects Message-ID: I was helping troubleshoot an issue with Magnum recently and found out that there was a minor issue with how they handle RPC connections, and decided to do an audit of all OpenStack projects and found a bunch that are likely to be experiencing similar issues. The gist of the issue is that most of these projects creates a new rpc transport on every API call. If someone has some time over to help review these fixes, I would be grateful. https://review.opendev.org/c/openstack/magnum/+/770707 - Re-use transport for rpc calls https://review.opendev.org/c/openstack/magnum/+/770720 - Re-use transport for rpc server https://review.opendev.org/c/openstack/blazar/+/771110 - Re-use rpc transport https://review.opendev.org/c/openstack/solum/+/771111 - Re-use rpc transport https://review.opendev.org/c/openstack/murano/+/771341 - Use common rpc pattern for all services https://review.opendev.org/c/openstack/watcher/+/771381 - Use common rpc pattern for all services I tried to use a standardized approach with all of these, but feel free to add any feedback. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ruslanas at lpic.lt Sun Jan 24 08:00:40 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Sun, 24 Jan 2021 10:00:40 +0200 Subject: [tripleo][ussuri][centos8] overcloud deploy fails with no valid host was found, GRUB2 cannot mount with regular user In-Reply-To: References: Message-ID: Should i try keep only ironic-python-agent-ramdisk ? In my overcloud-image yaml? On Sat, 23 Jan 2021, 23:28 Julia Kreger, wrote: > I'm not so sure about that. Note the different > "ironic-python-agent-ramdisk" versus "ironic-agent" in > > https://github.com/openstack/tripleo-common/blob/stable/ussuri/image-yaml/overcloud-images-python3.yaml > > On Fri, Jan 22, 2021 at 3:02 PM Ruslanas Gžibovskis > wrote: > > > > Hi all, > > > > I am trying to build some cloud again and failing with [1]. I think this > is a most critical line. It fails during GRUB2 deployment on overcloud > image, I believe main error message is: mount: only root can do that. I > used the following steps to create image: > > > > cp -ar /etc/yum.repos.d repos > > # enable: repos/CentOS-HA.repo > > sed -i "s/enabled=0/enabled=1/g" repos/CentOS-Linux-HighAvailability.repo > > sed -i "s/gpgcheck=1/gpgcheck=0/g" repos/*repo > > export DIB_YUM_REPO_CONF="$(ls $(pwd)/repos/*repo)" > > export STABLE_RELEASE="ussuri" > > source /home/stack/stackrc > > mkdir /home/stack/images > > cd /home/stack/images > > openstack overcloud image build --config-file > /home/stack/overcloud-images-python3.yaml > > openstack overcloud image upload --update-existing > > # openstack overcloud node configure --all-manageable > > > > overcloud-images-python3.yaml file content can be found here [2], just > added additional packages, such as tcpdump, iperf, iptraf and so on... > > And steps from link [3] to set root pass to troubleshoot if needed. > > > > any thoughts? Maybe there were some changes recently (from 22 of > December) PRevious images from there. Thank you for your thoughts and > attention. > > > > [1] http://paste.openstack.org/show/GDYriAXxQniZTPQGJGbH/ > > [2] http://paste.openstack.org/show/2tkVFdwSZ0QMjoJKmOrm/ > > [3] > https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html-single/partner_integration/index#qcow_setting_the_root_password > > > > > > -- > > Ruslanas Gžibovskis > > +370 6030 7030 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Sun Jan 24 11:02:39 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Mon, 25 Jan 2021 00:02:39 +1300 Subject: Some questions regarding OpenStack Trove Victoria release In-Reply-To: <3a0f8fa16da56790161fca1e6a50a870@citynetwork.eu> References: <3a0f8fa16da56790161fca1e6a50a870@citynetwork.eu> Message-ID: Hi Bekir, I'm very happy to answer your questions. See me comments in line. On Sat, Jan 23, 2021 at 8:24 AM Bekir Fajkovic < bekir.fajkovic at citynetwork.eu> wrote: > > The questions: > ------------------------ > > Having the fact that Trove Victoria release provides Docker containers as > a new way of database instance provisioning, i am wondering how far the > project > is developed in terms of covering the different types of databases. What i > can see by mainly parsing the code provided on Github, those seem to be > officially released: > > - MySQL > - MariaDB > - PostgreSQL > > and the rest of the planned database types are in "experimental" phase. 
> And also, regarding certain types of databases (for example MySQL, version > 5.7 and 8.0) only certain > versions of the datastores seems to be supported, but not all. > MySQL 5.7.x is supported but 8.0 is in experimental. > On the other hand, nothing regarding datastore versions supported for > MariaDB and PostgreSQL seems to be > mentioned somewhere. Could someone please confirm that as well as give > some more details about it? > MariaDB 10.4.x and PostgreSQL 12.4 are supported. The other versions need to be fully tested. > I successfully managed to create certain versions of datastores in my > devstack environment, belonging to those 3 database types mentioned above > (and based on > trovestack-generated dev guest image that is by default delivered with > devstack installation), but not without some undesirable events. For > example, i am able to register > PostgreSQL datastore version 12 and instantiate a database instance of > that version but not version 13 and above, where i get some > hostname-related errors etc. > Yes, because PostgreSQL 13 has never been tested. > Also, a question regarding the building of the production-ready guest > image. As mentioned, Trovestack script is provided as a possible way of > producing the images (by omitting > dev option the Trove Guest Agent binaries are deployed into the > instantiated VM). How does an image produced this way looks like? From > where the base image is fetched, > is it a "cloud based image" with cloud-init in it, are the automatic > security and software patching features disabled in produced image, so that > we do not get unexpected service > interruptions when the OS suddenly decides to start updating itself etc.. > If you look at trovestack script implementation, you can see it's calling disk-image-create script from diskimage-builder, and there are some elements[2] defined in trove repo[3] for building the image. [1]: https://docs.openstack.org/diskimage-builder/latest [2]: https://docs.openstack.org/diskimage-builder/latest/elements.html [3]: https://github.com/openstack/trove/tree/master/integration/scripts/files/elements > Regarding the Trove Guest Agent service - i read in some Trove books > previously that there are dedicated agents for each and every database > type, is it the same situation > in Victoria release, or is there an "universal" Guest Agent covering all > the database types nowadays? Where is the code that adapts the Agent > commands towards the database > instances placed inside the project? > Trove never has dedicated agents for each and every database type, it's using the same trove-guestagent but with different configurations for different datastores. > The backups - as i can see there seem to be some kind of dedicated > docker-backup images involved in each database type. Could someone explain > the internals of backup mechanisms > inside Trove Victoria release in more details? > The backup container image is created to help trove-guestagent to achieve datastore-agnostic (so we only need to maintain a universal guest image), we shift the backup and restore functionalities and needed tools to a dedicated container. The implementation can be found here[4]. [4]: https://github.com/openstack/trove/tree/master/backup --- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz -------------- next part -------------- An HTML attachment was scrubbed... 
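As a concrete illustration of the datastore version registration discussed in this thread, a version is normally registered with trove-manage after the guest image has been uploaded to Glance. The names, version numbers and image ID below are placeholders, and the exact argument order can vary between releases, so treat this as a sketch rather than a recipe:

# register the datastore, then a version pointing at the uploaded guest image
trove-manage datastore_update mysql ''
trove-manage datastore_version_update mysql 5.7.29 mysql <glance-image-id> '' 1
# make the new version the default for the datastore
trove-manage datastore_update mysql 5.7.29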
URL: From radoslaw.piliszek at gmail.com Sun Jan 24 14:43:39 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 24 Jan 2021 15:43:39 +0100 Subject: pip 21 heads-up (Python 2 support gone) Message-ID: Hello Fellow OpenStackers, Please notice pip 21 was released just yesterday. It means Python 2 is no longer supported. Upgrading pip to latest on Python 2 may render pip unusable (as ancient pip may not consider 21 an unsupported version and install it nonetheless). Similarly, using default get-pip.py on Python 2 no longer works. There is, however, https://bootstrap.pypa.io/2.7/get-pip.py for those that want to use get-pip.py on Python 2. Kind regards, -yoctozepto From Richard.Pioso at dell.com Mon Jan 25 05:59:10 2021 From: Richard.Pioso at dell.com (Pioso, Richard) Date: Mon, 25 Jan 2021 05:59:10 +0000 Subject: [ironic] On redfish-virtual-media vs idrac-redfish-virtual-media In-Reply-To: References: Message-ID: On Wed, Jan 20, 2021 at 9:56 AM Dmitry Tantsur wrote: > > Hi all, > > Now that we've gained some experience with using Redfish virtual > media I'd like to reopen the discussion about $subj. For the context, the > idrac-redfish-virtual-media boot interface appeared because Dell > machines need an additional action [1] to boot from virtual media. The > initial position on hardware interfaces was that anything requiring OEM > actions must go into a vendor hardware interface. I would like to propose > relaxing this (likely unwritten) rule. > > You see, this distinction causes a lot of confusion. Ironic supports > Redfish, ironic supports iDRAC, iDRAC supports Redfish, ironic supports > virtual media, Redfish supports virtual media, iDRAC supports virtual > media. BUT! You cannot use redfish-virtual-media with iDRAC. Just today > I had to explain the cause of it to a few people. It required diving into > how exactly Redfish works and how exactly ironic uses it, which is > something we want to protect our users from. Wow! Now I’m confused, too. AFAIU, the people you had to help decided to use the redfish driver, instead of the idrac driver. It is puzzling that they decided to do that considering the ironic driver composition reform [1] of a couple years ago. Recall that reform allows “having one vendor driver with options configurable per node instead of many drivers for every vendor” and had the following goals. “- Make vendors in charge of defining a set of supported interface implementations in priority order. - Allow vendors to guarantee that unsupported interface implementations will not be used with hardware types they define. This is done by having a hardware type list all interfaces it supports.” The idrac driver is Dell Technologies’ vendor driver for systems with an iDRAC. It offers a one-stop shop for using ironic to manage its systems. Users can select among the hardware interfaces it supports. Each interface uses a single management protocol -- Redfish, WS-Man, and soon IPMI [2] -- to communicate with the BMC. While it supports the idrac-redfish-virtual-media boot interface, it does not support redfish-virtual-media. One cannot configure a node with the idrac driver to use redfish-virtual-media. > > We already have a precedent [2] of adding vendor-specific handling to > a generic driver. That change was introduced about a month ago in the community’s vendor-independent ipmi driver. That was very understandable, since IPMI is a very mature management protocol and was introduced over 22 years ago. 
I cannot remember what I was doing back then :) As one would expect, the ipmi driver has experienced very little change over the past two-plus years. I count roughly two (2) substantive changes over that period. By contrast, the Redfish protocol is just over five (5) years old. Its vendor-independent driver, redfish, has been fertile ground for adding new, advanced features, such as BIOS settings configuration, firmware update, and RAID configuration, and fixing bugs. It fosters lots of change, too many for me to count. > I have proposed a patch [3] to block using redfish- > virtual-media for Dell hardware, but I grew to dislike this approach. It > does not have precedents in the ironic code base and it won't scale well > if we have to handle vendor differences for vendors that don't have > ironic drivers. Dell understands and is on board with ironic’s desire that vendors support the full functionality offered by the vendor-independent redfish driver. If the iDRAC is broken with regards to redfish-virtual-media, then we have a vested interest in fixing it. While that is worked, an alternative approach could be for our community to strengthen its promotion of the goals of the driver composition reform. That would leverage ironic’s long-standing ability to ensure people only use hardware interfaces which the vendor and its driver support. > > Based on all this I suggest relaxing the rule to the following: if a feature > supported by a generic hardware interface requires additional actions or > has a minor deviation from the standard, allow handling it in the generic > hardware interface. Meaning, redfish-virtual-media starts handling the > Dell case by checking the System manufacturer (via the recently added > detect_vendor call) and loading the OEM code if it matches "Dell". After > this idrac-redfish-virtual-media will stay empty (for future enhancements > and to make the patch backportable). That would cause the vendor-independent redfish driver to become dependent on sushy-oem-idrac, which is not under ironic governance. It is worth pointing out the sushy-oem-idrac library is necessary to get virtual media to work with Dell systems. It was first created for that purpose. It is not a workaround like those in sushy, which accommodate common, minor standards interpretation and implementation differences across vendors by sprinkling a bit of code here and there within the library, unbeknownst to ironic proper. We at Dell Technologies are concerned that the proposed rule change would result in a greater code review load on the ironic community. Since vendor-specific code would be in the generic hardware interface, much more care, eyes, and integration testing against physical hardware would be needed to ensure it does not break others. And our community is already concerned about its limited available review bandwidth [3]. Generally speaking, the vendor third-party CIs do not cover all drivers. Rather, each vendor only tests its own driver, and, in some cases, sushy. Therefore, changes to the vendor-independent redfish driver may introduce regressions in what has been working with various hardware and not be detected by automated testing before being merged. Can we afford this additional review load, prospective slowing down of innovation with Redfish, and likely undetected regressions? Would that be best for our users when we could fix the problem in other ways, such as the one suggested above? 
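To make the supported pairing concrete: under the driver composition model an operator opts into the vendor hardware type per node, and ironic refuses combinations the vendor does not list. A rough sketch, with an illustrative node name, assuming the conductor already enables the idrac hardware type and its interfaces via enabled_hardware_types and enabled_boot_interfaces in ironic.conf:

openstack baremetal node set dell-node-01 \
  --driver idrac \
  --boot-interface idrac-redfish-virtual-media
openstack baremetal node validate dell-node-01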
Also consider that feedback to the DMTF to drive vendor consistency is critical, but the DMTF needs feedback on what is broken in order to push others to address a problem. Remember the one-time boot debacle when three vendors broke at the same time? Once folks went screaming to the DMTF about the issue, it quickly explained it to member companies, clarified the standard, and created a test case for that condition. Changing the driver model to accommodate everyone's variations will reduce that communication back to the DMTF, meaning the standard stalls and interoperability does not gain traction. > > Thoughts? > TL;DR, we strongly recommend ironic not make this rule change. Clearly communicating users should use the vendor driver should simplify their experience and eliminate the confusion. The code as-is is factored well as a result of the 21st century approach the community has taken to date. Vendors can implement the driver OEM changes they need to accommodate their unique hardware and BMC requirements, with reduced concern about the risk of breaking other drivers or ironic itself. Ironic’s driver composition reform, sushy, and sushy’s OEM extension mechanism support that modern approach. Our goal is to continue to improve the iDRAC Redfish service’s compliance with the standard and eliminate the kind of OEM code Dmitry identified. Beware of unintended consequences, including - reduced quality, - slowed feature and bug fix velocity, - stalled DMTF Redfish standard, - lost Redfish interoperability traction, and - increased code review load. > Dmitry > > [1] > https://opendev.org/openstack/ironic/src/commit/6ea73bdfbb53486cf9 > 905d21024d16cbf5829b2c/ironic/drivers/modules/drac/boot.py#L130 > [2] https://review.opendev.org/c/openstack/ironic/+/757198/ > [3] https://review.opendev.org/c/openstack/ironic/+/771619 > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, > Michael O'Neill I welcome your feedback. Rick [1] https://opendev.org/openstack/ironic-specs/src/branch/master/specs/approved/driver-composition-reform.rst [2] https://storyboard.openstack.org/#!/story/2008528 [3] https://etherpad.opendev.org/p/ironic-wallaby-midcycle From katonalala at gmail.com Mon Jan 25 09:09:44 2021 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 25 Jan 2021 10:09:44 +0100 Subject: [openstack-dev] [neutron][taas] Proposal of Dashboard for TaaS In-Reply-To: References: Message-ID: Hi, You can use taas with devstack (see [1]). The project now lacks maintainers, but basically works, if you have any questions don't hesitate to ask :-) [1] https://opendev.org/x/tap-as-a-service/src/branch/master/devstack/README.rst Regards Lajos Madhuwantha priyashan ezt írta (időpont: 2021. jan. 22., P, 20:22): > Dear sir, > I am an undergraduate student and I am doing a project using OpenStack > now. I need to install tap as a service in OpenStack but I could not find > any guideline for this. It is great full if u can provide guidelines for > this task. > > Thank you > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Jan 25 10:23:50 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 25 Jan 2021 11:23:50 +0100 Subject: [largescale-sig] Next meeting: January 27, 15utc Message-ID: Hi everyone, Our next Large Scale SIG meeting will be this Wednesday in #openstack-meeting-3 on IRC, at 15UTC. 
You can doublecheck how it translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210127T15 Our main topic will be to review progress and blockers on improving the documentation of various stages of our Scaling journey[1]. [1] https://wiki.openstack.org/wiki/Large_Scale_SIG Feel free to add other topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting Talk to you all later, -- Thierry Carrez From C-Albert.Braden at charter.com Mon Jan 25 13:34:10 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Mon, 25 Jan 2021 13:34:10 +0000 Subject: [kolla] Keycloak "More than one user" error Message-ID: <163486103a7a4cbc9cee3a64996dd619@ncwmexgp009.CORP.CHARTERCOM.com> We're running Train on Centos 7, and using Keycloak for auth. After I setup Keycloak, create a user in Keycloak, and then login to Horizon via Keycloak, a user is created in Keystone: | ccb276f4f507fd9f271d629d2ad896d2c97e04f81336cd8c1332f4b2df115ca2 | test | If I try to address that user by name, I get an error: (openstack) [root at chrnc-area51-build-01 our-ok-kolla-ansible]# os user show test More than one user exists with the name 'test'. I can address it by id. When I list users, I only see one "test" user." (openstack) [root at chrnc-area51-build-01 our-ok-kolla-ansible]# os user show ccb276f4f507fd9f271d629d2ad896d2c97e04f81336cd8c1332f4b2df115ca2 +---------------------+------------------------------------------------------------------+ | Field | Value | +---------------------+------------------------------------------------------------------+ | domain_id | 4678301ef9a24d54bcd2e87a8fbc6872 | | email | test at example.com | If I create a second user in Keycloak and login the same way, this doesn't happen: (openstack) [root at chrnc-area51-build-01 our-ok-kolla-ansible]# os user show test2 +---------------------+------------------------------------------------------------------+ | Field | Value | +---------------------+------------------------------------------------------------------+ | domain_id | 4678301ef9a24d54bcd2e87a8fbc6872 | | email | test2 at example.com | These 2 users look identical in the database: user: | ccb276f4f507fd9f271d629d2ad896d2c97e04f81336cd8c1332f4b2df115ca2 | {"email": "test at example.com"} | 1 | NULL | 2021-01-22 18:33:20 | NULL | 4678301ef9a24d54bcd2e87a8fbc6872 | | f4287b6082b8f36048d052eaa3d35facb94e5eff598d59d2aee68252ddb13339 | {"email": "test2 at example.com"} | 1 | NULL | 2021-01-22 21:01:54 | NULL | 4678301ef9a24d54bcd2e87a8fbc6872 | federated_user: | 6 | ccb276f4f507fd9f271d629d2ad896d2c97e04f81336cd8c1332f4b2df115ca2 | keycloak | openid | test | test | | 9 | f4287b6082b8f36048d052eaa3d35facb94e5eff598d59d2aee68252ddb13339 | keycloak | openid | test2 | test2 | Where should I be looking for the cause of this error? I apologize for the nonsense below. So far I have not been able to stop it from being attached to my external emails. I'm working on it. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. 
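One hedged starting point for that lookup error: user names in keystone are unique only within a domain, so the client refuses to resolve a bare name that matches users in more than one domain. Scoping the CLI calls to a domain (the federated domain name below is a placeholder) at least shows where any duplicates live and lets the user be addressed by name again:

openstack user list --domain default | grep test
openstack user list --domain <keycloak-mapped-domain> | grep test
# address the user unambiguously by name within one domain
openstack user show test --domain <keycloak-mapped-domain>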
-------------- next part -------------- An HTML attachment was scrubbed... URL: From bence.romsics at gmail.com Mon Jan 25 13:59:57 2021 From: bence.romsics at gmail.com (Bence Romsics) Date: Mon, 25 Jan 2021 14:59:57 +0100 Subject: [neutron] bug deputy report for week of 2021-01-18 Message-ID: Hi Neutron Team, Last week's new bugs: High: * https://bugs.launchpad.net/neutron/+bug/1912359 [OVN] Add support for WSGI mod Lack of feature parity in OVN * https://bugs.launchpad.net/neutron/+bug/1912369 [FT] "test_gateway_chassis_rebalance" failing because lrp is not bound Rare gate failure, assigned to Rodolfo. * https://bugs.launchpad.net/neutron/+bug/1912450 flows on br-int wasn't been deleted completely Fix in progress: https://review.opendev.org/c/openstack/neutron/+/771903 * https://bugs.launchpad.net/neutron/+bug/1912779 [ovn-octavia-provider]: batch update fails when members to remove is empty Fix in progress: https://review.opendev.org/c/openstack/ovn-octavia-provider/+/771971 Medium: * https://bugs.launchpad.net/neutron/+bug/1912320 TestTimer breaks VPNaaS functional tests Fix merged: https://review.opendev.org/c/openstack/neutron/+/771436 * https://bugs.launchpad.net/neutron/+bug/1912596 neutron-server report 500 error when update floating ip port forwarding Fix in progress: https://review.opendev.org/c/openstack/neutron/+/771776 * https://bugs.launchpad.net/neutron/+bug/1912651 ovs flows are not readded when ovsdb-server/ovs-vswitchd are restarted. Unassigned Low: * https://bugs.launchpad.net/neutron/+bug/1912948 Missing option for OSP16.2 OVN migration RFE: * https://bugs.launchpad.net/neutron/+bug/1912460 [RFE] [QoS] add qos rule type packet per second (pps) * https://bugs.launchpad.net/neutron/+bug/1912672 [RFE] Enable set quota per floating-ips pool. Incomplete: * https://bugs.launchpad.net/neutron/+bug/1912379 Neutron causes systemd to hang on Linux guests with SELinux disabled * https://bugs.launchpad.net/neutron/+bug/1912513 Port creation fails with error IP already allocated but the IP is available Cheers, Bence From dtantsur at redhat.com Mon Jan 25 14:52:35 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 25 Jan 2021 15:52:35 +0100 Subject: [ironic] On redfish-virtual-media vs idrac-redfish-virtual-media In-Reply-To: References: Message-ID: Hi, On Mon, Jan 25, 2021 at 7:04 AM Pioso, Richard wrote: > On Wed, Jan 20, 2021 at 9:56 AM Dmitry Tantsur > wrote: > > > > Hi all, > > > > Now that we've gained some experience with using Redfish virtual > > media I'd like to reopen the discussion about $subj. For the context, the > > idrac-redfish-virtual-media boot interface appeared because Dell > > machines need an additional action [1] to boot from virtual media. The > > initial position on hardware interfaces was that anything requiring OEM > > actions must go into a vendor hardware interface. I would like to propose > > relaxing this (likely unwritten) rule. > > > > You see, this distinction causes a lot of confusion. Ironic supports > > Redfish, ironic supports iDRAC, iDRAC supports Redfish, ironic supports > > virtual media, Redfish supports virtual media, iDRAC supports virtual > > media. BUT! You cannot use redfish-virtual-media with iDRAC. Just today > > I had to explain the cause of it to a few people. It required diving into > > how exactly Redfish works and how exactly ironic uses it, which is > > something we want to protect our users from. > > Wow! Now I’m confused, too. 
AFAIU, the people you had to help decided to > use the redfish driver, instead of the idrac driver. It is puzzling that > they decided to do that considering the ironic driver composition reform > [1] of a couple years ago. Recall that reform allows “having one vendor > driver with options configurable per node instead of many drivers for every > vendor” and had the following goals. > When discussing the user's confusion we should not operate in terms of ironic (especially since the problem happened in metal3 land, which abstracts away ironic). As a user, when I see Redfish and virtual media, and I know that Dell supports them, I can expect redfish-virtual-media to work. The fact that it does not may cause serious perception problems. The one I'm particularly afraid of is end users thinking "iDRAC is not Redfish compliant". > > “- Make vendors in charge of defining a set of supported interface > implementations in priority order. > - Allow vendors to guarantee that unsupported interface implementations > will not be used with hardware types they define. This is done by having a > hardware type list all interfaces it supports.” > > The idrac driver is Dell Technologies’ vendor driver for systems with an > iDRAC. It offers a one-stop shop for using ironic to manage its systems. > Users can select among the hardware interfaces it supports. Each interface > uses a single management protocol -- Redfish, WS-Man, and soon IPMI [2] -- > to communicate with the BMC. While it supports the > idrac-redfish-virtual-media boot interface, it does not support > redfish-virtual-media. One cannot configure a node with the idrac driver to > use redfish-virtual-media. > I know, the problem is explaining to users why they can use the redfish hardware type with Dell machines, but only partly. > > > > > We already have a precedent [2] of adding vendor-specific handling to > > a generic driver. > > That change was introduced about a month ago in the community’s > vendor-independent ipmi driver. That was very understandable, since IPMI is > a very mature management protocol and was introduced over 22 years ago. I > cannot remember what I was doing back then :) As one would expect, the ipmi > driver has experienced very little change over the past two-plus years. I > count roughly two (2) substantive changes over that period. By contrast, > the Redfish protocol is just over five (5) years old. Its > vendor-independent driver, redfish, has been fertile ground for adding new, > advanced features, such as BIOS settings configuration, firmware update, > and RAID configuration, and fixing bugs. It fosters lots of change, too > many for me to count. > > > I have proposed a patch [3] to block using redfish- > > virtual-media for Dell hardware, but I grew to dislike this approach. It > > does not have precedents in the ironic code base and it won't scale well > > if we have to handle vendor differences for vendors that don't have > > ironic drivers. > > Dell understands and is on board with ironic’s desire that vendors support > the full functionality offered by the vendor-independent redfish driver. If > the iDRAC is broken with regards to redfish-virtual-media, then we have a > vested interest in fixing it. > While that is worked, an alternative approach could be for our community > to strengthen its promotion of the goals of the driver composition reform. > That would leverage ironic’s long-standing ability to ensure people only > use hardware interfaces which the vendor and its driver support. > Yep. 
I don't necessarily disagree with that, but it poses issues for layered products like metal3, where on each abstraction level a small nuance is lost, and the end result is confusion and frustration. > > > > > Based on all this I suggest relaxing the rule to the following: if a > feature > > supported by a generic hardware interface requires additional actions or > > has a minor deviation from the standard, allow handling it in the generic > > hardware interface. Meaning, redfish-virtual-media starts handling the > > Dell case by checking the System manufacturer (via the recently added > > detect_vendor call) and loading the OEM code if it matches "Dell". After > > this idrac-redfish-virtual-media will stay empty (for future enhancements > > and to make the patch backportable). > > That would cause the vendor-independent redfish driver to become dependent > on sushy-oem-idrac, which is not under ironic governance. > This itself is not a problem, most of the projects we depend on are not under ironic governance. Also it won't be a hard dependency, only if we detect 'Dell' in system.manufacturer. > > It is worth pointing out the sushy-oem-idrac library is necessary to get > virtual media to work with Dell systems. It was first created for that > purpose. It is not a workaround like those in sushy, which accommodate > common, minor standards interpretation and implementation differences > across vendors by sprinkling a bit of code here and there within the > library, unbeknownst to ironic proper. > > We at Dell Technologies are concerned that the proposed rule change would > result in a greater code review load on the ironic community. Since > vendor-specific code would be in the generic hardware interface, much more > care, eyes, and integration testing against physical hardware would be > needed to ensure it does not break others. And our community is already > concerned about its limited available review bandwidth [3]. Generally > speaking, the vendor third-party CIs do not cover all drivers. Rather, each > vendor only tests its own driver, and, in some cases, sushy. Therefore, > changes to the vendor-independent redfish driver may introduce regressions > in what has been working with various hardware and not be detected by > automated testing before being merged. > The change will, in fact, be tested by your 3rd party CI because it was used by both the generic redfish hardware type and the idrac one. I guess a source of confusion may be this: I don't suggest the idrac hardware type goes away, nor do I suggest we start copying its Dell-specific features to redfish. > > Can we afford this additional review load, prospective slowing down of > innovation with Redfish, and likely undetected regressions? Would that be > best for our users when we could fix the problem in other ways, such as the > one suggested above? > > Also consider that feedback to the DMTF to drive vendor consistency is > critical, but the DMTF needs feedback on what is broken in order to push > others to address a problem. Remember the one-time boot debacle when three > vendors broke at the same time? Once folks went screaming to the DMTF about > the issue, it quickly explained it to member companies, clarified the > standard, and created a test case for that condition. Changing the driver > model to accommodate everyone's variations will reduce that communication > back to the DMTF, meaning the standard stalls and interoperability does not > gain traction. 
> I would welcome somebody raising to DMTF the issue that causes iDRAC to need another action to boot from virtual media, I suspect other vendors may have similar issues. That being said, our users are way too far away from DMTF, and even we (Julia and myself, for example) don't have a direct way of influencing it, only through you and other folks who help (thank you!). > > > > > Thoughts? > > > > TL;DR, we strongly recommend ironic not make this rule change. Clearly > communicating users should use the vendor driver should simplify their > experience and eliminate the confusion. > > The code as-is is factored well as a result of the 21st century approach > the community has taken to date. Vendors can implement the driver OEM > changes they need to accommodate their unique hardware and BMC > requirements, with reduced concern about the risk of breaking other drivers > or ironic itself. Ironic’s driver composition reform, sushy, and sushy’s > OEM extension mechanism support that modern approach. Our goal is to > continue to improve the iDRAC Redfish service’s compliance with the > standard and eliminate the kind of OEM code Dmitry identified. > > Beware of unintended consequences, including > > - reduced quality, > - slowed feature and bug fix velocity, > I don't see how this happens, given that the code is merely copied from one place to the other (with the 1st place inheriting it from its base class). > - stalled DMTF Redfish standard, > - lost Redfish interoperability traction, and > I'm afraid we're actually hurting Redfish adoption when we start complicating its usage. Think, with IPMI everything "Just Works" (except when it does not, but that happens much later), while for Redfish the users need to be aware of... flavors of Redfish? Something that we (and DMTF) don't even have a name for. Dmitry > - increased code review load. > > > Dmitry > > > > [1] > > https://opendev.org/openstack/ironic/src/commit/6ea73bdfbb53486cf9 > > 905d21024d16cbf5829b2c/ironic/drivers/modules/drac/boot.py#L130 > > [2] https://review.opendev.org/c/openstack/ironic/+/757198/ > > [3] https://review.opendev.org/c/openstack/ironic/+/771619 > > > > -- > > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > > Commercial register: Amtsgericht Muenchen, HRB 153243, > > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, > > Michael O'Neill > > I welcome your feedback. > > Rick > > [1] > https://opendev.org/openstack/ironic-specs/src/branch/master/specs/approved/driver-composition-reform.rst > [2] https://storyboard.openstack.org/#!/story/2008528 > [3] https://etherpad.opendev.org/p/ironic-wallaby-midcycle > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Jan 25 17:40:59 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 25 Jan 2021 18:40:59 +0100 Subject: [all] Proposed Xena cycle schedule Message-ID: Hey everyone, The Wallaby cycle is going by fast, and it's already time to start planning some of the early things for the Xena release. One of the first steps for that is actually deciding on the release schedule. Typically we have done this based on when the next Summit event was planned to take place. Due to several reasons, we don't have a date yet for the second 2021 event. 
The current thinking is it will likely take place in October (nothing is set, just an educated guess, so please don't use that for any other planning). So, for the sake of figuring out the release schedule, we are proposing two schedules: one targeting a release date in mid-September, and one targeting a release date in early October. Hopefully this will then align well with event plans.

I have two proposed release schedules up for review here:

- https://review.opendev.org/c/openstack/releases/+/772367 (23w)
- https://review.opendev.org/c/openstack/releases/+/772357 (25w)

Please feel free to comment on the patches if you see any major issues that we may not have considered.

Thanks!

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
-----BEGIN PGP SIGNATURE-----

wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+
Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+
RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP
F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G
5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g
glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw
m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ
hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0
qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y
F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3
B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O
v6rDpkeNksZ9fFSyoY2o
=ECSj
-----END PGP SIGNATURE-----
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mateigoga at gmail.com Mon Jan 25 06:20:10 2021
From: mateigoga at gmail.com (Matei Goga)
Date: Mon, 25 Jan 2021 01:20:10 -0500
Subject: Syntribos
Message-ID:

Hi,

I was wondering if there is a replacement for Syntribos, or some equivalent project, that could be used to pentest an OpenStack deployment.

Much thanks!
~ Matei
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From madhuwanthapriyashan12 at gmail.com Mon Jan 25 09:27:34 2021
From: madhuwanthapriyashan12 at gmail.com (Madhuwantha priyashan)
Date: Mon, 25 Jan 2021 14:57:34 +0530
Subject: [openstack-dev] [neutron][taas] Proposal of Dashboard for TaaS
In-Reply-To:
References:
Message-ID:

Thanks a lot, but I tried this too and it did not work for me.

On Mon, Jan 25, 2021, 2:39 PM Lajos Katona wrote:

> Hi,
> You can use taas with devstack (see [1]).
> The project now lacks maintainers, but basically works, if you have any
> questions don't hesitate to ask :-)
>
> [1]
> https://opendev.org/x/tap-as-a-service/src/branch/master/devstack/README.rst
>
> Regards
> Lajos
>
> Madhuwantha priyashan wrote (2021. jan. 22., P, 20:22):
>
>> Dear sir,
>> I am an undergraduate student and I am doing a project using OpenStack
>> now. I need to install tap as a service in OpenStack but I could not find
>> any guideline for this. It would be great if you could provide guidelines
>> for this task.
>>
>> Thank you
>>
> -------------- next part --------------
An HTML attachment was scrubbed...
URL: From ts-takahashi at nec.com Tue Jan 26 07:59:38 2021 From: ts-takahashi at nec.com (=?utf-8?B?VEFLQUhBU0hJIFRPU0hJQUtJKOmrmOapi+OAgOaVj+aYjik=?=) Date: Tue, 26 Jan 2021 07:59:38 +0000 Subject: [tc][heat][tacker][tosca-parser][heat-translator] Discusssion about heat-translater and tosca-parser maintenance In-Reply-To: References: <9aedf122-5ebb-e29a-d977-b58635cabe51@nokia.com> <1760a4b795a.10e369974791414.187901784754730298@ghanshyammann.com> Message-ID: Hi Bob and Sahdev, As we discussed last month, Tacker team want to join tosca-parser and heat-translator maintenance. How do we proceed? We already discussed who join which project maintenance mainly. Can the following members participate core team of the projects? (Of course, all Tacker team members join activities, but decided main roles.) tosca-parser - yasufum (yasufum.o at gmail.com) - manpreet (kaurmanpreet2620 at gmail.com) - takahashi-tsc (ts-takahashi at nec.com) heat-translator - yoshito-ito (yoshito.itou.dr at hco.ntt.co.jp) - LiangLu (lu.liang at jp.fujitsu.com) Regards, Toshiaki From: TAKAHASHI TOSHIAKI(高橋 敏明) Sent: Tuesday, December 8, 2020 10:20 PM To: ueha.ayumu at fujitsu.com; openstack-discuss Subject: RE: [tc][heat][tacker][tosca-parser][heat-translator] Discusssion about heat-translater and tosca-parser maintenance Hi Bob and Sahdev, Tacker team has started to discuss, and at least 5 members want to participate in the maintenance of heat-translator and tosca-parser. In my understanding, heat-translator and tosca-parser are different projects and core team is different. We’d like to different members to participate each core team. Is it OK? Should I send the name list to you? Best regards, Toshiaki From: ueha.ayumu at fujitsu.com > Sent: Tuesday, December 1, 2020 9:25 AM To: openstack-discuss > Subject: RE: [tc][heat][tacker][tosca-parser][heat-translator] Discusssion about heat-translater and tosca-parser maintenance Hi Bob and Sahdev I’m Ueha from tacker team. Thank you for reviewing my patch on the Victria release. Excuse me during the discussion about maintenance. I posted a new bug fix patch for policies validate. Could you review it? Thanks! https://bugs.launchpad.net/tosca-parser/+bug/1903233 https://review.opendev.org/c/openstack/tosca-parser/+/763144 Best regards, Ueha From: TAKAHASHI TOSHIAKI(高橋 敏明) > Sent: Monday, November 30, 2020 6:09 PM To: Rico Lin >; openstack-discuss > Subject: RE: [tc][heat][tacker][tosca-parser][heat-translator] Discusssion about heat-translater and tosca-parser maintenance Hi Rico, Thanks. OK, we’ll discuss with Bob to proceed with development of the projects. Regards, Toshiaki From: Rico Lin > Sent: Monday, November 30, 2020 4:34 PM To: openstack-discuss > Subject: Re: [tc][heat][tacker][tosca-parser][heat-translator] Discusssion about heat-translater and tosca-parser maintenance On Mon, Nov 30, 2020 at 11:06 AM TAKAHASHI TOSHIAKI(高橋 敏明) > wrote: > > Need to discuss with Heat, tc, etc.? > > And I'd like to continue to discuss other points such as cooperation with other members(Heat, or is there any users of those?). I don't think you need further discussion with tc as there still are ways for your patch to get reviewed, release package, or for you to join heat-translator-core team As we treat heat translator as a separated team, I'm definitely +1 on any decision from Bob. So not necessary to discuss with heat core team unless you find it difficult to achieve above tasks. I'm more than happy to provide help if needed. 
-- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From kklimonda at syntaxhighlighted.com Tue Jan 26 08:42:06 2021 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Tue, 26 Jan 2021 09:42:06 +0100 Subject: [magnum][heat] Rolling system upgrades In-Reply-To: <09e337c8-5fed-40aa-6835-ea5f64d6d943@catalyst.net.nz> References: <14a045a4-8705-438d-942f-f11f2d0258b2@www.fastmail.com> <09e337c8-5fed-40aa-6835-ea5f64d6d943@catalyst.net.nz> Message-ID: Hi Feilong, Regarding first point, could you share your idea on how to fix it? I haven't yet put much thought into that, but solving system-level upgrades for nodes (and getting it to work with auto healing/auto scaling etc.) is something I'll have to tackle myself if we want to go into production with full feature-set, and I'd be happy to put work into that. Regarding image updates, perhaps that has been fixed since ussuri? I'm testing it on some ussuri snapshot, my nodes use images for root disk, and I can see that the updated image property from cluster template is not populated into heat stack itself. I see your point about lack of communication from magnum to the cluster, but perhaps that could be handled in a similar way as OS::Heat::Software* updates, with an agent running on nodes? Perhaps heat's pre-update hook could be used, with agent clearing it after node has been drained. I'm not overly familiar with heat inner workings, and I was hoping that someone with more heat experience could chime in and give some idea how that could be handled. Perhaps OS::Heat::Software* resources already provide a way to handle this (although I'm not sure how could that work given that they are probably updated only after server resource update is processed). I feel like getting images to update in a non-invasive way would be a cleaner and safer way of handling OS-level upgrades, although I'm not sure how feasible it is in the end. -- Krzysztof Klimonda kklimonda at syntaxhighlighted.com On Sat, Jan 23, 2021, at 18:55, feilong wrote: > Hi Krzysztof, > > Thanks for raising this topic because I'm planning to do improvements > for this area. I would like to help as the original author of this > feature. Now let me explain the current situation: > > 1. The first method is designed to work for both Fedora Atomic and > Fedora CoreOS. Though I agree after upgrade, the node image will be > remain the old name and ID which will bring troubles for auto healing > later. That's the problem I'm trying to fix but it's not easy. As for > the new node, I think it's a bug and I think I know how to fix it. Your > concern about upgrade from a very old node's OS to a quite new OS > version is valid :( > > 2. It works under conditions. The node should be image based instead of > volume based, because AFAIK, Nova still doesn't support volume based > instance rebuild. Did you try this with image based nodes? As for the > drain part, it's because we would like to achieve a zero-downtime > upgrade (at least it's my goal for this), so each node will be drained > before upgrading. However, I didn't see a way to manage the > orchestration to call a k8s drain before doing the rebuild of the node, > because it's out of the control of Magnum. Heat is a like a black box at > this stage. Also, even if we can have a chance to call k8s drain to > drain the node, it's impossible to do that if the cluster is a private > cluster. 
Private cluster means Magnum control plane cannot reach the k8s > API. > > Again, thank you raising this and I'm happy to help to address it. > > > On 22/01/21 10:03 pm, Krzysztof Klimonda wrote: > > Hi, > > > > While testing magnum, a problem of upgrades came up - while work has been done to make kubernetes upgrades without interruption, operating system upgrades seem to be handled only partially. > > > > According to the documentation, two ways of upgrading system are available: > > - via specifying ostree_commit or ostree_remote labels in the cluster template used for upgrade > > - via specifying a new image in the cluster template used for upgrade > > > > The first one is specific to Fedora Atomic (and, while probably untested, seems to be mostly working with Fedora CoreOS) but it has some drawbacks. Firstly, due to base image staying the same we require this image for the life of the cluster, even if OS has already been upgraded. Secondly, using this method only upgrades existing instances and new instances (spawned via scaling cluster up) will not be upgraded. Thirdly, even if that is fixed I'm worried that at some point upgrading from old base image to some future ostree snapshot will fail (there is also cost associated with diff growing with each release). > > > > The second method, of specifying a new image in the cluster template used for upgrade, comes with an ugly warning about nodes not being drained properly before server rebuild (and it actually doesn't seem to be working anyway as the new image parameter is not being passed to the heat template on upgrade). This does however seem like a more valid approach in general. > > > > I'm not that familar with Heat, and the documentation of various OS::Heat::Software* resources seems inconclusive, but is there no way of executing some code before instance is rebuilt? If not, how are other projects and users handling this in general? > > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > > > From mark at stackhpc.com Tue Jan 26 08:46:57 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 26 Jan 2021 08:46:57 +0000 Subject: [kolla][keystone] Keycloak "More than one user" error In-Reply-To: <163486103a7a4cbc9cee3a64996dd619@ncwmexgp009.CORP.CHARTERCOM.com> References: <163486103a7a4cbc9cee3a64996dd619@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: Adding keystone tag. On Mon, 25 Jan 2021 at 13:35, Braden, Albert wrote: > > We’re running Train on Centos 7, and using Keycloak for auth. After I setup Keycloak, create a user in Keycloak, and then login to Horizon via Keycloak, a user is created in Keystone: > > > > | ccb276f4f507fd9f271d629d2ad896d2c97e04f81336cd8c1332f4b2df115ca2 | test | > > > > If I try to address that user by name, I get an error: > > > > (openstack) [root at chrnc-area51-build-01 our-ok-kolla-ansible]# os user show test > > More than one user exists with the name 'test'. > > > > I can address it by id. 
When I list users, I only see one “test” user.” > > > > (openstack) [root at chrnc-area51-build-01 our-ok-kolla-ansible]# os user show ccb276f4f507fd9f271d629d2ad896d2c97e04f81336cd8c1332f4b2df115ca2 > > +---------------------+------------------------------------------------------------------+ > > | Field | Value | > > +---------------------+------------------------------------------------------------------+ > > | domain_id | 4678301ef9a24d54bcd2e87a8fbc6872 | > > | email | test at example.com | > > > > If I create a second user in Keycloak and login the same way, this doesn’t happen: > > > > (openstack) [root at chrnc-area51-build-01 our-ok-kolla-ansible]# os user show test2 > > +---------------------+------------------------------------------------------------------+ > > | Field | Value | > > +---------------------+------------------------------------------------------------------+ > > | domain_id | 4678301ef9a24d54bcd2e87a8fbc6872 | > > | email | test2 at example.com | > > > > These 2 users look identical in the database: > > > > user: > > > > | ccb276f4f507fd9f271d629d2ad896d2c97e04f81336cd8c1332f4b2df115ca2 | {"email": "test at example.com"} | 1 | NULL | 2021-01-22 18:33:20 | NULL | 4678301ef9a24d54bcd2e87a8fbc6872 | > > | f4287b6082b8f36048d052eaa3d35facb94e5eff598d59d2aee68252ddb13339 | {"email": "test2 at example.com"} | 1 | NULL | 2021-01-22 21:01:54 | NULL | 4678301ef9a24d54bcd2e87a8fbc6872 | > > > > federated_user: > > > > | 6 | ccb276f4f507fd9f271d629d2ad896d2c97e04f81336cd8c1332f4b2df115ca2 | keycloak | openid | test | test | > > | 9 | f4287b6082b8f36048d052eaa3d35facb94e5eff598d59d2aee68252ddb13339 | keycloak | openid | test2 | test2 | > > > > Where should I be looking for the cause of this error? > Have you checked if there are other test users in a different domain? > > > > > I apologize for the nonsense below. So far I have not been able to stop it from being attached to my external emails. I'm working on it. > > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. From anlin.kong at gmail.com Tue Jan 26 09:51:31 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Tue, 26 Jan 2021 22:51:31 +1300 Subject: Some questions regarding OpenStack Trove Victoria release In-Reply-To: References: <3a0f8fa16da56790161fca1e6a50a870@citynetwork.eu> Message-ID: On Tue, Jan 26, 2021 at 8:43 PM Bekir Fajkovic < bekir.fajkovic at citynetwork.eu> wrote: > Hi! > > Thanks a lot for the kind answers and explanations, now the picture of the > concept and the current development situation overall is much clearer to me. > > Regarding the question about different types of Guest Agents acquired, > depending on the database type, that i asked, it is mainly based on the > information i read > about in the latest edition of the book "*Openstack Trove Essentials*" by > Alok Shrivastwa and Sunil Sarat that i purchased recently. 
> > For example, as mentioned in the book: > > > *- Let's also look at the different types of guest agents that are > required depending on the database engine that needs to be supported. The > different guest agents (for example, the MySQL and PostgreSQL guest > agents) may even have different capabilities depending on what is supported > on the particular database.* (page 6) > *- The Guest Agent code is different for every datastore that needs to be > supported and the Guest Agent for that particular datastore is installed on > the corresponding image of the datastore version. *(page 10) > *- As we have already seen in the previous chapters, the guest agent is > different for different database engines, and hence the correct version of > the guest agent needs to be installed on the system. *(page 58) > Some of those have been changed and not the case any more. After database containerization, there is no database related stuff installed in the guest image. However, it's correct that different datastores are implemented as different drivers, so you can say "The Guest Agent code is different". > When it comes to guest image creation, i found now the places in the code > that are used, as well as the acquired elements. A call to the function > *build_guest_image() *is performed, involving those needed elements > as minimal requirements: > > - *ubuntu-minimal *(which also invokes *ubuntu-common* i think) > - *cloud-init-datasources* > - *pip-and-virtualenv* > - *pip-cache* > - *guest-agent* > - ${guest_os}-*docker* > - *root-passwd* > > ref: > > https://github.com/openstack/trove/blob/master/integration/scripts/functions_qemu > > So, when it comes to my question regarding the disabling of the automatic > updates, it should be doable in a couple of ways. Either by executing a > script placed in UserData during guest VM creation and initialisation > or by manipulating elements (for example, such as we have a script placed > in *ubuntu-common* element that disables privacy extensions for IPv6 > (RFC4941): > > > /usr/local/lib/python3.6/dist-packages/diskimage_builder/elements/ubuntu-common/install.d/80-disable-rfc3041 > You are right, but the recommended way is either those changes could be contributed back to the upstream if they are common feature requests and could benefit the others, or they are implemented in a separate element so there is little chance that conflict may happen when upgrading trove. > I am really looking forward to our soon deployment of the Trove project, i > see huge potential there! > Good luck and please let me know if you have any other questions. --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Tue Jan 26 10:22:45 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Tue, 26 Jan 2021 11:22:45 +0100 Subject: Secure RBAC work In-Reply-To: References: <1764f5dc30b.ce68476360070.2800550259733297462@ghanshyammann.com> Message-ID: <20210126102245.42gzsdn36uvc54sq@localhost> On 19/01, Lance Bragstad wrote: > Hey all, > > I want to follow up on this thread because there's been some discussion and > questions (some of which are in reviews) as services work through the > proposed changes [0]. > > TL;DR - OpenStack services implementing secure RBAC should update default > policies with the `reader` role in a consistent manner, where it is not > meant to protect sensitive information. 
> Hi Lance, Thank you very much for this great summary of all the different discussions and decisions. I have just one question regarding your TL;DR, shouldn't it be "where it is meant to protect sensitive information"? As I understood it the reader should be updated so it doesn't expose sensitive information (thus protecting it) because it's the least privileged role. Cheers, Gorka. > In the process of reviewing changes for various resources, some folks > raised concerns about the `reader` role definition. > > One of the intended use-cases for implementing a `reader` role was to use > it for auditing, as noted in the keystone definitions for each role and > persona [1]. Another key point of that document, and the underlying design > of secure RBAC, is that the default roles have role implications built > between them (e.g., reader implies member, and member implies admin). This > detail serves two important functions. > > First, it reduces duplication in check strings because keystone expands > role implications in token response bodies. For example, someone with the > `admin` role on a project will have `member` and `reader` roles in their > token body when they authenticate for a token or validate a token. This > reduces the complexity of our check strings by writing the policy to the > highest level of authorization required to access an API or resource. Users > with anything above that level will work through the role implications > feature. > > Second, it reduces the need for extra role assignments. If you grant > someone the `admin` role on a project you don't need to also give them > `reader` and `member` role assignments. This is true regardless of how > services implement check strings. > > Ultimately, the hierarchical role structure in keystone and role expansion > in token responses give us shorter check strings and less role assignments. > But, one thing we're aware of now is that we need to be careful how we > expose certain information to users via the `reader` role, since it is the > least-privileged role in the hierarchy. For example, one concern was > exposing license key information in images to anyone with the `reader` role > on the system. Some deployments, depending on their security posture or > auditing targets, might not allow sensitive information to be implicitly > exposed. Instead, they may require deployments to explicitly grant access > to sensitive information [2]. > > So what do we do moving forward? > > I think it's clear that there are APIs and resources in OpenStack that fall > into a special category where we shouldn't expose certain information to > the lowest level of the role hierarchy, regardless of the scope. But, the > role implication functionality served a purpose initially to deliver a > least-privileged role used only for read operations within a given scope. I > think breaking that implication now is confusing considering we implemented > the implication in Rocky [3], but I think future work for an elevated > read-only role is a good path forward. Eventually, keystone can consider > implementing support for a new default role, which implies `reader`, making > all the work we do today still useful. At that time, we can update relevant > policies to expose sensitive information with the elevated read-only role. > I suspect this will be a much smaller set of APIs and policies. I think > this approach strikes a balance between what we have today, and a way to > move forward that still protects sensitive data. 
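Just to make the check strings being discussed concrete, here is a minimal sketch of what the consistent defaults boil down to in a service's policy file (the rule names are invented for illustration only and are not any project's actual defaults):

  # readable by reader, and by member/admin through the role implications
  "example:resource:get": "role:reader and project_id:%(project_id)s"
  # writes stay at member or above
  "example:resource:update": "role:member and project_id:%(project_id)s"
  # anything considered sensitive is simply kept above reader for now
  "example:resource:get_sensitive": "role:member and project_id:%(project_id)s"

Because keystone expands the role implications in the token (someone with admin on a project also carries member and reader in the token body), a member or admin token passes the "role:reader" check with no extra role assignments, while sensitive data stays at member or above until an elevated read-only role exists.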
> > I proposed an update to the documentation in keystone to clarify this point > [4]. It also doesn't assume all audits are the same. Instead, it phrases > the ability to use `reader` roles for auditing in a way that leaves that up > to the deployer and auditor. I think that's an important detail since > different deployments have different security requirements. Instead of > assuming everyone can use `reader` for auditing, we can give them a list of > APIs they can interact with as a `reader` (or have them generate those > policies themselves, especially if they have custom policy) and let them > determine if that access is sufficient for their audit. If it isn't, > deployers aren't in a worse position today, but it emphasizes the > importance of expanding the default roles to include another tier for > elevated read-only permissions. Given where we are in the release cycle for > Wallaby, I don't expect keystone to implement a new default role this late > in the release [5]. Perhaps Xena is a better target, but I'll talk with > Kristi about it next week during the keystone meeting. > > I hope this helps clarify some of the confusion around the secure RBAC > patches. If you have additional comments or questions about this topic, let > me know. We can obviously iterate here, or use the policy pop up time slot > which is in a couple of days [6]. > > Thanks, > > Lance > > [0] https://review.opendev.org/q/topic:secure-rbac > [1] > https://docs.openstack.org/keystone/latest/admin/service-api-protection.html > [2] FedRAMP control AC -06 (01) is an example of this - *The organization > explicitly authorizes access to [Assignment: organization-defined security > functions (deployed in hardware, software, and firmware) and > security-relevant information].* > [3] https://docs.openstack.org/releasenotes/keystone/rocky.html#new-features > [4] https://review.opendev.org/c/openstack/keystone/+/771509 > [5] https://releases.openstack.org/wallaby/schedule.html > [6] https://etherpad.opendev.org/p/default-policy-meeting-agenda > > On Thu, Dec 10, 2020 at 7:15 PM Ghanshyam Mann > wrote: > > > ---- On Wed, 09 Dec 2020 14:04:57 -0600 Lance Bragstad < > > lbragstad at gmail.com> wrote ---- > > > Hey everyone, > > > > > > I wanted to take an opportunity to clarify some work we have been doing > > upstream, specifically modifying the default policies across projects. > > > > > > These changes are the next phase of an initiative that’s been underway > > since Queens to fix some long-standing security concerns in OpenStack [0]. > > For context, we have been gradually improving policy enforcement for years. > > We started by improving policy formats, registering default policies into > > code [1], providing better documentation for policy writers, implementing > > necessary identity concepts in keystone [2], developing support for those > > concepts in libraries [3][4][5][6][7][8], and consuming all of those > > changes to provide secure default policies in a way operators can consume > > and roll out to their users [9][10]. > > > > > > All of this work is in line with some high-level documentation we > > started writing about three years ago [11][12][13]. > > > > > > There are a handful of services that have implemented the goals that > > define secure RBAC by default, but a community-wide goal is still > > out-of-reach. To help with that, the community formed a pop-up team with a > > focused objective and disbanding criteria [14]. 
> > > > > > The work we currently have in progress [15] is an attempt to start > > applying what we have learned from existing implementations to other > > projects. The hope is that we can complete the work for even more projects > > in Wallaby. Most deployers looking for this functionality won't be able to > > use it effectively until all services in their deployment support it. > > > > Thanks, Lance for pushing this work forwards. I completely agree and that > > is what we get feedback in > > forum sessions also that we should implement this in all the services > > first before we ask operators to > > move their cloud to the new RBAC. > > > > We discussed these in today's policy-popup meeting also and encourage > > every project to help in those > > patches to add tests and review. This will help to finish the work on > > priority and we can provide better > > RBAC experience to the deployer. > > > > -gmann > > > > > > > > > > > I hope this helps clarify or explain the patches being proposed. > > > > > > > > > As always, I'm happy to elaborate on specific concerns if folks have > > them. > > > > > > > > > Thanks, > > > > > > > > > Lance > > > > > > > > > [0] https://bugs.launchpad.net/keystone/+bug/968696/ > > > [1] > > https://governance.openstack.org/tc/goals/selected/queens/policy-in-code.html > > > [2] > > https://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html > > > [3] https://review.opendev.org/c/openstack/keystoneauth/+/529665 > > > [4] > > https://review.opendev.org/c/openstack/python-keystoneclient/+/524415 > > > [5] https://review.opendev.org/c/openstack/oslo.context/+/530509 > > > [6] https://review.opendev.org/c/openstack/keystonemiddleware/+/564072 > > > [7] https://review.opendev.org/c/openstack/oslo.policy/+/578995 > > > [8] > > https://review.opendev.org/q/topic:%22system-scope%22+(status:open%20OR%20status:merged) > > > [9] > > https://review.opendev.org/q/status:merged+topic:bp/policy-defaults-refresh+branch:master > > > [10] > > https://review.opendev.org/q/topic:%22implement-default-roles%22+(status:open%20OR%20status:merged) > > > [11] > > https://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/policy-goals-and-roadmap.html > > > [12] > > https://docs.openstack.org/keystone/latest/admin/service-api-protection.html > > > [13] > > https://docs.openstack.org/keystone/latest/contributor/services.html#authorization-scopes > > > [14] > > https://governance.openstack.org/tc/reference/popup-teams.html#secure-default-policies > > > [15] > > https://review.opendev.org/q/topic:%2522secure-rbac%2522+status:open > > > > > From sshnaidm at redhat.com Tue Jan 26 11:49:08 2021 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Tue, 26 Jan 2021 13:49:08 +0200 Subject: [all] CI test result table in the new gerrit review UI In-Reply-To: <20210119055830.GB3137911@fedora19.localdomain> References: <20210119055830.GB3137911@fedora19.localdomain> Message-ID: Hi, all Thanks a lot for the plugin, and I'd like to suggest a little improvement. It's difficult for me to match job names and results with times because their columns are far from each other. I submitted a change[1] that should make it closer to each other (like old-style that we had before). The example how it'd look you can see here: https://imgur.com/5onUpHX Would appreciate your opinion and review Thanks! 
[1] https://gerrit-review.googlesource.com/c/plugins/zuul-results-summary/+/294622 On Tue, Jan 19, 2021 at 7:59 AM Ian Wienand wrote: > On Thu, Nov 26, 2020 at 01:39:13PM +0100, Balázs Gibizer wrote: > > I understand that adapting the old CI test result table to the new gerrit > > review UI is not a simple task. > > We got there in the end :) Change [1] enabled the zuul-summary-results > plugin, which is available from [2]. I just restarted opendev gerrit > with it, and it seems to be working. Look for the new "Zuul Summary" > tab next to "Files". I would consider it a 0.1 release and welcome > any contributions to make it better. > > If you want to make changes, you should be able to submit a change to > system-config with a Depends-On: and trigger the > system-config-run-review test; in the results returned there are > screenshot artifacts that will show the results (expanding this > testing also welcome!). We can also a put a node on hold for you to > work on the plugin if you have interest. It's also fairly easy to run > the container locally, so there's plenty of options. > > Thanks, > > -i > > [1] https://review.opendev.org/c/opendev/system-config/+/767079 > [2] > https://gerrit-review.googlesource.com/admin/repos/plugins/zuul-results-summary > > > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From CAPSEY at augusta.edu Tue Jan 26 13:09:11 2021 From: CAPSEY at augusta.edu (Apsey, Christopher) Date: Tue, 26 Jan 2021 13:09:11 +0000 Subject: [nova][dev] Revisiting qemu emulation where guest arch != host arch Message-ID: Resurrecting this old thread… The first bits of work for this are starting to be submitted for review: https://review.opendev.org/c/openstack/nova/+/772156 The developer is going to go through the rest of the tcg guest supported architectures in QEMU and add them in (he only has aarch64 done right now) which will require a bit of time/testing, but we hope to have it ready to go for potential inclusion in wallaby. Any comments from nova team on approach/implementation are welcome. Chris Apsey GEORGIA CYBER CENTER From: Sean Mooney Sent: Wednesday, July 15, 2020 10:37 AM To: Apsey, Christopher ; openstack-discuss at lists.openstack.org Cc: Belmiro Moreira Subject: [EXTERNAL] Re: [nova][dev] Revisiting qemu emulation where guest arch != host arch CAUTION: EXTERNAL SENDER This email originated from an external source. Please exercise caution before opening attachments, clicking links, replying, or providing information to the sender. If you believe it to be fraudulent, contact the AU Cybersecurity Hotline at 72-CYBER (2-9237 / 706-722-9237) or 72CYBER at augusta.edu On Wed, 2020-07-15 at 14:17 +0000, Apsey, Christopher wrote: > All, > > A few years ago I asked a question[1] about why nova, when given a hw_architecture property from glance for an image, > would not end up using the correct qemu-system-xx binary when starting the guest process on a compute node if that > compute nodes architecture did not match the proposed guest architecture. As an example, if we had all x86 hosts, but > wanted to run an emulated ppc guest, we should be able to do that given that at least one compute node had qemu- > system-ppc already installed and libvirt was successfully reporting that as a supported architecture to nova. It > seemed like a heavy lift at the time, so it was put on the back burner. 
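As a rough sketch of how this is expected to be used once the linked change lands (a hedged example only -- the image and flavor names are made up, and the exact scheduler/weigher behaviour is still being worked out):

  # tag the image with the guest architecture that should be emulated
  openstack image create --file cirros-aarch64.qcow2 \
      --property hw_architecture=aarch64 cirros-aarch64
  # boot normally; a compute node whose QEMU/libvirt supports aarch64
  # emulation should then launch the guest with qemu-system-aarch64
  # rather than the host-native binary
  openstack server create --image cirros-aarch64 --flavor m1.small emulated-guest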
> > I am now in a position to fund a contract developer to make this happen, so the question is: would this be a useful > blueprint that would potentially be accepted? this came up during the ptg and the over all felling was it should really work already and if it does not its a bug. so yes i fa blueprint was filed to support emulation based on the image hw_architecture property i dont think you will get objection altough we proably will want to allso have schduler support for this and report it to placemnt or have a whigher of some kind to make it a compelte solution. i.e. enhance the virt driver to report all the achitecure it support via traits and add a weigher to prefer native execution over emulation. so placement can tell use where it can run and the weigher can say where it will run best. see line 467 https://etherpad.opendev.org/p/nova-victoria-ptg > Most of the time when people want to run an emulated guest they would just nest it inside of an already running > guest of the native architecture, but that severely limits observability and the task of managing any more than a > handful of instances in this manner quickly becomes a tangled nightmare of networking, etc. I see real benefit in > allowing this scenario to run natively so all of the tooling that exists for fleet management 'just works'. This > would also be a significant differentiator for OpenStack as a whole. > > Thoughts? > > [1] > http://lists.openstack.org/pipermail/openstack-operators/2018-August/015653.html > > Chris Apsey > Director | Georgia Cyber Range > GEORGIA CYBER CENTER > > 100 Grace Hopper Lane | Augusta, Georgia | 30901 > https://www.gacybercenter.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Tue Jan 26 13:29:10 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Tue, 26 Jan 2021 13:29:10 +0000 Subject: [EXTERNAL] Re: [kolla][keystone] Keycloak "More than one user" error In-Reply-To: References: <163486103a7a4cbc9cee3a64996dd619@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: <3ce603b315704ef8a06a67343ab8200f@ncwmexgp009.CORP.CHARTERCOM.com> >-----Original Message----- >From: Mark Goddard >Sent: Tuesday, January 26, 2021 3:47 AM >To: Braden, Albert >Cc: openstack-discuss at lists.openstack.org >Subject: [EXTERNAL] Re: [kolla][keystone] Keycloak "More than one user" error > >CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. > >Adding keystone tag. > >On Mon, 25 Jan 2021 at 13:35, Braden, Albert > wrote: >> >> We’re running Train on Centos 7, and using Keycloak for auth. After I setup Keycloak, create a user in Keycloak, and then login to Horizon via Keycloak, a user is created in Keystone: >> >> >> ... >> >> Where should I be looking for the cause of this error? >> >Have you checked if there are other test users in a different domain? I think I successfully checked that. Looking at " openstack help user list" I see that it allows me to filter users by domain, group or project. It appears that not adding any filters will show all users in all domains. Also I checked the database. I tried deleting the "test" user: (openstack) [root at chrnc-area51-build-01 config]# os user show test More than one user exists with the name 'test'. (openstack) [root at chrnc-area51-build-01 config]# os user delete test Failed to delete user with name or ID 'test': More than one user exists with the name 'test'. 1 of 1 users failed to delete. 
(openstack) [root at chrnc-area51-build-01 config]# os user list +------------------------------------------------------------------+-------------------+ | ID | Name | +------------------------------------------------------------------+-------------------+ | ccb276f4f507fd9f271d629d2ad896d2c97e04f81336cd8c1332f4b2df115ca2 | test | | f4287b6082b8f36048d052eaa3d35facb94e5eff598d59d2aee68252ddb13339 | test2 | | e81999534559450688c730aad58738dc | admin | | 23fb5632aaa548b68871634577c5bf42 | glance | | 5e7d65357275446bbc2007826327350d | cinder | | 76217f42ce37481faa69b6b610e65f19 | placement | | e1832eb444044d7f8a266d22d517dc98 | nova | | cba584661261497f9b522c4752120d5f | neutron | | 034d6fcd28ef4b61b5e56d1dc79c9927 | heat | | 6d38774ad4614764932cb338add97403 | heat_domain_admin | | 59f68b88481e4e738f4a4943ff6c6496 | masakari | | 5d539533ecda4bd197a6ed281c6d268b | abraden | | 5d5f353f00434d9195208efad74f8113 | adjutant | +------------------------------------------------------------------+-------------------+ (openstack) [root at chrnc-area51-build-01 config]# os user delete ccb276f4f507fd9f271d629d2ad896d2c97e04f81336cd8c1332f4b2df115ca2 (openstack) [root at chrnc-area51-build-01 config]# os user show test No user with a name or ID of 'test' exists. After deleting the "test" user, and then re-creating it with a Keycloak login, the problem goes away. It seems to only happen with the first Keycloak user on a new cluster. (openstack) [root at chrnc-area51-build-01 config]# os user show test +---------------------+------------------------------------------------------------------+ | Field | Value | +---------------------+------------------------------------------------------------------+ | domain_id | 4678301ef9a24d54bcd2e87a8fbc6872 | | email | test at example.com | | enabled | True | | id | ccb276f4f507fd9f271d629d2ad896d2c97e04f81336cd8c1332f4b2df115ca2 | | name | test | | options | {} | | password_expires_at | None | +---------------------+------------------------------------------------------------------+ E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From mnaser at vexxhost.com Tue Jan 26 13:35:20 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 26 Jan 2021 08:35:20 -0500 Subject: [tc] weekly update Message-ID: Hi everyone, Here's an update on what happened in the OpenStack TC this week. You can get more information by checking for changes in the openstack/governance repository. 
# Patches

## Open Reviews

- [manila] add assert:supports-api-interoperability https://review.opendev.org/c/openstack/governance/+/770859
- Define Xena release testing runtime https://review.opendev.org/c/openstack/governance/+/770860
- Setting Ke Chen as Watcher's PTL https://review.opendev.org/c/openstack/governance/+/770913
- Cool-down cycle goal https://review.opendev.org/c/openstack/governance/+/770616
- Drop openSUSE from commonly tested distro list https://review.opendev.org/c/openstack/governance/+/770855
- Define 2021 upstream investment opportunities https://review.opendev.org/c/openstack/governance/+/771707
- js-openstack-lib does not make releases https://review.opendev.org/c/openstack/governance/+/771789
- Add Resolution of TC stance on the OpenStackClient https://review.opendev.org/c/openstack/governance/+/759904
- Move openstack-tempest-skiplist to release-management: none https://review.opendev.org/c/openstack/governance/+/771488
- monasca-log-api & monasca-ceilometer does not make releases https://review.opendev.org/c/openstack/governance/+/771785
- Remove Karbor project team https://review.opendev.org/c/openstack/governance/+/767056
- WIP NO MERGE Move os-*-config to Heat project governance https://review.opendev.org/c/openstack/governance/+/770285

## Abandoned Reviews

- Add QA in 2021 upstream investment opportunities https://review.opendev.org/c/openstack/governance/+/771708
- Add RBAC work in 2021 upstream investment opportunities https://review.opendev.org/c/openstack/governance/+/771709

# Other Reminders

- Our next [TC] Weekly meeting is scheduled for January 28th at 1500 UTC. If you would like to add topics for discussion, please go to https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting and fill out your suggestions by Wednesday, January 27th, at 2100 UTC.

Thanks for reading!
Mohammed & Kendall

--
Mohammed Naser
VEXXHOST, Inc.

From syedammad83 at gmail.com Tue Jan 26 13:34:34 2021
From: syedammad83 at gmail.com (Ammad Syed)
Date: Tue, 26 Jan 2021 18:34:34 +0500
Subject: Trove Image Create Error
Message-ID:

Hi,

I am creating a trove image with the default parameters:

./trovestack build-image ubuntu bionic true ubuntu

I am getting the error below. Please advise.
2021-01-26 13:19:24.905 | Installing collected packages: wheel, setuptools, pip 2021-01-26 13:19:25.292 | Attempting uninstall: pip 2021-01-26 13:19:25.293 | Found existing installation: pip 9.0.1 2021-01-26 13:19:25.294 | Uninstalling pip-9.0.1: 2021-01-26 13:19:25.309 | Successfully uninstalled pip-9.0.1 2021-01-26 13:19:26.231 | Successfully installed pip-21.0 setuptools-52.0.0 wheel-0.36.2 2021-01-26 13:19:26.761 | + python2 /tmp/get-pip.py -U --force-reinstall 2021-01-26 13:19:27.367 | Traceback (most recent call last): 2021-01-26 13:19:27.367 | File "/tmp/get-pip.py", line 24226, in 2021-01-26 13:19:27.371 | main() 2021-01-26 13:19:27.371 | File "/tmp/get-pip.py", line 199, in main 2021-01-26 13:19:27.372 | bootstrap(tmpdir=tmpdir) 2021-01-26 13:19:27.372 | File "/tmp/get-pip.py", line 82, in bootstrap 2021-01-26 13:19:27.372 | from pip._internal.cli.main import main as pip_entry_point 2021-01-26 13:19:27.372 | File "/tmp/tmpxnphbN/pip.zip/pip/_internal/cli/main.py", line 60 2021-01-26 13:19:27.372 | sys.stderr.write(f"ERROR: {exc}") 2021-01-26 13:19:27.372 | ^ 2021-01-26 13:19:27.372 | SyntaxError: invalid syntax 2021-01-26 13:19:27.396 | ++ /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:run_in_target:59 : check_break after-error run_in_target bash 2021-01-26 13:19:27.409 | ++ /usr/lib/python3/dist-packages/diskimage_builder/lib/common-functions:check_break:143 : echo '' 2021-01-26 13:19:27.410 | ++ /usr/lib/python3/dist-packages/diskimage_builder/lib/common-functions:check_break:143 : egrep -e '(,|^)after-error(,|$)' -q 2021-01-26 13:19:27.428 | + /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:run_in_target:1 : trap_cleanup 2021-01-26 13:19:27.439 | + /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:trap_cleanup:36 : exitval=1 2021-01-26 13:19:27.447 | + /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:trap_cleanup:37 : cleanup 2021-01-26 13:19:27.457 | + /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:cleanup:42 : unmount_image -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From zaitcev at redhat.com Tue Jan 26 15:57:21 2021 From: zaitcev at redhat.com (Pete Zaitcev) Date: Tue, 26 Jan 2021 09:57:21 -0600 Subject: Secure RBAC work In-Reply-To: <20210126102245.42gzsdn36uvc54sq@localhost> References: <1764f5dc30b.ce68476360070.2800550259733297462@ghanshyammann.com> <20210126102245.42gzsdn36uvc54sq@localhost> Message-ID: <20210126095721.30aa7aeb@suzdal.zaitcev.lan> On Tue, 26 Jan 2021 11:22:45 +0100 Gorka Eguileor wrote: > On 19/01, Lance Bragstad wrote: > > TL;DR - OpenStack services implementing secure RBAC should update default > > policies with the `reader` role in a consistent manner, where it is not > > meant to protect sensitive information. > I have just one question regarding your TL;DR, shouldn't it be "where it > is meant to protect sensitive information"? > > As I understood it the reader should be updated so it doesn't expose > sensitive information (thus protecting it) because it's the least > privileged role. I think Lance means that the reader role can see what member sees (in any scope - system, domain, project). If a member sees some sensitive information, then reader is not a place to make access reduction. In fact it's harmful, because reader is not emulating member sufficiently for being useful for anything. At least that's how I read Lance's intent. 
> > I think it's clear that there are APIs and resources in OpenStack that fall > > into a special category where we shouldn't expose certain information to > > the lowest level of the role hierarchy, regardless of the scope. But, the > > role implication functionality served a purpose initially to deliver a > > least-privileged role used only for read operations within a given scope. I > > think breaking that implication now is confusing considering we implemented > > the implication in Rocky [3], but I think future work for an elevated > > read-only role is a good path forward. That's not going to do a whole lot to simplify those check strings. Perfect example of kicking the can down the road and committing to more technical baggage. -- Pete From rosmaita.fossdev at gmail.com Tue Jan 26 16:06:03 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 26 Jan 2021 11:06:03 -0500 Subject: [cinder] this week's meeting in video+IRC Message-ID: <9bbb3999-6eae-3b04-12c5-b5ebf35811fc@gmail.com> Quick reminder that this week's Cinder team meeting on Wednesday 27 January, being the final meeting of the month, will be held in both videoconference and IRC at the regularly scheduled time of 1400 UTC. Here's a quick reminder of the video meeting rules we've agreed to: * Everyone will keep IRC open during the meeting. * We'll take notes in IRC to leave a record similar to what we have for our regular IRC meetings. * Some people are more comfortable communicating in written English. So at any point, any attendee may request that the discussion of the current topic be conducted entirely in IRC. connection info: https://bluejeans.com/3228528973 cheers, brian From balazs.gibizer at est.tech Tue Jan 26 16:14:15 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 26 Jan 2021 17:14:15 +0100 Subject: [nova][placement] adding nova-core to placement-core in gerrit Message-ID: Hi, Placement got back under nova governance but so far we haven't consolidated the core teams yet. Stephen pointed out to me that given the ongoing RBAC works it would be beneficial if more nova cores, with API and RBAC experience, could approve such patches. So I'm proposing to add nova-core group to the placement-core group in gerrit. This means Ghanshyam, John, Lee, and Melanie would get core rights in the placement related repositories. @placement-core, @nova-core members: Please let me know if you have any objection to such change until end of this week. cheers, gibi From geguileo at redhat.com Tue Jan 26 16:28:32 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Tue, 26 Jan 2021 17:28:32 +0100 Subject: Secure RBAC work In-Reply-To: <20210126095721.30aa7aeb@suzdal.zaitcev.lan> References: <1764f5dc30b.ce68476360070.2800550259733297462@ghanshyammann.com> <20210126102245.42gzsdn36uvc54sq@localhost> <20210126095721.30aa7aeb@suzdal.zaitcev.lan> Message-ID: <20210126162832.deyhr5hrwdd2ytrr@localhost> On 26/01, Pete Zaitcev wrote: > On Tue, 26 Jan 2021 11:22:45 +0100 > Gorka Eguileor wrote: > > > On 19/01, Lance Bragstad wrote: > > > > TL;DR - OpenStack services implementing secure RBAC should update default > > > policies with the `reader` role in a consistent manner, where it is not > > > meant to protect sensitive information. > > > I have just one question regarding your TL;DR, shouldn't it be "where it > > is meant to protect sensitive information"? 
> > > > As I understood it the reader should be updated so it doesn't expose > > sensitive information (thus protecting it) because it's the least > > privileged role. > > I think Lance means that the reader role can see what member sees > (in any scope - system, domain, project). If a member sees some sensitive > information, then reader is not a place to make access reduction. In fact > it's harmful, because reader is not emulating member sufficiently for > being useful for anything. At least that's how I read Lance's intent. > Hi Pete, What I understood was that the reader wouldn't be able to see any sensitive information, not even from a user. Erno brought up a specific case with Glance and how a user may be able to see their licenses but a reader shouldn't. So either I'm misunderstanding something or Erno's concerns won't be addressed with our current approach. Cheers, Gorka. > > > I think it's clear that there are APIs and resources in OpenStack that fall > > > into a special category where we shouldn't expose certain information to > > > the lowest level of the role hierarchy, regardless of the scope. But, the > > > role implication functionality served a purpose initially to deliver a > > > least-privileged role used only for read operations within a given scope. I > > > think breaking that implication now is confusing considering we implemented > > > the implication in Rocky [3], but I think future work for an elevated > > > read-only role is a good path forward. > > That's not going to do a whole lot to simplify those check strings. > Perfect example of kicking the can down the road and committing to > more technical baggage. > > -- Pete > From sylvain.bauza at gmail.com Tue Jan 26 16:34:01 2021 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Tue, 26 Jan 2021 17:34:01 +0100 Subject: [nova][placement] adding nova-core to placement-core in gerrit In-Reply-To: References: Message-ID: Le mar. 26 janv. 2021 à 17:25, Balazs Gibizer a écrit : > Hi, > > Placement got back under nova governance but so far we haven't > consolidated the core teams yet. Stephen pointed out to me that given > the ongoing RBAC works it would be beneficial if more nova cores, with > API and RBAC experience, could approve such patches. So I'm proposing > to add nova-core group to the placement-core group in gerrit. This > means Ghanshyam, John, Lee, and Melanie would get core rights in the > placement related repositories. > > @placement-core, @nova-core members: Please let me know if you have any > objection to such change until end of this week. > > No objection at all, let's be pragmatic. We originally asked who wanted to be placement-core but now, since the governance is the same, that's no longer an issue. -Sylvain > cheers, > gibi > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Tue Jan 26 16:44:38 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 26 Jan 2021 16:44:38 +0000 Subject: Secure RBAC work In-Reply-To: <20210126162832.deyhr5hrwdd2ytrr@localhost> References: <1764f5dc30b.ce68476360070.2800550259733297462@ghanshyammann.com> <20210126102245.42gzsdn36uvc54sq@localhost> <20210126095721.30aa7aeb@suzdal.zaitcev.lan> <20210126162832.deyhr5hrwdd2ytrr@localhost> Message-ID: On Tue, 2021-01-26 at 17:28 +0100, Gorka Eguileor wrote: > On 26/01, Pete Zaitcev wrote: > > On Tue, 26 Jan 2021 11:22:45 +0100 > > Gorka Eguileor wrote: > > > > > On 19/01, Lance Bragstad wrote: > > > > > > TL;DR - OpenStack services implementing secure RBAC should update default > > > > policies with the `reader` role in a consistent manner, where it is not > > > > meant to protect sensitive information. > > > > > I have just one question regarding your TL;DR, shouldn't it be "where it > > > is meant to protect sensitive information"? > > > > > > As I understood it the reader should be updated so it doesn't expose > > > sensitive information (thus protecting it) because it's the least > > > privileged role. > > > > I think Lance means that the reader role can see what member sees > > (in any scope - system, domain, project). If a member sees some sensitive > > information, then reader is not a place to make access reduction. In fact > > it's harmful, because reader is not emulating member sufficiently for > > being useful for anything. At least that's how I read Lance's intent. > > > > Hi Pete, > > What I understood was that the reader wouldn't be able to see any > sensitive information, not even from a user. > > Erno brought up a specific case with Glance and how a user may be able > to see their licenses but a reader shouldn't. > > So either I'm misunderstanding something or Erno's concerns won't be > addressed with our current approach. > > Cheers, > Gorka. i have not tought about what we want the sematics to be yet but we might want more then one reader role e.g. "reader" which has readonly access to non sensitive info and "privledged_reader" which woudl be a read only form of perhaps member with perhaps yet another "admin_reader for readonly project_admin. my initall assumtion was the only detla between reader and member would be reader cannot modify things, i was not expecting it to filter out sensitive info but i also was not expecting it to have access to admin only data. a better approch for sensive info might be to require (reader and private) where we use private to denote anything a user with member can see but reader should not be able to see by default. if we did that i would define privledged_reader= (reader and private) member would then imply privledged_reader instead of just reader im not sure if that works in the larger context but i think filtering out lisince info via reader is not something that was orginally inteneded so that need something like "private" or "member_only" (privledged_reader= (reader and member_only)) to model that. > > > > > > I think it's clear that there are APIs and resources in OpenStack that fall > > > > into a special category where we shouldn't expose certain information to > > > > the lowest level of the role hierarchy, regardless of the scope. But, the > > > > role implication functionality served a purpose initially to deliver a > > > > least-privileged role used only for read operations within a given scope. 
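For what it's worth, a minimal sketch of what such automation could look like against the Gerrit REST API (these are the standard documented endpoints; the project name below is only an example, and the credentials are assumed to be an HTTP password for an account with delete rights on the repository):

  # check that the branch was tagged ocata-eol first
  curl -s -u "$GERRIT_USER:$GERRIT_HTTP_PASS" \
      https://review.opendev.org/a/projects/openstack%2Fexample-project/tags/ocata-eol
  # then drop the stale stable/ocata branch
  curl -s -u "$GERRIT_USER:$GERRIT_HTTP_PASS" -X DELETE \
      https://review.opendev.org/a/projects/openstack%2Fexample-project/branches/stable%2Focata

Obviously this would only be run by someone with the appropriate permissions, and only after the eol tag has been verified.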
I > > > > think breaking that implication now is confusing considering we implemented > > > > the implication in Rocky [3], but I think future work for an elevated > > > > read-only role is a good path forward. > > > > That's not going to do a whole lot to simplify those check strings. > > Perfect example of kicking the can down the road and committing to > > more technical baggage. > > > > -- Pete > > > > From smooney at redhat.com Tue Jan 26 16:50:46 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 26 Jan 2021 16:50:46 +0000 Subject: [nova][placement] adding nova-core to placement-core in gerrit In-Reply-To: References: Message-ID: <9508d3c610117a42a5e1c1d8e16b6d20f5c7af6a.camel@redhat.com> On Tue, 2021-01-26 at 17:34 +0100, Sylvain Bauza wrote: > Le mar. 26 janv. 2021 à 17:25, Balazs Gibizer a > écrit : > > > Hi, > > > > Placement got back under nova governance but so far we haven't > > consolidated the core teams yet. Stephen pointed out to me that given > > the ongoing RBAC works it would be beneficial if more nova cores, with > > API and RBAC experience, could approve such patches. So I'm proposing > > to add nova-core group to the placement-core group in gerrit. This > > means Ghanshyam, John, Lee, and Melanie would get core rights in the > > placement related repositories. > > > > @placement-core, @nova-core members: Please let me know if you have any > > objection to such change until end of this week. > > > > > No objection at all, let's be pragmatic. > We originally asked who wanted to be placement-core but now, since the > governance is the same, that's no longer an issue. i think this is somewhat like the os-vif situation we added nova-core to os-vif and basically said, while we are granting you the right to appove an merge change in os-vif we are not requireing you to review them. if you feel comfortable in reviewing the code and feel you understand teh context then the nova cores were invited to use there new core right but if they did not have time, interest or knolage to do so then there was no pressure on them to use those new rights. so for placmnet i think common sense would imply the same approch. provided there is no object nova cores would be free to use there own judgement regarding reviewing and or approving changes based on there understanding of the code change but we are not nessisarly requireing the whole nova core team to activly review plamcent changes if its just not relevent to them. > -Sylvain > > > > cheers, > > gibi > > > > > > > > > > > > From marios at redhat.com Tue Jan 26 16:51:46 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 26 Jan 2021 18:51:46 +0200 Subject: [tripleo] move os-refresh-config, os-collect-config, tripleo-ipsec to 'release-management': none In-Reply-To: References: Message-ID: On Tue, Jan 12, 2021 at 2:43 PM Marios Andreou wrote: > > > On Tue, Jan 12, 2021 at 12:07 PM Marios Andreou wrote: > >> >> >> On Mon, Jan 11, 2021 at 5:07 PM Herve Beraud wrote: >> >>> >>> >>> Le lun. 11 janv. 2021 à 15:27, Alex Schultz a >>> écrit : >>> >>>> On Mon, Jan 11, 2021 at 4:59 AM Marios Andreou >>>> wrote: >>>> > >>>> > Hi TripleO, >>>> > >>>> > you may have seen the thread started by Herve at [1] around the >>>> deadline for making a victoria release for os-refresh-config, >>>> os-collect-config and tripleo-ipsec. >>>> > >>>> > This message is to ask if anyone is still using these? In particular >>>> would anyone mind if we stopped making tagged releases, as discussed at >>>> [2]. 
Would someone mind if there was no stable/victoria branch for these >>>> repos? >>>> > >>>> > For the os-refresh/collect-config I suspect the answer is NO - at >>>> least, we aren't using these any more in the 'normal' tripleo deployment >>>> for a good while now, since we switched to config download by default. We >>>> haven't even created an ussuri branch for these [3] and no one has shouted >>>> about that (or at least not loud enough I haven't heard anything). >>>> >>>> Maybe switch to independent? That being said as James points out they >>>> are still used by Heat so maybe the ownership should be moved. >>>> >>> >>> I agree, moving them to the independent model could be a solution, in >>> this case the patch could be adapted to reflect that choice and we could >>> ignore these repos from victoria deadline point of view. >>> >>> Concerning the "ownership" side of the question this is more an internal >>> discussion between teams and eventually the TC, I don't think that that >>> will impact us from a release management POV. >>> >>> >> ack yes this makes sense thanks James, Alex, Herve and Rabi for your >> comments >> First I'll refactor >> https://review.opendev.org/c/openstack/releases/+/769915 to instead move >> them to independent (and i'll also include os-apply-config). Then I'll >> reach out to Heat PTL to see what they think about the transfer of >> ownership, >> >> thanks all >> >> > > me again ;) > > my apologies but I've spent some time staring at this and have changed my > mind. IMO it is best if we go ahead and create the victoria bits for these > right now whilst also moving forward on the proposed governance change. > > To be clear, I think we should merge [1] as is to create stable/victoria > in time for the deadline. We already have a stable/victoria for > os-apply-config and so let's be consistent and create it for > os-refresh-config and os-collect-config too. > > I reached out to Heat with [2] and posted [3] to illustrate the proposal > of moving the governance for these under Heat. If they want them then they > can decide about moving to independent or not. > > Otherwise I will followup next week with a move to independent. > > o/ folks, so it's been ~2 weeks since I sent [1] but it doesn't seem that taking these repos on is a priority for Heat ;). So let's move them to independent, as suggested and discussed in this thread; if there is a future need for a tagged release (e.g. by Heat or anyone else using them) it can still be made. I posted [2] just now to create the release files under _independent. Each of those is basically a combination of all the previous release files for the given project (i.e. all the releases and branches history for each). I am not sure if that is right (tox validate passes locally but it skips the _independent verification looks like) ; if you know I am grateful for your comments in [2]. If you disagree with this change in general then please say so here or on [2]. thank you, marios [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019777.html [2] https://review.opendev.org/c/openstack/releases/+/772570 > [1] https://review.opendev.org/c/openstack/releases/+/769915 > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019777.html > [3] https://review.opendev.org/c/openstack/governance/+/770285 > > > > >> >> >>> >>>> > >>>> > For tripleo-ipsec it *looks* like we're still using it in the sense >>>> that we carry the template and pass the parameters in >>>> tripleo-heat-templates [4]. 
However we aren't running that in any CI job as >>>> far as I can see, and we haven't created any branches there since Rocky. So >>>> is anyone using tripleo-ipsec? >>>> > >>>> >>>> I think tripleo-ipsec is no longer needed as we now have proper >>>> tls-everywhere support. We might want to revisit this and >>>> deprecate/remove it. >>>> >>>> > Depending on the answers here and as discussed at [2] I will move to >>>> make these as unreleased (release-management: none in openstack/governance >>>> reference/projects.yaml) and remove the release file altogether. >>>> > >>>> > For now however and given the deadline of this week for a victoria >>>> release I am proposing that we move forward with [2] and cut the victoria >>>> branch for these. >>>> > >>>> > thanks for reading and please speak up if any of the above are >>>> important to you! >>>> > >>>> > thanks, marios >>>> > >>>> > [1] >>>> http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019730.html >>>> > [2] >>>> https://review.opendev.org/c/openstack/releases/+/769915/1/deliverables/victoria/os-refresh-config.yaml >>>> > [3] https://pastebin.com/raw/KJ0JxKPx >>>> > [4] >>>> https://opendev.org/openstack/tripleo-heat-templates/src/commit/9fd709019fdd36d4c4821b2486e7151abf84bc3f/deployment/ipsec/ipsec-baremetal-ansible.yaml#L101-L106 >>>> > >>>> >>>> >>> >>> -- >>> Hervé Beraud >>> Senior Software Engineer at Red Hat >>> irc: hberaud >>> https://github.com/4383/ >>> https://twitter.com/4383hberaud >>> -----BEGIN PGP SIGNATURE----- >>> >>> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >>> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >>> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >>> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >>> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >>> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >>> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >>> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >>> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >>> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >>> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >>> v6rDpkeNksZ9fFSyoY2o >>> =ECSj >>> -----END PGP SIGNATURE----- >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Tue Jan 26 16:52:07 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Tue, 26 Jan 2021 17:52:07 +0100 Subject: [infra][release] delete old EOL'd stable branches Message-ID: <00a4690f-0859-e8d5-78ad-54e1d6be05fa@est.tech> Hi Infra Team! In October there was a discussion at Release Team meeting [1] about what can we do with the old, already EOL'd but not yet deleted branches (this is possible since with the Extended Maintenance process the general/"mass" EOL'ing was stopped and tagging a project branch EOL does not delete the branch anymore). Related to this, I would like to ask two things: 1. I've used the list_eol_stale_branches.sh [2] script to get the list of such not-yet-deleted branches for Ocata [3]. They are all tagged with 'ocata-eol', but stable/ocata branch still exists for them. Could you please delete these? [3] 2. On the Release Team meeting [1] we were hinted that with the newer version of gerrit (that was installed at the end of November) some automation is possible through gerrit API in the future. 
Can I get some help about where should I start with the automation? Which repository should I look, where can the deletion being triggered ("similarly like branch creation")? Thanks in advance, Előd [1] http://eavesdrop.openstack.org/meetings/releaseteam/2020/releaseteam.2020-10-22-16.00.log.html#l-40 [2] https://opendev.org/openstack/releases/src/commit/eb381492da3f7c826c35b9f147fd9a1ed55ae797/tools/list_eol_stale_branches.sh [3] http://paste.openstack.org/show/801992/ From smooney at redhat.com Tue Jan 26 16:53:15 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 26 Jan 2021 16:53:15 +0000 Subject: [nova][placement] adding nova-core to placement-core in gerrit In-Reply-To: <9508d3c610117a42a5e1c1d8e16b6d20f5c7af6a.camel@redhat.com> References: <9508d3c610117a42a5e1c1d8e16b6d20f5c7af6a.camel@redhat.com> Message-ID: On Tue, 2021-01-26 at 16:50 +0000, Sean Mooney wrote: > On Tue, 2021-01-26 at 17:34 +0100, Sylvain Bauza wrote: > > Le mar. 26 janv. 2021 à 17:25, Balazs Gibizer a > > écrit : > > > > > Hi, > > > > > > Placement got back under nova governance but so far we haven't > > > consolidated the core teams yet. Stephen pointed out to me that given > > > the ongoing RBAC works it would be beneficial if more nova cores, with > > > API and RBAC experience, could approve such patches. So I'm proposing > > > to add nova-core group to the placement-core group in gerrit. This > > > means Ghanshyam, John, Lee, and Melanie would get core rights in the > > > placement related repositories. > > > > > > @placement-core, @nova-core members: Please let me know if you have any > > > objection to such change until end of this week. > > > > > > > > No objection at all, let's be pragmatic. > > We originally asked who wanted to be placement-core but now, since the > > governance is the same, that's no longer an issue. > > i think this is somewhat like the os-vif situation > we added nova-core to os-vif and basically said, while we are granting you the right > to appove an merge change in os-vif we are not requireing you to review them. > if you feel comfortable in reviewing the code and feel you understand teh context then > the nova cores were invited to use there new core right but if they did not have time, interest > or knolage to do so then there was no pressure on them to use those new rights. > > > so for placmnet i think common sense would imply the same approch. > provided there is no object nova cores would be free to use there own judgement > regarding reviewing and or approving changes based on there understanding of the code change > but we are not nessisarly requireing the whole nova core team to activly review plamcent changes > if its just not relevent to them. that was a +1 by the way > > > -Sylvain > > > > > > > cheers, > > > gibi > > > > > > > > > > > > > > > > > > > > From C-Albert.Braden at charter.com Tue Jan 26 17:02:05 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Tue, 26 Jan 2021 17:02:05 +0000 Subject: [EXTERNAL] Re: [kolla][keystone] Another keycloak issue Message-ID: <244e2f00644c4960a82b533dc0a23111@ncwmexgp009.CORP.CHARTERCOM.com> Another problem I'm encountering with keycloak is that the keycloak users can't login on the command line. I created user test2 via Keycloak and test3 via CLI. 
They have identical roles on the admin domain: (openstack) [root at chrnc-area51-build-01 ~]# os role assignment list --user test2 +----------------------------------+------------------------------------------------------------------+-------+----------------------------------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +----------------------------------+------------------------------------------------------------------+-------+----------------------------------+--------+--------+-----------+ | 406a5f1cd92d45b5b3d54979235e896c | f4287b6082b8f36048d052eaa3d35facb94e5eff598d59d2aee68252ddb13339 | | 15c32af517334e28a9427809a9fc4805 | | | False | +----------------------------------+------------------------------------------------------------------+-------+----------------------------------+--------+--------+-----------+ (openstack) [root at chrnc-area51-build-01 ~]# os role assignment list --user test3 +----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+ | Role | User | Group | Project | Domain | System | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+ | 406a5f1cd92d45b5b3d54979235e896c | 06a5f28d061f4d42b3bf64df378338fd | | 15c32af517334e28a9427809a9fc4805 | | | False | +----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+ I made identical env-setting "rc" files with only the username changed. Test3 logs in successfully but test2 fails: (openstack) [root at chrnc-area51-build-01 ~]# . ./test2-openrc.sh (openstack) [root at chrnc-area51-build-01 ~]# openstack server list The request you have made requires authentication. (HTTP 401) (Request-ID: req-ad7ee855-df98-434a-9afc-89f64a7addd1) (openstack) [root at chrnc-area51-build-01 ~]# . ./test3-openrc.sh (openstack) [root at chrnc-area51-build-01 ~]# openstack server list (openstack) [root at chrnc-area51-build-01 ~]# The only obvious difference is the longer UID for the Keycloak users. Do Keycloak-created users require something different in the env? Do I need to change something in Keycloak, to make the Keycloak users work the same as CLI-created users? Where can I look in the database to find the differences between these two users? RC files: (openstack) [root at chrnc-area51-build-01 ~]# cat test2-openrc.sh # Clear any old environment that may conflict. for key in $( set | awk '{FS="="} /^OS_/ {print $1}' ); do unset $key ; done export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=test2 export OS_TENANT_NAME=test2 export OS_USERNAME=test2 export OS_PASSWORD= export OS_AUTH_URL=http://192.168.0.10:35357/v3 export OS_INTERFACE=internal export OS_ENDPOINT_TYPE=internalURL export OS_IDENTITY_API_VERSION=3 export OS_REGION_NAME=chrnc-area51-01 export OS_AUTH_PLUGIN=password export OS_CACERT=/etc/kolla/certificates/openstack.area51.dev.chtrse.com.pem (openstack) [root at chrnc-area51-build-01 ~]# cat test3-openrc.sh # Clear any old environment that may conflict. 
for key in $( set | awk '{FS="="} /^OS_/ {print $1}' ); do unset $key ; done export OS_PROJECT_DOMAIN_NAME=Default export OS_USER_DOMAIN_NAME=Default export OS_PROJECT_NAME=test export OS_TENANT_NAME=test export OS_USERNAME=test3 export OS_PASSWORD= export OS_AUTH_URL=http://192.168.0.10:35357/v3 export OS_INTERFACE=internal export OS_ENDPOINT_TYPE=internalURL export OS_IDENTITY_API_VERSION=3 export OS_REGION_NAME=chrnc-area51-01 export OS_AUTH_PLUGIN=password export OS_CACERT=/etc/kolla/certificates/openstack.area51.dev.chtrse.com.pem E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From fungi at yuggoth.org Tue Jan 26 17:32:26 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 26 Jan 2021 17:32:26 +0000 Subject: [infra][release] delete old EOL'd stable branches In-Reply-To: <00a4690f-0859-e8d5-78ad-54e1d6be05fa@est.tech> References: <00a4690f-0859-e8d5-78ad-54e1d6be05fa@est.tech> Message-ID: <20210126173225.sr4gnptqsewwu2uh@yuggoth.org> On 2021-01-26 17:52:07 +0100 (+0100), Előd Illés wrote: [...] > 1. I've used the list_eol_stale_branches.sh [2] script to get the list of > such not-yet-deleted branches for Ocata [3]. They are all tagged with > 'ocata-eol', but stable/ocata branch still exists for them. Could you please > delete these? [3] I'm happy to, have you made sure any open reviews for those branches are abandoned first? Gerrit won't allow deletion of a branch with open reviews. > 2. On the Release Team meeting [1] we were hinted that with the newer > version of gerrit (that was installed at the end of November) some > automation is possible through gerrit API in the future. Can I get some help > about where should I start with the automation? Which repository should I > look, where can the deletion being triggered ("similarly like branch > creation")? [...] The Gerrit REST API method for deleting branches is documented here: https://review.opendev.org/Documentation/rest-api-projects.html#delete-branch I'm not immediately sure where branch creation happens in the forest of our release automation, but I would expect deletion could be implemented similarly. Hopefully someone more intimately familiar with those jobs can chime in. The access control we'll need to grant to automation so that it can call that is documented here: https://review.opendev.org/Documentation/access-control.html#category_delete It'll need to be added manually as a permission for the Release Managers group in our All-Projects global ACL which individual projects inherit, and this documentation updated accordingly: https://opendev.org/opendev/system-config/src/branch/master/doc/source/gerrit.rst Happy to answer other questions as they arise. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jean-francois.taltavull at elca.ch Tue Jan 26 17:46:47 2021 From: jean-francois.taltavull at elca.ch (Taltavull Jean-Francois) Date: Tue, 26 Jan 2021 17:46:47 +0000 Subject: Strange behaviour of OSC in keystone MFA context Message-ID: Hello, I'm experiencing the following strange behavior of openstack CLI with os-auth-methods option (most parameters are defined in clouds.yaml): $ openstack token issue --os-auth-type v3multifactor --os-auth-methods password,totp The plugin p could not be found Note that "p" is the first letter of "password". It looks like the option parser handled "password,totp" as a string instead of as a list of strings. Version of openstack CLI is 5.4.0. Any idea ? Thanks ! Jean-François From bekir.fajkovic at citynetwork.eu Tue Jan 26 07:43:02 2021 From: bekir.fajkovic at citynetwork.eu (Bekir Fajkovic) Date: Tue, 26 Jan 2021 08:43:02 +0100 Subject: Some questions regarding OpenStack Trove Victoria release In-Reply-To: References: <3a0f8fa16da56790161fca1e6a50a870@citynetwork.eu> Message-ID: Hi! Thanks a lot for the kind answers and explanations, now the picture of the concept and the current development situation overall is much clearer to me. Regarding the question about different types of Guest Agents acquired, depending on the database type, that i asked, it is mainly based on the information i read about in the latest edition of the book "Openstack Trove Essentials" by Alok Shrivastwa and Sunil Sarat that i purchased recently. For example, as mentioned in the book: - Let's also look at the different types of guest agents that are required depending on the database engine that needs to be supported. The different guest agents   (for example, the MySQL and PostgreSQL guest agents) may even have different capabilities depending on what is supported on the particular database. (page 6) - The Guest Agent code is different for every datastore that needs to be supported and the Guest Agent for that particular datastore is installed on the corresponding image of the datastore version. (page 10) - As we have already seen in the previous chapters, the guest agent is different for different database engines, and hence the correct version of the guest agent needs to be installed on the system. (page 58) When it comes to guest image creation, i found now the places in the code that are used, as well as the acquired elements. A call to the function build_guest_image() is performed, involving those needed elements as minimal requirements: - ubuntu-minimal (which also invokes ubuntu-common i think) - cloud-init-datasources - pip-and-virtualenv - pip-cache - guest-agent - ${guest_os}-docker - root-passwd ref: https://github.com/openstack/trove/blob/master/integration/scripts/functions_qemu So, when it comes to my question regarding the disabling of the automatic updates, it should be doable in a couple of ways. Either by executing a script placed in UserData during guest VM creation and initialisation or by manipulating elements (for example, such as we have a script placed in ubuntu-common element that disables privacy extensions for IPv6 (RFC4941): /usr/local/lib/python3.6/dist-packages/diskimage_builder/elements/ubuntu-common/install.d/80-disable-rfc3041 I am really looking forward to our soon deployment of the Trove project, i see huge potential there! 
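To make the auto-update part a bit more concrete, a rough and untested sketch of such an element install script on an Ubuntu guest could look like the snippet below. The script name and numeric prefix are placeholders for illustration only; as far as I know nothing like this exists in the trove elements today, so treat it as an assumption rather than existing code.

#!/bin/bash
# install.d/81-disable-auto-updates (hypothetical element script, runs inside the image chroot)
set -eu
# Tell apt not to refresh package lists or run unattended upgrades on its own.
cat > /etc/apt/apt.conf.d/99-disable-auto-updates <<EOF
APT::Periodic::Update-Package-Lists "0";
APT::Periodic::Unattended-Upgrade "0";
EOF
# Alternatively the unattended-upgrades package can simply be removed, if the base image ships it.
apt-get remove -y unattended-upgrades || true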
Best Regards Bekir Fajkovic Senior DBA Mobile: +46 70 019 48 47 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED ----- Original Message ----- From: Lingxian Kong (anlin.kong at gmail.com) Date: 01/24/21 12:02 To: Bekir Fajkovic (bekir.fajkovic at citynetwork.eu) Cc: openstack-discuss (openstack-discuss at lists.openstack.org) Subject: Re: Some questions regarding OpenStack Trove Victoria release Hi Bekir, I'm very happy to answer your questions. See me comments in line. On Sat, Jan 23, 2021 at 8:24 AM Bekir Fajkovic wrote: The questions: ------------------------ Having the fact that Trove Victoria release provides Docker containers as a new way of database instance provisioning, i am wondering how far the project is developed in terms of covering the different types of databases. What i can see by mainly parsing the code provided on Github, those seem to be  officially released: - MySQL - MariaDB - PostgreSQL and the rest of the planned database types are in "experimental" phase. And also, regarding certain types of databases (for example MySQL, version 5.7 and 8.0) only certain  versions of the datastores seems to be supported, but not all. MySQL 5.7.x is supported but 8.0 is in experimental.   On the other hand, nothing regarding datastore versions supported for MariaDB and PostgreSQL seems to be mentioned somewhere. Could someone please confirm that as well as give some more details about it? MariaDB 10.4.x and PostgreSQL 12.4 are supported. The other versions need to be fully tested. I successfully managed to create certain versions of datastores in my devstack environment, belonging to those 3 database types mentioned above (and based on trovestack-generated dev guest image that is by default delivered with devstack installation), but not without some undesirable events. For example, i am able to register  PostgreSQL datastore version 12 and instantiate a database instance of that version but not version 13 and above, where i get some hostname-related errors etc. Yes, because PostgreSQL 13 has never been tested. Also, a question regarding the building of the production-ready guest image. As mentioned, Trovestack script is provided as a possible way of producing the images (by omitting dev option the Trove Guest Agent binaries are deployed into the instantiated VM). How does an image produced this way looks like? From where the base image is fetched, is it a "cloud based image" with cloud-init in it, are the automatic security and software patching features disabled in produced image, so that we do not get unexpected service  interruptions when the OS suddenly decides to start updating itself etc.. If you look at trovestack script implementation, you can see it's calling disk-image-create script from diskimage-builder, and there are some elements[2] defined in trove repo[3] for building the image. [1]: https://docs.openstack.org/diskimage-builder/latest [2]: https://docs.openstack.org/diskimage-builder/latest/elements.html [3]: https://github.com/openstack/trove/tree/master/integration/scripts/files/elements Regarding the Trove Guest Agent service - i read in some Trove books previously that there are dedicated agents for each and every database type, is it the same situation in Victoria release, or is there an "universal" Guest Agent covering all the database types nowadays? Where is the code that adapts the Agent commands towards the database  instances placed inside the project? 
Trove never has dedicated agents for each and every database type, it's using the same trove-guestagent but with different configurations for different datastores. The backups - as i can see there seem to be some kind of dedicated docker-backup images involved in each database type. Could someone explain the internals of backup mechanisms inside Trove Victoria release in more details? The backup container image is created to help trove-guestagent to achieve datastore-agnostic (so we only need to maintain a universal guest image), we shift the backup and restore functionalities and needed tools to a dedicated container. The implementation can be found here[4]. [4]: https://github.com/openstack/trove/tree/master/backup --- Lingxian Kong Senior Software Engineer Catalyst Cloud www.catalystcloud.nz  -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Tue Jan 26 13:28:57 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Tue, 26 Jan 2021 18:28:57 +0500 Subject: Trove Image Create Error Message-ID: Hi, I am creating a trove image with default parameters. ./trovestack build-image ubuntu bionic true ubuntu I am having below error . Please advise. 2021-01-26 13:19:24.905 | Installing collected packages: wheel, setuptools, pip 2021-01-26 13:19:25.292 | Attempting uninstall: pip 2021-01-26 13:19:25.293 | Found existing installation: pip 9.0.1 2021-01-26 13:19:25.294 | Uninstalling pip-9.0.1: 2021-01-26 13:19:25.309 | Successfully uninstalled pip-9.0.1 2021-01-26 13:19:26.231 | Successfully installed pip-21.0 setuptools-52.0.0 wheel-0.36.2 2021-01-26 13:19:26.761 | + python2 /tmp/get-pip.py -U --force-reinstall 2021-01-26 13:19:27.367 | Traceback (most recent call last): 2021-01-26 13:19:27.367 | File "/tmp/get-pip.py", line 24226, in 2021-01-26 13:19:27.371 | main() 2021-01-26 13:19:27.371 | File "/tmp/get-pip.py", line 199, in main 2021-01-26 13:19:27.372 | bootstrap(tmpdir=tmpdir) 2021-01-26 13:19:27.372 | File "/tmp/get-pip.py", line 82, in bootstrap 2021-01-26 13:19:27.372 | from pip._internal.cli.main import main as pip_entry_point 2021-01-26 13:19:27.372 | File "/tmp/tmpxnphbN/pip.zip/pip/_internal/cli/main.py", line 60 2021-01-26 13:19:27.372 | sys.stderr.write(f"ERROR: {exc}") 2021-01-26 13:19:27.372 | ^ 2021-01-26 13:19:27.372 | SyntaxError: invalid syntax 2021-01-26 13:19:27.396 | ++ /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:run_in_target:59 : check_break after-error run_in_target bash 2021-01-26 13:19:27.409 | ++ /usr/lib/python3/dist-packages/diskimage_builder/lib/common-functions:check_break:143 : echo '' 2021-01-26 13:19:27.410 | ++ /usr/lib/python3/dist-packages/diskimage_builder/lib/common-functions:check_break:143 : egrep -e '(,|^)after-error(,|$)' -q 2021-01-26 13:19:27.428 | + /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:run_in_target:1 : trap_cleanup 2021-01-26 13:19:27.439 | + /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:trap_cleanup:36 : exitval=1 2021-01-26 13:19:27.447 | + /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:trap_cleanup:37 : cleanup 2021-01-26 13:19:27.457 | + /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:cleanup:42 : unmount_image -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Tue Jan 26 19:00:57 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 26 Jan 2021 19:00:57 +0000 Subject: Strange behaviour of OSC in keystone MFA context In-Reply-To: References: Message-ID:
On Tue, 2021-01-26 at 17:46 +0000, Taltavull Jean-Francois wrote: > Hello, > > I'm experiencing the following strange behavior of openstack CLI with os-auth-methods option (most parameters are defined in clouds.yaml): > > $ openstack token issue --os-auth-type v3multifactor --os-auth-methods password,totp >
--os-auth-methods does not appear to be a standard part of osc; in fact I can't find it in any openstack repo, but I think this is the implementation: https://opendev.org/openstack/keystoneauth/src/branch/master/keystoneauth1/loading/_plugins/identity/v3.py#L303-L340 and this is presumably where it generates the options: options.extend([ loading.Opt( 'auth_methods', required=True, help="Methods to authenticate with."), ]) If I do openstack help --os-auth-type v3multifactor it does show up with the following text: --os-auth-methods With v3multifactor: Methods to authenticate with. (Env: OS_AUTH_METHODS) That does not say much, but https://opendev.org/openstack/keystoneauth/src/branch/master/keystoneauth1/tests/unit/identity/test_identity_v3.py#L762-L800 implies it's a list. With that said, there are no tests for multifactor as far as I can see, like this one https://opendev.org/openstack/python-openstackclient/src/branch/master/openstackclient/tests/functional/common/test_args.py#L66-L79 and there also does not seem to be a release note declaring support. So while keystoneauth supports multi-factor, I'm not sure that osc actually does. I suspect that the field type is not correct and it is indeed being parsed as a string instead of a list-of-strings field. It might be fixable via keystoneauth, but it probably needs osc support and testing.
> The plugin p could not be found > > Note that "p" is the first letter of "password". It looks like the option parser handled "password,totp" as a string instead of as a list of strings. > > Version of openstack CLI is 5.4.0. > > Any idea ? > > Thanks ! > > Jean-François > >
From juliaashleykreger at gmail.com Tue Jan 26 19:40:37 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 26 Jan 2021 11:40:37 -0800 Subject: [ironic] Having a regular review jam - Lets figure out a day/time? Message-ID:
Greetings my fellow bringiners of irony! Yesterday, during our virtual midcycle meeting[0], we discussed having review jams in order to help facilitate working through some of the large volumes of patches. An example of a case where this will be necessary is the Secure RBAC work. As agreed yesterday, I've created a poll[1] to try and identify the best day of the week and time of day for us to have this get together to review patch chains, spread context of the changes, and discuss further. If you can, please reply to the poll in the next week and from there I'll go ahead and get something on a calendar. In all likelihood, I may actually send calendar invites for this, if that is acceptable for folks. Thanks everyone, and have a wonderful week!
-Julia [0]: https://etherpad.opendev.org/p/ironic-wallaby-midcycle [1]: https://doodle.com/poll/mdv6vpw6qdfzteg2 From kennelson11 at gmail.com Tue Jan 26 20:51:45 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 26 Jan 2021 12:51:45 -0800 Subject: [All][StoryBoard] Angular.js Alternatives In-Reply-To: References: Message-ID: For better or for worse, we are a pretty small team and don't need to poll since we can all attend a meeting and agree there. Calling for outside opinions was also a half hearted plea for help :) -Kendall (diablo_rojo) On Fri, Jan 22, 2021 at 1:28 AM Radosław Piliszek < radoslaw.piliszek at gmail.com> wrote: > On Thu, Jan 21, 2021 at 10:24 PM Kendall Nelson > wrote: > > > > Hello Everyone! > > > > The StoryBoard team is looking at alternatives to Angular.js since its > going end of life. After some research, we've boiled all the options down > to two possibilities: > > > > Vue.js > > > > or > > > > React.js > > Hello, Kendall! > > This is likely the toughest question in the frontend universe at the > moment. > Both solutions are very well thought out and have solid ecosystems. > Based on observed productivity both are good choices. > Personally, I have done more Vue than React. > I have added a few points in the etherpad. > Angular is not a bad choice either but it involves much stronger > bonding with the final product. > The others leave more freedom of choice. > > As for the verdict, I am afraid the best solution would be to run > voting for parties interested in Storyboard development and just stick > to the poll winner. > > -yoctozepto > > > I am diving more deeply into researching those two options this week, > but any opinions or feedback on your experiences with either of them would > be helpful! > > > > Here is the etherpad with our research so far[3]. > > > > Feel free to add opinions there or in response to this thread! > > > > -Kendall Nelson (diablo_rojo) & The StoryBoard Team > > > > [1] https://vuejs.org/ > > [2] https://reactjs.org/ > > [3] https://etherpad.opendev.org/p/replace-angularjs-storyboard-research > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Tue Jan 26 22:03:07 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 26 Jan 2021 14:03:07 -0800 Subject: Trove Image Create Error In-Reply-To: References: Message-ID: On Tue, Jan 26, 2021, at 5:28 AM, Ammad Syed wrote: > > Hi, > > I am creating a trove image with default parameters. > > > ./trovestack build-image ubuntu bionic true ubuntu > > > I am having below error . Please advise. 
> > 2021-01-26 13:19:24.905 | Installing collected packages: wheel, > setuptools, pip > 2021-01-26 13:19:25.292 | Attempting uninstall: pip > 2021-01-26 13:19:25.293 | Found existing installation: pip 9.0.1 > 2021-01-26 13:19:25.294 | Uninstalling pip-9.0.1: > 2021-01-26 13:19:25.309 | Successfully uninstalled pip-9.0.1 > 2021-01-26 13:19:26.231 | Successfully installed pip-21.0 > setuptools-52.0.0 wheel-0.36.2 > 2021-01-26 13:19:26.761 | + python2 /tmp/get-pip.py -U --force-reinstall > 2021-01-26 13:19:27.367 | Traceback (most recent call last): > 2021-01-26 13:19:27.367 | File "/tmp/get-pip.py", line 24226, in > > 2021-01-26 13:19:27.371 | main() > 2021-01-26 13:19:27.371 | File "/tmp/get-pip.py", line 199, in main > 2021-01-26 13:19:27.372 | bootstrap(tmpdir=tmpdir) > 2021-01-26 13:19:27.372 | File "/tmp/get-pip.py", line 82, in > bootstrap > 2021-01-26 13:19:27.372 | from pip._internal.cli.main import main > as pip_entry_point > 2021-01-26 13:19:27.372 | File > "/tmp/tmpxnphbN/pip.zip/pip/_internal/cli/main.py", line 60 > 2021-01-26 13:19:27.372 | sys.stderr.write(f"ERROR: {exc}") > 2021-01-26 13:19:27.372 | ^ > 2021-01-26 13:19:27.372 | SyntaxError: invalid syntax > 2021-01-26 13:19:27.396 | ++ > /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:run_in_target:59 : check_break after-error run_in_target bash > 2021-01-26 13:19:27.409 | ++ > /usr/lib/python3/dist-packages/diskimage_builder/lib/common-functions:check_break:143 : echo '' > 2021-01-26 13:19:27.410 | ++ > /usr/lib/python3/dist-packages/diskimage_builder/lib/common-functions:check_break:143 : egrep -e '(,|^)after-error(,|$)' -q > 2021-01-26 13:19:27.428 | + > /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:run_in_target:1 : trap_cleanup > 2021-01-26 13:19:27.439 | + > /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:trap_cleanup:36 : exitval=1 > 2021-01-26 13:19:27.447 | + > /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:trap_cleanup:37 : cleanup > 2021-01-26 13:19:27.457 | + > /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:cleanup:42 : unmount_image > > -- > Regards, > > > Syed Ammad Ali This issue is that get-pip.py has been updated to be python3 only. There is work going on in diskimage-builder to fix this here: https://review.opendev.org/c/openstack/diskimage-builder/+/772254. If trove is keeping up to date with diskimage-builder this will get fixed in the next release. Clark From anlin.kong at gmail.com Tue Jan 26 22:22:34 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 27 Jan 2021 11:22:34 +1300 Subject: Trove Image Create Error In-Reply-To: References: Message-ID: Hi, I can reproduce the issue. From the log, apparently it's because pip 21 dropped python 2 support recently. I am working on this hopefully could fix ASAP. --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) On Wed, Jan 27, 2021 at 2:40 AM Ammad Syed wrote: > Hi, > > I am creating a trove image with default parameters. > > > ./trovestack build-image ubuntu bionic true ubuntu > > > I am having below error . Please advise. 
> > 2021-01-26 13:19:24.905 | Installing collected packages: wheel, > setuptools, pip > 2021-01-26 13:19:25.292 | Attempting uninstall: pip > 2021-01-26 13:19:25.293 | Found existing installation: pip 9.0.1 > 2021-01-26 13:19:25.294 | Uninstalling pip-9.0.1: > 2021-01-26 13:19:25.309 | Successfully uninstalled pip-9.0.1 > 2021-01-26 13:19:26.231 | Successfully installed pip-21.0 > setuptools-52.0.0 wheel-0.36.2 > 2021-01-26 13:19:26.761 | + python2 /tmp/get-pip.py -U --force-reinstall > 2021-01-26 13:19:27.367 | Traceback (most recent call last): > 2021-01-26 13:19:27.367 | File "/tmp/get-pip.py", line 24226, in > 2021-01-26 13:19:27.371 | main() > 2021-01-26 13:19:27.371 | File "/tmp/get-pip.py", line 199, in main > 2021-01-26 13:19:27.372 | bootstrap(tmpdir=tmpdir) > 2021-01-26 13:19:27.372 | File "/tmp/get-pip.py", line 82, in bootstrap > 2021-01-26 13:19:27.372 | from pip._internal.cli.main import main as > pip_entry_point > 2021-01-26 13:19:27.372 | File > "/tmp/tmpxnphbN/pip.zip/pip/_internal/cli/main.py", line 60 > 2021-01-26 13:19:27.372 | sys.stderr.write(f"ERROR: {exc}") > 2021-01-26 13:19:27.372 | ^ > 2021-01-26 13:19:27.372 | SyntaxError: invalid syntax > 2021-01-26 13:19:27.396 | ++ > /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:run_in_target:59 > : check_break after-error run_in_target bash > 2021-01-26 13:19:27.409 | ++ > /usr/lib/python3/dist-packages/diskimage_builder/lib/common-functions:check_break:143 > : echo '' > 2021-01-26 13:19:27.410 | ++ > /usr/lib/python3/dist-packages/diskimage_builder/lib/common-functions:check_break:143 > : egrep -e '(,|^)after-error(,|$)' -q > 2021-01-26 13:19:27.428 | + > /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:run_in_target:1 > : trap_cleanup > 2021-01-26 13:19:27.439 | + > /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:trap_cleanup:36 > : exitval=1 > 2021-01-26 13:19:27.447 | + > /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:trap_cleanup:37 > : cleanup > 2021-01-26 13:19:27.457 | + > /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:cleanup:42 > : unmount_image > > -- > Regards, > > > Syed Ammad Ali > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Jan 27 02:15:18 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 27 Jan 2021 02:15:18 +0000 Subject: [tact-sig][dev][infra][qa] OpenStack TaCT SIG 2021 Request for Help Message-ID: <20210127021518.xhoig5mkpjmr7xbe@yuggoth.org> The tl;dr on this is that we need more volunteers to review CI job configuration and project creation requests for OpenStack, and if your proposed config changes are waiting longer to get reviewed than you like, consider that a not so gentle reminder that you bear some of the responsibility. Please read on and reach out if you'd like to help. Our community infrastructure, these days known as the OpenDev Collaboratory, is first and foremost a group effort for projects and their contributors to join forces in collectively maintaining the services and automation on which they rely. This has always been a bit of a "Tragedy of the Commons" situation, where it's all too easy for users of the commons to feel like it's someone else's job to maintain things. Unfortunately, when everyone feels this way, everyone also gets to suffer through the result. 
What we need is not just for people to avoid throwing trash on the ground when they visit the park, but for them to bring a trash bag (or bin liner, depending on your continent) along and collect a bit of rubbish they see along the way, even if it isn't theirs. The people who take some time out of their day to pick up after others in OpenStack are colloquially referred to as the Testing and Collaboration Tools Special Interest Group (TaCT SIG). Folks come and go over time, and lately we've had more going than coming, so could really use some new volunteers. Consider this a request that people who have (or are willing to invest in gaining) an understanding of things like our job configuration and project creation process make themselves known and pitch in on reviews. We're happy to fast-track approval rights for anyone who shows they can reliably evaluate these changes. Now down to the nitty-gritty details... if you're inclined to help, first and foremost let me or other TaCT SIG reviewers know so we can make sure to prioritize your input. Then take a look at open reviews in particular for the openstack/project-config and openstack/openstack-zuul-jobs repositories. Beyond that, checking through changes for opendev/base-jobs and zuul/zuul-jobs would also be great as they tend to get intertwined with OpenStack's job changes anyway. Not skipping changes which are failing CI results is really helpful too, since change authors often don't understand what the failure result is trying to tell them to fix. Further, answering job-related questions in IRC or on mailing lists helps spread out that load for the rest of us as well. The quickest way to get started is to join the #openstack-infra channel on the Freenode IRC network, and the #opendev and #opendev-meeting channels too if you like. You may want to set up a highlight in your IRC client for the string "config-core" which is what we've standardized on to try and bring things to the attention of the configuration core reviewers. When there are changes you've looked over and you think are probably ready to merge, let us know (#openstack-infra for OpenStack-related changes, non-OpenStack-specific things in #opendev). Mention config-core when you do so, to get our attention. If you have questions, I'm happy to answer them. You can reply on-list or in private to this message, or ask in #openstack-infra on IRC (I'm "fungi" in there if you need me). -- Jeremy Stanley, your humble OpenStack TaCT SIG chair -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From iwienand at redhat.com Wed Jan 27 04:52:48 2021 From: iwienand at redhat.com (Ian Wienand) Date: Wed, 27 Jan 2021 15:52:48 +1100 Subject: [all] CI test result table in the new gerrit review UI In-Reply-To: <0e8a5b91addb09d16df0872dd671ab4fc7cd81cd.camel@redhat.com> References: <20210119055830.GB3137911@fedora19.localdomain> <0e8a5b91addb09d16df0872dd671ab4fc7cd81cd.camel@redhat.com> Message-ID: <20210127045248.GA3522390@fedora19.localdomain> On Thu, Jan 21, 2021 at 10:58:05AM +0000, Stephen Finucane wrote: > Thanks for this. One issues I've noted is that it doesn't update as I click > through a chain of patches. The results from patch N will be shown for any other > patch I navigate to. I don't know if this is an issue with my browser, though it > sounds like something is being cached and that cache should be invalidated? Thanks, you're right. 
I think it's not observing when the "change" objects ... change. I'll look into it. -i From syedammad83 at gmail.com Wed Jan 27 06:41:14 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Wed, 27 Jan 2021 11:41:14 +0500 Subject: Trove Image Create Error In-Reply-To: References: Message-ID: Hi Lingxian, I have done changes with your provided workaround. https://review.opendev.org/plugins/gitiles/openstack/trove/+/1e04b269ca75067e28ae3e6ecb60ac2d11ab5b3b%5E%21/#F1 It worked and image creation became successful. On Wed, Jan 27, 2021 at 3:22 AM Lingxian Kong wrote: > Hi, > > I can reproduce the issue. From the log, apparently it's because pip 21 > dropped python 2 support recently. > > I am working on this hopefully could fix ASAP. > > --- > Lingxian Kong > Senior Cloud Engineer (Catalyst Cloud) > Trove PTL (OpenStack) > OpenStack Cloud Provider Co-Lead (Kubernetes) > > > On Wed, Jan 27, 2021 at 2:40 AM Ammad Syed wrote: > >> Hi, >> >> I am creating a trove image with default parameters. >> >> >> ./trovestack build-image ubuntu bionic true ubuntu >> >> >> I am having below error . Please advise. >> >> 2021-01-26 13:19:24.905 | Installing collected packages: wheel, >> setuptools, pip >> 2021-01-26 13:19:25.292 | Attempting uninstall: pip >> 2021-01-26 13:19:25.293 | Found existing installation: pip 9.0.1 >> 2021-01-26 13:19:25.294 | Uninstalling pip-9.0.1: >> 2021-01-26 13:19:25.309 | Successfully uninstalled pip-9.0.1 >> 2021-01-26 13:19:26.231 | Successfully installed pip-21.0 >> setuptools-52.0.0 wheel-0.36.2 >> 2021-01-26 13:19:26.761 | + python2 /tmp/get-pip.py -U --force-reinstall >> 2021-01-26 13:19:27.367 | Traceback (most recent call last): >> 2021-01-26 13:19:27.367 | File "/tmp/get-pip.py", line 24226, in >> >> 2021-01-26 13:19:27.371 | main() >> 2021-01-26 13:19:27.371 | File "/tmp/get-pip.py", line 199, in main >> 2021-01-26 13:19:27.372 | bootstrap(tmpdir=tmpdir) >> 2021-01-26 13:19:27.372 | File "/tmp/get-pip.py", line 82, in bootstrap >> 2021-01-26 13:19:27.372 | from pip._internal.cli.main import main as >> pip_entry_point >> 2021-01-26 13:19:27.372 | File >> "/tmp/tmpxnphbN/pip.zip/pip/_internal/cli/main.py", line 60 >> 2021-01-26 13:19:27.372 | sys.stderr.write(f"ERROR: {exc}") >> 2021-01-26 13:19:27.372 | ^ >> 2021-01-26 13:19:27.372 | SyntaxError: invalid syntax >> 2021-01-26 13:19:27.396 | ++ >> /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:run_in_target:59 >> : check_break after-error run_in_target bash >> 2021-01-26 13:19:27.409 | ++ >> /usr/lib/python3/dist-packages/diskimage_builder/lib/common-functions:check_break:143 >> : echo '' >> 2021-01-26 13:19:27.410 | ++ >> /usr/lib/python3/dist-packages/diskimage_builder/lib/common-functions:check_break:143 >> : egrep -e '(,|^)after-error(,|$)' -q >> 2021-01-26 13:19:27.428 | + >> /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:run_in_target:1 >> : trap_cleanup >> 2021-01-26 13:19:27.439 | + >> /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:trap_cleanup:36 >> : exitval=1 >> 2021-01-26 13:19:27.447 | + >> /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:trap_cleanup:37 >> : cleanup >> 2021-01-26 13:19:27.457 | + >> /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:cleanup:42 >> : unmount_image >> >> -- >> Regards, >> >> >> Syed Ammad Ali >> > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark at stackhpc.com Wed Jan 27 09:09:36 2021 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 27 Jan 2021 09:09:36 +0000 Subject: [EXTERNAL] Re: [kolla][keystone] Another keycloak issue In-Reply-To: <244e2f00644c4960a82b533dc0a23111@ncwmexgp009.CORP.CHARTERCOM.com> References: <244e2f00644c4960a82b533dc0a23111@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: On Tue, 26 Jan 2021 at 17:02, Braden, Albert wrote: > > Another problem I'm encountering with keycloak is that the keycloak users can't login on the command line. I created user test2 via Keycloak and test3 via CLI. They have identical roles on the admin domain: > > (openstack) [root at chrnc-area51-build-01 ~]# os role assignment list --user test2 > +----------------------------------+------------------------------------------------------------------+-------+----------------------------------+--------+--------+-----------+ > | Role | User | Group | Project | Domain | System | Inherited | > +----------------------------------+------------------------------------------------------------------+-------+----------------------------------+--------+--------+-----------+ > | 406a5f1cd92d45b5b3d54979235e896c | f4287b6082b8f36048d052eaa3d35facb94e5eff598d59d2aee68252ddb13339 | | 15c32af517334e28a9427809a9fc4805 | | | False | > +----------------------------------+------------------------------------------------------------------+-------+----------------------------------+--------+--------+-----------+ > (openstack) [root at chrnc-area51-build-01 ~]# os role assignment list --user test3 > +----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+ > | Role | User | Group | Project | Domain | System | Inherited | > +----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+ > | 406a5f1cd92d45b5b3d54979235e896c | 06a5f28d061f4d42b3bf64df378338fd | | 15c32af517334e28a9427809a9fc4805 | | | False | > +----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+ > > I made identical env-setting "rc" files with only the username changed. Test3 logs in successfully but test2 fails: > > (openstack) [root at chrnc-area51-build-01 ~]# . ./test2-openrc.sh > (openstack) [root at chrnc-area51-build-01 ~]# openstack server list > The request you have made requires authentication. (HTTP 401) (Request-ID: req-ad7ee855-df98-434a-9afc-89f64a7addd1) > (openstack) [root at chrnc-area51-build-01 ~]# . ./test3-openrc.sh > (openstack) [root at chrnc-area51-build-01 ~]# openstack server list > > (openstack) [root at chrnc-area51-build-01 ~]# > > The only obvious difference is the longer UID for the Keycloak users. Do Keycloak-created users require something different in the env? Do I need to change something in Keycloak, to make the Keycloak users work the same as CLI-created users? Where can I look in the database to find the differences between these two users? > I'm no expert on federation, but I understand that you need to use a slightly different method with the CLI. 
This page has some info: https://docs.openstack.org/python-openstackclient/latest/cli/man/openstack.html From stephenfin at redhat.com Wed Jan 27 09:49:44 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 27 Jan 2021 09:49:44 +0000 Subject: [nova][placement] adding nova-core to placement-core in gerrit In-Reply-To: References: Message-ID: On Tue, 2021-01-26 at 17:14 +0100, Balazs Gibizer wrote: > Hi, > > Placement got back under nova governance but so far we haven't > consolidated the core teams yet. Stephen pointed out to me that given > the ongoing RBAC works it would be beneficial if more nova cores, with > API and RBAC experience, could approve such patches. So I'm proposing > to add nova-core group to the placement-core group in gerrit. This > means Ghanshyam, John, Lee, and Melanie would get core rights in the > placement related repositories. > > @placement-core, @nova-core members: Please let me know if you have any > objection to such change until end of this week. I brought it up and obviously think it's a sensible idea, so it's an easy +1 from me. Stephen > cheers, > gibi From stephenfin at redhat.com Wed Jan 27 09:53:01 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 27 Jan 2021 09:53:01 +0000 Subject: [infra][release] delete old EOL'd stable branches In-Reply-To: <00a4690f-0859-e8d5-78ad-54e1d6be05fa@est.tech> References: <00a4690f-0859-e8d5-78ad-54e1d6be05fa@est.tech> Message-ID: On Tue, 2021-01-26 at 17:52 +0100, Előd Illés wrote: Hi Infra Team! In October there was a discussion at Release Team meeting [1] about what can we do with the old, already EOL'd but not yet deleted branches (this is possible since with the Extended Maintenance process the general/"mass" EOL'ing was stopped and tagging a project branch EOL does not delete the branch anymore). Not an answer but rather a question for my own understanding: what is the advantage of deleting branches? I understand that these things would no longer maintained and the gates will slowly break, but they're still relatively useful as a reference to explore project history and it's not like branches are expensive in git. Stephen Related to this, I would like to ask two things: 1. I've used the list_eol_stale_branches.sh [2] script to get the list of such not-yet-deleted branches for Ocata [3]. They are all tagged with 'ocata-eol', but stable/ocata branch still exists for them. Could you please delete these? [3] 2. On the Release Team meeting [1] we were hinted that with the newer version of gerrit (that was installed at the end of November) some automation is possible through gerrit API in the future. Can I get some help about where should I start with the automation? Which repository should I look, where can the deletion being triggered ("similarly like branch creation")? 
Thanks in advance, Előd [1] http://eavesdrop.openstack.org/meetings/releaseteam/2020/releaseteam.2020-10-22-16.00.log.html#l-40 [2] https://opendev.org/openstack/releases/src/commit/eb381492da3f7c826c35b9f147fd9a1ed55ae797/tools/list_eol_stale_branches.sh [3] http://paste.openstack.org/show/801992/ From stephenfin at redhat.com Wed Jan 27 10:00:14 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 27 Jan 2021 10:00:14 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <20210121145011.23pvymfzukevkqsl@yuggoth.org> References: <3c223c99-929e-ab6d-2268-10d361f76349@debian.org> <17716bc8c42.12286180c5673.4264524174161781845@ghanshyammann.com> <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> <17720e3e0f6.d479c210145054.613445843370759402@ghanshyammann.com> <0d4bb1d274c5456a835e5c43c890b80867e1f29b.camel@redhat.com> <20210121145011.23pvymfzukevkqsl@yuggoth.org> Message-ID: <0b0224cba026c08ccd3986a5da1672bf44863ad2.camel@redhat.com> On Thu, 2021-01-21 at 14:50 +0000, Jeremy Stanley wrote: > On 2021-01-21 09:30:19 +0000 (+0000), Stephen Finucane wrote: > [...] > > What we have doesn't work, and direct dependencies are the only > > things we can truly control. In the scenario you're suggesting, > > not only do we need to track dependencies, but we also need to > > track the dependencies of dependencies, and the dependencies of > > the dependencies of the dependencies, and the dependencies of the > > dependencies of the dependencies of the dependencies etc. etc. > > down the rabbit hole. For each of these indirect dependencies, of > > which there may be many, we need to figure out what the minimum > > version is for each of these indirect dependencies is manually, > > because as has been noted many times there is no standardized > > machinery in place in pip etc. to find (and test) the minimum > > dependency versions supported by a package. Put another way, if we > > depend on package foo, which depends on package bar, which depends > > on package baz, we can state our own informed minimum version for > > foo, but we will need to inspect foo to find a minimum version of > > bar that is suitable, and we will need to inspect baz to find a > > minimum version of baz that is suitable. An impossible ask. > [...] > > Where this begins to fall apart, as I mentioned earlier, is that the > larger your transitive dependency set, the more likely it is that a > direct dependency is *also* an indirect dependency (maybe many > layers down). If a dependency of your dependency updates to a > version which insists on a newer version of some other direct > dependency of yours than what you've set in lower-constraints.txt, > then your jobs are going to break and need lower bounds adjustments > or additional indirect dependencies added to the > lower-constraints.txt to roll them back to versions which worked > with the others you've set. Unlike upper-constraints.txt where it's > assumed that a complete transitive set of dependencies is covered, > this will mean additional churn in your stable branches over time. Ah, I didn't grasp this particular point when you initially raised it. I've spent some time playing around with pip (there's a fun code base) to no avail, so this (the reduced dependency set) is still the best idea I've got, flaws and all. With that said, our 'upper-constraints.txt' should still capture all dependencies, including those indirect ones, no? 
As such, none of those should be increasing on stable branches, which means the set of transitive dependencies should remain fixed once the branch is cut? > Or is the idea that we would only every do lower bounds checking on > the release under development, and then remove those jobs when we > branch? This is also an option if it proves to be particularly painful. It does feel like this would indicate a failure of our upper-constraints to cap a dependency, even if its indirect, but I realize there are limits to what we can achieve here. Stephen From stephenfin at redhat.com Wed Jan 27 10:17:38 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 27 Jan 2021 10:17:38 +0000 Subject: [All] Is it time to move on from StoryBoard? (Was: [All][StoryBoard] Angular.js Alternatives) In-Reply-To: References: Message-ID: <35c47f497e8f233905b5fec033174a64fc11fdc8.camel@redhat.com> On Tue, 2021-01-26 at 12:51 -0800, Kendall Nelson wrote: > For better or for worse, we are a pretty small team and don't need to poll > since we can all attend a meeting and agree there.  > > Calling for outside opinions was also a half hearted plea for help :)  Is this an indication that we finally have to settle on another bug tracker? This has been discussed previously but I can't find any conclusions from those past discussions, hence I'm asking again. I do not intend to dismiss the hard work of those people working on StoryBoard in the slightest, but it does seem like StoryBoard as a project has never really taken off outside OpenStack and is having a hard time surviving, let alone growing, likely because of that along with the widespread use of forge-style tools (GitHub, GitLab) and more comprehensive project management tools (sigh, JIRA). I recall fungi (?) raising the idea of enabling the more forge'y features, including the issue tracker, of Gitea in the past, but there are also separate tools like (heaven forbid) Bugzilla, Trac, Mantis, etc. that we could use. Heck, Launchpad is still around and is still open source (though I recall there being other issues with that?). We've already made a tough decision recently, with the sunsetting of ask.o.o, and with the impending deprecation of Angular.js, perhaps this is as good a time as any to do the same with StoryBoard? /me goes back to logging in for the zillionth time while trying to triage OSC bugs :) Cheers, Stephen > -Kendall (diablo_rojo) > > On Fri, Jan 22, 2021 at 1:28 AM Radosław Piliszek > wrote: > > On Thu, Jan 21, 2021 at 10:24 PM Kendall Nelson > > wrote: > > > > > > Hello Everyone! > > > > > > The StoryBoard team is looking at alternatives to Angular.js since its > > going end of life. After some research, we've boiled all the options down to > > two possibilities: > > > > > > Vue.js > > > > > > or > > > > > > React.js > > > > Hello, Kendall! > > > > This is likely the toughest question in the frontend universe at the moment. > > Both solutions are very well thought out and have solid ecosystems. > > Based on observed productivity both are good choices. > > Personally, I have done more Vue than React. > > I have added a few points in the etherpad. > > Angular is not a bad choice either but it involves much stronger > > bonding with the final product. > > The others leave more freedom of choice. > > > > As for the verdict, I am afraid the best solution would be to run > > voting for parties interested in Storyboard development and just stick > > to the poll winner. 
> > > > -yoctozepto > > > > > I am diving more deeply into researching those two options this week, but > > any opinions or feedback on your experiences with either of them would be > > helpful! > > > > > > Here is the etherpad with our research so far[3]. > > > > > > Feel free to add opinions there or in response to this thread! > > > > > > -Kendall Nelson (diablo_rojo) & The StoryBoard Team > > > > > > [1] https://vuejs.org/ > > > [2] https://reactjs.org/ > > > [3] https://etherpad.opendev.org/p/replace-angularjs-storyboard-research From hberaud at redhat.com Wed Jan 27 10:20:33 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 27 Jan 2021 11:20:33 +0100 Subject: [infra][release] delete old EOL'd stable branches In-Reply-To: References: <00a4690f-0859-e8d5-78ad-54e1d6be05fa@est.tech> Message-ID: Le mer. 27 janv. 2021 à 10:55, Stephen Finucane a écrit : > On Tue, 2021-01-26 at 17:52 +0100, Előd Illés wrote: > Hi Infra Team! > > In October there was a discussion at Release Team meeting [1] about what > can we do with the old, already EOL'd but not yet deleted branches (this > is possible since with the Extended Maintenance process the > general/"mass" EOL'ing was stopped and tagging a project branch EOL does > not delete the branch anymore). > > > Not an answer but rather a question for my own understanding: what is the > advantage of deleting branches? I understand that these things would no > longer > maintained and the gates will slowly break, AFAIK this is mostly to avoid issues with gates/zuul. > but they're still relatively useful > as a reference to explore project history and it's not like branches are > expensive in git. > Tags can be used to dig in the related history. > > Stephen > > Related to this, I would like to ask two things: > > 1. I've used the list_eol_stale_branches.sh [2] script to get the list > of such not-yet-deleted branches for Ocata [3]. They are all tagged with > 'ocata-eol', but stable/ocata branch still exists for them. Could you > please delete these? [3] > > 2. On the Release Team meeting [1] we were hinted that with the newer > version of gerrit (that was installed at the end of November) some > automation is possible through gerrit API in the future. Can I get some > help about where should I start with the automation? Which repository > should I look, where can the deletion being triggered ("similarly like > branch creation")? 
> > Thanks in advance, > > Előd > > [1] > > http://eavesdrop.openstack.org/meetings/releaseteam/2020/releaseteam.2020-10-22-16.00.log.html#l-40 > [2] > > https://opendev.org/openstack/releases/src/commit/eb381492da3f7c826c35b9f147fd9a1ed55ae797/tools/list_eol_stale_branches.sh > [3] http://paste.openstack.org/show/801992/ > > > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Jan 27 10:23:31 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 27 Jan 2021 11:23:31 +0100 Subject: [infra][release] delete old EOL'd stable branches In-Reply-To: <20210126173225.sr4gnptqsewwu2uh@yuggoth.org> References: <00a4690f-0859-e8d5-78ad-54e1d6be05fa@est.tech> <20210126173225.sr4gnptqsewwu2uh@yuggoth.org> Message-ID: Le mar. 26 janv. 2021 à 18:35, Jeremy Stanley a écrit : > On 2021-01-26 17:52:07 +0100 (+0100), Előd Illés wrote: > [...] > > 1. I've used the list_eol_stale_branches.sh [2] script to get the list of > > such not-yet-deleted branches for Ocata [3]. They are all tagged with > > 'ocata-eol', but stable/ocata branch still exists for them. Could you > please > > delete these? [3] > > I'm happy to, have you made sure any open reviews for those branches > are abandoned first? Gerrit won't allow deletion of a branch with > open reviews. > I think we need a first round of inspection on these stale branches to see if opened patches exist and then if needed start discussion with teams to ask them to drop the patches those who have been found. I'll try to add this feature to check for opened patches within `list_eol_stale_branches.sh`. > > > 2. On the Release Team meeting [1] we were hinted that with the newer > > version of gerrit (that was installed at the end of November) some > > automation is possible through gerrit API in the future. Can I get some > help > > about where should I start with the automation? Which repository should I > > look, where can the deletion being triggered ("similarly like branch > > creation")? > [...] > > The Gerrit REST API method for deleting branches is documented here: > > > https://review.opendev.org/Documentation/rest-api-projects.html#delete-branch > > I'm not immediately sure where branch creation happens in the forest > of our release automation, but I would expect deletion could be > implemented similarly. Hopefully someone more intimately familiar > with those jobs can chime in. 
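For reference, the call itself is a single authenticated DELETE; a minimal, untested sketch (the project and branch below are only examples, and the branch name has to be URL-encoded):

  curl -X DELETE -u <user>:<http-password> \
      "https://review.opendev.org/a/projects/openstack%2Fsome-project/branches/stable%2Focata"

A 204 response means the branch is gone; as noted above, Gerrit will refuse the request while open reviews still exist on that branch.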
> > The access control we'll need to grant to automation so that it can > call that is documented here: > > > https://review.opendev.org/Documentation/access-control.html#category_delete > > It'll need to be added manually as a permission for the Release > Managers group in our All-Projects global ACL which individual > projects inherit, and this documentation updated accordingly: > > > https://opendev.org/opendev/system-config/src/branch/master/doc/source/gerrit.rst > > Happy to answer other questions as they arise. > -- > Jeremy Stanley > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Wed Jan 27 10:50:26 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Wed, 27 Jan 2021 23:50:26 +1300 Subject: Trove Image Create Error In-Reply-To: References: Message-ID: Thanks for verification, the patch has been merged BTW. --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) On Wed, Jan 27, 2021 at 7:43 PM Ammad Syed wrote: > Hi Lingxian, > > I have done changes with your provided workaround. > > > https://review.opendev.org/plugins/gitiles/openstack/trove/+/1e04b269ca75067e28ae3e6ecb60ac2d11ab5b3b%5E%21/#F1 > > > It worked and image creation became successful. > > On Wed, Jan 27, 2021 at 3:22 AM Lingxian Kong > wrote: > >> Hi, >> >> I can reproduce the issue. From the log, apparently it's because pip 21 >> dropped python 2 support recently. >> >> I am working on this hopefully could fix ASAP. >> >> --- >> Lingxian Kong >> Senior Cloud Engineer (Catalyst Cloud) >> Trove PTL (OpenStack) >> OpenStack Cloud Provider Co-Lead (Kubernetes) >> >> >> On Wed, Jan 27, 2021 at 2:40 AM Ammad Syed wrote: >> >>> Hi, >>> >>> I am creating a trove image with default parameters. >>> >>> >>> ./trovestack build-image ubuntu bionic true ubuntu >>> >>> >>> I am having below error . Please advise. 
>>> >>> 2021-01-26 13:19:24.905 | Installing collected packages: wheel, >>> setuptools, pip >>> 2021-01-26 13:19:25.292 | Attempting uninstall: pip >>> 2021-01-26 13:19:25.293 | Found existing installation: pip 9.0.1 >>> 2021-01-26 13:19:25.294 | Uninstalling pip-9.0.1: >>> 2021-01-26 13:19:25.309 | Successfully uninstalled pip-9.0.1 >>> 2021-01-26 13:19:26.231 | Successfully installed pip-21.0 >>> setuptools-52.0.0 wheel-0.36.2 >>> 2021-01-26 13:19:26.761 | + python2 /tmp/get-pip.py -U --force-reinstall >>> 2021-01-26 13:19:27.367 | Traceback (most recent call last): >>> 2021-01-26 13:19:27.367 | File "/tmp/get-pip.py", line 24226, in >>> >>> 2021-01-26 13:19:27.371 | main() >>> 2021-01-26 13:19:27.371 | File "/tmp/get-pip.py", line 199, in main >>> 2021-01-26 13:19:27.372 | bootstrap(tmpdir=tmpdir) >>> 2021-01-26 13:19:27.372 | File "/tmp/get-pip.py", line 82, in bootstrap >>> 2021-01-26 13:19:27.372 | from pip._internal.cli.main import main as >>> pip_entry_point >>> 2021-01-26 13:19:27.372 | File >>> "/tmp/tmpxnphbN/pip.zip/pip/_internal/cli/main.py", line 60 >>> 2021-01-26 13:19:27.372 | sys.stderr.write(f"ERROR: {exc}") >>> 2021-01-26 13:19:27.372 | ^ >>> 2021-01-26 13:19:27.372 | SyntaxError: invalid syntax >>> 2021-01-26 13:19:27.396 | ++ >>> /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:run_in_target:59 >>> : check_break after-error run_in_target bash >>> 2021-01-26 13:19:27.409 | ++ >>> /usr/lib/python3/dist-packages/diskimage_builder/lib/common-functions:check_break:143 >>> : echo '' >>> 2021-01-26 13:19:27.410 | ++ >>> /usr/lib/python3/dist-packages/diskimage_builder/lib/common-functions:check_break:143 >>> : egrep -e '(,|^)after-error(,|$)' -q >>> 2021-01-26 13:19:27.428 | + >>> /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:run_in_target:1 >>> : trap_cleanup >>> 2021-01-26 13:19:27.439 | + >>> /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:trap_cleanup:36 >>> : exitval=1 >>> 2021-01-26 13:19:27.447 | + >>> /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:trap_cleanup:37 >>> : cleanup >>> 2021-01-26 13:19:27.457 | + >>> /usr/lib/python3/dist-packages/diskimage_builder/lib/img-functions:cleanup:42 >>> : unmount_image >>> >>> -- >>> Regards, >>> >>> >>> Syed Ammad Ali >>> >> > > -- > Regards, > > > Syed Ammad Ali > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Wed Jan 27 10:51:42 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Wed, 27 Jan 2021 11:51:42 +0100 Subject: [infra][release] delete old EOL'd stable branches In-Reply-To: References: <00a4690f-0859-e8d5-78ad-54e1d6be05fa@est.tech> <20210126173225.sr4gnptqsewwu2uh@yuggoth.org> Message-ID: <53411ced-c9e0-25c7-4b2d-f8da847f3b9d@est.tech> Thanks Jeremy and Hervé! @Jeremy, @Hervé: I've checked and found two open patches, out of which I was able to abandon one, and asked the owner on the other patch to abandon it. So the listed branches can be deleted *except* one: openstack/os-collect-config So in my opinion there's no need for further coordination with the teams, as these branches are tagged ocata-eol already. And thanks Hervé for the script, it helped a lot so far, already :) + thanks Jeremy for the pointers! Előd On 2021. 01. 27. 11:23, Herve Beraud wrote: > > > Le mar. 26 janv. 2021 à 18:35, Jeremy Stanley > a écrit : > > On 2021-01-26 17:52:07 +0100 (+0100), Előd Illés wrote: > [...] > > 1. 
I've used the list_eol_stale_branches.sh [2] script to get > the list of > > such not-yet-deleted branches for Ocata [3]. They are all tagged > with > > 'ocata-eol', but stable/ocata branch still exists for them. > Could you please > > delete these? [3] > > I'm happy to, have you made sure any open reviews for those branches > are abandoned first? Gerrit won't allow deletion of a branch with > open reviews. > > > I think we need a first round of inspection on these stale branches to > see if opened patches exist and then if needed start discussion with > teams to ask them to drop the patches those who have been found. > > I'll try to add this feature to check for opened patches within > `list_eol_stale_branches.sh`. > > > > 2. On the Release Team meeting [1] we were hinted that with the > newer > > version of gerrit (that was installed at the end of November) some > > automation is possible through gerrit API in the future. Can I > get some help > > about where should I start with the automation? Which repository > should I > > look, where can the deletion being triggered ("similarly like branch > > creation")? > [...] > > The Gerrit REST API method for deleting branches is documented here: > > https://review.opendev.org/Documentation/rest-api-projects.html#delete-branch > > I'm not immediately sure where branch creation happens in the forest > of our release automation, but I would expect deletion could be > implemented similarly. Hopefully someone more intimately familiar > with those jobs can chime in. > > The access control we'll need to grant to automation so that it can > call that is documented here: > > https://review.opendev.org/Documentation/access-control.html#category_delete > > It'll need to be added manually as a permission for the Release > Managers group in our All-Projects global ACL which individual > projects inherit, and this documentation updated accordingly: > > https://opendev.org/opendev/system-config/src/branch/master/doc/source/gerrit.rst > > Happy to answer other questions as they arise. > -- > Jeremy Stanley > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Jan 27 10:52:02 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 27 Jan 2021 11:52:02 +0100 Subject: [infra][release] delete old EOL'd stable branches In-Reply-To: References: <00a4690f-0859-e8d5-78ad-54e1d6be05fa@est.tech> <20210126173225.sr4gnptqsewwu2uh@yuggoth.org> Message-ID: Le mer. 27 janv. 2021 à 11:23, Herve Beraud a écrit : > > > Le mar. 26 janv. 
2021 à 18:35, Jeremy Stanley a > écrit : > >> On 2021-01-26 17:52:07 +0100 (+0100), Előd Illés wrote: >> [...] >> > 1. I've used the list_eol_stale_branches.sh [2] script to get the list >> of >> > such not-yet-deleted branches for Ocata [3]. They are all tagged with >> > 'ocata-eol', but stable/ocata branch still exists for them. Could you >> please >> > delete these? [3] >> >> I'm happy to, have you made sure any open reviews for those branches >> are abandoned first? Gerrit won't allow deletion of a branch with >> open reviews. >> > > I think we need a first round of inspection on these stale branches to see > if opened patches exist and then if needed start discussion with teams to > ask them to drop the patches those who have been found. > > I'll try to add this feature to check for opened patches within > `list_eol_stale_branches.sh`. > I created a quick and dirty script [1] to inspect all these repos branches (based on the list given previously) and only os-collect-config still contains unmerged patches [2]. [1] http://paste.openstack.org/show/802035/ [2] https://review.opendev.org/q/project:openstack/os-collect-config+branch:stable/ocata > > >> >> > 2. On the Release Team meeting [1] we were hinted that with the newer >> > version of gerrit (that was installed at the end of November) some >> > automation is possible through gerrit API in the future. Can I get some >> help >> > about where should I start with the automation? Which repository should >> I >> > look, where can the deletion being triggered ("similarly like branch >> > creation")? >> [...] >> >> The Gerrit REST API method for deleting branches is documented here: >> >> >> https://review.opendev.org/Documentation/rest-api-projects.html#delete-branch >> >> I'm not immediately sure where branch creation happens in the forest >> of our release automation, but I would expect deletion could be >> implemented similarly. Hopefully someone more intimately familiar >> with those jobs can chime in. >> >> The access control we'll need to grant to automation so that it can >> call that is documented here: >> >> >> https://review.opendev.org/Documentation/access-control.html#category_delete >> >> It'll need to be added manually as a permission for the Release >> Managers group in our All-Projects global ACL which individual >> projects inherit, and this documentation updated accordingly: >> >> >> https://opendev.org/opendev/system-config/src/branch/master/doc/source/gerrit.rst >> >> Happy to answer other questions as they arise. 
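(The same per-repository check can also be done straight against the Gerrit REST API; a small sketch along the lines of:

  curl -s "https://review.opendev.org/changes/?q=project:openstack/os-collect-config+branch:stable/ocata+status:open"

where an empty result list means nothing blocks deleting that branch.)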
>> -- >> Jeremy Stanley >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Wed Jan 27 10:53:30 2021 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 27 Jan 2021 11:53:30 +0100 Subject: [infra][release] delete old EOL'd stable branches In-Reply-To: <53411ced-c9e0-25c7-4b2d-f8da847f3b9d@est.tech> References: <00a4690f-0859-e8d5-78ad-54e1d6be05fa@est.tech> <20210126173225.sr4gnptqsewwu2uh@yuggoth.org> <53411ced-c9e0-25c7-4b2d-f8da847f3b9d@est.tech> Message-ID: Le mer. 27 janv. 2021 à 11:51, Előd Illés a écrit : > Thanks Jeremy and Hervé! > > @Jeremy, @Hervé: I've checked and found two open patches, out of which I > was able to abandon one, and asked the owner on the other patch to abandon > it. So the listed branches can be deleted *except* one: > openstack/os-collect-config > > So in my opinion there's no need for further coordination with the teams, > as these branches are tagged ocata-eol already. > Yes indeed, I agree. > And thanks Hervé for the script, it helped a lot so far, already :) + > thanks Jeremy for the pointers! > > Előd > > On 2021. 01. 27. 11:23, Herve Beraud wrote: > > > > Le mar. 26 janv. 2021 à 18:35, Jeremy Stanley a > écrit : > >> On 2021-01-26 17:52:07 +0100 (+0100), Előd Illés wrote: >> [...] >> > 1. I've used the list_eol_stale_branches.sh [2] script to get the list >> of >> > such not-yet-deleted branches for Ocata [3]. They are all tagged with >> > 'ocata-eol', but stable/ocata branch still exists for them. Could you >> please >> > delete these? 
[3] >> >> I'm happy to, have you made sure any open reviews for those branches >> are abandoned first? Gerrit won't allow deletion of a branch with >> open reviews. >> > > I think we need a first round of inspection on these stale branches to see > if opened patches exist and then if needed start discussion with teams to > ask them to drop the patches those who have been found. > > I'll try to add this feature to check for opened patches within > `list_eol_stale_branches.sh`. > > >> >> > 2. On the Release Team meeting [1] we were hinted that with the newer >> > version of gerrit (that was installed at the end of November) some >> > automation is possible through gerrit API in the future. Can I get some >> help >> > about where should I start with the automation? Which repository should >> I >> > look, where can the deletion being triggered ("similarly like branch >> > creation")? >> [...] >> >> The Gerrit REST API method for deleting branches is documented here: >> >> >> https://review.opendev.org/Documentation/rest-api-projects.html#delete-branch >> >> I'm not immediately sure where branch creation happens in the forest >> of our release automation, but I would expect deletion could be >> implemented similarly. Hopefully someone more intimately familiar >> with those jobs can chime in. >> >> The access control we'll need to grant to automation so that it can >> call that is documented here: >> >> >> https://review.opendev.org/Documentation/access-control.html#category_delete >> >> It'll need to be added manually as a permission for the Release >> Managers group in our All-Projects global ACL which individual >> projects inherit, and this documentation updated accordingly: >> >> >> https://opendev.org/opendev/system-config/src/branch/master/doc/source/gerrit.rst >> >> Happy to answer other questions as they arise. 
>> -- >> Jeremy Stanley >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Wed Jan 27 11:06:41 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Wed, 27 Jan 2021 12:06:41 +0100 Subject: [infra][release] delete old EOL'd stable branches In-Reply-To: References: <00a4690f-0859-e8d5-78ad-54e1d6be05fa@est.tech> Message-ID: When Extended Maintenance process was created, there was no clear decision whether EOL (and branch deletion) is needed afterwards or not. But one thing we saw is that if a branch was EOL'd and still open, then patches arrived, and even some got approved which caused errors + confusions. Also, if the branch is open and periodic jobs are not deleted, then those are still running day by day. In this case only the branch deletion is the solution (as clearly those branches cannot accept job fixing patches). Though, you are right that if we don't even tag a branch with '$series-eol', then the above issue does not come. So theoretically we could forget about the 'eol' process, it would not cause any issue in my understanding. The periodic jobs needs to be deleted from .zuul.yaml of course, and maybe some other cleanup, otherwise it is possible. That's true. I can accept this, and this was my concept, too, in the beginning. Előd On 2021. 01. 27. 11:20, Herve Beraud wrote: > > > Le mer. 27 janv. 2021 à 10:55, Stephen Finucane > a écrit : > > On Tue, 2021-01-26 at 17:52 +0100, Előd Illés wrote: > Hi Infra Team! 
> > In October there was a discussion at Release Team meeting [1] > about what > can we do with the old, already EOL'd but not yet deleted branches > (this > is possible since with the Extended Maintenance process the > general/"mass" EOL'ing was stopped and tagging a project branch > EOL does > not delete the branch anymore). > > > Not an answer but rather a question for my own understanding: what > is the > advantage of deleting branches? I understand that these things > would no longer > maintained and the gates will slowly break, > > > AFAIK this is mostly to avoid issues with gates/zuul. > > but they're still relatively useful > as a reference to explore project history and it's not like > branches are > expensive in git. > > > Tags can be used to dig in the related history. > > > Stephen > > Related to this, I would like to ask two things: > > 1. I've used the list_eol_stale_branches.sh [2] script to get the > list > of such not-yet-deleted branches for Ocata [3]. They are all > tagged with > 'ocata-eol', but stable/ocata branch still exists for them. Could you > please delete these? [3] > > 2. On the Release Team meeting [1] we were hinted that with the newer > version of gerrit (that was installed at the end of November) some > automation is possible through gerrit API in the future. Can I get > some > help about where should I start with the automation? Which repository > should I look, where can the deletion being triggered ("similarly > like > branch creation")? > > Thanks in advance, > > Előd > > [1] > http://eavesdrop.openstack.org/meetings/releaseteam/2020/releaseteam.2020-10-22-16.00.log.html#l-40 > [2] > https://opendev.org/openstack/releases/src/commit/eb381492da3f7c826c35b9f147fd9a1ed55ae797/tools/list_eol_stale_branches.sh > [3] http://paste.openstack.org/show/801992/ > > > > > > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From medemine.ibrahim at cloudnet.tn Wed Jan 27 11:43:33 2021 From: medemine.ibrahim at cloudnet.tn (Mohamed Emine IBRAHIM) Date: Wed, 27 Jan 2021 12:43:33 +0100 Subject: [EXTERNAL] Re: [kolla][keystone] Another keycloak issue In-Reply-To: References: <244e2f00644c4960a82b533dc0a23111@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: hello, Maybe the user password is not mapped to keystone, so when you create a new user via keycloak you need to set password manually (openstack user set test2 --password-prompt) and then use the CLI ? 
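If the goal is to keep pure SSO, the CLI can also authenticate through the federated flow itself rather than a keystone-local password, using the v3oidcpassword auth type; a rough, untested openrc-style sketch where every value is a placeholder to adapt (client id/secret come from the Keycloak realm):

  export OS_AUTH_TYPE=v3oidcpassword
  export OS_AUTH_URL=https://keystone.example.com:5000/v3
  export OS_DISCOVERY_ENDPOINT=https://keycloak.example.com/auth/realms/myrealm/.well-known/openid-configuration
  export OS_IDENTITY_PROVIDER=oidc
  export OS_PROTOCOL=openid
  export OS_CLIENT_ID=<client-id>
  export OS_CLIENT_SECRET=<client-secret>
  export OS_USERNAME=test2
  export OS_PASSWORD=<keycloak-password>
  export OS_PROJECT_NAME=<project>
  export OS_PROJECT_DOMAIN_NAME=Default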
On 27/01/2021 10:09, Mark Goddard wrote: > On Tue, 26 Jan 2021 at 17:02, Braden, Albert > wrote: >> >> Another problem I'm encountering with keycloak is that the keycloak users can't login on the command line. I created user test2 via Keycloak and test3 via CLI. They have identical roles on the admin domain: >> >> (openstack) [root at chrnc-area51-build-01 ~]# os role assignment list --user test2 >> +----------------------------------+------------------------------------------------------------------+-------+----------------------------------+--------+--------+-----------+ >> | Role | User | Group | Project | Domain | System | Inherited | >> +----------------------------------+------------------------------------------------------------------+-------+----------------------------------+--------+--------+-----------+ >> | 406a5f1cd92d45b5b3d54979235e896c | f4287b6082b8f36048d052eaa3d35facb94e5eff598d59d2aee68252ddb13339 | | 15c32af517334e28a9427809a9fc4805 | | | False | >> +----------------------------------+------------------------------------------------------------------+-------+----------------------------------+--------+--------+-----------+ >> (openstack) [root at chrnc-area51-build-01 ~]# os role assignment list --user test3 >> +----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+ >> | Role | User | Group | Project | Domain | System | Inherited | >> +----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+ >> | 406a5f1cd92d45b5b3d54979235e896c | 06a5f28d061f4d42b3bf64df378338fd | | 15c32af517334e28a9427809a9fc4805 | | | False | >> +----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+ >> >> I made identical env-setting "rc" files with only the username changed. Test3 logs in successfully but test2 fails: >> >> (openstack) [root at chrnc-area51-build-01 ~]# . ./test2-openrc.sh >> (openstack) [root at chrnc-area51-build-01 ~]# openstack server list >> The request you have made requires authentication. (HTTP 401) (Request-ID: req-ad7ee855-df98-434a-9afc-89f64a7addd1) >> (openstack) [root at chrnc-area51-build-01 ~]# . ./test3-openrc.sh >> (openstack) [root at chrnc-area51-build-01 ~]# openstack server list >> >> (openstack) [root at chrnc-area51-build-01 ~]# >> >> The only obvious difference is the longer UID for the Keycloak users. Do Keycloak-created users require something different in the env? Do I need to change something in Keycloak, to make the Keycloak users work the same as CLI-created users? Where can I look in the database to find the differences between these two users? >> > I'm no expert on federation, but I understand that you need to use a > slightly different method with the CLI. This page has some info: > https://docs.openstack.org/python-openstackclient/latest/cli/man/openstack.html > -- Very truly yours, أطيب التمنيات Mohamed Emine IBRAHIM محمد أمين إبراهيم -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From chkumar at redhat.com Wed Jan 27 12:49:36 2021 From: chkumar at redhat.com (Chandan Kumar) Date: Wed, 27 Jan 2021 18:19:36 +0530 Subject: [tripleo] Removing tempest container from TripleO Message-ID: Hello, In TripleO CI jobs, we used to run tempest tests using a tempest container via tripleo-quickstart-extras 's validate-tempest role. Nowadays, All the tempest execution is done via openstack-ansible-os_tempest role in TrIpleO CI job which is dependent on tempest rpms. Validate-tempest is also getting removed. Since the tempest container is no longer used and tested in CI. So we are removing it from TripleO. Here is the tripleo-common patch for Removing tempest container build: https://review.opendev.org/c/openstack/tripleo-common/+/771993 Thanks for reading. Thanks, Chandan Kumar From kklimonda at syntaxhighlighted.com Wed Jan 27 13:27:54 2021 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Wed, 27 Jan 2021 14:27:54 +0100 Subject: [EXTERNAL] Re: [kolla][keystone] Another keycloak issue In-Reply-To: References: <244e2f00644c4960a82b533dc0a23111@ncwmexgp009.CORP.CHARTERCOM.com> Message-ID: Hi, With SSO enabled you are no longer authenticating against keystone directly, and so your openrc.sh must be crafted to take that into account. For example, this is snippet from my clouds.yaml for deployment that is federated with keycloak via oidc: ----8<----8<---- cloud_oidc: auth_type: v3oidcpassword auth: auth_url: https://[redacted]:5000/v3 discovery_endpoint: https://[redacted]/.well-known/openid-configuration identity_provider: oidc protocol: openid client_id: [redacted] client_secret: [redacted] project_name: test-project project_domain_name: default username: [redacted] password: [redacted] ----8<----8<---- This can be translated into openrc.sh script that sets up proper variables (although I have no example of that on hand). Similar configuration can be done for SAML2-based integration. Additionally, not all third-party tools will work with such authentication, and for them you'll probably have to issue token and use it instead. Setting password for user in keystone goes against the idea of SSO and introduces an issue of how to reset keystone password when one in keycloak is changed (and vice versa). Also I'm not even sure if it's possible for default federated users (as opposed to "local" federated users which work a little bit differently). -- Krzysztof Klimonda kklimonda at syntaxhighlighted.com On Wed, Jan 27, 2021, at 12:43, Mohamed Emine IBRAHIM wrote: > hello, > > Maybe the user password is not mapped to keystone, so when you create a > new user via keycloak you need to set password manually (openstack user > set test2 --password-prompt) and then use the CLI ? > > On 27/01/2021 10:09, Mark Goddard wrote: > > On Tue, 26 Jan 2021 at 17:02, Braden, Albert > > wrote: > >> > >> Another problem I'm encountering with keycloak is that the keycloak users can't login on the command line. I created user test2 via Keycloak and test3 via CLI. 
They have identical roles on the admin domain: > >> > >> (openstack) [root at chrnc-area51-build-01 ~]# os role assignment list --user test2 > >> +----------------------------------+------------------------------------------------------------------+-------+----------------------------------+--------+--------+-----------+ > >> | Role | User | Group | Project | Domain | System | Inherited | > >> +----------------------------------+------------------------------------------------------------------+-------+----------------------------------+--------+--------+-----------+ > >> | 406a5f1cd92d45b5b3d54979235e896c | f4287b6082b8f36048d052eaa3d35facb94e5eff598d59d2aee68252ddb13339 | | 15c32af517334e28a9427809a9fc4805 | | | False | > >> +----------------------------------+------------------------------------------------------------------+-------+----------------------------------+--------+--------+-----------+ > >> (openstack) [root at chrnc-area51-build-01 ~]# os role assignment list --user test3 > >> +----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+ > >> | Role | User | Group | Project | Domain | System | Inherited | > >> +----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+ > >> | 406a5f1cd92d45b5b3d54979235e896c | 06a5f28d061f4d42b3bf64df378338fd | | 15c32af517334e28a9427809a9fc4805 | | | False | > >> +----------------------------------+----------------------------------+-------+----------------------------------+--------+--------+-----------+ > >> > >> I made identical env-setting "rc" files with only the username changed. Test3 logs in successfully but test2 fails: > >> > >> (openstack) [root at chrnc-area51-build-01 ~]# . ./test2-openrc.sh > >> (openstack) [root at chrnc-area51-build-01 ~]# openstack server list > >> The request you have made requires authentication. (HTTP 401) (Request-ID: req-ad7ee855-df98-434a-9afc-89f64a7addd1) > >> (openstack) [root at chrnc-area51-build-01 ~]# . ./test3-openrc.sh > >> (openstack) [root at chrnc-area51-build-01 ~]# openstack server list > >> > >> (openstack) [root at chrnc-area51-build-01 ~]# > >> > >> The only obvious difference is the longer UID for the Keycloak users. Do Keycloak-created users require something different in the env? Do I need to change something in Keycloak, to make the Keycloak users work the same as CLI-created users? Where can I look in the database to find the differences between these two users? > >> > > I'm no expert on federation, but I understand that you need to use a > > slightly different method with the CLI. This page has some info: > > https://docs.openstack.org/python-openstackclient/latest/cli/man/openstack.html > > > > -- > Very truly yours, أطيب التمنيات > Mohamed Emine IBRAHIM > محمد أمين إبراهيم > > > Attachments: > * signature.asc From eblock at nde.ag Wed Jan 27 13:38:15 2021 From: eblock at nde.ag (Eugen Block) Date: Wed, 27 Jan 2021 13:38:15 +0000 Subject: [simplification] Making ask.openstack.org read-only In-Reply-To: <648c6ac3-0ab8-e442-ed9b-fbbfbbea16f7@gmail.com> References: <648c6ac3-0ab8-e442-ed9b-fbbfbbea16f7@gmail.com> Message-ID: <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> Hi everyone, I have a question regarding read-only ask.openstack.org. I understand the decision to make it read-only but one thing still bothers me since I look up problems from time to time. 
And I noticed that you can't expand all comments if there were more than a few. Users were able to expand by clicking "see more comments" but that's not possible anymore. Is there any way to make the whole page visible, maybe remove that button and kind of "auto-expand" all comments? Some of the comments might have valuable information. Regards, Eugen Zitat von Bernd Bausch : > Thanks for calling me out, but I am certainly not the only one > answering questions. > > After the notification feature broke down entirely, leaving me no > way to see which questions I am involved in, it's indeed time to > move on. I agree with the change as well. > > Bernd. > > On 8/18/2020 7:44 PM, Thierry Carrez wrote: >> Hi everyone, >> >> This has been discussed several times on this mailing list in the >> past, but we never got to actually pull the plug. >> >> Ask.openstack.org was launched in 2013. The reason for hosting our >> own setup was to be able to support multiple languages, while >> StackOverflow rejected our proposal to have our own >> openstack-branded StackExchange site. The Chinese ask.o.o side >> never really took off. The English side also never really worked >> perfectly (like email alerts are hopelessly broken), but we figured >> it would get better with time if a big community formed around it. >> >> Fast-forward to 2020 and the instance is lacking volunteers to help >> run it, while the code (and our customization of it) has become >> more complicated to maintain. It regularly fails one way or >> another, and questions there often go unanswered, making us look >> bad. Of the top 30 users, most have abandoned the platform since >> 2017, leaving only Bernd Bausch actively engaging and helping >> moderate questions lately. We have called for volunteers several >> times, but the offers for help never really materialized. >> >> At the same time, people are asking OpenStack questions on >> StackOverflow, and sometimes getting answers there[1]. The >> fragmentation of the "questions" space is not helping users getting >> good answers. >> >> I think it's time to pull the plug, make ask.openstack.org >> read-only (so that links to old answers are not lost) and redirect >> users to the mailing-list and the "OpenStack" tag on StackOverflow. >> I picked StackOverflow since it seems to have the most openstack >> questions (2,574 on SO, 76 on SuperUser and 430 on ServerFault). >> >> We discussed that option several times, but I now proposed a change >> to actually make it happen: >> >> https://review.opendev.org/#/c/746497/ >> >> It's always a difficult decision to make to kill a resource, but I >> feel like in this case, consolidation and simplification would help. >> >> Thoughts, comments? >> >> [1] https://stackoverflow.com/questions/tagged/openstack >> From radoslaw.piliszek at gmail.com Wed Jan 27 13:52:37 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 27 Jan 2021 14:52:37 +0100 Subject: [All] Is it time to move on from StoryBoard? (Was: [All][StoryBoard] Angular.js Alternatives) In-Reply-To: <35c47f497e8f233905b5fec033174a64fc11fdc8.camel@redhat.com> References: <35c47f497e8f233905b5fec033174a64fc11fdc8.camel@redhat.com> Message-ID: On Wed, Jan 27, 2021 at 11:18 AM Stephen Finucane wrote: > > On Tue, 2021-01-26 at 12:51 -0800, Kendall Nelson wrote: > > For better or for worse, we are a pretty small team and don't need to poll > > since we can all attend a meeting and agree there. 
> > > > Calling for outside opinions was also a half hearted plea for help :) > > Is this an indication that we finally have to settle on another bug tracker? > This has been discussed previously but I can't find any conclusions from those > past discussions, hence I'm asking again. I do not intend to dismiss the hard > work of those people working on StoryBoard in the slightest, but it does seem > like StoryBoard as a project has never really taken off outside OpenStack and is > having a hard time surviving, let alone growing, likely because of that along > with the widespread use of forge-style tools (GitHub, GitLab) and more > comprehensive project management tools (sigh, JIRA). I recall fungi (?) raising > the idea of enabling the more forge'y features, including the issue tracker, of > Gitea in the past, but there are also separate tools like (heaven forbid) > Bugzilla, Trac, Mantis, etc. that we could use. Heck, Launchpad is still around > and is still open source (though I recall there being other issues with that?). > We've already made a tough decision recently, with the sunsetting of ask.o.o, > and with the impending deprecation of Angular.js, perhaps this is as good a time > as any to do the same with StoryBoard? FWIW, I started a similar thread in September 2020: [1] You might want to read it too. [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017164.html -yoctozepto From lbragstad at gmail.com Wed Jan 27 14:56:57 2021 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 27 Jan 2021 08:56:57 -0600 Subject: Secure RBAC work In-Reply-To: <20210126102245.42gzsdn36uvc54sq@localhost> References: <1764f5dc30b.ce68476360070.2800550259733297462@ghanshyammann.com> <20210126102245.42gzsdn36uvc54sq@localhost> Message-ID: On Tue, Jan 26, 2021 at 4:23 AM Gorka Eguileor wrote: > On 19/01, Lance Bragstad wrote: > > Hey all, > > > > I want to follow up on this thread because there's been some discussion > and > > questions (some of which are in reviews) as services work through the > > proposed changes [0]. > > > > TL;DR - OpenStack services implementing secure RBAC should update default > > policies with the `reader` role in a consistent manner, where it is not > > meant to protect sensitive information. > > > > Hi Lance, > > Thank you very much for this great summary of all the different > discussions and decisions. > > I have just one question regarding your TL;DR, shouldn't it be "where it > is meant to protect sensitive information"? > > As I understood it the reader should be updated so it doesn't expose > sensitive information (thus protecting it) because it's the least > privileged role. > > Cheers, > Gorka. > I think you're understanding things correctly, and re-reading my original TL;DR I can see how the wording is confusing. Does the following help? *OpenStack services implementing secure RBAC should update default policies with the `reader` role in a consistent manner, where users with the `reader` role are not allowed to view sensitive information.* Nice catch, Gorka. > > > In the process of reviewing changes for various resources, some folks > > raised concerns about the `reader` role definition. > > > > One of the intended use-cases for implementing a `reader` role was to use > > it for auditing, as noted in the keystone definitions for each role and > > persona [1]. 
Another key point of that document, and the underlying > design > > of secure RBAC, is that the default roles have role implications built > > between them (e.g., reader implies member, and member implies admin). > This > > detail serves two important functions. > > > > First, it reduces duplication in check strings because keystone expands > > role implications in token response bodies. For example, someone with the > > `admin` role on a project will have `member` and `reader` roles in their > > token body when they authenticate for a token or validate a token. This > > reduces the complexity of our check strings by writing the policy to the > > highest level of authorization required to access an API or resource. > Users > > with anything above that level will work through the role implications > > feature. > > > > Second, it reduces the need for extra role assignments. If you grant > > someone the `admin` role on a project you don't need to also give them > > `reader` and `member` role assignments. This is true regardless of how > > services implement check strings. > > > > Ultimately, the hierarchical role structure in keystone and role > expansion > > in token responses give us shorter check strings and less role > assignments. > > But, one thing we're aware of now is that we need to be careful how we > > expose certain information to users via the `reader` role, since it is > the > > least-privileged role in the hierarchy. For example, one concern was > > exposing license key information in images to anyone with the `reader` > role > > on the system. Some deployments, depending on their security posture or > > auditing targets, might not allow sensitive information to be implicitly > > exposed. Instead, they may require deployments to explicitly grant access > > to sensitive information [2]. > > > > So what do we do moving forward? > > > > I think it's clear that there are APIs and resources in OpenStack that > fall > > into a special category where we shouldn't expose certain information to > > the lowest level of the role hierarchy, regardless of the scope. But, the > > role implication functionality served a purpose initially to deliver a > > least-privileged role used only for read operations within a given > scope. I > > think breaking that implication now is confusing considering we > implemented > > the implication in Rocky [3], but I think future work for an elevated > > read-only role is a good path forward. Eventually, keystone can consider > > implementing support for a new default role, which implies `reader`, > making > > all the work we do today still useful. At that time, we can update > relevant > > policies to expose sensitive information with the elevated read-only > role. > > I suspect this will be a much smaller set of APIs and policies. I think > > this approach strikes a balance between what we have today, and a way to > > move forward that still protects sensitive data. > > > > I proposed an update to the documentation in keystone to clarify this > point > > [4]. It also doesn't assume all audits are the same. Instead, it phrases > > the ability to use `reader` roles for auditing in a way that leaves that > up > > to the deployer and auditor. I think that's an important detail since > > different deployments have different security requirements. 
Instead of > > assuming everyone can use `reader` for auditing, we can give them a list > of > > APIs they can interact with as a `reader` (or have them generate those > > policies themselves, especially if they have custom policy) and let them > > determine if that access is sufficient for their audit. If it isn't, > > deployers aren't in a worse position today, but it emphasizes the > > importance of expanding the default roles to include another tier for > > elevated read-only permissions. Given where we are in the release cycle > for > > Wallaby, I don't expect keystone to implement a new default role this > late > > in the release [5]. Perhaps Xena is a better target, but I'll talk with > > Kristi about it next week during the keystone meeting. > > > > I hope this helps clarify some of the confusion around the secure RBAC > > patches. If you have additional comments or questions about this topic, > let > > me know. We can obviously iterate here, or use the policy pop up time > slot > > which is in a couple of days [6]. > > > > Thanks, > > > > Lance > > > > [0] https://review.opendev.org/q/topic:secure-rbac > > [1] > > > https://docs.openstack.org/keystone/latest/admin/service-api-protection.html > > [2] FedRAMP control AC -06 (01) is an example of this - *The organization > > explicitly authorizes access to [Assignment: organization-defined > security > > functions (deployed in hardware, software, and firmware) and > > security-relevant information].* > > [3] > https://docs.openstack.org/releasenotes/keystone/rocky.html#new-features > > [4] https://review.opendev.org/c/openstack/keystone/+/771509 > > [5] https://releases.openstack.org/wallaby/schedule.html > > [6] https://etherpad.opendev.org/p/default-policy-meeting-agenda > > > > On Thu, Dec 10, 2020 at 7:15 PM Ghanshyam Mann > > wrote: > > > > > ---- On Wed, 09 Dec 2020 14:04:57 -0600 Lance Bragstad < > > > lbragstad at gmail.com> wrote ---- > > > > Hey everyone, > > > > > > > > I wanted to take an opportunity to clarify some work we have been > doing > > > upstream, specifically modifying the default policies across projects. > > > > > > > > These changes are the next phase of an initiative that’s been > underway > > > since Queens to fix some long-standing security concerns in OpenStack > [0]. > > > For context, we have been gradually improving policy enforcement for > years. > > > We started by improving policy formats, registering default policies > into > > > code [1], providing better documentation for policy writers, > implementing > > > necessary identity concepts in keystone [2], developing support for > those > > > concepts in libraries [3][4][5][6][7][8], and consuming all of those > > > changes to provide secure default policies in a way operators can > consume > > > and roll out to their users [9][10]. > > > > > > > > All of this work is in line with some high-level documentation we > > > started writing about three years ago [11][12][13]. > > > > > > > > There are a handful of services that have implemented the goals that > > > define secure RBAC by default, but a community-wide goal is still > > > out-of-reach. To help with that, the community formed a pop-up team > with a > > > focused objective and disbanding criteria [14]. > > > > > > > > The work we currently have in progress [15] is an attempt to start > > > applying what we have learned from existing implementations to other > > > projects. The hope is that we can complete the work for even more > projects > > > in Wallaby. 
Most deployers looking for this functionality won't be > able to > > > use it effectively until all services in their deployment support it. > > > > > > Thanks, Lance for pushing this work forwards. I completely agree and > that > > > is what we get feedback in > > > forum sessions also that we should implement this in all the services > > > first before we ask operators to > > > move their cloud to the new RBAC. > > > > > > We discussed these in today's policy-popup meeting also and encourage > > > every project to help in those > > > patches to add tests and review. This will help to finish the work on > > > priority and we can provide better > > > RBAC experience to the deployer. > > > > > > -gmann > > > > > > > > > > > > > > > I hope this helps clarify or explain the patches being proposed. > > > > > > > > > > > > As always, I'm happy to elaborate on specific concerns if folks have > > > them. > > > > > > > > > > > > Thanks, > > > > > > > > > > > > Lance > > > > > > > > > > > > [0] https://bugs.launchpad.net/keystone/+bug/968696/ > > > > [1] > > > > https://governance.openstack.org/tc/goals/selected/queens/policy-in-code.html > > > > [2] > > > > https://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html > > > > [3] https://review.opendev.org/c/openstack/keystoneauth/+/529665 > > > > [4] > > > https://review.opendev.org/c/openstack/python-keystoneclient/+/524415 > > > > [5] https://review.opendev.org/c/openstack/oslo.context/+/530509 > > > > [6] > https://review.opendev.org/c/openstack/keystonemiddleware/+/564072 > > > > [7] https://review.opendev.org/c/openstack/oslo.policy/+/578995 > > > > [8] > > > > https://review.opendev.org/q/topic:%22system-scope%22+(status:open%20OR%20status:merged) > > > > [9] > > > > https://review.opendev.org/q/status:merged+topic:bp/policy-defaults-refresh+branch:master > > > > [10] > > > > https://review.opendev.org/q/topic:%22implement-default-roles%22+(status:open%20OR%20status:merged) > > > > [11] > > > > https://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/policy-goals-and-roadmap.html > > > > [12] > > > > https://docs.openstack.org/keystone/latest/admin/service-api-protection.html > > > > [13] > > > > https://docs.openstack.org/keystone/latest/contributor/services.html#authorization-scopes > > > > [14] > > > > https://governance.openstack.org/tc/reference/popup-teams.html#secure-default-policies > > > > [15] > > > https://review.opendev.org/q/topic:%2522secure-rbac%2522+status:open > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Wed Jan 27 15:06:11 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 27 Jan 2021 16:06:11 +0100 Subject: Secure RBAC work In-Reply-To: References: <1764f5dc30b.ce68476360070.2800550259733297462@ghanshyammann.com> <20210126102245.42gzsdn36uvc54sq@localhost> Message-ID: <20210127150611.7xmicogxeddnf2th@localhost> On 27/01, Lance Bragstad wrote: > On Tue, Jan 26, 2021 at 4:23 AM Gorka Eguileor wrote: > > > On 19/01, Lance Bragstad wrote: > > > Hey all, > > > > > > I want to follow up on this thread because there's been some discussion > > and > > > questions (some of which are in reviews) as services work through the > > > proposed changes [0]. > > > > > > TL;DR - OpenStack services implementing secure RBAC should update default > > > policies with the `reader` role in a consistent manner, where it is not > > > meant to protect sensitive information. 
> > > > > > > Hi Lance, > > > > Thank you very much for this great summary of all the different > > discussions and decisions. > > > > I have just one question regarding your TL;DR, shouldn't it be "where it > > is meant to protect sensitive information"? > > > > As I understood it the reader should be updated so it doesn't expose > > sensitive information (thus protecting it) because it's the least > > privileged role. > > > > Cheers, > > Gorka. > > > > I think you're understanding things correctly, and re-reading my original > TL;DR I can see how the wording is confusing. Does the following help? > > > *OpenStack services implementing secure RBAC should update default policies > with the `reader` role in a consistent manner, where users with the > `reader` role are not allowed to view sensitive information.* > Thanks Lance, that looks great and consistent with what I had understood from your longer summary. :-) > Nice catch, Gorka. > > > > > > > In the process of reviewing changes for various resources, some folks > > > raised concerns about the `reader` role definition. > > > > > > One of the intended use-cases for implementing a `reader` role was to use > > > it for auditing, as noted in the keystone definitions for each role and > > > persona [1]. Another key point of that document, and the underlying > > design > > > of secure RBAC, is that the default roles have role implications built > > > between them (e.g., reader implies member, and member implies admin). > > This > > > detail serves two important functions. > > > > > > First, it reduces duplication in check strings because keystone expands > > > role implications in token response bodies. For example, someone with the > > > `admin` role on a project will have `member` and `reader` roles in their > > > token body when they authenticate for a token or validate a token. This > > > reduces the complexity of our check strings by writing the policy to the > > > highest level of authorization required to access an API or resource. > > Users > > > with anything above that level will work through the role implications > > > feature. > > > > > > Second, it reduces the need for extra role assignments. If you grant > > > someone the `admin` role on a project you don't need to also give them > > > `reader` and `member` role assignments. This is true regardless of how > > > services implement check strings. > > > > > > Ultimately, the hierarchical role structure in keystone and role > > expansion > > > in token responses give us shorter check strings and less role > > assignments. > > > But, one thing we're aware of now is that we need to be careful how we > > > expose certain information to users via the `reader` role, since it is > > the > > > least-privileged role in the hierarchy. For example, one concern was > > > exposing license key information in images to anyone with the `reader` > > role > > > on the system. Some deployments, depending on their security posture or > > > auditing targets, might not allow sensitive information to be implicitly > > > exposed. Instead, they may require deployments to explicitly grant access > > > to sensitive information [2]. > > > > > > So what do we do moving forward? > > > > > > I think it's clear that there are APIs and resources in OpenStack that > > fall > > > into a special category where we shouldn't expose certain information to > > > the lowest level of the role hierarchy, regardless of the scope. 
But, the > > > role implication functionality served a purpose initially to deliver a > > > least-privileged role used only for read operations within a given > > scope. I > > > think breaking that implication now is confusing considering we > > implemented > > > the implication in Rocky [3], but I think future work for an elevated > > > read-only role is a good path forward. Eventually, keystone can consider > > > implementing support for a new default role, which implies `reader`, > > making > > > all the work we do today still useful. At that time, we can update > > relevant > > > policies to expose sensitive information with the elevated read-only > > role. > > > I suspect this will be a much smaller set of APIs and policies. I think > > > this approach strikes a balance between what we have today, and a way to > > > move forward that still protects sensitive data. > > > > > > I proposed an update to the documentation in keystone to clarify this > > point > > > [4]. It also doesn't assume all audits are the same. Instead, it phrases > > > the ability to use `reader` roles for auditing in a way that leaves that > > up > > > to the deployer and auditor. I think that's an important detail since > > > different deployments have different security requirements. Instead of > > > assuming everyone can use `reader` for auditing, we can give them a list > > of > > > APIs they can interact with as a `reader` (or have them generate those > > > policies themselves, especially if they have custom policy) and let them > > > determine if that access is sufficient for their audit. If it isn't, > > > deployers aren't in a worse position today, but it emphasizes the > > > importance of expanding the default roles to include another tier for > > > elevated read-only permissions. Given where we are in the release cycle > > for > > > Wallaby, I don't expect keystone to implement a new default role this > > late > > > in the release [5]. Perhaps Xena is a better target, but I'll talk with > > > Kristi about it next week during the keystone meeting. > > > > > > I hope this helps clarify some of the confusion around the secure RBAC > > > patches. If you have additional comments or questions about this topic, > > let > > > me know. We can obviously iterate here, or use the policy pop up time > > slot > > > which is in a couple of days [6]. > > > > > > Thanks, > > > > > > Lance > > > > > > [0] https://review.opendev.org/q/topic:secure-rbac > > > [1] > > > > > https://docs.openstack.org/keystone/latest/admin/service-api-protection.html > > > [2] FedRAMP control AC -06 (01) is an example of this - *The organization > > > explicitly authorizes access to [Assignment: organization-defined > > security > > > functions (deployed in hardware, software, and firmware) and > > > security-relevant information].* > > > [3] > > https://docs.openstack.org/releasenotes/keystone/rocky.html#new-features > > > [4] https://review.opendev.org/c/openstack/keystone/+/771509 > > > [5] https://releases.openstack.org/wallaby/schedule.html > > > [6] https://etherpad.opendev.org/p/default-policy-meeting-agenda > > > > > > On Thu, Dec 10, 2020 at 7:15 PM Ghanshyam Mann > > > wrote: > > > > > > > ---- On Wed, 09 Dec 2020 14:04:57 -0600 Lance Bragstad < > > > > lbragstad at gmail.com> wrote ---- > > > > > Hey everyone, > > > > > > > > > > I wanted to take an opportunity to clarify some work we have been > > doing > > > > upstream, specifically modifying the default policies across projects. 
> > > > > > > > > > These changes are the next phase of an initiative that’s been > > underway > > > > since Queens to fix some long-standing security concerns in OpenStack > > [0]. > > > > For context, we have been gradually improving policy enforcement for > > years. > > > > We started by improving policy formats, registering default policies > > into > > > > code [1], providing better documentation for policy writers, > > implementing > > > > necessary identity concepts in keystone [2], developing support for > > those > > > > concepts in libraries [3][4][5][6][7][8], and consuming all of those > > > > changes to provide secure default policies in a way operators can > > consume > > > > and roll out to their users [9][10]. > > > > > > > > > > All of this work is in line with some high-level documentation we > > > > started writing about three years ago [11][12][13]. > > > > > > > > > > There are a handful of services that have implemented the goals that > > > > define secure RBAC by default, but a community-wide goal is still > > > > out-of-reach. To help with that, the community formed a pop-up team > > with a > > > > focused objective and disbanding criteria [14]. > > > > > > > > > > The work we currently have in progress [15] is an attempt to start > > > > applying what we have learned from existing implementations to other > > > > projects. The hope is that we can complete the work for even more > > projects > > > > in Wallaby. Most deployers looking for this functionality won't be > > able to > > > > use it effectively until all services in their deployment support it. > > > > > > > > Thanks, Lance for pushing this work forwards. I completely agree and > > that > > > > is what we get feedback in > > > > forum sessions also that we should implement this in all the services > > > > first before we ask operators to > > > > move their cloud to the new RBAC. > > > > > > > > We discussed these in today's policy-popup meeting also and encourage > > > > every project to help in those > > > > patches to add tests and review. This will help to finish the work on > > > > priority and we can provide better > > > > RBAC experience to the deployer. > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > I hope this helps clarify or explain the patches being proposed. > > > > > > > > > > > > > > > As always, I'm happy to elaborate on specific concerns if folks have > > > > them. 
> > > > > > > > > > > > > > > Thanks, > > > > > > > > > > > > > > > Lance > > > > > > > > > > > > > > > [0] https://bugs.launchpad.net/keystone/+bug/968696/ > > > > > [1] > > > > > > https://governance.openstack.org/tc/goals/selected/queens/policy-in-code.html > > > > > [2] > > > > > > https://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html > > > > > [3] https://review.opendev.org/c/openstack/keystoneauth/+/529665 > > > > > [4] > > > > https://review.opendev.org/c/openstack/python-keystoneclient/+/524415 > > > > > [5] https://review.opendev.org/c/openstack/oslo.context/+/530509 > > > > > [6] > > https://review.opendev.org/c/openstack/keystonemiddleware/+/564072 > > > > > [7] https://review.opendev.org/c/openstack/oslo.policy/+/578995 > > > > > [8] > > > > > > https://review.opendev.org/q/topic:%22system-scope%22+(status:open%20OR%20status:merged) > > > > > [9] > > > > > > https://review.opendev.org/q/status:merged+topic:bp/policy-defaults-refresh+branch:master > > > > > [10] > > > > > > https://review.opendev.org/q/topic:%22implement-default-roles%22+(status:open%20OR%20status:merged) > > > > > [11] > > > > > > https://specs.openstack.org/openstack/keystone-specs/specs/keystone/ongoing/policy-goals-and-roadmap.html > > > > > [12] > > > > > > https://docs.openstack.org/keystone/latest/admin/service-api-protection.html > > > > > [13] > > > > > > https://docs.openstack.org/keystone/latest/contributor/services.html#authorization-scopes > > > > > [14] > > > > > > https://governance.openstack.org/tc/reference/popup-teams.html#secure-default-policies > > > > > [15] > > > > https://review.opendev.org/q/topic:%2522secure-rbac%2522+status:open > > > > > > > > > > > > > From thierry at openstack.org Wed Jan 27 15:51:53 2021 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 27 Jan 2021 16:51:53 +0100 Subject: [largescale-sig] Next meeting: January 27, 15utc In-Reply-To: References: Message-ID: <6633fec8-9f89-a0e9-cd3a-ba9110414d10@openstack.org> We held our meeting today. We discussed progress documenting an openstack operator's Scaling Journey[1], as well as ways to share more information and generate more engagement in our regular meeting. [1] https://wiki.openstack.org/wiki/Large_Scale_SIG Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2021/large_scale_sig.2021-01-27-15.00.html Action items: * ttx to revive the OSarchiver upstreaming effort * ttx to start a "how many compute nodes in your typical cluster" discussion on the ML * belmoreira to post first draft of the ScaleOut FAQ * all to think about 5-10min presentations to use in a video version of our SIG meeting Our next meeting will be Wednesday, February 10 at 15utc in #openstack-meeting-3 on Freenode IRC. Please join us if interested in helping make large scale openstack less scary! -- Thierry Carrez (ttx) From stephenfin at redhat.com Wed Jan 27 16:08:45 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Wed, 27 Jan 2021 16:08:45 +0000 Subject: [All] Is it time to move on from StoryBoard? 
(Was: [All][StoryBoard] Angular.js Alternatives) In-Reply-To: References: <35c47f497e8f233905b5fec033174a64fc11fdc8.camel@redhat.com> Message-ID: <0ed612da12d94902fd52582779316cc88cc8a1b0.camel@redhat.com> On Wed, 2021-01-27 at 14:52 +0100, Radosław Piliszek wrote: > On Wed, Jan 27, 2021 at 11:18 AM Stephen Finucane wrote: > > > > On Tue, 2021-01-26 at 12:51 -0800, Kendall Nelson wrote: > > > For better or for worse, we are a pretty small team and don't need to poll > > > since we can all attend a meeting and agree there. > > > > > > Calling for outside opinions was also a half hearted plea for help :) > > > > Is this an indication that we finally have to settle on another bug tracker? > > This has been discussed previously but I can't find any conclusions from those > > past discussions, hence I'm asking again. I do not intend to dismiss the hard > > work of those people working on StoryBoard in the slightest, but it does seem > > like StoryBoard as a project has never really taken off outside OpenStack and is > > having a hard time surviving, let alone growing, likely because of that along > > with the widespread use of forge-style tools (GitHub, GitLab) and more > > comprehensive project management tools (sigh, JIRA). I recall fungi (?) raising > > the idea of enabling the more forge'y features, including the issue tracker, of > > Gitea in the past, but there are also separate tools like (heaven forbid) > > Bugzilla, Trac, Mantis, etc. that we could use. Heck, Launchpad is still around > > and is still open source (though I recall there being other issues with that?). > > We've already made a tough decision recently, with the sunsetting of ask.o.o, > > and with the impending deprecation of Angular.js, perhaps this is as good a time > > as any to do the same with StoryBoard? > > FWIW, I started a similar thread in September 2020: [1] > You might want to read it too. > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017164.html Ugh, my search wasn't anywhere near thorough enough. Apologies and thanks for the link. To summarize, it seems the blocker to using Gitea's issue tracking feature is (a) lack of support for confidential issues and (b) how the front-end is currently deployed, while the reason not to prefer Launchpad is the requirement to use Canonical's SSO. We're not going to be able to fix the latter so someone needs to either add confidential issue support to Gitea or propose yet another tool to fill the gap. That or help rewrite the entire frontend of StoryBoard, of course :) /me also wonders how rough newer versions of upstream Bugzilla (vs. the heavily customized Mozilla and Red Hat instances) are these days. 
Cheers, Stephen > -yoctozepto > From fungi at yuggoth.org Wed Jan 27 16:52:59 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 27 Jan 2021 16:52:59 +0000 Subject: [all][tc] Dropping lower-constraints testing from all projects In-Reply-To: <0b0224cba026c08ccd3986a5da1672bf44863ad2.camel@redhat.com> References: <20210119153202.l6jcm723qq5uq2zc@yuggoth.org> <7a3f1ebd-cfbd-7907-7b4d-f4d2331c7057@debian.org> <20210119235149.xxfwuwcsfzog6isi@yuggoth.org> <17720e3e0f6.d479c210145054.613445843370759402@ghanshyammann.com> <0d4bb1d274c5456a835e5c43c890b80867e1f29b.camel@redhat.com> <20210121145011.23pvymfzukevkqsl@yuggoth.org> <0b0224cba026c08ccd3986a5da1672bf44863ad2.camel@redhat.com> Message-ID: <20210127165259.epmhkp4icxlgnhcd@yuggoth.org> On 2021-01-27 10:00:14 +0000 (+0000), Stephen Finucane wrote: > On Thu, 2021-01-21 at 14:50 +0000, Jeremy Stanley wrote: > > On 2021-01-21 09:30:19 +0000 (+0000), Stephen Finucane wrote: > > [...] > > > What we have doesn't work, and direct dependencies are the > > > only things we can truly control. In the scenario you're > > > suggesting, not only do we need to track dependencies, but we > > > also need to track the dependencies of dependencies, and the > > > dependencies of the dependencies of the dependencies, and the > > > dependencies of the dependencies of the dependencies of the > > > dependencies etc. etc. down the rabbit hole. For each of these > > > indirect dependencies, of which there may be many, we need to > > > figure out what the minimum version is for each of these > > > indirect dependencies is manually, because as has been noted > > > many times there is no standardized machinery in place in pip > > > etc. to find (and test) the minimum dependency versions > > > supported by a package. Put another way, if we depend on > > > package foo, which depends on package bar, which depends on > > > package baz, we can state our own informed minimum version for > > > foo, but we will need to inspect foo to find a minimum version > > > of bar that is suitable, and we will need to inspect baz to > > > find a minimum version of baz that is suitable. An impossible > > > ask. > > [...] > > > > Where this begins to fall apart, as I mentioned earlier, is that > > the larger your transitive dependency set, the more likely it is > > that a direct dependency is *also* an indirect dependency (maybe > > many layers down). If a dependency of your dependency updates to > > a version which insists on a newer version of some other direct > > dependency of yours than what you've set in > > lower-constraints.txt, then your jobs are going to break and > > need lower bounds adjustments or additional indirect > > dependencies added to the lower-constraints.txt to roll them > > back to versions which worked with the others you've set. Unlike > > upper-constraints.txt where it's assumed that a complete > > transitive set of dependencies is covered, this will mean > > additional churn in your stable branches over time. > > Ah, I didn't grasp this particular point when you initially raised > it. I've spent some time playing around with pip (there's a fun > code base) to no avail, so this (the reduced dependency set) is > still the best idea I've got, flaws and all. With that said, our > 'upper-constraints.txt' should still capture all dependencies, > including those indirect ones, no? As such, none of those should > be increasing on stable branches, which means the set of > transitive dependencies should remain fixed once the branch is > cut? 
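For context, the tox environment most projects carry today is roughly:

    [testenv:lower-constraints]
    deps =
      -c{toxinidir}/lower-constraints.txt
      -r{toxinidir}/test-requirements.txt
      -r{toxinidir}/requirements.txt

i.e. only the pins explicitly listed in lower-constraints.txt are enforced, and any transitive dependency missing from that file simply floats to whatever pip happens to resolve at install time, which is where the breakage described above comes from.
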
> > > Or is the idea that we would only every do lower bounds checking > > on the release under development, and then remove those jobs > > when we branch? > > This is also an option if it proves to be particularly painful. It > does feel like this would indicate a failure of our > upper-constraints to cap a dependency, even if its indirect, but I > realize there are limits to what we can achieve here. Well, that's perhaps an option, you're basically suggesting combining the complete global upper-constraints.txt from openstack/requirements with an incomplete local lower-constraints.txt in each project. That should allow you to have a complete transitive set, however how you would go about integrating them together would still need to be determined. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Wed Jan 27 16:58:31 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 27 Jan 2021 16:58:31 +0000 Subject: [infra][release] delete old EOL'd stable branches In-Reply-To: References: <00a4690f-0859-e8d5-78ad-54e1d6be05fa@est.tech> Message-ID: <20210127165831.5favqoz4snytk6r3@yuggoth.org> On 2021-01-27 12:06:41 +0100 (+0100), Előd Illés wrote: > When Extended Maintenance process was created, there was no clear > decision whether EOL (and branch deletion) is needed afterwards or > not. But one thing we saw is that if a branch was EOL'd and still > open, then patches arrived, and even some got approved which > caused errors + confusions. Also, if the branch is open and > periodic jobs are not deleted, then those are still running day by > day. In this case only the branch deletion is the solution (as > clearly those branches cannot accept job fixing patches). > > Though, you are right that if we don't even tag a branch with > '$series-eol', then the above issue does not come. So > theoretically we could forget about the 'eol' process, it would > not cause any issue in my understanding. The periodic jobs needs > to be deleted from .zuul.yaml of course, and maybe some other > cleanup, otherwise it is possible. That's true. I can accept this, > and this was my concept, too, in the beginning. [...] Deleting the branch serves a couple of purposes for visibility: first, as noted, people will get an error when trying to push a change for them; but second, consumers of those branches will actually no longer be able to pull from them nor check them out, and will have to switch to checking out the -eol tag for them instead, which makes it very clear that we are no longer developing on them. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kennelson11 at gmail.com Wed Jan 27 18:04:58 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 27 Jan 2021 10:04:58 -0800 Subject: [all][mentoring][outreachy][GSoC]Summer Interns/ Student Coders Anyone? Message-ID: Hello :) So, if you've been involved in Outreachy, the Google Summer of Code[1] is pretty similar. It looks like the applications for projects open January 29, 2021 at 11:00 (Pacific Standard Time) if anyone wants to apply. I encourage you all to do so! It gets a lot of attention from university students so it's a great opportunity to get some new contributors and teach some more of the world about our community and open source! 
If you are interested in applying, let me know and I will do what I can to help you with the application to get it done on time (deadline not listed on the site yet, but they always come faster than we'd like)! -Kendall (diablo_rojo) [1] https://summerofcode.withgoogle.com/get-started/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Jan 27 19:01:44 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 27 Jan 2021 20:01:44 +0100 Subject: [All] Is it time to move on from StoryBoard? (Was: [All][StoryBoard] Angular.js Alternatives) In-Reply-To: <0ed612da12d94902fd52582779316cc88cc8a1b0.camel@redhat.com> References: <35c47f497e8f233905b5fec033174a64fc11fdc8.camel@redhat.com> <0ed612da12d94902fd52582779316cc88cc8a1b0.camel@redhat.com> Message-ID: On Wed, Jan 27, 2021 at 5:12 PM Stephen Finucane wrote: > On Wed, 2021-01-27 at 14:52 +0100, Radosław Piliszek wrote: > > On Wed, Jan 27, 2021 at 11:18 AM Stephen Finucane > wrote: > > > > > > On Tue, 2021-01-26 at 12:51 -0800, Kendall Nelson wrote: > > > > For better or for worse, we are a pretty small team and don't need > to poll > > > > since we can all attend a meeting and agree there. > > > > > > > > Calling for outside opinions was also a half hearted plea for help :) > > > > > > Is this an indication that we finally have to settle on another bug > tracker? > > > This has been discussed previously but I can't find any conclusions > from those > > > past discussions, hence I'm asking again. I do not intend to dismiss > the hard > > > work of those people working on StoryBoard in the slightest, but it > does seem > > > like StoryBoard as a project has never really taken off outside > OpenStack and is > > > having a hard time surviving, let alone growing, likely because of > that along > > > with the widespread use of forge-style tools (GitHub, GitLab) and more > > > comprehensive project management tools (sigh, JIRA). I recall fungi > (?) raising > > > the idea of enabling the more forge'y features, including the issue > tracker, of > > > Gitea in the past, but there are also separate tools like (heaven > forbid) > > > Bugzilla, Trac, Mantis, etc. that we could use. Heck, Launchpad is > still around > > > and is still open source (though I recall there being other issues > with that?). > > > We've already made a tough decision recently, with the sunsetting of > ask.o.o, > > > and with the impending deprecation of Angular.js, perhaps this is as > good a time > > > as any to do the same with StoryBoard? > > > > FWIW, I started a similar thread in September 2020: [1] > > You might want to read it too. > > > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2020-September/017164.html > > Ugh, my search wasn't anywhere near thorough enough. Apologies and thanks > for > the link. To summarize, it seems the blocker to using Gitea's issue > tracking > feature is (a) lack of support for confidential issues and (b) how the > front-end > is currently deployed, while the reason not to prefer Launchpad is the > requirement to use Canonical's SSO. We're not going to be able to fix the > latter > so someone needs to either add confidential issue support to Gitea or > propose > yet another tool to fill the gap. That or help rewrite the entire frontend > of > StoryBoard, of course :) > > /me also wonders how rough newer versions of upstream Bugzilla (vs. the > heavily > customized Mozilla and Red Hat instances) are these days. 
> I only have experience with the Red Hat instance, and I find virtually any other solution superior to it, including a shared folder over NFS (only half-kidding). Dmitry > > Cheers, > Stephen > > > -yoctozepto > > > > > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From ankelezhang at gmail.com Wed Jan 27 09:29:42 2021 From: ankelezhang at gmail.com (Ankele zhang) Date: Wed, 27 Jan 2021 17:29:42 +0800 Subject: nova modify quota Message-ID: Hi~ I have an Rocky OpenStack platform on CentOS7.6. I need to delete some code about 'quota' in nova, because I build my baremetal node and my quota usage changed by nova. I have traced the Nova code by pdb, but I could not found the change quota usage place in nova. If ironic has config to avoid 'quota usage waste', or some good method to delete the code? Looking forward to your reply. Ankele -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Jan 27 22:34:48 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 27 Jan 2021 17:34:48 -0500 Subject: [cinder] s3 backup driver status Message-ID: (I think there are time zone issues preventing the team working on the driver from attending the cinder weekly meeting, so I'm sending this update to the mailing list.) We discussed the proposed s3 backup driver [0] at today's cinder meeting. The code is looking good and the developers have been responsive to comments and doing quick revisions. The patch has one +2 and looks very close to being approved. One thing that's missing at this point is a comment on the review from one of the developers that they've deployed it with cinder and it works, and under what conditions it's been tested (that is, actually with S3, or only with the swift proxy). So please add such a comment to the review. This will allow us to merge the backup driver while you continue to work on getting the CI running [1]. We haven't required CI on backup drivers previously, so the cinder team has taken the position that lack of CI is not a reason to hold up the backup driver from merging. I want to stress, however, that we are excited that you are working on getting CI for the driver. The CI is looking close to completion as well (the holdup at the moment seems to be a problem with the swift proxy and recent versions of the boto library). The devstack support for this effort [2] is also looking good. [0] https://review.opendev.org/c/openstack/cinder/+/746561 [1] https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/772085 [2] https://review.opendev.org/c/openstack/devstack/+/770171 From mnaser at vexxhost.com Thu Jan 28 01:21:11 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 27 Jan 2021 20:21:11 -0500 Subject: [tc] weekly meeting Message-ID: Hi everyone, Here’s the agenda for our weekly TC meeting. It will happen tomorrow (Thursday the 28th) at 1500 UTC in #openstack-tc and I will be your chair. If you can’t attend, please put your name in the “Apologies for Absence” section. 
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting # ACTIVE INITIATIVES * Follow up on past action items * Audit SIG list and chairs (diablo_rojo) - https://etherpad.opendev.org/p/2021-SIG-Updates - http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019994.html * Gate performance and heavy job configs (dansmith) - http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Dropping lower-constraints testing from all projects (gmann) - http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019672.html * Open Reviews - https://review.opendev.org/q/project:openstack/governance+is:open Thank you, Mohammed -- Mohammed Naser VEXXHOST, Inc. From rosmaita.fossdev at gmail.com Thu Jan 28 03:04:59 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 27 Jan 2021 22:04:59 -0500 Subject: [cinder] wallaby R-9 mid-cycle poll Message-ID: <8edeadaf-baba-616a-6db8-2c45bab0271d@gmail.com> Hello Cinder team and anyone interested in Cinder, The second wallaby mid-cycle meeting will be held during week R-9 (the week of 8 February 2021). It will be 2 hours long. Please indicate your availability on the following poll: https://doodle.com/poll/9vqri7855cab858d Please respond before 21:00 UTC on Tuesday 2 February 2021. thanks, brian From jean-francois.taltavull at elca.ch Thu Jan 28 07:59:50 2021 From: jean-francois.taltavull at elca.ch (Taltavull Jean-Francois) Date: Thu, 28 Jan 2021 07:59:50 +0000 Subject: Strange behaviour of OSC in keystone MFA context In-Reply-To: References: Message-ID: <27cda0ba41634425b5c4d688381d6107@elca.ch> > -----Original Message----- > From: Sean Mooney > Sent: mardi, 26 janvier 2021 20:01 > To: openstack-discuss at lists.openstack.org > Subject: Re: Strange behaviour of OSC in keystone MFA context > > On Tue, 2021-01-26 at 17:46 +0000, Taltavull Jean-Francois wrote: > > Hello, > > > > I'm experiencing the following strange behavior of openstack CLI with os- > auth-methods option (most parameters are defined in clouds.yaml): > > > > $ openstack token issue --os-auth-type v3multifactor --os-auth-methods > > password,totp > > > --os-auth-methods does not appear to be a standard part of osc infact i cant > find it in any openstack repo with > > i think this is the implemtaions > https://opendev.org/openstack/keystoneauth/src/branch/master/keystoneauth > 1/loading/_plugins/identity/v3.py#L303-L340 > > this presumable is where it generates teh optins > > options.extend([ > loading.Opt( > 'auth_methods', > required=True, > help="Methods to authenticate with."), > ]) > > > if i do openstack help --os-auth-type v3multifactor it does show up with the > following text > > --os-auth-methods > With v3multifactor: Methods to authenticate with. (Env: > OS_AUTH_METHODS) > > that does not say much but > > https://opendev.org/openstack/keystoneauth/src/branch/master/keystoneauth > 1/tests/unit/identity/test_identity_v3.py#L762-L800 > implies its a list > > with that said there are no test for multifactor as far as i can see like this one > https://opendev.org/openstack/python- > openstackclient/src/branch/master/openstackclient/tests/functional/common/t > est_args.py#L66-L79 > > there also does not seam too be a release note declaring support. > > so while keystone auth support multi factor im not sure that osc actully does > > i specpec that the fild type is not correct and it is indeed been parsed as a string > instead of a list of stirng field. > it might be fixable via keystoneauth but it proably need osc support and testing. 
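For reference, the list form I would expect to end up with in clouds.yaml looks something like this (untested sketch; the method names come from the keystoneauth unit tests linked above, and whether osc/openstacksdk actually passes the list through correctly is exactly what seems to be in question):

    clouds:
      mycloud:
        auth_type: v3multifactor
        auth:
          auth_methods:
            - v3password
            - v3totp
          auth_url: https://keystone.example.com/v3
          username: demo
          user_domain_name: Default
          project_name: demo
          project_domain_name: Default
          password: secret
          passcode: '123456'
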
> > > The plugin p could not be found > > > > Note that "p" is the first letter of "password". It looks like the option parser > handled "password,totp" as a string instead of as a list of strings. > > > > Version of openstack CLI is 5.4.0. > > > > Any idea ? > > > > Thanks ! > > > > Jean-François > > > > > > Thanks for your answer Sean. What can I do on my end to get things done ? Jean-François From jayadityagupta11 at gmail.com Thu Jan 28 08:48:36 2021 From: jayadityagupta11 at gmail.com (jayaditya gupta) Date: Thu, 28 Jan 2021 09:48:36 +0100 Subject: [osc-placement] placement_api_version in clouds.yaml not recognized Message-ID: Hello it seems during plugin creation placement_api_version is not recognized by the osc-placement plugin. I have looked at other plugins too but can't seem to understand where the clouds.yaml file is loaded or referenced to get the desired value. Story : https://storyboard.openstack.org/#!/story/2008553 Would be helpful if i can get some pointers Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Jan 28 11:35:25 2021 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 28 Jan 2021 12:35:25 +0100 Subject: [simplification] Making ask.openstack.org read-only In-Reply-To: <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> References: <648c6ac3-0ab8-e442-ed9b-fbbfbbea16f7@gmail.com> <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> Message-ID: <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> Eugen Block wrote: > Hi everyone, > > I have a question regarding read-only ask.openstack.org. > I understand the decision to make it read-only but one thing still > bothers me since I look up problems from time to time. And I noticed > that you can't expand all comments if there were more than a few. Users > were able to expand by clicking "see more comments" but that's not > possible anymore. Is there any way to make the whole page visible, maybe > remove that button and kind of "auto-expand" all comments? Some of the > comments might have valuable information. Hi Eugen, I fear that ask is not giving us that much control on what's enabled and disabled in "read-only mode". I see there is still pagination to access later answers... Would you have an example of a question with the "see more comments" behavior ? More generally, note that we'll hit a dead-end in April when the ask server LTS distribution will cease to be supported, and we have nobody in the community with time to work on porting the ask setup to more recent distributions... So the plan is to turn it off. At that point ask.openstack.org content will be lost, unless we somehow make a static copy somewhere. The Internet archive has copies of ask.openstack.org but they do not seem to run very deep. If anyone has ideas on how we could preserve that content without spending too many cycles on it, please share :) -- Thierry Carrez (ttx) From eblock at nde.ag Thu Jan 28 11:54:13 2021 From: eblock at nde.ag (Eugen Block) Date: Thu, 28 Jan 2021 11:54:13 +0000 Subject: [simplification] Making ask.openstack.org read-only In-Reply-To: <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> References: <648c6ac3-0ab8-e442-ed9b-fbbfbbea16f7@gmail.com> <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> Message-ID: <20210128115413.Horde.6wkizpG5gTeTBCA3HAvDUOG@webmail.nde.ag> Hi, thanks for your response and the clarification. 
I guess at some point most of the content will be obsolete, but I understand that it has to end at some point. > I fear that ask is not giving us that much control on what's enabled > and disabled in "read-only mode". I see there is still pagination to > access later answers... Would you have an example of a question with > the "see more comments" behavior ? One example would be here: https://ask.openstack.org/en/question/128442/how-to-setup-controller-node-ha-with-all-services/ Regards, Eugen Zitat von Thierry Carrez : > Eugen Block wrote: >> Hi everyone, >> >> I have a question regarding read-only ask.openstack.org. >> I understand the decision to make it read-only but one thing still >> bothers me since I look up problems from time to time. And I >> noticed that you can't expand all comments if there were more than >> a few. Users were able to expand by clicking "see more comments" >> but that's not possible anymore. Is there any way to make the whole >> page visible, maybe remove that button and kind of "auto-expand" >> all comments? Some of the comments might have valuable information. > > Hi Eugen, > > I fear that ask is not giving us that much control on what's enabled > and disabled in "read-only mode". I see there is still pagination to > access later answers... Would you have an example of a question with > the "see more comments" behavior ? > > More generally, note that we'll hit a dead-end in April when the ask > server LTS distribution will cease to be supported, and we have > nobody in the community with time to work on porting the ask setup > to more recent distributions... So the plan is to turn it off. > > At that point ask.openstack.org content will be lost, unless we > somehow make a static copy somewhere. The Internet archive has > copies of ask.openstack.org but they do not seem to run very deep. > If anyone has ideas on how we could preserve that content without > spending too many cycles on it, please share :) > > -- > Thierry Carrez (ttx) From skaplons at redhat.com Thu Jan 28 12:11:45 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 28 Jan 2021 13:11:45 +0100 Subject: [neutron] Drivers meeting agenda - 29.01.2021 Message-ID: <20210128121145.5xtzhddefo5l6rwx@p1.localdomain> Hi, Agenda for tomorrow's drivers meeting is available at [1]. We have 2 things to discuss: * 1 RFE - [RFE] [QoS] add qos rule type packet per second (pps) [2] * 1 On demand topic - Do we want to apply for tag 'assert:supports-api-interoperability' - see [3] for details. See You tomorrow on the meeting. [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda [2] https://bugs.launchpad.net/neutron/+bug/1912460 [3] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019505.html -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From smooney at redhat.com Thu Jan 28 12:38:08 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 28 Jan 2021 12:38:08 +0000 Subject: [keystone][osc]Strange behaviour of OSC in keystone MFA context In-Reply-To: <27cda0ba41634425b5c4d688381d6107@elca.ch> References: <27cda0ba41634425b5c4d688381d6107@elca.ch> Message-ID: <18d48a3d208317e9c9b220ff20ae2f46a2442ef0.camel@redhat.com> On Thu, 2021-01-28 at 07:59 +0000, Taltavull Jean-Francois wrote: > > -----Original Message----- > > From: Sean Mooney > > Sent: mardi, 26 janvier 2021 20:01 > > To: openstack-discuss at lists.openstack.org > > Subject: Re: Strange behaviour of OSC in keystone MFA context > > > > On Tue, 2021-01-26 at 17:46 +0000, Taltavull Jean-Francois wrote: > > > Hello, > > > > > > I'm experiencing the following strange behavior of openstack CLI with os- > > auth-methods option (most parameters are defined in clouds.yaml): > > > > > > $ openstack token issue --os-auth-type v3multifactor --os-auth-methods > > > password,totp > > > > > --os-auth-methods does not appear to be a standard part of osc infact i cant > > find it in any openstack repo with > > > > i think this is the implemtaions > > https://opendev.org/openstack/keystoneauth/src/branch/master/keystoneauth > > 1/loading/_plugins/identity/v3.py#L303-L340 > > > > this presumable is where it generates teh optins > > > >   options.extend([ > >             loading.Opt( > >                 'auth_methods', > >                 required=True, > >                 help="Methods to authenticate with."), > >         ]) > > > > > > if i do openstack help --os-auth-type v3multifactor it does show up with the > > following text > > > > --os-auth-methods > >                         With v3multifactor: Methods to authenticate with. (Env: > > OS_AUTH_METHODS) > > > > that does not say much but > > > > https://opendev.org/openstack/keystoneauth/src/branch/master/keystoneauth > > 1/tests/unit/identity/test_identity_v3.py#L762-L800 > > implies its a list > > > > with that said there are no test for multifactor as far as i can see like this one > > https://opendev.org/openstack/python- > > openstackclient/src/branch/master/openstackclient/tests/functional/common/t > > est_args.py#L66-L79 > > > > there also does not seam too be a release note declaring support. > > > > so while keystone auth support multi factor im not sure that osc actully does > > > > i specpec that the fild type is not correct and it is indeed been parsed as a string > > instead of a list of stirng field. > > it might be fixable via keystoneauth but it proably need osc support and testing. > > > > > The plugin p could not be found > > > > > > Note that "p" is the first letter of "password". It looks like the option parser > > handled "password,totp" as a string instead of as a list of strings. > > > > > > Version of openstack CLI is 5.4.0. > > > > > > Any idea ? > > > > > > Thanks ! > > > > > > Jean-François > > > > > > > > > > > > Thanks for your answer Sean. > > What can I do on my end to get things done ? well unfortunetly i do not work on keystone or osc i just saw your mail while i was waiting for some tests to finish running. with that said i have upstaed the subject to include both projects so hopefully that will get the attention of those that can help. 
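as a sanity check, driving the plugin directly from python the way its own unit tests do should look roughly like this (untested sketch, names and values are placeholders to adjust for your deployment):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # auth_methods is an actual list here, which is what the loader seems
    # to expect but what the cli option parsing apparently does not produce
    auth = v3.MultiFactor(
        auth_url='https://keystone.example.com/v3',
        auth_methods=['v3password', 'v3totp'],
        username='demo',
        password='secret',
        passcode='123456',
        user_domain_id='default',
        project_name='demo',
        project_domain_id='default',
    )
    sess = session.Session(auth=auth)
    print(sess.get_token())

if that works against your cloud it would at least confirm the keystone side is fine and narrow the problem down to the option loading in osc.
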
> > Jean-François From b.petermann at syseleven.de Thu Jan 28 12:55:12 2021 From: b.petermann at syseleven.de (Bodo Petermann) Date: Thu, 28 Jan 2021 13:55:12 +0100 Subject: [neutron] Drivers meeting agenda - 29.01.2021 In-Reply-To: <20210128121145.5xtzhddefo5l6rwx@p1.localdomain> References: <20210128121145.5xtzhddefo5l6rwx@p1.localdomain> Message-ID: Hi, could we also add the RFE VPNaaS for OVN? Especially to discuss what may be blocking the spec to be accepted. [1] https://bugs.launchpad.net/neutron/+bug/1905391 Thank you Bodo Petermann (bpetermann) ------ SysEleven GmbH Boxhagener Straße 80 10245 Berlin T +49 30 233 2012 0 F +49 30 616 7555 0 http://www.syseleven.de http://www.facebook.com/SysEleven http://www.twitter.com/syseleven Aktueller System-Status immer unter: https://www.syseleven-status.net/ Firmensitz: Berlin Registergericht: AG Berlin Charlottenburg, HRB 108571 B Geschäftsführer: Marc Korthaus, Jens Ihlenfeld > Am 28.01.2021 um 13:11 schrieb Slawek Kaplonski : > > Hi, > > Agenda for tomorrow's drivers meeting is available at [1]. > We have 2 things to discuss: > > * 1 RFE - [RFE] [QoS] add qos rule type packet per second (pps) [2] > * 1 On demand topic - Do we want to apply for tag > 'assert:supports-api-interoperability' - see [3] for details. > > See You tomorrow on the meeting. > > [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda > [2] https://bugs.launchpad.net/neutron/+bug/1912460 > [3] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019505.html > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Jan 28 13:11:51 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 28 Jan 2021 14:11:51 +0100 Subject: [neutron] Drivers meeting agenda - 29.01.2021 In-Reply-To: References: <20210128121145.5xtzhddefo5l6rwx@p1.localdomain> Message-ID: <20210128131151.7cnov2s5uqckak23@p1.localdomain> Hi, On Thu, Jan 28, 2021 at 01:55:12PM +0100, Bodo Petermann wrote: > Hi, > > could we also add the RFE VPNaaS for OVN? > Especially to discuss what may be blocking the spec to be accepted. Sure. I added it to the "On Demand" section. > > [1] https://bugs.launchpad.net/neutron/+bug/1905391 > > Thank you > Bodo Petermann (bpetermann) > > ------ > > SysEleven GmbH > Boxhagener Straße 80 > 10245 Berlin > > T +49 30 233 2012 0 > F +49 30 616 7555 0 > > http://www.syseleven.de > http://www.facebook.com/SysEleven > http://www.twitter.com/syseleven > > Aktueller System-Status immer unter: > https://www.syseleven-status.net/ > > > Firmensitz: Berlin > Registergericht: AG Berlin Charlottenburg, HRB 108571 B > Geschäftsführer: Marc Korthaus, Jens Ihlenfeld > > > Am 28.01.2021 um 13:11 schrieb Slawek Kaplonski : > > > > Hi, > > > > Agenda for tomorrow's drivers meeting is available at [1]. > > We have 2 things to discuss: > > > > * 1 RFE - [RFE] [QoS] add qos rule type packet per second (pps) [2] > > * 1 On demand topic - Do we want to apply for tag > > 'assert:supports-api-interoperability' - see [3] for details. > > > > See You tomorrow on the meeting. 
> > > > [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda > > [2] https://bugs.launchpad.net/neutron/+bug/1912460 > > [3] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019505.html > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From thierry at openstack.org Thu Jan 28 13:16:44 2021 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 28 Jan 2021 14:16:44 +0100 Subject: [simplification] Making ask.openstack.org read-only In-Reply-To: <20210128115413.Horde.6wkizpG5gTeTBCA3HAvDUOG@webmail.nde.ag> References: <648c6ac3-0ab8-e442-ed9b-fbbfbbea16f7@gmail.com> <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> <20210128115413.Horde.6wkizpG5gTeTBCA3HAvDUOG@webmail.nde.ag> Message-ID: <2dd637ab-82ff-c481-c3aa-bbc202532c3d@openstack.org> Eugen Block wrote: >> I fear that ask is not giving us that much control on what's enabled >> and disabled in "read-only mode". I see there is still pagination to >> access later answers... Would you have an example of a question with >> the "see more comments" behavior ? > > One example would be here: > > https://ask.openstack.org/en/question/128442/how-to-setup-controller-node-ha-with-all-services/ Thanks. It seems the "see more comments" button is a variation on the "add a comment" button, so its action is unfortunately blocked by the read-only flag. -- Thierry From thierry at openstack.org Thu Jan 28 13:24:19 2021 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 28 Jan 2021 14:24:19 +0100 Subject: [ops][largescale-sig] How many compute nodes in a single cluster ? Message-ID: <533ac947-27cb-5ee8-ae7b-9553ca74ad8a@openstack.org> Hi everyone, As part of the Large Scale SIG[1] activities, I'd like to quickly poll our community on the following question: How many compute nodes do you feel comfortable fitting in a single-cluster deployment of OpenStack, before you need to scale it out to multiple regions/cells/.. ? Obviously this depends on a lot of deployment-dependent factors (type of activity, choice of networking...) so don't overthink it: a rough number is fine :) [1] https://wiki.openstack.org/wiki/Large_Scale_SIG Thanks in advance, -- Thierry Carrez (ttx) From eblock at nde.ag Thu Jan 28 13:32:46 2021 From: eblock at nde.ag (Eugen Block) Date: Thu, 28 Jan 2021 13:32:46 +0000 Subject: [simplification] Making ask.openstack.org read-only In-Reply-To: <2dd637ab-82ff-c481-c3aa-bbc202532c3d@openstack.org> References: <648c6ac3-0ab8-e442-ed9b-fbbfbbea16f7@gmail.com> <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> <20210128115413.Horde.6wkizpG5gTeTBCA3HAvDUOG@webmail.nde.ag> <2dd637ab-82ff-c481-c3aa-bbc202532c3d@openstack.org> Message-ID: <20210128133246.Horde.RnH_lJU1DMBR3Y0TzH25wZ2@webmail.nde.ag> Alright, thanks for looking into it. Zitat von Thierry Carrez : > Eugen Block wrote: >>> I fear that ask is not giving us that much control on what's >>> enabled and disabled in "read-only mode". I see there is still >>> pagination to access later answers... Would you have an example of >>> a question with the "see more comments" behavior ? 
>> >> One example would be here: >> >> https://ask.openstack.org/en/question/128442/how-to-setup-controller-node-ha-with-all-services/ > > Thanks. It seems the "see more comments" button is a variation on > the "add a comment" button, so its action is unfortunately blocked > by the read-only flag. > > -- > Thierry From hberaud at redhat.com Thu Jan 28 13:40:00 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 28 Jan 2021 14:40:00 +0100 Subject: [all] Proposed Xena cycle schedule In-Reply-To: References: Message-ID: Hello, Just a quick heads up to inform you that we replaced the 23 week schedule by a 24 week schedule. https://review.opendev.org/c/openstack/releases/+/772367 @distros maintainers: Please feel free to approve the patch that best fits your calendar and feel free to suggest modifications. - https://review.opendev.org/c/openstack/releases/+/772367 (24w) - https://review.opendev.org/c/openstack/releases/+/772357 (25w) Thanks for reading. Le lun. 25 janv. 2021 à 18:40, Herve Beraud a écrit : > Hey everyone, > > The Wallaby cycle is going by fast, and it's already time to start > planning some of the early things for the Xena release. One of the > first steps for that is actually deciding on the release schedule. > > Typically we have done this based on when the next Summit event was > planned to take place. Due to several reasons, we don't have a date yet > for the second 2021 event. > > The current thinking is it will likely take place in October (nothing is > set, just an educated guess, so please don't use that for any other > planning). So for the sake of figuring out the release schedule, > we are proposing two schedules, one targeting a release date in mid of > september, one > targeting a release date in early October. Hopefully this will then align > well with event plans. > > I have two proposed release schedule up for review here: > > - https://review.opendev.org/c/openstack/releases/+/772367 (23w) > - https://review.opendev.org/c/openstack/releases/+/772357 (25w) > > Please feel free to comment on the patch if you see any major > issues that we may have not considered. > > Thanks! 
> -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Jan 28 14:00:50 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 28 Jan 2021 15:00:50 +0100 Subject: [simplification] Making ask.openstack.org read-only In-Reply-To: <2dd637ab-82ff-c481-c3aa-bbc202532c3d@openstack.org> References: <648c6ac3-0ab8-e442-ed9b-fbbfbbea16f7@gmail.com> <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> <20210128115413.Horde.6wkizpG5gTeTBCA3HAvDUOG@webmail.nde.ag> <2dd637ab-82ff-c481-c3aa-bbc202532c3d@openstack.org> Message-ID: On Thu, Jan 28, 2021 at 2:16 PM Thierry Carrez wrote: > > Eugen Block wrote: > >> I fear that ask is not giving us that much control on what's enabled > >> and disabled in "read-only mode". I see there is still pagination to > >> access later answers... Would you have an example of a question with > >> the "see more comments" behavior ? > > > > One example would be here: > > > > https://ask.openstack.org/en/question/128442/how-to-setup-controller-node-ha-with-all-services/ > > Thanks. It seems the "see more comments" button is a variation on the > "add a comment" button, so its action is unfortunately blocked by the > read-only flag. I believe it's due to the fact that the whole JavaScript got killed by a syntax error in: askbot['messages']['readOnlyMessage'] = "The ask.openstack.org website will be read-only from now on. 
Please ask questions on the openstack-discuss mailing-list, stackoverflow.com for coding or serverfault.com for operations."; The double quote sign breaks the string improperly. Once that is fixed, the JavaScript functions should come back. I can't see the repo in opendev so can't send a patch there. Perhaps you can help? -yoctozepto From aditi.Dukle at ibm.com Thu Jan 28 08:10:48 2021 From: aditi.Dukle at ibm.com (aditi Dukle) Date: Thu, 28 Jan 2021 08:10:48 +0000 Subject: Integration tests failing on ppc64le master branch In-Reply-To: References: , <20210121131218.44gumx7dvwxqkoci@lyarwood-laptop.usersys.redhat.com>, <5E9MMQ.3INH7FY465VR3@est.tech> Message-ID: An HTML attachment was scrubbed... URL: From martin at chaconpiza.com Thu Jan 28 09:59:25 2021 From: martin at chaconpiza.com (Martin Chacon Piza) Date: Thu, 28 Jan 2021 10:59:25 +0100 Subject: [Release-job-failures] Release of openstack/monasca-tempest-plugin for ref refs/tags/2.2.0 failed In-Reply-To: References: <20210120231106.vt2hdgtdsyh2jurt@yuggoth.org> Message-ID: Hi Hervé, I just proposed the new release. https://review.opendev.org/c/openstack/releases/+/772845 I am not sure whether the version should be 2.2.1 Regards, Martin El jue, 21 de ene. de 2021 a la(s) 07:49, Herve Beraud (hberaud at redhat.com) escribió: > > > Le jeu. 21 janv. 2021 à 00:13, Jeremy Stanley a > écrit : > >> On 2021-01-20 11:18:56 +0100 (+0100), Herve Beraud wrote: >> > Le mer. 20 janv. 2021 à 11:01, Martin Chacon Piza < >> chacon.piza at gmail.com> a >> > écrit : >> > > Thanks for your note. This change will fix the problem >> > > >> https://review.opendev.org/c/openstack/monasca-tempest-plugin/+/771523 >> [...] >> > > Could you help us please to restart the monasca-tempest-plugin >> release? >> > >> > Sure, I asked the infra team to reenqueue the job, let's wait for that, >> do >> > not hesitate to join #openstack-infra to join the discussion. >> [...] >> >> I'm still catching up, sorry (it's been a busy day), but I have the >> same question which was posed by others in IRC when you initially >> brought it up. If the fix merged after the release request, isn't a >> new release request needed instead which references a later branch >> state containing the fix? Just trying to re-run release jobs with a >> ref from the old unfixed state of the repository will presumably >> only reproduce the same error, unless I'm misunderstanding the >> problem. >> > > Yes you're right, a new release should be done first to release the fix, > sorry. > @Martin: Can you propose a new release, the previous one version will stay > absent of the registry. 
> > -- >> Jeremy Stanley >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sdiaz at cv.fibercorp.com.ar Thu Jan 28 14:11:37 2021 From: sdiaz at cv.fibercorp.com.ar (Diaz, Santiago Miguel) Date: Thu, 28 Jan 2021 14:11:37 +0000 Subject: Ussuri Trove trovestack problem Message-ID: Hi. Im new in Openstack. When I try to create Trove guest image with the next command ./trovestack build-image mariadb ubuntu xenial true ubuntu exit 1 obtain. Reading the logs I found: 2021-01-28 13:30:13.217 | [error] The following package is needed by the script, but not installed: 2021-01-28 13:30:13.218 | apt-transport-https 2021-01-28 13:30:13.218 | Please install and rerun the script. I resolved it changing then next file: /trove/integration/scripts/files/elements/ubuntu-guest/pre-install.d/04-baseline-tools - apt-get --allow-unauthenticated install -y language-pack-en python-software-properties software-properties-common + apt-get --allow-unauthenticated install -y language-pack-en python-software-properties software-properties-common apt-transport-https I hope it helps AVISO LEGAL: El presente correo electrónico y la información contenida en el mismo, es información privada y confidencial, está dirigida únicamente a su destinatario y no puede ser revelada a terceros, ni utilizada inapropiadamente en interés del destinatario. Excepto que se haya establecido expresamente de otra forma, éste correo electrónico y la mencionada información privada y confidencial no constituye una oferta, ni una promesa, ni una propuesta y no será interpretado como aceptación, ni como una tratativa pre-contractual, acuerdo parcial, contrato preliminar y/o pre-contrato, ni como un compromiso vinculante y/o declaración de voluntad oficial de TELECOM ARGENTINA S.A. Si Usted no es el destinatario original de éste mensaje y por ese medio pudo acceder a dicha información, por favor, elimine el mensaje. La distribución o copia de este mensaje esta estrictamente prohibida. La transmisión de e-mails no garantiza que el correo electrónico sea seguro o libre de error. Por consiguiente, no manifestamos que esta información sea completa o precisa. Toda información está sujeta a alterarse sin previo aviso. This email and the information contained herein are proprietary and confidential, and intended for the recipient only and cannot be disclosed to third parties or used inappropriately in the interest of the recipient. 
Except it has been expressly stated otherwise, the email and said private and confidential information are not an offer or promise, or a proposal and will not be construed as acceptance or as a pre-contractual negotiation, partial agreement, preliminary contract and / or pre-contract or binding commitment and/or an official statement of intent of TELECOM ARGENTINA S.A. If you are not the intended recipient of this message and thereby gained access to such information, please remove the message. Distribution or copying of this message is strictly prohibited. Transmission of e-mails does not guarantee that e-mail is safe or free from error. Therefore, we do not represent that this information is complete or accurate. All information is subject to change without notice. -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Jan 28 14:55:37 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 28 Jan 2021 14:55:37 +0000 Subject: [nova]Integration tests failing on ppc64le master branch In-Reply-To: References: , <20210121131218.44gumx7dvwxqkoci@lyarwood-laptop.usersys.redhat.com> , <5E9MMQ.3INH7FY465VR3@est.tech> Message-ID: <9e5f7ea3aac81f7133760bc4f7ff3a19b5ff96f3.camel@redhat.com> On Thu, 2021-01-28 at 08:10 +0000, aditi Dukle wrote: > Hi Lee, > > I observed some recent failures in the integration tests run on Power machines. The failures indicate the VM failed to boot on ppc. Here are the > logs- https://oplab9.parqtec.unicamp.br/pub/ppc64el/openstack/nova/42/769942/2/check/tempest-dsvm-full-focal-py3/8811347/job-output.txt > Could you please help identify the cause? added nova to the subject. looking at the n-cpu logs the error is pretty clear Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: ERROR nova.virt.libvirt.guest [None req-ffe2d074-e828-4851- a8da-3d0aecdb5873 tempest-ServerDiskConfigTestJSON-1142390778 tempest-ServerDiskConfigTestJSON-1142390778-project] Error defining a guest with XML: Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: 97ddda4b-1b4b-4c2d-a77f-087faeae06ae Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: instance-00000021 Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: 131072 Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: 1 Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: tempest-ServerDiskConfigTestJSON-server- 1335737010 Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: 2021-01-28 00:11:24 Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: 128 Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: 1 Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: 0 Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: 0 Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: 1 Jan 28 00:11:24.743424 
zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: [... the rest of the libvirt domain XML is unreadable at this point: the XML elements were stripped when the HTML mail was scrubbed by the archive, leaving several dozen empty nova-compute log lines, which are omitted here ...] Jan 28 00:11:24.743424 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: : libvirt.libvirtError: unsupported configuration: USB is disabled for this domain, but USB devices are present in the domain XML Jan 28 00:11:24.746856 zuulv3-devstack-focal-test-ppc64-0000031478 nova-compute[192435]: ERROR nova.virt.libvirt.driver [None req-ffe2d074-e828-4851-a8da-3d0aecdb5873 tempest-ServerDiskConfigTestJSON-1142390778 tempest-ServerDiskConfigTestJSON-1142390778-project] [instance: 97ddda4b-1b4b-4c2d-a77f-087faeae06ae] Failed to start libvirt guest: libvirt.libvirtError: unsupported configuration: USB is disabled for this domain, but USB devices are present in the domain XML
so this is probably a result of https://github.com/openstack/nova/commit/c34a17db6f1ed4469d963a4e6426594602939912 Stephen, can you take a look? I'm not seeing anything in there that uses the USB device, so I'm guessing we need to disable one that is auto-added by libvirt on ppc. Aditi, can you provide the XML from a VM booted on ppc normally, before this change? Specifically the fully populated XML retrieved from libvirt with virsh dumpxml, so we can see what it adds.
> > Thanks, > Aditi Dukle > > > > ----- Original message ----- > > From: aditi Dukle/India/Contr/IBM > > To: lyarwood at redhat.com > > Cc: Michael J Turek/Poughkeepsie/IBM at IBM, openstack-discuss at lists.openstack.org, Sajauddin Mohammad/India/Contr/IBM at IBM > > Subject: Re: [EXTERNAL] Re: [nova] unit testing on ppc64le > > Date: Fri, Jan 22, 2021 12:43 PM > > > > Hi Lee, > > > > Thanks for getting this fixed. I re-ran the job using the change https://review.opendev.org/c/openstack/nova/+/741545/ and we have 0 failures. 
> > Here are the test results - https://oplab9.parqtec.unicamp.br/pub/ppc64el/openstack/nova/periodic/openstack-tox-py39/2021-01-22-0709-66b8935/job- > > output.txt > > > > Thanks, > > Aditi Dukle > >   > > > ----- Original message ----- > > > From: Lee Yarwood > > > To: aditi Dukle > > > Cc: Michael J Turek , Sajauddin Mohammad , openstack-discuss at lists.openstack.org > > > Subject: [EXTERNAL] Re: [nova] unit testing on ppc64le > > > Date: Thu, Jan 21, 2021 6:42 PM > > >   > > > On 21-01-21 09:22:44, Lee Yarwood wrote: > > > > On Tue, 19 Jan 2021 at 20:28, aditi Dukle wrote: > > > > > > > > > > Hi Mike, > > > > > > > > > > I have started nova unit test jobs in the periodic pipeline(runs everyday at UTC- 12) for each openstack branch as follows: > > > > > periodic: > > > > >       jobs: > > > > >         - openstack-tox-py27: > > > > >             branches: > > > > >               - stable/ocata > > > > >               - stable/pike > > > > >               - stable/queens > > > > >               - stable/rocky > > > > >               - stable/stein > > > > >               - stable/train > > > > >         - openstack-tox-py36: > > > > >             branches: > > > > >               - stable/train > > > > >               - stable/ussuri > > > > >               - stable/victoria > > > > >         - openstack-tox-py37: > > > > >             branches: > > > > >               - stable/train > > > > >               - stable/ussuri > > > > >         - openstack-tox-py38: > > > > >             branches: > > > > >               - stable/victoria > > > > >         - openstack-tox-py39: > > > > >             branches: > > > > >               - master > > > > > > > > > > I have observed a few failures in the unit test cases mostly all related to volume drivers. Please have a look at the 14 test cases that are > > > > > failing in openstack-tox-py39 job( https://oplab9.parqtec.unicamp.br/pub/ppc64el/openstack/nova/periodic/openstack-tox-py39/2021-01-19-0058- > > > > > 38c70ae/job-output.txt ).  Most of the 14 failures report these errors: > > > > > InvalidConnectorProtocol: Invalid InitiatorConnector protocol specified NVME > > > > > InvalidConnectorProtocol: Invalid InitiatorConnector protocol specified SCALEIO > > > > > > > > > > I would need some help in understanding if these connectors(NVME,SCALEIO) are supported on ppc64. > > > > > > > > Right os-brick doesn't support these connectors on ppc64 but the real > > > > issue here is with the unit tests that don't mock out calls os-brick, > > > > an out of tree lib. I've filed a bug for this below: > > > > > > > > LibvirtNVMEVolumeDriverTestCase and LibvirtScaleIOVolumeDriverTestCase > > > > unit tests fail on ppc64 > > > > https://bugs.launchpad.net/nova/+bug/1912608  > > > > > > > > I'll push a fix for this shortly. > > > > > > libvirt: Stop NVMe and ScaleIO unit tests from calling os-brick > > > https://review.opendev.org/c/openstack/nova/+/771806/  > > >   > > > > In addition to these failures I also see stable rescue tests failing, > > > > these should be fixed by the following change: > > > > > > > > libvirt: Mock get_arch during some stable rescue unit tests > > > > https://review.opendev.org/c/openstack/nova/+/769916/  > > > > > > I also noticed some additional failures caused by the way in which the > > > libvirt virt driver loads its volume drivers at startup that also > > > attempt to load the underlying os-brick connector. As above this results > > > in volume drivers failing to load and being dropped. 
> > > > > > We've actually had a long standing TODO in the driver to move to loading > > > these drivers on-demand so I've proposed the following: > > > > > > libvirt: Load and cache volume drivers on-demand > > > https://review.opendev.org/c/openstack/nova/+/741545/  > > > > > > Can you rerun your tests using the above change and I'll try to address > > > any additional failures. > > >   > > > > Finally, in an effort to root out any further issues caused by tests > > > > looking up the arch of the test host I've pushed the following change > > > > to poison  nova.objects.fields.Architecture.from_host: > > > > > > > > tests: Poison nova.objects.fields.Architecture.from_host > > > > https://review.opendev.org/c/openstack/nova/+/769920  > > > > > > > > There's a huge amount of fallout from this that I'll try to address in > > > > the coming weeks ahead of M3. > > > > > > > > Hope this helps! > > > > > > > > Lee > > > > > > > > > ----- Original message ----- > > > > > From: aditi Dukle/India/Contr/IBM > > > > > To: Michael J Turek/Poughkeepsie/IBM at IBM > > > > > Cc: Sajauddin Mohammad/India/Contr/IBM at IBM, openstack-discuss at lists.openstack.org > > > > > Subject: Re: [EXTERNAL] [nova] unit testing on ppc64le > > > > > Date: Tue, Jan 12, 2021 5:41 PM > > > > > > > > > > Hi Mike, > > > > > > > > > > I have created these unit test jobs - openstack-tox-py27, openstack-tox-py35, openstack-tox-py36, openstack-tox-py37, openstack-tox-py38, > > > > > openstack-tox-py39 > > > > > by referring to the upstream CI( https://opendev.org/openstack/openstack-zuul-jobs/src/branch/master/zuul.d/jobs.yaml ) and these jobs are > > > > > triggered for every patchset in the Openstack CI. > > > > > > > > > > I checked the code for old CI for Power, we didn't have any unit test jobs that were run for every patchset for nova. We had one "nova- > > > > > python27" job that was run in a periodic pipeline. So, I wanted to know if we need to run the unit test jobs on ppc for every patchset for > > > > > nova? and If yes, should these be reporting to the Openstack community? > > > > > > > > > > > > > > > Thanks, > > > > > Aditi Dukle > > > > > > > > > > ----- Original message ----- > > > > > From: Michael J Turek/Poughkeepsie/IBM > > > > > To: balazs.gibizer at est.tech, aditi Dukle/India/Contr/IBM at IBM, Sajauddin Mohammad/India/Contr/IBM at IBM > > > > > Cc: openstack-discuss at lists.openstack.org > > > > > Subject: Re: [EXTERNAL] [nova] unit testing on ppc64le > > > > > Date: Sat, Jan 9, 2021 12:52 AM > > > > > > > > > > Thanks for the heads up, > > > > > > > > > > We should have the capacity to add them. At one point I think we ran unit tests for nova but the job may have been culled in the move to > > > > > zuul v3. I've CC'd the maintainers of the CI, Aditi Dukle and Sajauddin Mohammad. > > > > > > > > > > Aditi and Sajauddin, could we add a job to pkvmci to run unit tests for nova? > > > > > > > > > > Michael Turek > > > > > Software Engineer > > > > > Power Cloud Department > > > > > 1 845 433 1290 Office > > > > > mjturek at us.ibm.com > > > > > He/Him/His > > > > > > > > > > IBM > > > > > > > > > > > > > > > > > > > > > > > > > ----- Original message ----- > > > > > From: Balazs Gibizer > > > > > To: OpenStack Discuss > > > > > Cc: mjturek at us.ibm.com > > > > > Subject: [EXTERNAL] [nova] unit testing on ppc64le > > > > > Date: Fri, Jan 8, 2021 7:59 AM > > > > > > > > > > Hi, > > > > > > > > > > We have a bugreport[1] showing that our unit tests are not passing on > > > > > ppc. 
In the upstream CI we don't have test capability to run our tests > > > > > on ppc. But we have the IBM Power KVM CI[2] that runs integration tests > > > > > on ppc. I'm wondering if IBM could extend the CI to run nova unit and > > > > > functional tests too. I've added Michael Turek (mjturek at us.ibm.com) to > > > > > CC. Michael is listed as the contact person for the CI. > > > > > > > > > > Cheers, > > > > > gibi > > > > > > > > > > [1]https://bugs.launchpad.net/nova/+bug/1909972  > > > > > [2]https://wiki.openstack.org/wiki/ThirdPartySystems/IBMPowerKVMCI  > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -- > > > Lee Yarwood                 A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76 > > >   > >   >   > From pierre at stackhpc.com Thu Jan 28 14:58:08 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 28 Jan 2021 15:58:08 +0100 Subject: [simplification] Making ask.openstack.org read-only In-Reply-To: <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> References: <648c6ac3-0ab8-e442-ed9b-fbbfbbea16f7@gmail.com> <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> Message-ID: On Thu, 28 Jan 2021 at 12:44, Thierry Carrez wrote: > At that point ask.openstack.org content will be lost, unless we somehow > make a static copy somewhere. The Internet archive has copies of > ask.openstack.org but they do not seem to run very deep. If anyone has > ideas on how we could preserve that content without spending too many > cycles on it, please share :) It might be possible without scraping the site. It appears you can iterate on all pages indexing questions with: https://ask.openstack.org/en/questions/scope:all/sort:activity-desc/page:/ Each question can be retrieved directly by its ID without having to know the last part of the URL. For example, https://ask.openstack.org/en/question/24115/ redirects to https://ask.openstack.org/en/question/24115/sample-data-of-objectringgz-containerringgz-and-accountringgz/ So, if an administrator could extract all question IDs from the database, you could feed the question URLs and the index pages to the Wayback Machine Save Page Now service, for example via this library: https://github.com/pastpages/savepagenow Although without a working search engine the usefulness of the archive is limited. From dtantsur at redhat.com Thu Jan 28 16:03:17 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 28 Jan 2021 17:03:17 +0100 Subject: [ironic] On redfish-virtual-media vs idrac-redfish-virtual-media In-Reply-To: References: Message-ID: Hi, This is how the patch would look like: https://review.opendev.org/c/openstack/ironic/+/772899 It even passes redfish and idrac unit tests unmodified (modulo incorrect mocking in test_boot). Dmitry On Mon, Jan 25, 2021 at 3:52 PM Dmitry Tantsur wrote: > Hi, > > On Mon, Jan 25, 2021 at 7:04 AM Pioso, Richard > wrote: > >> On Wed, Jan 20, 2021 at 9:56 AM Dmitry Tantsur >> wrote: >> > >> > Hi all, >> > >> > Now that we've gained some experience with using Redfish virtual >> > media I'd like to reopen the discussion about $subj. For the context, >> the >> > idrac-redfish-virtual-media boot interface appeared because Dell >> > machines need an additional action [1] to boot from virtual media. The >> > initial position on hardware interfaces was that anything requiring OEM >> > actions must go into a vendor hardware interface. I would like to >> propose >> > relaxing this (likely unwritten) rule. 
>> > >> > You see, this distinction causes a lot of confusion. Ironic supports >> > Redfish, ironic supports iDRAC, iDRAC supports Redfish, ironic supports >> > virtual media, Redfish supports virtual media, iDRAC supports virtual >> > media. BUT! You cannot use redfish-virtual-media with iDRAC. Just today >> > I had to explain the cause of it to a few people. It required diving >> into >> > how exactly Redfish works and how exactly ironic uses it, which is >> > something we want to protect our users from. >> >> Wow! Now I’m confused, too. AFAIU, the people you had to help decided to >> use the redfish driver, instead of the idrac driver. It is puzzling that >> they decided to do that considering the ironic driver composition reform >> [1] of a couple years ago. Recall that reform allows “having one vendor >> driver with options configurable per node instead of many drivers for every >> vendor” and had the following goals. >> > > When discussing the user's confusion we should not operate in terms of > ironic (especially since the problem happened in metal3 land, which > abstracts away ironic). As a user, when I see Redfish and virtual media, > and I know that Dell supports them, I can expect redfish-virtual-media to > work. The fact that it does not may cause serious perception problems. The > one I'm particularly afraid of is end users thinking "iDRAC is not Redfish > compliant". > > >> >> “- Make vendors in charge of defining a set of supported interface >> implementations in priority order. >> - Allow vendors to guarantee that unsupported interface implementations >> will not be used with hardware types they define. This is done by having a >> hardware type list all interfaces it supports.” >> >> The idrac driver is Dell Technologies’ vendor driver for systems with an >> iDRAC. It offers a one-stop shop for using ironic to manage its systems. >> Users can select among the hardware interfaces it supports. Each interface >> uses a single management protocol -- Redfish, WS-Man, and soon IPMI [2] -- >> to communicate with the BMC. While it supports the >> idrac-redfish-virtual-media boot interface, it does not support >> redfish-virtual-media. One cannot configure a node with the idrac driver to >> use redfish-virtual-media. >> > > I know, the problem is explaining to users why they can use the redfish > hardware type with Dell machines, but only partly. > > >> >> > >> > We already have a precedent [2] of adding vendor-specific handling to >> > a generic driver. >> >> That change was introduced about a month ago in the community’s >> vendor-independent ipmi driver. That was very understandable, since IPMI is >> a very mature management protocol and was introduced over 22 years ago. I >> cannot remember what I was doing back then :) As one would expect, the ipmi >> driver has experienced very little change over the past two-plus years. I >> count roughly two (2) substantive changes over that period. By contrast, >> the Redfish protocol is just over five (5) years old. Its >> vendor-independent driver, redfish, has been fertile ground for adding new, >> advanced features, such as BIOS settings configuration, firmware update, >> and RAID configuration, and fixing bugs. It fosters lots of change, too >> many for me to count. >> >> > I have proposed a patch [3] to block using redfish- >> > virtual-media for Dell hardware, but I grew to dislike this approach. 
It >> > does not have precedents in the ironic code base and it won't scale well >> > if we have to handle vendor differences for vendors that don't have >> > ironic drivers. >> >> Dell understands and is on board with ironic’s desire that vendors >> support the full functionality offered by the vendor-independent redfish >> driver. If the iDRAC is broken with regards to redfish-virtual-media, then >> we have a vested interest in fixing it. >> While that is worked, an alternative approach could be for our community >> to strengthen its promotion of the goals of the driver composition reform. >> That would leverage ironic’s long-standing ability to ensure people only >> use hardware interfaces which the vendor and its driver support. >> > > Yep. I don't necessarily disagree with that, but it poses issues for > layered products like metal3, where on each abstraction level a small > nuance is lost, and the end result is confusion and frustration. > > >> >> > >> > Based on all this I suggest relaxing the rule to the following: if a >> feature >> > supported by a generic hardware interface requires additional actions or >> > has a minor deviation from the standard, allow handling it in the >> generic >> > hardware interface. Meaning, redfish-virtual-media starts handling the >> > Dell case by checking the System manufacturer (via the recently added >> > detect_vendor call) and loading the OEM code if it matches "Dell". After >> > this idrac-redfish-virtual-media will stay empty (for future >> enhancements >> > and to make the patch backportable). >> >> That would cause the vendor-independent redfish driver to become >> dependent on sushy-oem-idrac, which is not under ironic governance. >> > > This itself is not a problem, most of the projects we depend on are not > under ironic governance. > > Also it won't be a hard dependency, only if we detect 'Dell' in > system.manufacturer. > > >> >> It is worth pointing out the sushy-oem-idrac library is necessary to get >> virtual media to work with Dell systems. It was first created for that >> purpose. It is not a workaround like those in sushy, which accommodate >> common, minor standards interpretation and implementation differences >> across vendors by sprinkling a bit of code here and there within the >> library, unbeknownst to ironic proper. >> >> We at Dell Technologies are concerned that the proposed rule change would >> result in a greater code review load on the ironic community. Since >> vendor-specific code would be in the generic hardware interface, much more >> care, eyes, and integration testing against physical hardware would be >> needed to ensure it does not break others. And our community is already >> concerned about its limited available review bandwidth [3]. Generally >> speaking, the vendor third-party CIs do not cover all drivers. Rather, each >> vendor only tests its own driver, and, in some cases, sushy. Therefore, >> changes to the vendor-independent redfish driver may introduce regressions >> in what has been working with various hardware and not be detected by >> automated testing before being merged. >> > > The change will, in fact, be tested by your 3rd party CI because it was > used by both the generic redfish hardware type and the idrac one. > > I guess a source of confusion may be this: I don't suggest the idrac > hardware type goes away, nor do I suggest we start copying its > Dell-specific features to redfish. 
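For what it's worth, the manufacturer check itself is tiny at the Redfish level; it is roughly the following with plain sushy (a hand-wavy sketch only: the BMC address, credentials and system path are placeholders, and in ironic the real plumbing goes through the management interface's detect_vendor()):

    import sushy

    # Placeholders only; point this at a real BMC to try it out.
    root = sushy.Sushy('https://bmc.example.com/redfish/v1',
                       username='admin', password='secret', verify=False)
    system = root.get_system('/redfish/v1/Systems/1')

    # detect_vendor() effectively boils down to reading this property.
    if (system.manufacturer or '').strip().lower().startswith('dell'):
        print('Dell system: the OEM (sushy-oem-idrac) handling would be used')
    else:
        print('Plain, standards-only Redfish virtual media path')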
> > >> >> Can we afford this additional review load, prospective slowing down of >> innovation with Redfish, and likely undetected regressions? Would that be >> best for our users when we could fix the problem in other ways, such as the >> one suggested above? >> >> Also consider that feedback to the DMTF to drive vendor consistency is >> critical, but the DMTF needs feedback on what is broken in order to push >> others to address a problem. Remember the one-time boot debacle when three >> vendors broke at the same time? Once folks went screaming to the DMTF about >> the issue, it quickly explained it to member companies, clarified the >> standard, and created a test case for that condition. Changing the driver >> model to accommodate everyone's variations will reduce that communication >> back to the DMTF, meaning the standard stalls and interoperability does not >> gain traction. >> > > I would welcome somebody raising to DMTF the issue that causes iDRAC to > need another action to boot from virtual media, I suspect other vendors may > have similar issues. That being said, our users are way too far away from > DMTF, and even we (Julia and myself, for example) don't have a direct way > of influencing it, only through you and other folks who help (thank you!). > > >> >> > >> > Thoughts? >> > >> >> TL;DR, we strongly recommend ironic not make this rule change. Clearly >> communicating users should use the vendor driver should simplify their >> experience and eliminate the confusion. >> >> The code as-is is factored well as a result of the 21st century approach >> the community has taken to date. Vendors can implement the driver OEM >> changes they need to accommodate their unique hardware and BMC >> requirements, with reduced concern about the risk of breaking other drivers >> or ironic itself. Ironic’s driver composition reform, sushy, and sushy’s >> OEM extension mechanism support that modern approach. Our goal is to >> continue to improve the iDRAC Redfish service’s compliance with the >> standard and eliminate the kind of OEM code Dmitry identified. >> >> Beware of unintended consequences, including >> >> - reduced quality, >> - slowed feature and bug fix velocity, >> > > I don't see how this happens, given that the code is merely copied from > one place to the other (with the 1st place inheriting it from its base > class). > > >> - stalled DMTF Redfish standard, >> - lost Redfish interoperability traction, and >> > > I'm afraid we're actually hurting Redfish adoption when we start > complicating its usage. Think, with IPMI everything "Just Works" (except > when it does not, but that happens much later), while for Redfish the users > need to be aware of... flavors of Redfish? Something that we (and DMTF) > don't even have a name for. > > Dmitry > > >> - increased code review load. >> >> > Dmitry >> > >> > [1] >> > https://opendev.org/openstack/ironic/src/commit/6ea73bdfbb53486cf9 >> > 905d21024d16cbf5829b2c/ironic/drivers/modules/drac/boot.py#L130 >> > [2] https://review.opendev.org/c/openstack/ironic/+/757198/ >> > [3] https://review.opendev.org/c/openstack/ironic/+/771619 >> > >> > -- >> > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, >> > Commercial register: Amtsgericht Muenchen, HRB 153243, >> > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, >> > Michael O'Neill >> >> I welcome your feedback. 
>> >> Rick >> >> [1] >> https://opendev.org/openstack/ironic-specs/src/branch/master/specs/approved/driver-composition-reform.rst >> [2] https://storyboard.openstack.org/#!/story/2008528 >> [3] https://etherpad.opendev.org/p/ironic-wallaby-midcycle >> >> > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Jan 28 17:08:19 2021 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 28 Jan 2021 18:08:19 +0100 Subject: [simplification] Making ask.openstack.org read-only In-Reply-To: References: <648c6ac3-0ab8-e442-ed9b-fbbfbbea16f7@gmail.com> <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> <20210128115413.Horde.6wkizpG5gTeTBCA3HAvDUOG@webmail.nde.ag> <2dd637ab-82ff-c481-c3aa-bbc202532c3d@openstack.org> Message-ID: <74bb166a-9490-9da4-fcb6-5dadb65bf1f6@openstack.org> Radosław Piliszek wrote: > [...] > The double quote sign breaks the string improperly. > Once that is fixed, the JavaScript functions should come back. > > I can't see the repo in opendev so can't send a patch there. > Perhaps you can help? The read-only message is set here: https://opendev.org/opendev/system-config/src/branch/master/modules/openstack_project/templates/askbot/settings.py.erb#L370 Thanks! -- Thierry Carrez (ttx) From kennelson11 at gmail.com Thu Jan 28 17:24:35 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 28 Jan 2021 09:24:35 -0800 Subject: 2020 Open Infrastructure Foundation Annual Report Message-ID: Hello Everyone :) Despite being a very different year than most, the Open Infrastructure community, which has over 110,000 community members, made it a productive and successful year. One of the biggest milestones that happened in the global community last year was that OpenStack, one of the top three most active open source projects with 15 million cores in production, marked its 10th anniversary in 2020. The Open Infrastructure Foundation would like to extend a huge thanks to the global community for all of the work that went into 2020 and is continuing in 2021 to help people build and operate open infrastructure. Check out the OpenStack community’s achievements in 2020 from the OpenInfra Foundation Annual Report[1] and join us to build the next decade of open infrastructure! -Kendall Nelson (diablo_rojo) [1] https://www.openstack.org/annual-reports/2020-openstack-foundation-annual-report -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marios at redhat.com Thu Jan 28 17:35:27 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 28 Jan 2021 19:35:27 +0200 Subject: [tripleo] next irc meeting Tuesday Feb 02 @ 1400 UTC in #tripleo Message-ID: Reminder that the next TripleO irc meeting is: ** Tuesday 02 February at 1400 UTC in #tripleo ** ** https://wiki.openstack.org/wiki/Meetings/TripleO ** ** https://etherpad.opendev.org/p/tripleo-meeting-items ** Please add anything you want to hilight at https://etherpad.opendev.org/p/tripleo-meeting-items This can include recently completed things, ongoing review requests, blocking issues, or anything else tripleo you'd like to share. Our last meeting was on Jan 19 - you can find the logs there http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-01-19-14.00.html Hope you can make it on Tuesday, thanks, marios -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Thu Jan 28 18:03:02 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 28 Jan 2021 18:03:02 +0000 Subject: [keystone][osc]Strange behaviour of OSC in keystone MFA context In-Reply-To: <18d48a3d208317e9c9b220ff20ae2f46a2442ef0.camel@redhat.com> References: <27cda0ba41634425b5c4d688381d6107@elca.ch> <18d48a3d208317e9c9b220ff20ae2f46a2442ef0.camel@redhat.com> Message-ID: On Thu, 2021-01-28 at 12:38 +0000, Sean Mooney wrote: > On Thu, 2021-01-28 at 07:59 +0000, Taltavull Jean-Francois wrote: > > > -----Original Message----- > > > From: Sean Mooney > > > Sent: mardi, 26 janvier 2021 20:01 > > > To: openstack-discuss at lists.openstack.org > > > Subject: Re: Strange behaviour of OSC in keystone MFA context > > > > > > On Tue, 2021-01-26 at 17:46 +0000, Taltavull Jean-Francois wrote: > > > > Hello, > > > > > > > > I'm experiencing the following strange behavior of openstack CLI with os- > > > auth-methods option (most parameters are defined in clouds.yaml): > > > > > > > > $ openstack token issue --os-auth-type v3multifactor --os-auth-methods > > > > password,totp > > > > > > > --os-auth-methods does not appear to be a standard part of osc infact i cant > > > find it in any openstack repo with > > > > > > i think this is the implemtaions > > > https://opendev.org/openstack/keystoneauth/src/branch/master/keystoneauth > > > 1/loading/_plugins/identity/v3.py#L303-L340 > > > > > > this presumable is where it generates teh optins > > > > > >   options.extend([ > > >             loading.Opt( > > >                 'auth_methods', > > >                 required=True, > > >                 help="Methods to authenticate with."), > > >         ]) > > > > > > > > > if i do openstack help --os-auth-type v3multifactor it does show up with the > > > following text > > > > > > --os-auth-methods > > >                         With v3multifactor: Methods to authenticate with. (Env: > > > OS_AUTH_METHODS) > > > > > > that does not say much but > > > > > > https://opendev.org/openstack/keystoneauth/src/branch/master/keystoneauth > > > 1/tests/unit/identity/test_identity_v3.py#L762-L800 > > > implies its a list > > > > > > with that said there are no test for multifactor as far as i can see like this one > > > https://opendev.org/openstack/python- > > > openstackclient/src/branch/master/openstackclient/tests/functional/common/t > > > est_args.py#L66-L79 > > > > > > there also does not seam too be a release note declaring support. 
> > > > > > so while keystone auth support multi factor im not sure that osc actully does > > > > > > i specpec that the fild type is not correct and it is indeed been parsed as a string > > > instead of a list of stirng field. > > > it might be fixable via keystoneauth but it proably need osc support and testing. > > > > > > > The plugin p could not be found > > > > > > > > Note that "p" is the first letter of "password". It looks like the option parser > > > handled "password,totp" as a string instead of as a list of strings. > > > > > > > > Version of openstack CLI is 5.4.0. > > > > > > > > Any idea ? > > > > > > > > Thanks ! > > > > > > > > Jean-François > > > > Thanks for your answer Sean. > > > > What can I do on my end to get things done ? > well unfortunetly i do not work on keystone or osc i just saw your mail while i was waiting for some tests to finish running. > > with that said i have upstaed the subject to include both projects so hopefully that will get the attention of those that can help. The definition for those opts can be found at [1]. As Sean thought it might be, that is using the default type defined in the parent 'Opt' class of 'str' [2]. We don't expose argparse's 'action' parameter that would allow us to use the 'append' action, so you'd have to fix this by parsing whatever the user provided after the fact. I suspect you could resolve the immediate issue by changing this line [3] from: self._methods = kwargs['auth_methods'] to: self._methods = kwargs['auth_methods'].split(',') However, I assume there's likely more to this issue. I don't have an environment to hand to validate this fix, unfortunately. If you do manage to test that change and it works, I'd be happy to help you in getting a patch proposed to 'keystoneauth'. Hope this helps, Stephen [1] https://github.com/openstack/keystoneauth/blob/4.3.0/keystoneauth1/loading/_plugins/identity/v3.py#L316-L330 [2] https://github.com/openstack/keystoneauth/blob/4.3.0/keystoneauth1/loading/opts.py#L65 [3] https://github.com/openstack/keystoneauth/blob/4.3.0/keystoneauth1/loading/_plugins/identity/v3.py#L338 > > > > Jean-François From kennelson11 at gmail.com Thu Jan 28 18:33:53 2021 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 28 Jan 2021 10:33:53 -0800 Subject: [All][StoryBoard] Angular.js Alternatives In-Reply-To: References: Message-ID: To circle back to this, the StoryBoard team has decided on Vue, given some contributors previous experience with it and a POC already started. Thank you everyone for your valuable input! We really do appreciate it! -Kendall (diablo_rojo) On Thu, Jan 21, 2021 at 1:23 PM Kendall Nelson wrote: > Hello Everyone! > > The StoryBoard team is looking at alternatives to Angular.js since its > going end of life. After some research, we've boiled all the options down > to two possibilities: > > Vue.js > > or > > React.js > > I am diving more deeply into researching those two options this week, but > any opinions or feedback on your experiences with either of them would be > helpful! > > Here is the etherpad with our research so far[3]. > > Feel free to add opinions there or in response to this thread! > > -Kendall Nelson (diablo_rojo) & The StoryBoard Team > > [1] https://vuejs.org/ > [2] https://reactjs.org/ > [3] https://etherpad.opendev.org/p/replace-angularjs-storyboard-research > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From radoslaw.piliszek at gmail.com Thu Jan 28 19:35:19 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 28 Jan 2021 20:35:19 +0100 Subject: [simplification] Making ask.openstack.org read-only In-Reply-To: <74bb166a-9490-9da4-fcb6-5dadb65bf1f6@openstack.org> References: <648c6ac3-0ab8-e442-ed9b-fbbfbbea16f7@gmail.com> <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> <20210128115413.Horde.6wkizpG5gTeTBCA3HAvDUOG@webmail.nde.ag> <2dd637ab-82ff-c481-c3aa-bbc202532c3d@openstack.org> <74bb166a-9490-9da4-fcb6-5dadb65bf1f6@openstack.org> Message-ID: On Thu, Jan 28, 2021 at 6:09 PM Thierry Carrez wrote: > The read-only message is set here: > > https://opendev.org/opendev/system-config/src/branch/master/modules/openstack_project/templates/askbot/settings.py.erb#L370 > > Thanks! Thanks, Thierry. Seems my searchitsu failed. I found one other issue so at least we will fix the rendering issue (as the syntax error has only unknown consequences). See [1]. [1] https://review.opendev.org/c/opendev/system-config/+/772937 -yoctozepto From zigo at debian.org Thu Jan 28 20:02:00 2021 From: zigo at debian.org (Thomas Goirand) Date: Thu, 28 Jan 2021 21:02:00 +0100 Subject: [all] Eventlet broken again with SSL, this time under Python 3.9 Message-ID: <6e817a0e-aaa7-9444-fca3-6c5ae8ed2ae7@debian.org> Hi, Swift proxy fails with Python 3.9 under Debian Unstable with Python 3.9: http://paste.openstack.org/show/802063/ (that's when doing a simple "openstack container list") I got the same problem with neutron-rpc-server: http://paste.openstack.org/show/802071/ (this happens when neutron-rpc-server tries to tell Nova that my new VM port is up) I didn't really look further, but I guess so many services are affected. This looks like general to OpenStack, and feels like yet-another-problem-with-eventlet, and looks very similar to this one: https://github.com/eventlet/eventlet/issues/677 even though that's here a Python 3.9 issue, not 3.7. I'd very much appreciate if some Eventlet specialist could look into this problem, because that's beyond my skills. Cheers, Thomas Goirand (zigo) From tonyliu0592 at hotmail.com Thu Jan 28 21:26:28 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Thu, 28 Jan 2021 21:26:28 +0000 Subject: [kolla][mariadb] failed to start a cluster Message-ID: Hi, When running kolla-ansible to deploy 3 controller nodes (Ussuri), mariadb on node-1 started fine, but when start mariadb on node-2 and node-3 to join node-1, node-1 became nodor/desync and node-2 and node-3 complained "Member 2.0 (os-control-2) requested state transfer from '*any*', but it is impossible to select State Transfer donor: Resource temporarily unavailable". Ansible failed at "RUNNING HANDLER [mariadb : Wait for MariaDB service port liveness]" when timeout. What could be wrong here? Thanks! Tony From anlin.kong at gmail.com Thu Jan 28 23:55:42 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Fri, 29 Jan 2021 12:55:42 +1300 Subject: Ussuri Trove trovestack problem In-Reply-To: References: Message-ID: Thanks for letting us know, do you mind proposing a patch for trove stable/ussuri branch? --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) On Fri, Jan 29, 2021 at 3:42 AM Diaz, Santiago Miguel < sdiaz at cv.fibercorp.com.ar> wrote: > Hi. Im new in Openstack. 
When I try to create Trove guest image with the > next command > > ./trovestack build-image mariadb ubuntu xenial true ubuntu > > exit 1 obtain. Reading the logs I found: > > 2021-01-28 13:30:13.217 | [error] The following package is needed by the > script, but not installed: > 2021-01-28 13:30:13.218 | apt-transport-https > 2021-01-28 13:30:13.218 | Please install and rerun the script. > > I resolved it changing then next file: > > > /trove/integration/scripts/files/elements/ubuntu-guest/pre-install.d/04-baseline-tools > > - apt-get --allow-unauthenticated install -y language-pack-en > python-software-properties software-properties-common > > + apt-get --allow-unauthenticated install -y language-pack-en > python-software-properties software-properties-common apt-transport-https > > I hope it helps > > AVISO LEGAL: > El presente correo electrónico y la información contenida > en el mismo, es información privada y confidencial, está dirigida > únicamente a su destinatario y no puede ser revelada a terceros, ni > utilizada inapropiadamente en interés del destinatario. Excepto que se haya > establecido expresamente de otra forma, éste correo electrónico y la > mencionada información privada y confidencial no constituye una oferta, ni > una promesa, ni una propuesta y no será interpretado como aceptación, ni > como una tratativa pre-contractual, acuerdo parcial, contrato preliminar > y/o pre-contrato, ni como un compromiso vinculante y/o declaración de > voluntad oficial de TELECOM ARGENTINA S.A. Si Usted no es el destinatario > original de éste mensaje y por ese medio pudo acceder a dicha información, > por favor, elimine el mensaje. La distribución o copia de este mensaje esta > estrictamente prohibida. La transmisión de e-mails no garantiza que el > correo electrónico sea seguro o libre de error. Por consiguiente, no > manifestamos que esta información sea completa o precisa. Toda información > está sujeta a alterarse sin previo aviso. > > This email and the information contained herein are > proprietary and confidential, and intended for the recipient only and > cannot be disclosed to third parties or used inappropriately in the > interest of the recipient. Except it has been expressly stated otherwise, > the email and said private and confidential information are not an offer or > promise, or a proposal and will not be construed as acceptance or as a > pre-contractual negotiation, partial agreement, preliminary contract and / > or pre-contract or binding commitment and/or an official statement of > intent of TELECOM ARGENTINA S.A. If you are not the intended recipient of > this message and thereby gained access to such information, please remove > the message. Distribution or copying of this message is strictly > prohibited. Transmission of e-mails does not guarantee that e-mail is safe > or free from error. Therefore, we do not represent that this information is > complete or accurate. All information is subject to change without notice. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Fri Jan 29 03:16:13 2021 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 28 Jan 2021 21:16:13 -0600 Subject: [dev][cinder][keystone] Properly consuming system-scope in cinder Message-ID: Hey folks, As I'm sure some of the cinder folks are aware, I'm updating cinder policies to include support for some default personas keystone ships with. 
Some of those personas use system-scope (e.g., system-reader and system-admin) and I've already proposed a series of patches that describe what those changes look like from a policy perspective [0]. The question now is how we test those changes. To help guide that decision, I worked on three different testing approaches. The first was to continue testing policy using unit tests in cinder with mocked context objects. The second was to use DDT with keystonemiddleware mocked to remove a dependency on keystone. The third also used DDT, but included changes to update NoAuthMiddleware so that it wasn't as opinionated about authentication or authorization. I brought each approach in the cinder meeting this week where we discussed a fourth approach, doing everything in tempest. I summarized all of this in an etherpad [1] Up to yesterday morning, the only approach I hadn't tinkered with manually was tempest. I spent some time today figuring that out, resulting in a patch to cinderlib [2] to enable a protection test job, and cinder_tempest_plugin [3] that adds the plumbing and some example tests. In the process of implementing support for tempest testing, I noticed that service catalogs for system-scoped tokens don't contain cinder endpoints [4]. This is because the cinder endpoint contains endpoint templating in the URL [5], which keystone will substitute with the project ID of the token, if and only if the catalog is built for a project-scoped token. System and domain-scoped tokens do not have a reasonable project ID to use in this case, so the templating is skipped, resulting in a cinder service in the catalog without endpoints [6]. This cascades in the client, specifically tempest's volume client, because it can't find a suitable endpoint for request to the volume service [7]. Initially, my testing approaches were to provide examples for cinder developers to assess the viability of each approach before committing to a protection testing strategy. But, the tempest approach highlighted a larger issue for how we integrate system-scope support into cinder because of the assumption there will always be a project ID in the path (for the majority of the cinder API). I can think of two ways to approach the problem, but I'm hoping others have more. First, we remove project IDs from cinder's API path. This would be similar to how nova (and I assume other services) moved away from project-specific URLs (e.g., /v3/%{project_id}s/volumes would become /v3/volumes). This would obviously require refactoring to remove any assumptions cinder has about project IDs being supplied on the request path. But, this would force all authorization information to come from the context object. Once a deployer removes the endpoint URL templating, the endpoints will populate in the cinder entry of the service catalog. Brian's been helping me understand this and we're unsure if this is something we could even do with a microversion. I think nova did it moving from /v2/ to /v2.0/, which was technically classified as a major bump? This feels like a moon shot. Second, we update cinder's clients, including tempest, to put the project ID on the URL. After we update the clients to append the project ID for cinder endpoints, we should be able to remove the URL templating in keystone, allowing cinder endpoints to appear in system-scoped service catalogs (just like the first approach). Clients can use the base URL from the catalog and append the admin project ID before putting the request on the wire. 
Even though the request has a project ID in the path, cinder would ignore it for system-specific APIs. This is already true for users with an admin role on a project because cinder will allow you to get volumes in one project if you have a token scoped to another with the admin role [8]. One potential side-effect is that cinder clients would need *a* project ID to build a request, potentially requiring another roundtrip to keystone. Thoughts? [0] https://review.opendev.org/q/project:openstack/cinder+topic:secure-rbac [1] https://etherpad.opendev.org/p/cinder-secure-rbac-protection-testing [2] https://review.opendev.org/c/openstack/cinderlib/+/772770 [3] https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/772915 [4] http://paste.openstack.org/show/802117/ [5] http://paste.openstack.org/show/802097/ [6] https://opendev.org/openstack/keystone/src/commit/c239cc66615b41a0c09e031b3e268c82678bac12/keystone/catalog/backends/sql.py [7] http://paste.openstack.org/show/802092/ [8] http://paste.openstack.org/show/802118/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangerzonen at gmail.com Fri Jan 29 04:20:15 2021 From: dangerzonen at gmail.com (dangerzone ar) Date: Fri, 29 Jan 2021 12:20:15 +0800 Subject: [DHCP][Provider Network] Instances get IP from external DHCP server Message-ID: Hi, appreciate some advice on how instances in openstack get ip from external DHCP server. For example instance is attached to port eth1 (physical port) and this port is connected to home/office lan port and requests dhcp ip. How this can be achieved. ***User don't know the dhcp ip range/gw/dns that will be provided by the dhcp sever to that instance...instance just attach to eth1 and request ip.*** Similar like our pc/notebook request dhcp ip via wifi or lan port. How to establish this in openstack. Please advise and help me. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Fri Jan 29 06:37:04 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Fri, 29 Jan 2021 07:37:04 +0100 Subject: [rdo][ussuri][tripleo] use rsyslog by default for all OSP components and send them to undercloud Message-ID: Hi all, I would like to configure, during deployment, all OSP services save logs over rsyslog, not directly writing to files. I know OSP services support such. In most cases, services/modules write into files directly. Is there any TripleO option to set it all to logging over system logger? Also is there possibility, to send logs to centralised server, into undercloud, if to be precise? From there I want to send it to elastic, using rsyslog or send it to other log parser/analyser/storage. Have anyone met such articles/howtos/docs I could refer to tweak my config? -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Fri Jan 29 06:48:02 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Fri, 29 Jan 2021 07:48:02 +0100 Subject: [DHCP][Provider Network] Instances get IP from external DHCP server In-Reply-To: References: Message-ID: Are you sure? you are going to danger zone :) dangerzone if you really need a RANDOM RANDOM IP, I would add network (maybe flat?) that would contain a subnet: 0.0.0.0/0 without DHCP (if such possible, if not, would contain 2 subnets: 0.0.0.0/1 and 128.0.0.0/1) and disable f=port security for that network. 
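Very roughly, the CLI side of that idea could look like the following (all names are made up, the physical network has to match your deployment, and I have not checked whether Neutron accepts ranges this wide, so treat it as a sketch only):

    openstack network create --provider-network-type flat \
        --provider-physical-network physnet_ext --disable-port-security ext-dhcp-net
    openstack subnet create --network ext-dhcp-net --no-dhcp --gateway none \
        --subnet-range 0.0.0.0/1 ext-dhcp-low
    openstack subnet create --network ext-dhcp-net --no-dhcp --gateway none \
        --subnet-range 128.0.0.0/1 ext-dhcp-high
    openstack server create --image cirros --flavor m1.small --network ext-dhcp-net test-vm

With port security disabled on that network, whatever address the external DHCP server hands out will not be filtered by Neutron.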
In the image you will use, you will need to remove cloud-init or add config option, on instance launch, that it would not autoconfigure network interface according to Cloud assigned IP. maybe for such task there are better options, such as simple KVM host with bridged network? On Fri, 29 Jan 2021 at 05:26, dangerzone ar wrote: > Hi, appreciate some advice on how instances in openstack get ip from > external DHCP server. For example instance is attached to port eth1 > (physical port) and this port is connected to home/office lan port and > requests dhcp ip. How this can be achieved. > ***User don't know the dhcp ip range/gw/dns that will be provided by the > dhcp sever to that instance...instance just attach to eth1 and request > ip.*** > Similar like our pc/notebook request dhcp ip via wifi or lan port. > How to establish this in openstack. Please advise and help me. > Thank you > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Jan 29 08:07:01 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 29 Jan 2021 09:07:01 +0100 Subject: [DHCP][Provider Network] Instances get IP from external DHCP server In-Reply-To: References: Message-ID: <20210129080701.4tmtijbdjiwvn3ph@p1.localdomain> Hi, On Fri, Jan 29, 2021 at 12:20:15PM +0800, dangerzone ar wrote: > Hi, appreciate some advice on how instances in openstack get ip from > external DHCP server. For example instance is attached to port eth1 > (physical port) and this port is connected to home/office lan port and > requests dhcp ip. How this can be achieved. > ***User don't know the dhcp ip range/gw/dns that will be provided by the > dhcp sever to that instance...instance just attach to eth1 and request > ip.*** > Similar like our pc/notebook request dhcp ip via wifi or lan port. > How to establish this in openstack. Please advise and help me. > Thank you You need to disable port security on such instance. Otherwise Neutron will block traffic from such IP address which is unknown. Or You need to add this IP address which VM get to the allowed_address_pairs of the VM's port. Also, please keep in mind that You will have different IP associated to that VM in the Neutron, and that will be visible in OpenStack API and different one will be really used. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Fri Jan 29 08:27:18 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 29 Jan 2021 09:27:18 +0100 Subject: [simplification] Making ask.openstack.org read-only In-Reply-To: References: <648c6ac3-0ab8-e442-ed9b-fbbfbbea16f7@gmail.com> <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> <20210128115413.Horde.6wkizpG5gTeTBCA3HAvDUOG@webmail.nde.ag> <2dd637ab-82ff-c481-c3aa-bbc202532c3d@openstack.org> <74bb166a-9490-9da4-fcb6-5dadb65bf1f6@openstack.org> Message-ID: On Thu, Jan 28, 2021 at 8:35 PM Radosław Piliszek wrote: > > On Thu, Jan 28, 2021 at 6:09 PM Thierry Carrez wrote: > > The read-only message is set here: > > > > https://opendev.org/opendev/system-config/src/branch/master/modules/openstack_project/templates/askbot/settings.py.erb#L370 > > > > Thanks! > > Thanks, Thierry. Seems my searchitsu failed. 
> > I found one other issue so at least we will fix the rendering issue > (as the syntax error has only unknown consequences). > > See [1]. > > [1] https://review.opendev.org/c/opendev/system-config/+/772937 And hooray: it fixed the 'see more comments' and made the message display when trying to add new. -yoctozepto From eblock at nde.ag Fri Jan 29 08:30:32 2021 From: eblock at nde.ag (Eugen Block) Date: Fri, 29 Jan 2021 08:30:32 +0000 Subject: [simplification] Making ask.openstack.org read-only In-Reply-To: References: <648c6ac3-0ab8-e442-ed9b-fbbfbbea16f7@gmail.com> <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> <20210128115413.Horde.6wkizpG5gTeTBCA3HAvDUOG@webmail.nde.ag> <2dd637ab-82ff-c481-c3aa-bbc202532c3d@openstack.org> <74bb166a-9490-9da4-fcb6-5dadb65bf1f6@openstack.org> Message-ID: <20210129083032.Horde.KW5YMuhDeFlHVc-jT2nByCR@webmail.nde.ag> Yeah I just tried that, awesome! :-) Thank you very much for the effort! Zitat von Radosław Piliszek : > On Thu, Jan 28, 2021 at 8:35 PM Radosław Piliszek > wrote: >> >> On Thu, Jan 28, 2021 at 6:09 PM Thierry Carrez >> wrote: >> > The read-only message is set here: >> > >> > >> https://opendev.org/opendev/system-config/src/branch/master/modules/openstack_project/templates/askbot/settings.py.erb#L370 >> > >> > Thanks! >> >> Thanks, Thierry. Seems my searchitsu failed. >> >> I found one other issue so at least we will fix the rendering issue >> (as the syntax error has only unknown consequences). >> >> See [1]. >> >> [1] https://review.opendev.org/c/opendev/system-config/+/772937 > > And hooray: it fixed the 'see more comments' and made the message > display when trying to add new. > > -yoctozepto From bdobreli at redhat.com Fri Jan 29 11:03:46 2021 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 29 Jan 2021 12:03:46 +0100 Subject: [rdo][ussuri][tripleo] use rsyslog by default for all OSP components and send them to undercloud In-Reply-To: References: Message-ID: <0cf7c509-b6f8-833d-4be3-4f95ab7b14e6@redhat.com> On 1/29/21 7:37 AM, Ruslanas Gžibovskis wrote: > Hi all, > > I would like to configure, during deployment, all OSP services save logs > over rsyslog, not directly writing to files. I know OSP services support > such. In most cases, services/modules write into files directly. Is > there any TripleO option to set it all to logging over system logger? > Also is there possibility, to send logs to centralised server, into > undercloud, if to be precise? From there I want to send it to elastic, > using rsyslog or send it to other log parser/analyser/storage. There is an approved spec for remote logging, but it was never prioritized for implementation, unfortunately. On the bright side, there is a bunch of *LoggingSource params in t-h-t, which allow to change the logging backend from files to (local only?) rsyslog. I could not find upstream docs for that, all I can refer to is [1] [0] https://review.opendev.org/c/openstack/tripleo-specs/+/523493 [1] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/logging_monitoring_and_troubleshooting_guide/logging > > Have anyone met such articles/howtos/docs I could refer to tweak my config? 
> -- > Ruslanas Gžibovskis > +370 6030 7030 -- Best regards, Bogdan Dobrelya, Irc #bogdando From bdobreli at redhat.com Fri Jan 29 11:08:15 2021 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 29 Jan 2021 12:08:15 +0100 Subject: [rdo][ussuri][tripleo] use rsyslog by default for all OSP components and send them to undercloud In-Reply-To: <0cf7c509-b6f8-833d-4be3-4f95ab7b14e6@redhat.com> References: <0cf7c509-b6f8-833d-4be3-4f95ab7b14e6@redhat.com> Message-ID: <149a832e-0681-ee7d-0355-99eeb5202619@redhat.com> On 1/29/21 12:03 PM, Bogdan Dobrelya wrote: > On 1/29/21 7:37 AM, Ruslanas Gžibovskis wrote: >> Hi all, >> >> I would like to configure, during deployment, all OSP services save >> logs over rsyslog, not directly writing to files. I know OSP services >> support such. In most cases, services/modules write into files >> directly. Is there any TripleO option to set it all to logging over >> system logger? >> Also is there possibility, to send logs to centralised server, into >> undercloud, if to be precise? From there I want to send it to elastic, >> using rsyslog or send it to other log parser/analyser/storage. > > There is an approved spec for remote logging, but it was never > prioritized for implementation, unfortunately. On the bright side, there > is a bunch of *LoggingSource params in t-h-t, which allow to change the > logging backend from files to (local only?) rsyslog. I could not find It seems that remote rsyslog with ELS is also supported, for example you can set it, like this: parameter_defaults: RsyslogElasticsearchSetting: server: "exampleip:9200" usehttps: "on" > upstream docs for that, all I can refer to is [1] > > > [0] https://review.opendev.org/c/openstack/tripleo-specs/+/523493 > [1] > https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.0/html/logging_monitoring_and_troubleshooting_guide/logging > > >> >> Have anyone met such articles/howtos/docs I could refer to tweak my >> config? 
>> -- >> Ruslanas Gžibovskis >> +370 6030 7030 > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From thierry at openstack.org Fri Jan 29 11:16:55 2021 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 29 Jan 2021 12:16:55 +0100 Subject: [Release-job-failures] os-collect-config 11.0.2 and tripleo-ipsec 9.3.1 In-Reply-To: References: Message-ID: <2599f550-bffd-e2dd-b560-2f07f4500a1b@openstack.org> We had two release jobs failures around TripleO and stable/ussuri: > - openstack-upload-github-mirror https://zuul.opendev.org/t/openstack/build/cb4de2810bd147e2bf1cd98d6a1160fa : SUCCESS in 46s > - release-openstack-python https://zuul.opendev.org/t/openstack/build/fb61c0814f114a2ea0c7298378e10e5f : FAILURE in 3m 07s > - announce-release https://zuul.opendev.org/t/openstack/build/None : SKIPPED > - propose-update-constraints https://zuul.opendev.org/t/openstack/build/None : SKIPPED and > - openstack-upload-github-mirror https://zuul.opendev.org/t/openstack/build/fad34ae84b9d4a8c982ca4f75c01f0c9 : SUCCESS in 58s > - release-openstack-python https://zuul.opendev.org/t/openstack/build/0edc8da3c58842ed85b826f56153cde3 : FAILURE in 3m 25s > - announce-release https://zuul.opendev.org/t/openstack/build/None : SKIPPED > - propose-update-constraints https://zuul.opendev.org/t/openstack/build/None : SKIPPED In both cases those jobs are coming from post-processing of release tags being pushed as a result of merging https://review.opendev.org/c/openstack/releases/+/772047 The error is that os-collect-config 11.0.0 and tripleo-ipsec 9.3.0 both did not have a stable/ussuri branch cut around ussuri release time at https://review.opendev.org/c/openstack/releases/+/728537 (probably escaped our watch as they are cycle-trailing). os-collect-config 11.0.1 was released on master about 3 months ago, which worked because 12.0.0 (victoria) was not cut yet. But now that victoria is cut, 11.0.2 (and tripleo-ipsec 9.3.1) triggered an error. ----11.0.0------11.0.1-------12.0.0/11.0.2/13.0.0-------master \ -----stable/victoria ----9.3.0------10.0.0/11.0.0/9.3.1-----master \ ----- stable/victoria Given that os-collect-config 12.0.0/11.0.2/13.0.0 are the same commit and tripleo-ipsec 10.0.0/11.0.0/9.3.1 are the same commit, this might be solved by just cutting stable/ussuri on that same commit and reenqueuing the tag references in Zuul to trigger the jobs. I proposed https://review.opendev.org/c/openstack/releases/+/772995 to that effect. 
-- Thierry Carrez (ttx) From marios at redhat.com Fri Jan 29 12:07:58 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 29 Jan 2021 14:07:58 +0200 Subject: [Release-job-failures] os-collect-config 11.0.2 and tripleo-ipsec 9.3.1 In-Reply-To: <2599f550-bffd-e2dd-b560-2f07f4500a1b@openstack.org> References: <2599f550-bffd-e2dd-b560-2f07f4500a1b@openstack.org> Message-ID: On Fri, Jan 29, 2021 at 1:18 PM Thierry Carrez wrote: > We had two release jobs failures around TripleO and stable/ussuri: > > > - openstack-upload-github-mirror > https://zuul.opendev.org/t/openstack/build/cb4de2810bd147e2bf1cd98d6a1160fa > : SUCCESS in 46s > > - release-openstack-python > https://zuul.opendev.org/t/openstack/build/fb61c0814f114a2ea0c7298378e10e5f > : FAILURE in 3m 07s > > - announce-release https://zuul.opendev.org/t/openstack/build/None : > SKIPPED > > - propose-update-constraints > https://zuul.opendev.org/t/openstack/build/None : SKIPPED > > and > > > - openstack-upload-github-mirror > https://zuul.opendev.org/t/openstack/build/fad34ae84b9d4a8c982ca4f75c01f0c9 > : SUCCESS in 58s > > - release-openstack-python > https://zuul.opendev.org/t/openstack/build/0edc8da3c58842ed85b826f56153cde3 > : FAILURE in 3m 25s > > - announce-release https://zuul.opendev.org/t/openstack/build/None : > SKIPPED > > - propose-update-constraints > https://zuul.opendev.org/t/openstack/build/None : SKIPPED > > In both cases those jobs are coming from post-processing of release tags > being pushed as a result of merging > https://review.opendev.org/c/openstack/releases/+/772047 > > The error is that os-collect-config 11.0.0 and tripleo-ipsec 9.3.0 both > did not have a stable/ussuri branch cut around ussuri release time at > https://review.opendev.org/c/openstack/releases/+/728537 (probably > escaped our watch as they are cycle-trailing). > > Thierry thanks for taking the time to dig into this and apologies for the missing branches. We noticed that was the case in the discussion around these repos in http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019751.html ; obviously we didn't realise this would cause the error you saw so sorry about that. > os-collect-config 11.0.1 was released on master about 3 months ago, > which worked because 12.0.0 (victoria) was not cut yet. But now that > victoria is cut, 11.0.2 (and tripleo-ipsec 9.3.1) triggered an error. > > ----11.0.0------11.0.1-------12.0.0/11.0.2/13.0.0-------master > \ > -----stable/victoria > > ----9.3.0------10.0.0/11.0.0/9.3.1-----master > \ > ----- stable/victoria > > Given that os-collect-config 12.0.0/11.0.2/13.0.0 are the same commit > and tripleo-ipsec 10.0.0/11.0.0/9.3.1 are the same commit, this might be > solved by just cutting stable/ussuri on that same commit and reenqueuing > the tag references in Zuul to trigger the jobs. > > I proposed https://review.opendev.org/c/openstack/releases/+/772995 to > that effect. > thank you for posting that. It makes sense to cut the missing ussuri branch on the latest ussuri release and bonus if that fixes the problem regards, marios > > -- > Thierry Carrez (ttx) > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:
From smooney at redhat.com Fri Jan 29 12:19:07 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 29 Jan 2021 12:19:07 +0000 Subject: [all] Eventlet broken again with SSL, this time under Python 3.9 In-Reply-To: <6e817a0e-aaa7-9444-fca3-6c5ae8ed2ae7@debian.org> References: <6e817a0e-aaa7-9444-fca3-6c5ae8ed2ae7@debian.org> Message-ID: <6f653877fa1da9b2d191b3d3818307f9b29f60bb.camel@redhat.com> On Thu, 2021-01-28 at 21:02 +0100, Thomas Goirand wrote: > Hi, > > Swift proxy fails with Python 3.9 under Debian Unstable with Python 3.9: > http://paste.openstack.org/show/802063/ > > (that's when doing a simple "openstack container list") > > I got the same problem with neutron-rpc-server: > http://paste.openstack.org/show/802071/ > > (this happens when neutron-rpc-server tries to tell Nova that my new VM > port is up) I'm surprised you got this far; I was still expecting nova-compute to hang connecting to rabbit, as it did in December. Has the 0.30.0 release of eventlet fixed that? > > I didn't really look further, but I guess so many services are affected. > This looks like general to OpenStack, and feels like > yet-another-problem-with-eventlet, and looks very similar to this one: > https://github.com/eventlet/eventlet/issues/677 > > even though that's here a Python 3.9 issue, not 3.7. > > I'd very much appreciate if some Eventlet specialist could look into > this problem, because that's beyond my skills. I don't think we have any on this list, and as far as I know they still don't fully support Python 3.9 in eventlet. If you get past this issue, the next thing you will probably hit is the nova vnc proxy failing because websockify double monkey patches the ssl/websocket or something like that. > > Cheers, > > Thomas Goirand (zigo) >
From zigo at debian.org Fri Jan 29 12:44:31 2021 From: zigo at debian.org (Thomas Goirand) Date: Fri, 29 Jan 2021 13:44:31 +0100 Subject: [all] Eventlet broken again with SSL, this time under Python 3.9 In-Reply-To: <6f653877fa1da9b2d191b3d3818307f9b29f60bb.camel@redhat.com> References: <6e817a0e-aaa7-9444-fca3-6c5ae8ed2ae7@debian.org> <6f653877fa1da9b2d191b3d3818307f9b29f60bb.camel@redhat.com> Message-ID: On 1/29/21 1:19 PM, Sean Mooney wrote: > On Thu, 2021-01-28 at 21:02 +0100, Thomas Goirand wrote: >> Hi, >> >> Swift proxy fails with Python 3.9 under Debian Unstable with Python 3.9: >> http://paste.openstack.org/show/802063/ >> >> (that's when doing a simple "openstack container list") >> >> I got the same problem with neutron-rpc-server: >> http://paste.openstack.org/show/802071/ >> >> (this happens when neutron-rpc-server tries to tell Nova that my new VM >> port is up) > im surprised you got this far i was still expecting nova-compute to hang connecting to rabbit > it did in decemebr has the 0.30.0 release of eventlet fixed that? The Debian package for Eventlet 0.26.1-4 contains these patches: https://github.com/eventlet/eventlet/commit/46fc185c8f92008c65aef2713fc1445bfc5f6fec https://github.com/eventlet/eventlet/pull/664 https://github.com/eventlet/eventlet/pull/672 Cheers, Thomas
From thierry at openstack.org Fri Jan 29 12:47:45 2021 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 29 Jan 2021 13:47:45 +0100 Subject: [largescale-sig] OpenStack DB Archiver In-Reply-To: <20200716133127.GA31915@sync> References: <20200716133127.GA31915@sync> Message-ID: <045a2dea-02f0-26ca-96d6-46d8cdbe2d16@openstack.org> Arnaud Morin wrote: > [...]
> We were wondering if some other users would be interested in using the > tool, and maybe move it under the opendev governance? Resurrecting this thread, as OSops has now been revived under the auspices of the OpenStack Operation Docs and Tooling SIG. There are basically 3 potential ways forward for OSarchiver: 1- Keep it as-is on GitHub, and reference it where we can in OpenStack docs 2- Relicense it under Apache-2 and move it in a subdirectory under openstack/osops 3- Move it under its own repository under opendev and propose it as a new official OpenStack project (relicensing under Apache-2 will be necessary if accepted) Options (1) and (3) have the benefit of keeping it under its own repository. Options (2) and (3) have the benefit of counting towards an official OpenStack contribution. Options (1) and (2) have the benefit of not requiring TC approval. All other things being equal, if the end goal is to increase discoverability, option 3 is probably the best. Regards, -- Thierry Carrez From smooney at redhat.com Fri Jan 29 13:53:45 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 29 Jan 2021 13:53:45 +0000 Subject: [all] Eventlet broken again with SSL, this time under Python 3.9 In-Reply-To: References: <6e817a0e-aaa7-9444-fca3-6c5ae8ed2ae7@debian.org> <6f653877fa1da9b2d191b3d3818307f9b29f60bb.camel@redhat.com> Message-ID: <7441d18e5313af6d76a28cabb3866e05dad6f6d5.camel@redhat.com> On Fri, 2021-01-29 at 13:44 +0100, Thomas Goirand wrote: > On 1/29/21 1:19 PM, Sean Mooney wrote: > > On Thu, 2021-01-28 at 21:02 +0100, Thomas Goirand wrote: > > > Hi, > > > > > > Swift proxy fails with Python 3.9 under Debian Unstable with Python 3.9: > > > http://paste.openstack.org/show/802063/ > > > > > > (that's when doing a simple "openstack container list") > > > > > > I got the same problem with neutron-rpc-server: > > > http://paste.openstack.org/show/802071/ > > > > > > (this happens when neutron-rpc-server tries to tell Nova that my new VM > > > port is up) > > im surprised you got this far i was still expecting nova-compute to hang connecting to rabbit > > it did in decemebr has the 0.30.0 release of eventlet fixed that? > > The Debian package for Eventlet 0.26.1-4 contains these patches: > > https://github.com/eventlet/eventlet/commit/46fc185c8f92008c65aef2713fc1445bfc5f6fec > https://github.com/eventlet/eventlet/pull/664 > https://github.com/eventlet/eventlet/pull/672 > https://github.com/eventlet/eventlet/pull/664 was not in the version i used in early decmber so that is likely what allows nova to not hang. the ssl issue i expect to still impact the vnc proxy but i might try it again and see. i think all 3 of those are in 0.30.0 > Cheers, > > Thomas > From smooney at redhat.com Fri Jan 29 14:22:27 2021 From: smooney at redhat.com (Sean Mooney) Date: Fri, 29 Jan 2021 14:22:27 +0000 Subject: [largescale-sig] OpenStack DB Archiver In-Reply-To: <045a2dea-02f0-26ca-96d6-46d8cdbe2d16@openstack.org> References: <20200716133127.GA31915@sync> <045a2dea-02f0-26ca-96d6-46d8cdbe2d16@openstack.org> Message-ID: <639c7a77f7812ad8897656404cb06cc67cf51609.camel@redhat.com> On Fri, 2021-01-29 at 13:47 +0100, Thierry Carrez wrote: > Arnaud Morin wrote: > > [...] > > We were wondering if some other users would be interested in using the > > tool, and maybe move it under the opendev governance? > > Resurrecting this thread, as OSops has now been revived under the > auspices of the OpenStack Operation Docs and Tooling SIG. 
> > There are basically 3 potential ways forward for OSarchiver: > > > > 1- Keep it as-is on GitHub, and reference it where we can in OpenStack docs > > > > 2- Relicense it under Apache-2 and move it in a subdirectory under > > openstack/osops > > > > 3- Move it under its own repository under opendev and propose it as a > > new official OpenStack project (relicensing under Apache-2 will be > > necessary if accepted) > > > > Options (1) and (3) have the benefit of keeping it under its own > > repository. Options (2) and (3) have the benefit of counting towards an > > official OpenStack contribution. Options (1) and (2) have the benefit of > > not requiring TC approval. > > > > All other things being equal, if the end goal is to increase > > discoverability, option 3 is probably the best. Not to detract from the conversation on where to host it, but now that I have discovered this via this thread I have one question. OSarchiver appears to be bypassing the shadow tables which the projects maintain to allow you to archive rows from the project DB in a different table; instead OSarchiver chooses to archive them in an external DB or file. We have talked about whether or not we can remove shadow tables in nova entirely a few times in the past, but we did not want to break operators that actually use them. It appears OVH at least has developed their own alternative, presumably because the project's own archive and purge functionality was not meeting your needs. Would the option to disable shadow tables, or to define a retention policy for deleted rows, be useful to operators? Or is this even a capability that projects could declare out of scope and delegate to a new OpenStack project (e.g. option 3 above) instead? I'm not sure how supportable OSarchiver would be in our downstream product right now, but with testing it might be something we could look at including in the future. We currently rely on cron jobs to invoke nova-manage etc. to achieve similar functionality to OSarchiver, but if that cron job breaks it is hard to detect, and the deleted rows can build up, causing really slow DB queries. As a separate service with logging, I assume this is simpler to monitor and alarm on if it fails, since it provides one central point to manage the archival and deletion of rows. So I kind of like this approach, even if its direct DB access right now would make it unsupportable in our product without vetting the code and productising the repo via TripleO integration.
> > Regards, > From zigo at debian.org Fri Jan 29 14:55:22 2021 From: zigo at debian.org (Thomas Goirand) Date: Fri, 29 Jan 2021 15:55:22 +0100 Subject: [all] Eventlet broken again with SSL, this time under Python 3.9 In-Reply-To: <7441d18e5313af6d76a28cabb3866e05dad6f6d5.camel@redhat.com> References: <6e817a0e-aaa7-9444-fca3-6c5ae8ed2ae7@debian.org> <6f653877fa1da9b2d191b3d3818307f9b29f60bb.camel@redhat.com> <7441d18e5313af6d76a28cabb3866e05dad6f6d5.camel@redhat.com> Message-ID: On 1/29/21 2:53 PM, Sean Mooney wrote: > On Fri, 2021-01-29 at 13:44 +0100, Thomas Goirand wrote: >> On 1/29/21 1:19 PM, Sean Mooney wrote: >>> On Thu, 2021-01-28 at 21:02 +0100, Thomas Goirand wrote: >>>> Hi, >>>> >>>> Swift proxy fails with Python 3.9 under Debian Unstable with Python 3.9: >>>> http://paste.openstack.org/show/802063/ >>>> >>>> (that's when doing a simple "openstack container list") >>>> >>>> I got the same problem with neutron-rpc-server: >>>> http://paste.openstack.org/show/802071/ >>>> >>>> (this happens when neutron-rpc-server tries to tell Nova that my new VM >>>> port is up) >>> im surprised you got this far i was still expecting nova-compute to hang connecting to rabbit >>> it did in decemebr has the 0.30.0 release of eventlet fixed that? >> >> The Debian package for Eventlet 0.26.1-4 contains these patches: >> >> https://github.com/eventlet/eventlet/commit/46fc185c8f92008c65aef2713fc1445bfc5f6fec >> https://github.com/eventlet/eventlet/pull/664 >> https://github.com/eventlet/eventlet/pull/672 >> > https://github.com/eventlet/eventlet/pull/664 was not in the version i used in early decmber Indeed. Neither the other 2. > so that is likely what allows nova to not hang. Yeah. > the ssl issue i expect to still impact the vnc proxy but i might try it again and see. > > i think all 3 of those are in 0.30.0 Eventlet being a very touchy dependency of OpenStack, we very much prefer cherry-picking patches whenever possible. However, I tried 0.30.0, and it didn't solve the SSL problem (so I'm back to our patched 0.26.1 Debian release). Cheers, Thomas Goirand (zigo) From skaplons at redhat.com Fri Jan 29 14:55:06 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 29 Jan 2021 15:55:06 +0100 Subject: [tc][all][ptl] Encouraging projects to apply for tag 'assert:supports-api-interoperability' In-Reply-To: <176fd648bd6.126a9266a1208370.2628135282566323654@ghanshyammann.com> References: <17671275da9.121a655b1251298.6157149575252344776@ghanshyammann.com> <176fd648bd6.126a9266a1208370.2628135282566323654@ghanshyammann.com> Message-ID: <20210129145506.6k2epi6jhbzm4mtp@p1.localdomain> Hi, On Wed, Jan 13, 2021 at 02:16:33PM -0600, Ghanshyam Mann wrote: > Bumping this email in case you missed this during holiday time. > > There are many projects that are eligible for this tag, requesting you to start the > application review to governance. After discussion with Neutron drivers team I just applied for this patch for Neutron project [1]. Please let me know if I should add/change something there. [1] https://review.opendev.org/c/openstack/governance/+/773090 > > -gmann > > > ---- On Thu, 17 Dec 2020 08:42:53 -0600 Ghanshyam Mann wrote ---- > > Hello Everyone, > > > > TC defined a tag for API interoperability (cover both stable and compatible APIs) called > > 'assert:supports-api-interoperability' which assert on API won’t break any users when they > > upgrade a cloud or start using their code on a new OpenStack cloud. 
> > > Basically, Projects will not change (or remove) an API in a way that will break existing users > > > of an API. We have updated the tag documentation to clarify its definition and requirements. > > > > > > If your projects follow the API interoperability guidelines[1] and some API versioning mechanism > > > that does not need to be microversion then you should start thinking to apply for this tag. The > > > complete requirements can be found here[2]. > > > > > > Currently, only nova has this tag but I am sure many projects are eligible for this, and TC encourage > > > them to apply for this. > > > > > > [1] https://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html > > > [2] https://governance.openstack.org/tc/reference/tags/assert_supports-api-interoperability.html > > > > > > > > > -gmann > > > > > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat >
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL:
From skaplons at redhat.com Fri Jan 29 16:14:08 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 29 Jan 2021 17:14:08 +0100 Subject: [neutron] Secure RBAC api performance regression Message-ID: <20210129161408.dkond3b6xcmu4fvg@p1.localdomain> Hi, In Neutron we are merging more and more patches related to the secure rbac policies [1]. Recently we noticed that our neutron-rally-task job is failing very often [2]. I did some tests and here are my results: * on latest neutron, with about 12 patches related to secure rbac merged - listing 4k ports took about 1m 40s: $ time neutron port-list | wc -l neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. 4044 real 1m41,644s user 0m0,926s sys 0m0,087s * same test with all those patches reverted - it took 8 seconds: $ sudo systemctl restart devstack at q-svc; sleep 5; time neutron port-list | wc -l neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. 4044 real 0m8,131s user 0m0,907s sys 0m0,120s After a bit of digging I found out that most of the slowdown came from calls to the method self._handle_deprecated_rule(default) in [3]. When I commented out that one line, the results were the same as without the secure-rbac patches: $ sudo systemctl restart devstack at q-svc; sleep 5; time neutron port-list | wc -l neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. 4044 real 0m7,875s user 0m0,931s sys 0m0,087s @Lance, can You maybe take a look at it and help me understand if we are doing something wrong with those new rules in Neutron? Or maybe there is a bug in oslo_policy? [1] https://review.opendev.org/q/topic:secure-rbac+project:openstack/neutron [2] https://bugs.launchpad.net/neutron/+bug/1913718 [3] https://github.com/openstack/oslo.policy/blob/e103baa002e54303b08630c436dfc7b0b8a013de/oslo_policy/policy.py#L641 -- Slawek Kaplonski Principal Software Engineer Red Hat
-------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From gmann at ghanshyammann.com Fri Jan 29 16:25:12 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 29 Jan 2021 10:25:12 -0600 Subject: [tc][all][ptl] Encouraging projects to apply for tag 'assert:supports-api-interoperability' In-Reply-To: <20210129145506.6k2epi6jhbzm4mtp@p1.localdomain> References: <17671275da9.121a655b1251298.6157149575252344776@ghanshyammann.com> <176fd648bd6.126a9266a1208370.2628135282566323654@ghanshyammann.com> <20210129145506.6k2epi6jhbzm4mtp@p1.localdomain> Message-ID: <1774ef67b2e.c7220e3755580.6453671741130154063@ghanshyammann.com> ---- On Fri, 29 Jan 2021 08:55:06 -0600 Slawek Kaplonski wrote ---- > Hi, > > On Wed, Jan 13, 2021 at 02:16:33PM -0600, Ghanshyam Mann wrote: > > Bumping this email in case you missed this during holiday time. > > > > There are many projects that are eligible for this tag, requesting you to start the > > application review to governance. > > After discussion with Neutron drivers team I just applied for this patch for > Neutron project [1]. Please let me know if I should add/change something there. Thanks, Slawek, I will check and let you know if anything needed. -gmann > > [1] https://review.opendev.org/c/openstack/governance/+/773090 > > > > -gmann > > > > > > ---- On Thu, 17 Dec 2020 08:42:53 -0600 Ghanshyam Mann wrote ---- > > > Hello Everyone, > > > > > > TC defined a tag for API interoperability (cover both stable and compatible APIs) called > > > 'assert:supports-api-interoperability' which assert on API won’t break any users when they > > > upgrade a cloud or start using their code on a new OpenStack cloud. > > > > > > Basically, Projects will not change (or remove) an API in a way that will break existing users > > > of an API. We have updated the tag documentation to clarify its definition and requirements. > > > > > > If your projects follow the API interoperability guidelines[1] and some API versioning mechanism > > > that does not need to be microversion then you should start thinking to apply for this tag. The > > > complete requirements can be found here[2]. > > > > > > Currently, only nova has this tag but I am sure many projects are eligible for this, and TC encourage > > > them to apply for this. > > > > > > [1] https://specs.openstack.org/openstack/api-wg/guidelines/api_interoperability.html > > > [2] https://governance.openstack.org/tc/reference/tags/assert_supports-api-interoperability.html > > > > > > > > > -gmann > > > > > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > From geguileo at redhat.com Fri Jan 29 17:23:47 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Fri, 29 Jan 2021 18:23:47 +0100 Subject: [dev][cinder][keystone] Properly consuming system-scope in cinder In-Reply-To: References: Message-ID: <20210129172347.7wi3cv3gnneb46dj@localhost> On 28/01, Lance Bragstad wrote: > Hey folks, > > As I'm sure some of the cinder folks are aware, I'm updating cinder > policies to include support for some default personas keystone ships with. > Some of those personas use system-scope (e.g., system-reader and > system-admin) and I've already proposed a series of patches that describe > what those changes look like from a policy perspective [0]. > > The question now is how we test those changes. To help guide that decision, > I worked on three different testing approaches. The first was to continue > testing policy using unit tests in cinder with mocked context objects. 
The > second was to use DDT with keystonemiddleware mocked to remove a dependency > on keystone. The third also used DDT, but included changes to update > NoAuthMiddleware so that it wasn't as opinionated about authentication or > authorization. I brought each approach in the cinder meeting this week > where we discussed a fourth approach, doing everything in tempest. I > summarized all of this in an etherpad [1] > > Up to yesterday morning, the only approach I hadn't tinkered with manually > was tempest. I spent some time today figuring that out, resulting in a > patch to cinderlib [2] to enable a protection test job, and > cinder_tempest_plugin [3] that adds the plumbing and some example tests. > > In the process of implementing support for tempest testing, I noticed that > service catalogs for system-scoped tokens don't contain cinder endpoints > [4]. This is because the cinder endpoint contains endpoint templating in > the URL [5], which keystone will substitute with the project ID of the > token, if and only if the catalog is built for a project-scoped token. > System and domain-scoped tokens do not have a reasonable project ID to use > in this case, so the templating is skipped, resulting in a cinder service > in the catalog without endpoints [6]. > > This cascades in the client, specifically tempest's volume client, because > it can't find a suitable endpoint for request to the volume service [7]. > > Initially, my testing approaches were to provide examples for cinder > developers to assess the viability of each approach before committing to a > protection testing strategy. But, the tempest approach highlighted a larger > issue for how we integrate system-scope support into cinder because of the > assumption there will always be a project ID in the path (for the majority > of the cinder API). I can think of two ways to approach the problem, but > I'm hoping others have more. > Hi Lance, Sorry to hear that the Cinder is giving you such trouble. > First, we remove project IDs from cinder's API path. > > This would be similar to how nova (and I assume other services) moved away > from project-specific URLs (e.g., /v3/%{project_id}s/volumes would become > /v3/volumes). This would obviously require refactoring to remove any > assumptions cinder has about project IDs being supplied on the request > path. But, this would force all authorization information to come from the > context object. Once a deployer removes the endpoint URL templating, the > endpoints will populate in the cinder entry of the service catalog. Brian's > been helping me understand this and we're unsure if this is something we > could even do with a microversion. I think nova did it moving from /v2/ to > /v2.0/, which was technically classified as a major bump? This feels like a > moon shot. > In my opinion such a change should not be treated as a microversion and would require us to go into v4, which is not something that is feasible in the short term. > Second, we update cinder's clients, including tempest, to put the project > ID on the URL. > > After we update the clients to append the project ID for cinder endpoints, > we should be able to remove the URL templating in keystone, allowing cinder > endpoints to appear in system-scoped service catalogs (just like the first > approach). Clients can use the base URL from the catalog and append the I'm not familiar with keystone catalog entries, so maybe I'm saying something stupid, but couldn't we have multiple entries? 
A project-specific URL and another one for the project and system scoped requests? I know it sounds kind of hackish, but if we add them in the right order, first the project one and then the new one, it would probably be backward compatible, as older clients would get the first endpoint and new clients would be able to select the right one. > admin project ID before putting the request on the wire. Even though the > request has a project ID in the path, cinder would ignore it for > system-specific APIs. This is already true for users with an admin role on > a project because cinder will allow you to get volumes in one project if > you have a token scoped to another with the admin role [8]. One potential > side-effect is that cinder clients would need *a* project ID to build a > request, potentially requiring another roundtrip to keystone. What would happen in this additional roundtrip? Would we be converting provided project's name into its UUID? If that's the case then it wouldn't happen when UUIDs are being provided, so for cases where this extra request means a performance problem they could just provide the UUID. > > Thoughts? Truth is that I would love to see the Cinder API move into URLs without the project id as well as move out everything from contrib, but that doesn't seem like a realistic piece of work we can bite right now. So I think your second proposal is the way to go. Thanks for all the work you are putting into this. Cheers, Gorka. > > [0] https://review.opendev.org/q/project:openstack/cinder+topic:secure-rbac > [1] https://etherpad.opendev.org/p/cinder-secure-rbac-protection-testing > [2] https://review.opendev.org/c/openstack/cinderlib/+/772770 > [3] https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/772915 > [4] http://paste.openstack.org/show/802117/ > [5] http://paste.openstack.org/show/802097/ > [6] > https://opendev.org/openstack/keystone/src/commit/c239cc66615b41a0c09e031b3e268c82678bac12/keystone/catalog/backends/sql.py > [7] http://paste.openstack.org/show/802092/ > [8] http://paste.openstack.org/show/802118/ From geguileo at redhat.com Fri Jan 29 17:29:23 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Fri, 29 Jan 2021 18:29:23 +0100 Subject: Backup drivers issue with the container parameter Message-ID: <20210129172923.3dk3wqfkfu6xtriy@localhost> Hi all, In the next Cinder meeting I'll bring a Backup driver issue up for discussion, and this email hopefully provides the necessary context to have a fruitful discussion. The issue is around the `container` optional parameter in backup creation, and its user and administrator unfriendliness. The interpretation of the `container` parameter is driver dependent, and it's being treated as: - A bucket in Google Cloud Storage and the new S3 driver - A container in Swift - A pool in Ceph - A directory in NFS and Posix Currently the only way to prevent cloud users from selecting a different `container` is by restricting what the storage user configured in Cinder backup can do. For Ceph we can make the storage user unable to access any other existing pools, for Swift, GCS, and S3 we can remove permissions to create buckets/containers from the storage user. This achieves the administrator's objective of not allowing them to change the `container`, but cloud users will have a bad experience, because the API will accept the request but the backup will go into `error` state and they won't see any additional information. 
And this solution is an all or nothing approach, as we cannot allow just some cloud users select the container while preventing others from doing so. For example we may want some cloud users to be able to do backups on a specific RBD pool that is replicated to a remote location. I think we can solve all these issues if we: - Create a policy for accepting the `container` parameter on the API (defaulting to allow for backward compatibility). - Add a new configuration option `backup_container_regex` to control acceptable values for the `container` (defaults to `.*` for backward compatibility). This option would be used by the backup manager (not the drivers themselves) on backup creation, and would result in a user message if the provided container was not empty and failed the regex check. I think this summarizes the situation and my view on the matter. Feedback is welcome here or in the next Cinder meeting. Cheers, Gorka. From gmann at ghanshyammann.com Fri Jan 29 18:43:57 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 29 Jan 2021 12:43:57 -0600 Subject: [dev][cinder][keystone] Properly consuming system-scope in cinder In-Reply-To: <20210129172347.7wi3cv3gnneb46dj@localhost> References: <20210129172347.7wi3cv3gnneb46dj@localhost> Message-ID: <1774f7582e2.126a1dcb261735.4477287504407985916@ghanshyammann.com> ---- On Fri, 29 Jan 2021 11:23:47 -0600 Gorka Eguileor wrote ---- > On 28/01, Lance Bragstad wrote: > > Hey folks, > > > > As I'm sure some of the cinder folks are aware, I'm updating cinder > > policies to include support for some default personas keystone ships with. > > Some of those personas use system-scope (e.g., system-reader and > > system-admin) and I've already proposed a series of patches that describe > > what those changes look like from a policy perspective [0]. > > > > The question now is how we test those changes. To help guide that decision, > > I worked on three different testing approaches. The first was to continue > > testing policy using unit tests in cinder with mocked context objects. The > > second was to use DDT with keystonemiddleware mocked to remove a dependency > > on keystone. The third also used DDT, but included changes to update > > NoAuthMiddleware so that it wasn't as opinionated about authentication or > > authorization. I brought each approach in the cinder meeting this week > > where we discussed a fourth approach, doing everything in tempest. I > > summarized all of this in an etherpad [1] > > > > Up to yesterday morning, the only approach I hadn't tinkered with manually > > was tempest. I spent some time today figuring that out, resulting in a > > patch to cinderlib [2] to enable a protection test job, and > > cinder_tempest_plugin [3] that adds the plumbing and some example tests. > > > > In the process of implementing support for tempest testing, I noticed that > > service catalogs for system-scoped tokens don't contain cinder endpoints > > [4]. This is because the cinder endpoint contains endpoint templating in > > the URL [5], which keystone will substitute with the project ID of the > > token, if and only if the catalog is built for a project-scoped token. > > System and domain-scoped tokens do not have a reasonable project ID to use > > in this case, so the templating is skipped, resulting in a cinder service > > in the catalog without endpoints [6]. > > > > This cascades in the client, specifically tempest's volume client, because > > it can't find a suitable endpoint for request to the volume service [7]. 
> > > > Initially, my testing approaches were to provide examples for cinder > > developers to assess the viability of each approach before committing to a > > protection testing strategy. But, the tempest approach highlighted a larger > > issue for how we integrate system-scope support into cinder because of the > > assumption there will always be a project ID in the path (for the majority > > of the cinder API). I can think of two ways to approach the problem, but > > I'm hoping others have more. > > > > Hi Lance, > > Sorry to hear that the Cinder is giving you such trouble. > > > First, we remove project IDs from cinder's API path. > > > > This would be similar to how nova (and I assume other services) moved away > > from project-specific URLs (e.g., /v3/%{project_id}s/volumes would become > > /v3/volumes). This would obviously require refactoring to remove any > > assumptions cinder has about project IDs being supplied on the request > > path. But, this would force all authorization information to come from the > > context object. Once a deployer removes the endpoint URL templating, the > > endpoints will populate in the cinder entry of the service catalog. Brian's > > been helping me understand this and we're unsure if this is something we > > could even do with a microversion. I think nova did it moving from /v2/ to > > /v2.0/, which was technically classified as a major bump? This feels like a > > moon shot. > > > > In my opinion such a change should not be treated as a microversion and > would require us to go into v4, which is not something that is feasible > in the short term. We can do it by supporting both URL with and without project_id. Nova did the same way in Mitaka cycle and also bumped the microversion but just for notification. It was done in 2.18 microversion[1]. That way you can request compute API with or without project_id and later is recommended. I think the same approach Cinder can consider. [1] https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id16 -gmann > > > > Second, we update cinder's clients, including tempest, to put the project > > ID on the URL. > > > > After we update the clients to append the project ID for cinder endpoints, > > we should be able to remove the URL templating in keystone, allowing cinder > > endpoints to appear in system-scoped service catalogs (just like the first > > approach). Clients can use the base URL from the catalog and append the > > I'm not familiar with keystone catalog entries, so maybe I'm saying > something stupid, but couldn't we have multiple entries? A > project-specific URL and another one for the project and system scoped > requests? > > I know it sounds kind of hackish, but if we add them in the right order, > first the project one and then the new one, it would probably be > backward compatible, as older clients would get the first endpoint and > new clients would be able to select the right one. > > > admin project ID before putting the request on the wire. Even though the > > request has a project ID in the path, cinder would ignore it for > > system-specific APIs. This is already true for users with an admin role on > > a project because cinder will allow you to get volumes in one project if > > you have a token scoped to another with the admin role [8]. One potential > > side-effect is that cinder clients would need *a* project ID to build a > > request, potentially requiring another roundtrip to keystone. > > What would happen in this additional roundtrip? 
Would we be converting > provided project's name into its UUID? > > If that's the case then it wouldn't happen when UUIDs are being > provided, so for cases where this extra request means a performance > problem they could just provide the UUID. > > > > > Thoughts? > > Truth is that I would love to see the Cinder API move into URLs without > the project id as well as move out everything from contrib, but that > doesn't seem like a realistic piece of work we can bite right now. > > So I think your second proposal is the way to go. > > Thanks for all the work you are putting into this. > > Cheers, > Gorka. > > > > > > [0] https://review.opendev.org/q/project:openstack/cinder+topic:secure-rbac > > [1] https://etherpad.opendev.org/p/cinder-secure-rbac-protection-testing > > [2] https://review.opendev.org/c/openstack/cinderlib/+/772770 > > [3] https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/772915 > > [4] http://paste.openstack.org/show/802117/ > > [5] http://paste.openstack.org/show/802097/ > > [6] > > https://opendev.org/openstack/keystone/src/commit/c239cc66615b41a0c09e031b3e268c82678bac12/keystone/catalog/backends/sql.py > > [7] http://paste.openstack.org/show/802092/ > > [8] http://paste.openstack.org/show/802118/ > > > From lbragstad at gmail.com Fri Jan 29 21:06:29 2021 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 29 Jan 2021 15:06:29 -0600 Subject: [dev][cinder][keystone] Properly consuming system-scope in cinder In-Reply-To: <20210129172347.7wi3cv3gnneb46dj@localhost> References: <20210129172347.7wi3cv3gnneb46dj@localhost> Message-ID: On Fri, Jan 29, 2021 at 11:24 AM Gorka Eguileor wrote: > On 28/01, Lance Bragstad wrote: > > Hey folks, > > > > As I'm sure some of the cinder folks are aware, I'm updating cinder > > policies to include support for some default personas keystone ships > with. > > Some of those personas use system-scope (e.g., system-reader and > > system-admin) and I've already proposed a series of patches that describe > > what those changes look like from a policy perspective [0]. > > > > The question now is how we test those changes. To help guide that > decision, > > I worked on three different testing approaches. The first was to continue > > testing policy using unit tests in cinder with mocked context objects. > The > > second was to use DDT with keystonemiddleware mocked to remove a > dependency > > on keystone. The third also used DDT, but included changes to update > > NoAuthMiddleware so that it wasn't as opinionated about authentication or > > authorization. I brought each approach in the cinder meeting this week > > where we discussed a fourth approach, doing everything in tempest. I > > summarized all of this in an etherpad [1] > > > > Up to yesterday morning, the only approach I hadn't tinkered with > manually > > was tempest. I spent some time today figuring that out, resulting in a > > patch to cinderlib [2] to enable a protection test job, and > > cinder_tempest_plugin [3] that adds the plumbing and some example tests. > > > > In the process of implementing support for tempest testing, I noticed > that > > service catalogs for system-scoped tokens don't contain cinder endpoints > > [4]. This is because the cinder endpoint contains endpoint templating in > > the URL [5], which keystone will substitute with the project ID of the > > token, if and only if the catalog is built for a project-scoped token. 
> > System and domain-scoped tokens do not have a reasonable project ID to > use > > in this case, so the templating is skipped, resulting in a cinder service > > in the catalog without endpoints [6]. > > > > This cascades in the client, specifically tempest's volume client, > because > > it can't find a suitable endpoint for request to the volume service [7]. > > > > Initially, my testing approaches were to provide examples for cinder > > developers to assess the viability of each approach before committing to > a > > protection testing strategy. But, the tempest approach highlighted a > larger > > issue for how we integrate system-scope support into cinder because of > the > > assumption there will always be a project ID in the path (for the > majority > > of the cinder API). I can think of two ways to approach the problem, but > > I'm hoping others have more. > > > > Hi Lance, > > Sorry to hear that the Cinder is giving you such trouble. > No worries - I think these types of issues just come with the territory when we're trying to make such large changes across different projects and APIs. :) > > > First, we remove project IDs from cinder's API path. > > > > This would be similar to how nova (and I assume other services) moved > away > > from project-specific URLs (e.g., /v3/%{project_id}s/volumes would become > > /v3/volumes). This would obviously require refactoring to remove any > > assumptions cinder has about project IDs being supplied on the request > > path. But, this would force all authorization information to come from > the > > context object. Once a deployer removes the endpoint URL templating, the > > endpoints will populate in the cinder entry of the service catalog. > Brian's > > been helping me understand this and we're unsure if this is something we > > could even do with a microversion. I think nova did it moving from /v2/ > to > > /v2.0/, which was technically classified as a major bump? This feels > like a > > moon shot. > > > > In my opinion such a change should not be treated as a microversion and > would require us to go into v4, which is not something that is feasible > in the short term. > > > > Second, we update cinder's clients, including tempest, to put the project > > ID on the URL. > > > > After we update the clients to append the project ID for cinder > endpoints, > > we should be able to remove the URL templating in keystone, allowing > cinder > > endpoints to appear in system-scoped service catalogs (just like the > first > > approach). Clients can use the base URL from the catalog and append the > > I'm not familiar with keystone catalog entries, so maybe I'm saying > something stupid, but couldn't we have multiple entries? A > project-specific URL and another one for the project and system scoped > requests? > > I know it sounds kind of hackish, but if we add them in the right order, > first the project one and then the new one, it would probably be > backward compatible, as older clients would get the first endpoint and > new clients would be able to select the right one. > I think that's an option. For example, if I add a fourth cinder service to my catalog without project ID templating in the endpoint URL [0], my system-scoped token contains endpoints for that cinder service [1]. Project-scoped tokens contain all four cinder services, and each service has endpoints populated since URL templating works with project IDs [2]. So, maybe the question becomes, what should the service type and name be for this new entry? 
I think the cinder clients would need to know this new service is a actually a cinder endpoint without the project ID in the URL. It'll also need to know it's on the hook for appending a project ID to the URL. [0] http://paste.openstack.org/show/802148/ [1] http://paste.openstack.org/show/802145/ (lines 66 - 78) [2] http://paste.openstack.org/show/802149/ > > admin project ID before putting the request on the wire. Even though the > > request has a project ID in the path, cinder would ignore it for > > system-specific APIs. This is already true for users with an admin role > on > > a project because cinder will allow you to get volumes in one project if > > you have a token scoped to another with the admin role [8]. One potential > > side-effect is that cinder clients would need *a* project ID to build a > > request, potentially requiring another roundtrip to keystone. > > What would happen in this additional roundtrip? Would we be converting > provided project's name into its UUID? > If cinder always assumes and validates a project_id exists in the URL path, then clients making requests, regardless of the scope (system, domain, or project), will need to find a project to use, or use a fake project. And I think the answer to that depends on how cinder wants to handle that validation in the API. I believe this is where that's handled now [3]. If the client sends a request with a system-scoped token in the X-Auth-Token header, cinder could detect that and ignore whatever the project ID is from the path. So maybe it can be fake? At least until we figure out if we can version the API to introduce a project-less URL path? [3] https://opendev.org/openstack/cinder/src/commit/500f5100c8531b4995537c4952382fed3b0e2c8c/cinder/api/openstack/wsgi.py > If that's the case then it wouldn't happen when UUIDs are being > provided, so for cases where this extra request means a performance > problem they could just provide the UUID. > > > > > Thoughts? > > Truth is that I would love to see the Cinder API move into URLs without > the project id as well as move out everything from contrib, but that > doesn't seem like a realistic piece of work we can bite right now. > > So I think your second proposal is the way to go. > > Thanks for all the work you are putting into this. > > Cheers, > Gorka. > > > > > > [0] > https://review.opendev.org/q/project:openstack/cinder+topic:secure-rbac > > [1] https://etherpad.opendev.org/p/cinder-secure-rbac-protection-testing > > [2] https://review.opendev.org/c/openstack/cinderlib/+/772770 > > [3] > https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/772915 > > [4] http://paste.openstack.org/show/802117/ > > [5] http://paste.openstack.org/show/802097/ > > [6] > > > https://opendev.org/openstack/keystone/src/commit/c239cc66615b41a0c09e031b3e268c82678bac12/keystone/catalog/backends/sql.py > > [7] http://paste.openstack.org/show/802092/ > > [8] http://paste.openstack.org/show/802118/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Sat Jan 30 02:55:52 2021 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 29 Jan 2021 20:55:52 -0600 Subject: [neutron] Secure RBAC api performance regression In-Reply-To: <20210129161408.dkond3b6xcmu4fvg@p1.localdomain> References: <20210129161408.dkond3b6xcmu4fvg@p1.localdomain> Message-ID: I'm able to recreate this locally. I updated the bug, but I didn't triage it. 
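For context, the pattern at play looks roughly like this simplified sketch (made-up rule names, not the actual neutron defaults); the deprecated_rule argument is what sends oslo.policy through _handle_deprecated_rule() when the enforcer processes its registered defaults:

    from oslo_config import cfg
    from oslo_policy import policy

    # Hypothetical rule names, only to show the shape of the registration.
    deprecated_get_port = policy.DeprecatedRule(
        name='old_get_port',
        check_str='rule:admin_or_owner',
    )
    rules = [
        policy.DocumentedRuleDefault(
            name='get_port',
            check_str='role:reader and project_id:%(project_id)s',
            description='Get a port',
            operations=[{'path': '/ports/{id}', 'method': 'GET'}],
            deprecated_rule=deprecated_get_port,
            deprecated_reason='example reason',
            deprecated_since='W',
        ),
    ]

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_defaults(rules)
    # Each enforce() call below is cheap on its own; the question is how many
    # times the deprecation handling runs when thousands of ports (and their
    # attributes) get checked during a single list request.
    enforcer.enforce('get_port', {'project_id': 'p1'},
                     {'roles': ['reader'], 'project_id': 'p1'})
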
I left a comment with some thoughts on where the extra processing is taking place, but I need to parse more of neutron's policy enforcement engine. i don't think nova is doing per-attribute policy enforcement, which is where I think the extra cycles are in neutron. Thanks for raising the issue, Slawek. On Fri, Jan 29, 2021 at 10:14 AM Slawek Kaplonski wrote: > Hi, > > In Neutron we are merging more and more patches related to the secure rbac > policies [1]. Recently we noticed that our neutron-rally-task job is > failing > very often [2]. > > I did some tests and here is what were my results: > > * on latest neutron, with about 12 patches related to secure rbac merged - > list > of 4k ports took about 1m 40s: > > $ time neutron port-list | wc -l > neutron CLI is deprecated and will be removed in the future. Use > openstack CLI instead. > 4044 > > real 1m41,644s > user 0m0,926s > sys 0m0,087s > > * same test with all those patches reverted - and it took 8 seconds: > > $ sudo systemctl restart devstack at q-svc; sleep 5; time neutron > port-list | wc -l > neutron CLI is deprecated and will be removed in the future. Use > openstack CLI instead. > 4044 > > real 0m8,131s > user 0m0,907s > sys 0m0,120s > > > After a bit of digging I found out that most of the slowdown came from > calls of > the method self._handle_deprecated_rule(default) in [3]. > When I commented that one line I had results same like without secure-rbac > patches: > > $ sudo systemctl restart devstack at q-svc; sleep 5; time neutron > port-list | wc -l > neutron CLI is deprecated and will be removed in the future. Use > openstack CLI instead. > 4044 > > real 0m7,875s > user 0m0,931s > sys 0m0,087s > > > @Lance, can You maybe take a look at it and help me understand if we are > doing > something wrong with those new rules in Neutron? Or maybe there is a bug in > oslo_policy? > > [1] > https://review.opendev.org/q/topic:secure-rbac+project:openstack/neutron > [2] https://bugs.launchpad.net/neutron/+bug/1913718 > [3] > https://github.com/openstack/oslo.policy/blob/e103baa002e54303b08630c436dfc7b0b8a013de/oslo_policy/policy.py#L641 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Sat Jan 30 09:47:40 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Sat, 30 Jan 2021 11:47:40 +0200 Subject: [all] Eventlet broken again with SSL, this time under Python 3.9 In-Reply-To: References: <6e817a0e-aaa7-9444-fca3-6c5ae8ed2ae7@debian.org> <6f653877fa1da9b2d191b3d3818307f9b29f60bb.camel@redhat.com> <7441d18e5313af6d76a28cabb3866e05dad6f6d5.camel@redhat.com> Message-ID: <666451611999668@mail.yandex.ru> In the meanwhile we see that most of the services fail to interact with rabbitmq over self-signed SSL in case RDO packages are used even with Python 3.6. We don't see this happening when installing things with pip packages though. Both rdo and pip version of eventlet we used was 0.30.0. RDO started failing for us several days back with: ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897) Not sure, maybe it's not related directly to eventlet, but sounds like it might be. 
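One cheap way to rule the certificate chain itself in or out, independent of eventlet, is to check the broker's certificate against the same CA bundle the services are configured with, e.g. (host, port and CA path below are placeholders for whatever the deployment uses):

$ echo | openssl s_client -connect rabbitmq.example.com:5671 -CAfile /etc/openstack/ssl/internal-ca.pem 2>/dev/null | grep 'Verify return code'

If that prints "0 (ok)" the chain itself is fine and the failure is somewhere in the python/eventlet ssl stack; if not, it is more likely the CA simply isn't where oslo.messaging expects to find it.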
29.01.2021, 17:02, "Thomas Goirand" : > On 1/29/21 2:53 PM, Sean Mooney wrote: >>  On Fri, 2021-01-29 at 13:44 +0100, Thomas Goirand wrote: >>>  On 1/29/21 1:19 PM, Sean Mooney wrote: >>>>  On Thu, 2021-01-28 at 21:02 +0100, Thomas Goirand wrote: >>>>>  Hi, >>>>> >>>>>  Swift proxy fails with Python 3.9 under Debian Unstable with Python 3.9: >>>>>  http://paste.openstack.org/show/802063/ >>>>> >>>>>  (that's when doing a simple "openstack container list") >>>>> >>>>>  I got the same problem with neutron-rpc-server: >>>>>  http://paste.openstack.org/show/802071/ >>>>> >>>>>  (this happens when neutron-rpc-server tries to tell Nova that my new VM >>>>>  port is up) >>>>  im surprised you got this far i was still expecting nova-compute to hang connecting to rabbit >>>>  it did in decemebr has the 0.30.0 release of eventlet fixed that? >>> >>>  The Debian package for Eventlet 0.26.1-4 contains these patches: >>> >>>  https://github.com/eventlet/eventlet/commit/46fc185c8f92008c65aef2713fc1445bfc5f6fec >>>  https://github.com/eventlet/eventlet/pull/664 >>>  https://github.com/eventlet/eventlet/pull/672 >>  https://github.com/eventlet/eventlet/pull/664 was not in the version i used in early decmber > > Indeed. Neither the other 2. > >>  so that is likely what allows nova to not hang. > > Yeah. > >>  the ssl issue i expect to still impact the vnc proxy but i might try it again and see. >> >>  i think all 3 of those are in 0.30.0 > > Eventlet being a very touchy dependency of OpenStack, we very much > prefer cherry-picking patches whenever possible. However, I tried > 0.30.0, and it didn't solve the SSL problem (so I'm back to our patched > 0.26.1 Debian release). > > Cheers, > > Thomas Goirand (zigo) --  Kind Regards, Dmitriy Rabotyagov From zigo at debian.org Sat Jan 30 10:42:16 2021 From: zigo at debian.org (Thomas Goirand) Date: Sat, 30 Jan 2021 11:42:16 +0100 Subject: [all] Eventlet broken again with SSL, this time under Python 3.9 In-Reply-To: <666451611999668@mail.yandex.ru> References: <6e817a0e-aaa7-9444-fca3-6c5ae8ed2ae7@debian.org> <6f653877fa1da9b2d191b3d3818307f9b29f60bb.camel@redhat.com> <7441d18e5313af6d76a28cabb3866e05dad6f6d5.camel@redhat.com> <666451611999668@mail.yandex.ru> Message-ID: On 1/30/21 10:47 AM, Dmitriy Rabotyagov wrote: > In the meanwhile we see that most of the services fail to interact with rabbitmq over self-signed SSL in case RDO packages are used even with Python 3.6. > We don't see this happening when installing things with pip packages though. Both rdo and pip version of eventlet we used was 0.30.0. > > RDO started failing for us several days back with: > ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897) > > Not sure, maybe it's not related directly to eventlet, but sounds like it might be. Does RDO has version 5.0.3 of AMQP and version 5.0.2 of Kombu? That's what I had to do in Debian to pass this stage. Though the next issue is what I wrote, when a service tries to validate a keystone token (ie: keystoneauth1 calls requests that calls urllib3, which in turns calls Python 3.9 SSL, and then crash with maximum recursion depth exceeded). I'm no 100% sure the problem is in Eventlet, but it really looks like it, as it's similar to another SSL crash we had in Python 3.7. 
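A quick way to check whether the recursion crash reproduces outside of any OpenStack service is a one-liner that monkey-patches first and then exercises the ssl stack (the URL is just a placeholder, any reachable https endpoint will do):

$ python3.9 -c "import eventlet; eventlet.monkey_patch(); import urllib.request; print(urllib.request.urlopen('https://keystone.example.com:5000/v3').status)"

If that dies with the same maximum recursion depth error, it points at eventlet's green ssl under 3.9 itself; if it works, the problem is more likely in how the affected service orders monkey_patch() relative to its other imports.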
Cheers, Thomas Goirand (zigo) From tonykarera at gmail.com Sat Jan 30 11:07:03 2021 From: tonykarera at gmail.com (Karera Tony) Date: Sat, 30 Jan 2021 13:07:03 +0200 Subject: Advise Request Message-ID: Dear Team, I deployed the Openstack Ussuri version and everything was working fine with the small cirros Image since its a test environment for now. I started facing an issue when I uploaded a bigger image (Ubuntu 16.04 Xenialand Ubuntu 18.04 Biomic), When I tried to create an instance using any of those images, I got the error below. Traceback (most recent call last): File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/compute/manager.py", line 1942, in _prep_block_device wait_func=self._await_block_device_map_created) File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/block_device.py", line 874, in attach_block_devices _log_and_attach(device) File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/block_device.py", line 871, in _log_and_attach bdm.attach(*attach_args, **attach_kwargs) File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/block_device.py", line 771, in attach wait_func=wait_func, image_id=self.image_id) File "/var/lib/kolla/venv/lib/python3.6/site-packages/nova/virt/block_device.py", line 381, in _create_volume availability_zone=av_zone, Can any one advise on how to proceed ? Regards Tony Karera -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Sat Jan 30 12:11:32 2021 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Sat, 30 Jan 2021 14:11:32 +0200 Subject: [all] Eventlet broken again with SSL, this time under Python 3.9 In-Reply-To: References: <6e817a0e-aaa7-9444-fca3-6c5ae8ed2ae7@debian.org> <6f653877fa1da9b2d191b3d3818307f9b29f60bb.camel@redhat.com> <7441d18e5313af6d76a28cabb3866e05dad6f6d5.camel@redhat.com> <666451611999668@mail.yandex.ru> Message-ID: <6241612007690@mail.yandex.ru> Yeah, they do: [root at centos-distro openstack-ansible]# rpm -qa | egrep "amqp|kombu" python3-kombu-5.0.2-1.el8.noarch python3-amqp-5.0.3-1.el8.noarch [root at centos-distro openstack-ansible]# But not sure about keystoneauth1 since I see this at the point in oslo.messaging. Full error in systemd looks like this: Jan 30 11:51:04 aio1 nova-conductor[97314]: 2021-01-30 11:51:04.543 97314 ERROR oslo.messaging._drivers.impl_rabbit [req-61609624-b577-475d-996e-bc8f9899eae0 - - - - -] Connection failed: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897) 30.01.2021, 12:42, "Thomas Goirand" : > On 1/30/21 10:47 AM, Dmitriy Rabotyagov wrote: >>  In the meanwhile we see that most of the services fail to interact with rabbitmq over self-signed SSL in case RDO packages are used even with Python 3.6. >>  We don't see this happening when installing things with pip packages though. Both rdo and pip version of eventlet we used was 0.30.0. >> >>  RDO started failing for us several days back with: >>  ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897) >> >>  Not sure, maybe it's not related directly to eventlet, but sounds like it might be. > > Does RDO has version 5.0.3 of AMQP and version 5.0.2 of Kombu? That's > what I had to do in Debian to pass this stage. > > Though the next issue is what I wrote, when a service tries to validate > a keystone token (ie: keystoneauth1 calls requests that calls urllib3, > which in turns calls Python 3.9 SSL, and then crash with maximum > recursion depth exceeded). 
I'm no 100% sure the problem is in Eventlet, > but it really looks like it, as it's similar to another SSL crash we had > in Python 3.7. > > Cheers, > > Thomas Goirand (zigo) --  Kind Regards, Dmitriy Rabotyagov From zigo at debian.org Sat Jan 30 15:12:26 2021 From: zigo at debian.org (Thomas Goirand) Date: Sat, 30 Jan 2021 16:12:26 +0100 Subject: [all] Eventlet broken again with SSL, this time under Python 3.9 In-Reply-To: <6241612007690@mail.yandex.ru> References: <6e817a0e-aaa7-9444-fca3-6c5ae8ed2ae7@debian.org> <6f653877fa1da9b2d191b3d3818307f9b29f60bb.camel@redhat.com> <7441d18e5313af6d76a28cabb3866e05dad6f6d5.camel@redhat.com> <666451611999668@mail.yandex.ru> <6241612007690@mail.yandex.ru> Message-ID: <25220124-ba3d-6b1a-6194-668ef7881e43@debian.org> On 1/30/21 1:11 PM, Dmitriy Rabotyagov wrote: > Yeah, they do: > [root at centos-distro openstack-ansible]# rpm -qa | egrep "amqp|kombu" > python3-kombu-5.0.2-1.el8.noarch > python3-amqp-5.0.3-1.el8.noarch > [root at centos-distro openstack-ansible]# > > But not sure about keystoneauth1 since I see this at the point in oslo.messaging. Full error in systemd looks like this: > Jan 30 11:51:04 aio1 nova-conductor[97314]: 2021-01-30 11:51:04.543 97314 ERROR oslo.messaging._drivers.impl_rabbit [req-61609624-b577-475d-996e-bc8f9899eae0 - - - - -] Connection failed: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:897) If I'm not mistaking (it's hard to remember, because of the amount of brokenness...), this happens with Eventlet 0.30.0, but not with the patched version of 0.26.1 in Sid/Testing. Are you using a self-signed certificate for the RabbitMQ cluster, meaning having a root CA set in the configuration file, like I do? (my installer maintains a PKI internal to the deployed clusters) Cheers, Thomas Goirand (zigo) From dangerzonen at gmail.com Sun Jan 31 03:54:28 2021 From: dangerzonen at gmail.com (dangerzone ar) Date: Sun, 31 Jan 2021 11:54:28 +0800 Subject: [DHCP][Provider Network] Instances get IP from external DHCP server In-Reply-To: <20210129080701.4tmtijbdjiwvn3ph@p1.localdomain> References: <20210129080701.4tmtijbdjiwvn3ph@p1.localdomain> Message-ID: Hi Slawek and Ruslans, thanks for the response. In this case the instance is connected to an unknown dhcp network and requests ip from that network to get access to the lan/internet. Thus, there is no known ip range/gw/dns/subnet that configures on the neutron network. From Ruslanas in the email, propose to set subnet as 0.0.0.0/0 and disable port security ( on the instance itself or on the network that should be disabled???). The use case is I have a computer that has been configured as an openstack and with 2 nic ports. An instance created with attached to eth0 and eth1. This is a mobile mini computer that is used for demo purposes... The management openstack is set to network (eth0) 192.168.100.0/24 and the second port eth1 is the one that will be connected to the external dhcp network. For example I bring the computer to customer site A for a demo... and connect eth0 to notebook for local management and then connect eth1 to site A to get ip from dhcp server for my instance to access lan/internet from site A network. I understand if we have the details of IP range/gw/etc that can be defined as a network, but the scenario here....we don't know the network and requesting ip from unknown dhcp network. Hope it gives some ideas... 
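If I understand the suggestions so far, on my side it would boil down to something like this (I am writing the flags from memory, so please correct me if any of them are off):

$ openstack network create --provider-network-type flat --provider-physical-network physnet2 site-lan
$ openstack subnet create --network site-lan --no-dhcp --gateway none --subnet-range 0.0.0.0/0 site-lan-subnet
$ openstack port create --network site-lan --disable-port-security demo-port
$ openstack server create --image cirros --flavor m1.small --nic port-id=demo-port demo-vm

with physnet2 mapped to eth1, so the instance just runs its own DHCP client against whatever network eth1 happens to be plugged into at the customer site.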
Thanks On Fri, Jan 29, 2021 at 4:07 PM Slawek Kaplonski wrote: > Hi, > > On Fri, Jan 29, 2021 at 12:20:15PM +0800, dangerzone ar wrote: > > Hi, appreciate some advice on how instances in openstack get ip from > > external DHCP server. For example instance is attached to port eth1 > > (physical port) and this port is connected to home/office lan port and > > requests dhcp ip. How this can be achieved. > > ***User don't know the dhcp ip range/gw/dns that will be provided by the > > dhcp sever to that instance...instance just attach to eth1 and request > > ip.*** > > Similar like our pc/notebook request dhcp ip via wifi or lan port. > > How to establish this in openstack. Please advise and help me. > > Thank you > > You need to disable port security on such instance. Otherwise Neutron will > block traffic from such IP address which is unknown. > Or You need to add this IP address which VM get to the > allowed_address_pairs of > the VM's port. > Also, please keep in mind that You will have different IP associated to > that VM > in the Neutron, and that will be visible in OpenStack API and different > one will > be really used. > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Sun Jan 31 08:46:03 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Sun, 31 Jan 2021 10:46:03 +0200 Subject: [DHCP][Provider Network] Instances get IP from external DHCP server In-Reply-To: References: <20210129080701.4tmtijbdjiwvn3ph@p1.localdomain> Message-ID: Depends on the usage. If your instance will be the only one on that dhcp network, then on port/server would be better, but if you will need to create more instances on such dhcp usecase, then I would go for a network port security. On Sun, 31 Jan 2021, 05:54 dangerzone ar, wrote: > Hi Slawek and Ruslans, thanks for the response. In this case the instance > is connected to an unknown dhcp network and requests ip from that network > to get access to the lan/internet. Thus, there is no known ip > range/gw/dns/subnet that configures on the neutron network. From Ruslanas > in the email, propose to set subnet as 0.0.0.0/0 and disable port > security ( on the instance itself or on the network that should be > disabled???). > > The use case is I have a computer that has been configured as an openstack > and with 2 nic ports. An instance created with attached to eth0 and eth1. > This is a mobile mini computer that is used for demo purposes... > The management openstack is set to network (eth0) 192.168.100.0/24 and > the second port eth1 is the one that will be connected to the external dhcp > network. For example I bring the computer to customer site A for a demo... > and connect eth0 to notebook for local management and then connect eth1 to > site A to get ip from dhcp server for my instance to access lan/internet > from site A network. > I understand if we have the details of IP range/gw/etc that can be defined > as a network, but the scenario here....we don't know the network and > requesting ip from unknown dhcp network. > Hope it gives some ideas... Thanks > > > > > On Fri, Jan 29, 2021 at 4:07 PM Slawek Kaplonski > wrote: > >> Hi, >> >> On Fri, Jan 29, 2021 at 12:20:15PM +0800, dangerzone ar wrote: >> > Hi, appreciate some advice on how instances in openstack get ip from >> > external DHCP server. 
For example instance is attached to port eth1 >> > (physical port) and this port is connected to home/office lan port and >> > requests dhcp ip. How this can be achieved. >> > ***User don't know the dhcp ip range/gw/dns that will be provided by the >> > dhcp sever to that instance...instance just attach to eth1 and request >> > ip.*** >> > Similar like our pc/notebook request dhcp ip via wifi or lan port. >> > How to establish this in openstack. Please advise and help me. >> > Thank you >> >> You need to disable port security on such instance. Otherwise Neutron will >> block traffic from such IP address which is unknown. >> Or You need to add this IP address which VM get to the >> allowed_address_pairs of >> the VM's port. >> Also, please keep in mind that You will have different IP associated to >> that VM >> in the Neutron, and that will be visible in OpenStack API and different >> one will >> be really used. >> >> -- >> Slawek Kaplonski >> Principal Software Engineer >> Red Hat >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Sun Jan 31 11:59:08 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sun, 31 Jan 2021 12:59:08 +0100 Subject: [DHCP][Provider Network] Instances get IP from external DHCP server In-Reply-To: References: <20210129080701.4tmtijbdjiwvn3ph@p1.localdomain> Message-ID: <20210131115908.4k4aynizpuzwks7w@p1.localdomain> Hi, On Sun, Jan 31, 2021 at 10:46:03AM +0200, Ruslanas Gžibovskis wrote: > Depends on the usage. > If your instance will be the only one on that dhcp network, then on > port/server would be better, but if you will need to create more instances > on such dhcp usecase, then I would go for a network port security. Exactly. If it is set on the network, it will be default value for all ports created on that network. See [1]. [1] https://docs.openstack.org/api-ref/network/v2/index.html#port-security > > On Sun, 31 Jan 2021, 05:54 dangerzone ar, wrote: > > > Hi Slawek and Ruslans, thanks for the response. In this case the instance > > is connected to an unknown dhcp network and requests ip from that network > > to get access to the lan/internet. Thus, there is no known ip > > range/gw/dns/subnet that configures on the neutron network. From Ruslanas > > in the email, propose to set subnet as 0.0.0.0/0 and disable port > > security ( on the instance itself or on the network that should be > > disabled???). > > > > The use case is I have a computer that has been configured as an openstack > > and with 2 nic ports. An instance created with attached to eth0 and eth1. > > This is a mobile mini computer that is used for demo purposes... > > The management openstack is set to network (eth0) 192.168.100.0/24 and > > the second port eth1 is the one that will be connected to the external dhcp > > network. For example I bring the computer to customer site A for a demo... > > and connect eth0 to notebook for local management and then connect eth1 to > > site A to get ip from dhcp server for my instance to access lan/internet > > from site A network. > > I understand if we have the details of IP range/gw/etc that can be defined > > as a network, but the scenario here....we don't know the network and > > requesting ip from unknown dhcp network. > > Hope it gives some ideas... 
Thanks > > > > > > > > > On Fri, Jan 29, 2021 at 4:07 PM Slawek Kaplonski > > wrote: > >> Hi, > >> > >> On Fri, Jan 29, 2021 at 12:20:15PM +0800, dangerzone ar wrote: > >> > Hi, appreciate some advice on how instances in openstack get ip from > >> > external DHCP server. For example instance is attached to port eth1 > >> > (physical port) and this port is connected to home/office lan port and > >> > requests dhcp ip. How this can be achieved. > >> > ***User don't know the dhcp ip range/gw/dns that will be provided by the > >> > dhcp sever to that instance...instance just attach to eth1 and request > >> > ip.*** > >> > Similar like our pc/notebook request dhcp ip via wifi or lan port. > >> > How to establish this in openstack. Please advise and help me. > >> > Thank you > >> > >> You need to disable port security on such instance. Otherwise Neutron will > >> block traffic from such IP address which is unknown. > >> Or You need to add this IP address which VM get to the > >> allowed_address_pairs of > >> the VM's port. > >> Also, please keep in mind that You will have different IP associated to > >> that VM > >> in the Neutron, and that will be visible in OpenStack API and different > >> one will > >> be really used. > >> > >> -- > >> Slawek Kaplonski > >> Principal Software Engineer > >> Red Hat > >> > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From amy at demarco.com Sun Jan 31 21:37:58 2021 From: amy at demarco.com (Amy Marrich) Date: Sun, 31 Jan 2021 15:37:58 -0600 Subject: [Diversity] Diversity &Inclusion WG Meeting reminder Message-ID: The Diversity & Inclusion WG invites members of all OSF projects to our meeting tomorrow, at 17:00 UTC in the #openstack-diversity channel. The agenda can be found at https://etherpad.openstack.org/p/diversity-wg-agenda. Please feel free to add any other topics you wish to discuss at the meeting. Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: