From aj at suse.com Sun Mar 1 17:05:30 2020 From: aj at suse.com (Andreas Jaeger) Date: Sun, 1 Mar 2020 18:05:30 +0100 Subject: Retiring openstack/faafo repository Message-ID: <6f829fe2-5dc6-2748-9744-baaaf49446bf@suse.com>

This repo was part of the first-app document that was retired last year as part of the api-site retirement work. Thus, I'm retiring the openstack/faafo repository now.

Patches are pushed with topic:retire-faafo, starting at https://review.opendev.org/710652

Andreas
--
Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB

From yamamoto at midokura.com Mon Mar 2 02:33:57 2020 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Mon, 2 Mar 2020 11:33:57 +0900 Subject: [neutron] bug deputy report for 2020-02-24 week Message-ID:

2020-02-24 week

Someone familiar with these topics needs to investigate:

DHCP agent:
https://bugs.launchpad.net/neutron/+bug/1864711 DHCP port rescheduling causes ports to grow, internal DNS to be broken

WSGI:
https://bugs.launchpad.net/neutron/+bug/1864418 has wrong with use apache to start neutron api in docker container

L3 agent:
https://bugs.launchpad.net/neutron/+bug/1864963 loosing connectivity to instance with FloatingIP randomly

API server performance:
https://bugs.launchpad.net/neutron/+bug/1865223 [scale issue] regression for security group list between Newton and Rocky+

RFE:
https://bugs.launchpad.net/neutron/+bug/1864841 Neutron -> Designate integration does not consider the dns pool for zone

Medium:
https://bugs.launchpad.net/neutron/+bug/1864822 Openvswitch Agent - Connexion openvswitch DB Broken

Low:
https://bugs.launchpad.net/neutron/+bug/1864374 ml2 ovs does not flush iptables switching to FW ovs

OVN-specific stuff

High:
https://bugs.launchpad.net/neutron/+bug/1864620 [OVN] neutron_tempest_plugin.scenario.test_security_groups.NetworkSecGroupTest.test_multiple_ports_portrange_remote often fails

Medium:
https://bugs.launchpad.net/neutron/+bug/1864833 [OVN] Functional tests start with OVSDB binary 2.9 instead 2.12
https://bugs.launchpad.net/neutron/+bug/1864639 [OVN] UpdateLRouterPortCommand and AddLRouterPortCommand needs to specify network

Wishlist:
https://bugs.launchpad.net/neutron/+bug/1864640 [Ussuri] Neutron API writes to the Southbound DB

From eandersson at blizzard.com Mon Mar 2 03:03:51 2020 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Mon, 2 Mar 2020 03:03:51 +0000 Subject: [neutron] security group list regression In-Reply-To: References: , Message-ID:

When we went from Mitaka to Rocky in August last year, we saw an exponential increase in api times for listing security group rules. I think I last commented on this bug https://bugs.launchpad.net/neutron/+bug/1810563, but I have brought it up on a few other occasions as well.

Bug #1810563 “adding rules to security groups is slow” : Bugs : neutron
Sometime between liberty and pike, adding rules to SG's got slow, and slower with every rule added. Gerrit review with fixes is incoming. You can repro with a vanilla devstack install on master, and this script:

#!/bin/bash
OPENSTACK_TOKEN=$(openstack token issue | grep '| id' | awk '{print $4}')
export OPENSTACK_TOKEN
CCN1=10.210.162.2
CCN3=10.210.162.10
export ENDPOINT=localhost
make_rules() {
  iter=$1
  prefix=$2
  file="$3"
  echo "generating rules"
  cat >$file <<EOF
{...
bugs.launchpad.net

________________________________
From: Slawek Kaplonski
Sent: Saturday, February 29, 2020 12:44 AM
To: James Denton
Cc: openstack-discuss
Subject: Re: [neutron] security group list regression

Hi,

I just replied in Your bug report. Can You try to apply patch https://urldefense.com/v3/__https://review.opendev.org/*/c/708695/__;Iw!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8Ul-OlUqQ$ to see if that will help with this problem?

> On 29 Feb 2020, at 02:41, James Denton wrote:
>
> Hello all,
>
> We recently upgraded an environment from Newton -> Rocky, and have noticed a pretty severe regression in the time it takes the API to return the list of security groups. This environment has roughly 8,000+ security groups, and it takes nearly 75 seconds for the ‘openstack security group list’ command to complete. I don’t have actual data from the same environment running Newton, but was able to replicate this behavior with the following lab environments running a mix of virtual and baremetal machines:
>
> Newton (VM)
> Rocky (BM)
> Stein (VM)
> Train (BM)
>
> Number of sec grps vs time in seconds:
>
> #     Newton  Rocky  Stein  Train
> 200   4.1     3.7    5.4    5.2
> 500   5.3     7      11     9.4
> 1000  7.2     12.4   19.2   16
> 2000  9.2     24.2   35.3   30.7
> 3000  12.1    36.5   52     44
> 4000  16.1    47.2   73     58.9
> 5000  18.4    55     90     69
>
> As you can see (hopefully), the response time increased significantly between Newton and Rocky, and has grown slightly ever since. We don't know, yet, if this behavior can be seen with other 'list' commands or is limited to secgroups. We're currently verifying on some intermediate releases to see where things went wonky.
>
> There are some similar recent reports out in the wild with little feedback:
>
> https://urldefense.com/v3/__https://bugzilla.redhat.com/show_bug.cgi?id=1788749__;!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8Vx5jGlrA$
> https://urldefense.com/v3/__https://bugzilla.redhat.com/show_bug.cgi?id=1721273__;!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8U9NbN_LA$
>
> I opened a bug here, too:
>
> https://urldefense.com/v3/__https://bugs.launchpad.net/neutron/*bug/1865223__;Kw!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8UtMQ2-Dw$
>
> Bottom line: Has anyone else experienced similar regressions in recent releases? If so, were you able to address them with any sort of tuning?
>
> Thanks in advance,
> James
>

—
Slawek Kaplonski
Senior software engineer
Red Hat

From akekane at redhat.com Mon Mar 2 06:26:34 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Mon, 2 Mar 2020 11:56:34 +0530 Subject: [glance] Different checksum between CLI and curl In-Reply-To: <40790667-B696-4CBC-9CD2-41A684D97D64@inaugust.com> References: <5AC5FCDE-4F8E-478B-9BA0-34C527DDC2E2@inaugust.com> <10cb06508fa2146207462a9778253c22@incloudus.com> <40790667-B696-4CBC-9CD2-41A684D97D64@inaugust.com> Message-ID:

Hi Gaëtan,

Glance team doesn't recommend to use OSC anymore. I will recommend you to check the same behaviour using python-glanceclient.

Thanks & Best Regards,

Abhishek Kekane

On Sat, Feb 29, 2020 at 3:54 AM Monty Taylor wrote: > > > On Feb 28, 2020, at 4:15 PM, gaetan.trellu at incloudus.com wrote: > > > > Hey Monty, > > > > If I download the image via the CLI, the checksum of the file matches the checksum from the image details.
> > If I download the image via "curl", the "Content-Md5" header matches the > image details but the file checksum doesn't. > > > > The files have the same size, this is really weird. > > WOW. > > I still don’t know the issue - but my unfounded hunch is that the curl > command is likely not doing something it should be. If OSC is producing a > file that matches the image details, that seems like the right choice for > now. > > Seriously fascinating though. > > > Gaëtan > > > > On 2020-02-28 17:00, Monty Taylor wrote: > >>> On Feb 28, 2020, at 2:29 PM, gaetan.trellu at incloudus.com wrote: > >>> Hi guys, > >>> Does anyone know why the md5 checksum is different between the > "openstack image save" CLI and "curl" commands? > >>> During the image creation a checksum is computed to check the image > integrity, using the "openstack" CLI match the checksum generated but when > "curl" is used by following the API documentation[1] the checksum change at > every "download". > >>> Any idea? > >> That seems strange. I don’t know off the top of my head. I do know > >> Artem has patches up to switch OSC to using SDK for image operations. > >> https://review.opendev.org/#/c/699416/ > >> That said, I’d still expect current OSC checksums to be solid. Perhaps > >> there is some filtering/processing being done cloud-side in your > >> glance? If you download the image to a file and run a checksum on it - > >> does it match the checksum given by OSC on upload? Or the checksum > >> given by glance API on download? > >>> Thanks, > >>> Gaëtan > >>> [1] > https://docs.openstack.org/api-ref/image/v2/index.html?expanded=download-binary-image-data-detail#download-binary-image-data > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon Mar 2 08:03:53 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 2 Mar 2020 09:03:53 +0100 Subject: [glance] Different checksum between CLI and curl In-Reply-To: References: <5AC5FCDE-4F8E-478B-9BA0-34C527DDC2E2@inaugust.com> <10cb06508fa2146207462a9778253c22@incloudus.com> <40790667-B696-4CBC-9CD2-41A684D97D64@inaugust.com> Message-ID: Hi Abhishek, Abhishek Kekane wrote: > Glance team doesn't recommend to use OSC anymore. > I will recommend you to check the same behaviour using python-glanceclient. Whoa, so OSC suddenly fell out of support? When did that happen? I thought OSC is the future, not the other way around... -yoctozepto From mark at stackhpc.com Mon Mar 2 09:54:03 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 2 Mar 2020 09:54:03 +0000 Subject: [glance] Different checksum between CLI and curl In-Reply-To: References: <5AC5FCDE-4F8E-478B-9BA0-34C527DDC2E2@inaugust.com> <10cb06508fa2146207462a9778253c22@incloudus.com> <40790667-B696-4CBC-9CD2-41A684D97D64@inaugust.com> Message-ID: On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane wrote: > > Hi Gaëtan, > > Glance team doesn't recommend to use OSC anymore. > I will recommend you to check the same behaviour using python-glanceclient. That's not cool - everyone has switched to OSC. It's also the first time I've heard of it. > > Thanks & Best Regards, > > Abhishek Kekane > > > On Sat, Feb 29, 2020 at 3:54 AM Monty Taylor wrote: >> >> >> >> > On Feb 28, 2020, at 4:15 PM, gaetan.trellu at incloudus.com wrote: >> > >> > Hey Monty, >> > >> > If I download the image via the CLI, the checksum of the file matches the checksum from the image details. 
>> > If I download the image via "curl", the "Content-Md5" header matches the image details but the file checksum doesn't. >> > >> > The files have the same size, this is really weird. >> >> WOW. >> >> I still don’t know the issue - but my unfounded hunch is that the curl command is likely not doing something it should be. If OSC is producing a file that matches the image details, that seems like the right choice for now. >> >> Seriously fascinating though. >> >> > Gaëtan >> > >> > On 2020-02-28 17:00, Monty Taylor wrote: >> >>> On Feb 28, 2020, at 2:29 PM, gaetan.trellu at incloudus.com wrote: >> >>> Hi guys, >> >>> Does anyone know why the md5 checksum is different between the "openstack image save" CLI and "curl" commands? >> >>> During the image creation a checksum is computed to check the image integrity; using the "openstack" CLI matches the checksum generated, but when "curl" is used by following the API documentation[1] the checksum changes at every "download". >> >>> Any idea? >> >> That seems strange. I don’t know off the top of my head. I do know >> >> Artem has patches up to switch OSC to using SDK for image operations. >> >> https://review.opendev.org/#/c/699416/ >> >> That said, I’d still expect current OSC checksums to be solid. Perhaps >> >> there is some filtering/processing being done cloud-side in your >> >> glance? If you download the image to a file and run a checksum on it - >> >> does it match the checksum given by OSC on upload? Or the checksum >> >> given by glance API on download? >> >>> Thanks, >> >>> Gaëtan >> >>> [1] https://docs.openstack.org/api-ref/image/v2/index.html?expanded=download-binary-image-data-detail#download-binary-image-data >> > >> >>

From ralonsoh at redhat.com Mon Mar 2 11:26:33 2020 From: ralonsoh at redhat.com (Rodolfo Alonso) Date: Mon, 02 Mar 2020 11:26:33 +0000 Subject: [neutron] security group list regression In-Reply-To: References: , Message-ID:

Hello James:

Just to make a quick summary of the status of the commented bugs/regressions:

1) https://bugs.launchpad.net/neutron/+bug/1810563: adding rules to security groups is slow
That was addressed in https://review.opendev.org/#/c/633145/ and https://review.opendev.org/#/c/637407/, removing the O^2 check and using lazy loading.

2) https://bugzilla.redhat.com/show_bug.cgi?id=1788749: Neutron List networks API regression
The last reply was marked as private. I've undone this and you can now read c#2. Testing with a similar scenario, I don't see any performance degradation between Queens and Train.

3) https://bugzilla.redhat.com/show_bug.cgi?id=1721273: Neutron API List Ports Performance regression
That problem was solved in https://review.opendev.org/#/c/667981/ and https://review.opendev.org/#/c/667998/, by refactoring how the port QoS extension was reading and applying the QoS info in the port dict.

4) https://bugs.launchpad.net/neutron/+bug/1865223: regression for security group list between Newton and Rocky+
This is similar to https://bugs.launchpad.net/neutron/+bug/1863201. In this case, the regression was detected from R to S. The performance dropped from 3 secs to 110 secs (36x). That issue was addressed by https://review.opendev.org/#/c/708695/. But while 1865223 is talking about *SG list*, 1863201 is related to *SG rule list*. I would like to make this differentiation, because the two retrieval commands are not related. In this bug (1863201), the performance degradation multiplies the initial time by x3 (N->Q).
This could be caused by the OVO integration (O->P: https://review.opendev.org/#/c/284738/). Instead of using the DB object, we now make this call using the OVO object containing the DB register (something like a DB view). That's something I still need to check.

To be concrete: patch 708695 improves the *SG rule* retrieval, not the SG list command. Another point is that this patch will help in the case of having a balance between SG rules and SGs: it helps to retrieve from the DB only those SG rules belonging to the project. If, as you state in https://bugs.launchpad.net/neutron/+bug/1865223/comments/4, most of those SG rules belong to the same project, there is little improvement there.

As commented, I'm still looking at improving the SG OVO performance.

Regards

On Mon, 2020-03-02 at 03:03 +0000, Erik Olof Gunnar Andersson wrote: > When we went from Mitaka to Rocky in August last year, we saw an exponential increase in api > times for listing security group rules. > > I think I last commented on this bug https://bugs.launchpad.net/neutron/+bug/1810563, but I have > brought it up on a few other occasions as well. > Bug #1810563 “adding rules to security groups is slow” : Bugs : neutron Sometime between liberty > and pike, adding rules to SG's got slow, and slower with every rule added. Gerrit review with > fixes is incoming. You can repro with a vanilla devstack install on master, and this script: > #!/bin/bash OPENSTACK_TOKEN=$(openstack token issue | grep '| id' | awk '{print $4}') export > OPENSTACK_TOKEN CCN1=10.210.162.2 CCN3=10.210.162.10 export ENDPOINT=localhost make_rules() { > iter=$1 prefix=$2 file="$3" echo "generating rules" cat >$file <<EOF > {... bugs.launchpad.net > > > From: Slawek Kaplonski > Sent: Saturday, February 29, 2020 12:44 AM > To: James Denton > Cc: openstack-discuss > Subject: Re: [neutron] security group list regression > > Hi, > > I just replied in Your bug report. Can You try to apply patch > https://urldefense.com/v3/__https://review.opendev.org/*/c/708695/__;Iw!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8Ul-OlUqQ$ > to see if that will help with this problem? > > > On 29 Feb 2020, at 02:41, James Denton wrote: > > > > Hello all, > > > > We recently upgraded an environment from Newton -> Rocky, and have noticed a pretty severe > regression in the time it takes the API to return the list of security groups. This environment > has roughly 8,000+ security groups, and it takes nearly 75 seconds for the ‘openstack security > group list’ command to complete. I don’t have actual data from the same environment running > Newton, but was able to replicate this behavior with the following lab environments running a mix > of virtual and baremetal machines: > > > > Newton (VM) > > Rocky (BM) > > Stein (VM) > > Train (BM) > > > > Number of sec grps vs time in seconds: > >
> > #     Newton  Rocky  Stein  Train
> > 200   4.1     3.7    5.4    5.2
> > 500   5.3     7      11     9.4
> > 1000  7.2     12.4   19.2   16
> > 2000  9.2     24.2   35.3   30.7
> > 3000  12.1    36.5   52     44
> > 4000  16.1    47.2   73     58.9
> > 5000  18.4    55     90     69
> >
> > As you can see (hopefully), the response time increased significantly between Newton and Rocky, > and has grown slightly ever since. We don't know, yet, if this behavior can be seen with other > 'list' commands or is limited to secgroups. We're currently verifying on some intermediate > releases to see where things went wonky.
> > > > There are some similar recent reports out in the wild with little feedback: > > > > > https://urldefense.com/v3/__https://bugzilla.redhat.com/show_bug.cgi?id=1788749__;!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8Vx5jGlrA$ > > > > https://urldefense.com/v3/__https://bugzilla.redhat.com/show_bug.cgi?id=1721273__;!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8U9NbN_LA$ > > > > > I opened a bug here, too: > > > > > https://urldefense.com/v3/__https://bugs.launchpad.net/neutron/*bug/1865223__;Kw!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8UtMQ2-Dw$ > > > > > Bottom line: Has anyone else experienced similar regressions in recent releases? If so, were you > able to address them with any sort of tuning? > > > > Thanks in advance, > > James > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > From james.denton at rackspace.com Mon Mar 2 14:27:49 2020 From: james.denton at rackspace.com (James Denton) Date: Mon, 2 Mar 2020 14:27:49 +0000 Subject: [neutron] security group list regression In-Reply-To: References: Message-ID: Thanks, Rodolfo. I'll take a look at each of these after coffee and clarify my position (if needed). James On 3/2/20, 6:27 AM, "Rodolfo Alonso" wrote: CAUTION: This message originated externally, please use caution when clicking on links or opening attachments! Hello James: Just to make a quick summary of the status of the commented bugs/regressions: 1) https://bugs.launchpad.net/neutron/+bug/1810563: adding rules to security groups is slow That was addressed in https://review.opendev.org/#/c/633145/ and https://review.opendev.org/#/c/637407/, removing the O^2 check and using lazy loading. 2) https://bugzilla.redhat.com/show_bug.cgi?id=1788749: Neutron List networks API regression The last reply was marked as private. I've undone this and you can read now c#2. Testing with a similar scenario, I don't see any performance degradation between Queens and Train. 3) https://bugzilla.redhat.com/show_bug.cgi?id=1721273: Neutron API List Ports Performance regression That problem was solved in https://review.opendev.org/#/c/667981/ and https://review.opendev.org/#/c/667998/, by refactoring how the port QoS extension was reading and applying the QoS info in the port dict. 4) https://bugs.launchpad.net/neutron/+bug/1865223: regression for security group list between Newton and Rocky+ This is similar to https://bugs.launchpad.net/neutron/+bug/1863201. In this case, the regression was detected from R to S. The performance dropped from 3 secs to 110 secs (36x). That issue was addressed by https://review.opendev.org/#/c/708695/. But while 1865223 is talking about *SG list*, 1863201 is related to *SG rule list*. I would like to make this differentiation, because both retrieval commands are not related. In this bug (1863201), the performance degradation multiplies by x3 (N->Q) the initial time. This could be caused by the OVO integration (O->P: https://review.opendev.org/#/c/284738/). Instead of using the DB object now we make this call using the OVO object containing the DB register (something like a DB view). That's something I still need to check. Just to make a concretion: the patch 708695 improves the *SG rule* retrieval, not the SG list command. Another punctualization is that this patch will help in the case of having a balance between SG rules and SG. This patch will help to retrieve from the DB only those SG rules belonging to the project. 
If, as you state in https://bugs.launchpad.net/neutron/+bug/1865223/comments/4, most of those SG rules belong to the same project, there is little improvement there. As commented, I'm still looking at improving the SG OVO performance. Regards On Mon, 2020-03-02 at 03:03 +0000, Erik Olof Gunnar Andersson wrote: > When we went from Mitaka to Rocky in August last year and we saw an exponential increase in api > times for listing security group rules. > > I think I last commented on this bug https://bugs.launchpad.net/neutron/+bug/1810563, but I have > brought it up on a few other occasions as well. > Bug #1810563 “adding rules to security groups is slow” : Bugs : neutron Sometime between liberty > and pike, adding rules to SG's got slow, and slower with every rule added. Gerrit review with > fixes is incoming. You can repro with a vanilla devstack install on master, and this script: > #!/bin/bash OPENSTACK_TOKEN=$(openstack token issue | grep '| id' | awk '{print $4}') export > OPENSTACK_TOKEN CCN1=10.210.162.2 CCN3=10.210.162.10 export ENDPOINT=localhost make_rules() { > iter=$1 prefix=$2 file="$3" echo "generating rules" cat >$file <<EOF > {... bugs.launchpad.net > > > From: Slawek Kaplonski > Sent: Saturday, February 29, 2020 12:44 AM > To: James Denton > Cc: openstack-discuss > Subject: Re: [neutron] security group list regression > > Hi, > > I just replied in Your bug report. Can You try to apply patch > https://urldefense.com/v3/__https://review.opendev.org/*/c/708695/__;Iw!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8Ul-OlUqQ$ > to see if that will help with this problem? > > > On 29 Feb 2020, at 02:41, James Denton wrote: > > > > Hello all, > > > > We recently upgraded an environment from Newton -> Rocky, and have noticed a pretty severe > regression in the time it takes the API to return the list of security groups. This environment > has roughly 8,000+ security groups, and it takes nearly 75 seconds for the ‘openstack security > group list’ command to complete. I don’t have actual data from the same environment running > Newton, but was able to replicate this behavior with the following lab environments running a mix > of virtual and baremetal machines: > > > > Newton (VM) > > Rocky (BM) > > Stein (VM) > > Train (BM) > > > > Number of sec grps vs time in seconds: > > > > # Newton Rocky Stein Train > > 200 4.1 3.7 5.4 5.2 > > 500 5.3 7 11 9.4 > > 1000 7.2 12.4 19.2 16 > > 2000 9.2 24.2 35.3 30.7 > > 3000 12.1 36.5 52 44 > > 4000 16.1 47.2 73 58.9 > > 5000 18.4 55 90 69 > > > > As you can see (hopefully), the response time increased significantly between Newton and Rocky, > and has grown slightly ever since. We don't know, yet, if this behavior can be seen with other > 'list' commands or is limited to secgroups. We're currently verifying on some intermediate > releases to see where things went wonky. 
> > > > There are some similar recent reports out in the wild with little feedback: > > > > > https://urldefense.com/v3/__https://bugzilla.redhat.com/show_bug.cgi?id=1788749__;!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8Vx5jGlrA$ > > > > https://urldefense.com/v3/__https://bugzilla.redhat.com/show_bug.cgi?id=1721273__;!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8U9NbN_LA$ > > > > > I opened a bug here, too: > > > > > https://urldefense.com/v3/__https://bugs.launchpad.net/neutron/*bug/1865223__;Kw!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8UtMQ2-Dw$ > > > > > Bottom line: Has anyone else experienced similar regressions in recent releases? If so, were you > able to address them with any sort of tuning? > > > > Thanks in advance, > > James > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > >

From gaetan.trellu at incloudus.com Mon Mar 2 15:11:16 2020 From: gaetan.trellu at incloudus.com (gaetan.trellu at incloudus.com) Date: Mon, 02 Mar 2020 10:11:16 -0500 Subject: [glance] Different checksum between CLI and curl In-Reply-To: References: <5AC5FCDE-4F8E-478B-9BA0-34C527DDC2E2@inaugust.com> <10cb06508fa2146207462a9778253c22@incloudus.com> <40790667-B696-4CBC-9CD2-41A684D97D64@inaugust.com> Message-ID:

Abhishek,

Thanks for your answer, I tried both CLIs (Train release) and the issue is still the same.

Paste of the "curl" command: http://paste.openstack.org/show/790197/

Result of the "md5sum" on the file created by the "curl":

$ md5sum /tmp/kernel.glance
c3726de8e03158305453f328d85e9957  /tmp/kernel.glance

Like Mark and Radoslaw, I'm quite surprised about OSC being deprecated. Do you have any release note about this?

Thanks for your help.

Gaëtan

curl -g -i -X GET http://10.0.0.11:9292/v2/images/de39fc9c-b943-47e3-82c4-bd6d577a9577/file -H "Content-Type: application/octet-stream" -H "User-Agent: python-glanceclient" -H "X-Auth-Token: $token" --output /tmp/kernel.glance -v

Note: Unnecessary use of -X or --request, GET is already inferred.
* Expire in 0 ms for 6 (transfer 0x557679b1de80)
*   Trying 10.0.0.11...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x557679b1de80)
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0* Connected to 10.0.0.11 (10.0.0.11) port 9292 (#0)
> GET /v2/images/de39fc9c-b943-47e3-82c4-bd6d577a9577/file HTTP/1.1
> Host: 10.0.0.11:9292
> Accept: */*
> Content-Type: application/octet-stream
> User-Agent: python-glanceclient
> X-Auth-Token: gAAAAABeXRzKVS3uQIIv9t-wV7njIV-T9HIvcwFqcHNivrpyBlesDtgAj1kpWk59a20EJLUo8IeHpTdKgVFwhnAVvbSWHY-HQvxu5dwSFsK4A-7CzAOwdp3svSqxB-FdwWhsY_PElftMX4geA-y_YFZJamefZapiAv6g1gSm-BSv5GYQ0hj3yGY
>
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0< HTTP/1.1 200 OK
< Content-Type: application/octet-stream
< Content-Md5: 26c6d5c3d8ba9fd4bc4d1ee5959a827c
< Content-Length: 5631728
< X-Openstack-Request-Id: req-e7ba2455-780f-48a8-b6a2-1c6741d0e368
< Date: Mon, 02 Mar 2020 15:03:53 GMT
<
{ [32768 bytes data]
100 5499k  100 5499k    0     0  4269k      0  0:00:01  0:00:01 --:--:-- 4269k
* Connection #0 to host 10.0.0.11 left intact
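For completeness, here is a way to capture the response headers and the body from the same request and compare them directly - just a sketch reusing the image ID above, adjust the token/URL for your environment:

token=$(openstack token issue -f value -c id)
# -D writes the response headers to a file, so Content-Md5 and the
# downloaded body come from the very same request
curl -s -D /tmp/kernel.headers \
  -H "X-Auth-Token: $token" \
  http://10.0.0.11:9292/v2/images/de39fc9c-b943-47e3-82c4-bd6d577a9577/file \
  --output /tmp/kernel.glance
grep -i content-md5 /tmp/kernel.headers
md5sum /tmp/kernel.glance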
On 2020-03-02 04:54, Mark Goddard wrote: > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane wrote: >> Hi Gaëtan, >> Glance team doesn't recommend to use OSC anymore. >> I will recommend you to check the same behaviour using python-glanceclient. > That's not cool - everyone has switched to OSC. It's also the first > time I've heard of it. > >> Thanks & Best Regards, >> Abhishek Kekane >> >> On Sat, Feb 29, 2020 at 3:54 AM Monty Taylor wrote: >>> >>> >>> > On Feb 28, 2020, at 4:15 PM, gaetan.trellu at incloudus.com wrote: >>> > >>> > Hey Monty, >>> > >>> > If I download the image via the CLI, the checksum of the file matches the checksum from the image details. >>> > If I download the image via "curl", the "Content-Md5" header matches the image details but the file checksum doesn't. >>> > >>> > The files have the same size, this is really weird. >>> >>> WOW. >>> >>> I still don’t know the issue - but my unfounded hunch is that the curl command is likely not doing something it should be. If OSC is producing a file that matches the image details, that seems like the right choice for now. >>> >>> Seriously fascinating though. >>> >>> > Gaëtan >>> > >>> > On 2020-02-28 17:00, Monty Taylor wrote: >>> >>> On Feb 28, 2020, at 2:29 PM, gaetan.trellu at incloudus.com wrote: >>> >>> Hi guys, >>> >>> Does anyone know why the md5 checksum is different between the "openstack image save" CLI and "curl" commands? >>> >>> During the image creation a checksum is computed to check the image integrity, using the "openstack" CLI match the checksum generated but when "curl" is used by following the API documentation[1] the checksum change at every "download". >>> >>> Any idea? >>> >> That seems strange. I don’t know off the top of my head. I do know >>> >> Artem has patches up to switch OSC to using SDK for image operations. >>> >> https://review.opendev.org/#/c/699416/ >>> >> That said, I’d still expect current OSC checksums to be solid. Perhaps >>> >> there is some filtering/processing being done cloud-side in your >>> >> glance? If you download the image to a file and run a checksum on it - >>> >> does it match the checksum given by OSC on upload? Or the checksum >>> >> given by glance API on download? >>> >>> Thanks, >>> >>> Gaëtan >>> >>> [1] https://docs.openstack.org/api-ref/image/v2/index.html?expanded=download-binary-image-data-detail#download-binary-image-data >>> > >>> >>>

From ltoscano at redhat.com Mon Mar 2 15:25:02 2020 From: ltoscano at redhat.com (Luigi Toscano) Date: Mon, 02 Mar 2020 16:25:02 +0100 Subject: [glance] Different checksum between CLI and curl In-Reply-To: References: Message-ID: <5594320.rlFBNPpyZN@whitebase.usersys.redhat.com>

On Monday, 2 March 2020 10:54:03 CET Mark Goddard wrote: > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane wrote: > > Hi Gaëtan, > > > > Glance team doesn't recommend to use OSC anymore. > > I will recommend you to check the same behaviour using > > python-glanceclient. > > That's not cool - everyone has switched to OSC. It's also the first > time I've heard of it. >

Do we have proper microversion support then? This is a blocker for cinder.

More generally I observed a disconnection between the needs of a few teams (Cinder and Glance for sure) and OSC, with a real split on the community and no apparent interest in trying to bridge the gap, which is very sad.

--
Luigi

From flux.adam at gmail.com Mon Mar 2 15:27:08 2020 From: flux.adam at gmail.com (Adam Harwell) Date: Tue, 3 Mar 2020 00:27:08 +0900 Subject: [glance] Different checksum between CLI and curl In-Reply-To: References: <5AC5FCDE-4F8E-478B-9BA0-34C527DDC2E2@inaugust.com> <10cb06508fa2146207462a9778253c22@incloudus.com> <40790667-B696-4CBC-9CD2-41A684D97D64@inaugust.com> Message-ID:

I've heard this from members of the glance team for the past few (maybe 3?)
summits at least, and every time I try to correct them, but it feels like talking to a brick wall. OSC is the future direction, it should be supported, and there's not even any ambiguity that I'm aware of, other than some strange refusal to accept reality on behalf of a few people... Yes, my tone here is definitely a little aggressive, but this has been an ongoing frustration of mine as I've been hearing this for a while and it's actively harmful and misleading, so I won't apologize for it. Some folks need to wake up. <_< --Adam On Mon, Mar 2, 2020, 19:00 Mark Goddard wrote: > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane wrote: > > > > Hi Gaëtan, > > > > Glance team doesn't recommend to use OSC anymore. > > I will recommend you to check the same behaviour using > python-glanceclient. > > That's not cool - everyone has switched to OSC. It's also the first > time I've heard of it. > > > > > Thanks & Best Regards, > > > > Abhishek Kekane > > > > > > On Sat, Feb 29, 2020 at 3:54 AM Monty Taylor > wrote: > >> > >> > >> > >> > On Feb 28, 2020, at 4:15 PM, gaetan.trellu at incloudus.com wrote: > >> > > >> > Hey Monty, > >> > > >> > If I download the image via the CLI, the checksum of the file matches > the checksum from the image details. > >> > If I download the image via "curl", the "Content-Md5" header matches > the image details but the file checksum doesn't. > >> > > >> > The files have the same size, this is really weird. > >> > >> WOW. > >> > >> I still don’t know the issue - but my unfounded hunch is that the curl > command is likely not doing something it should be. If OSC is producing a > file that matches the image details, that seems like the right choice for > now. > >> > >> Seriously fascinating though. > >> > >> > Gaëtan > >> > > >> > On 2020-02-28 17:00, Monty Taylor wrote: > >> >>> On Feb 28, 2020, at 2:29 PM, gaetan.trellu at incloudus.com wrote: > >> >>> Hi guys, > >> >>> Does anyone know why the md5 checksum is different between the > "openstack image save" CLI and "curl" commands? > >> >>> During the image creation a checksum is computed to check the image > integrity, using the "openstack" CLI match the checksum generated but when > "curl" is used by following the API documentation[1] the checksum change at > every "download". > >> >>> Any idea? > >> >> That seems strange. I don’t know off the top of my head. I do know > >> >> Artem has patches up to switch OSC to using SDK for image operations. > >> >> https://review.opendev.org/#/c/699416/ > >> >> That said, I’d still expect current OSC checksums to be solid. > Perhaps > >> >> there is some filtering/processing being done cloud-side in your > >> >> glance? If you download the image to a file and run a checksum on it > - > >> >> does it match the checksum given by OSC on upload? Or the checksum > >> >> given by glance API on download? > >> >>> Thanks, > >> >>> Gaëtan > >> >>> [1] > https://docs.openstack.org/api-ref/image/v2/index.html?expanded=download-binary-image-data-detail#download-binary-image-data > >> > > >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mordred at inaugust.com Mon Mar 2 15:41:53 2020 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 2 Mar 2020 09:41:53 -0600 Subject: [glance] Different checksum between CLI and curl In-Reply-To: References: <5AC5FCDE-4F8E-478B-9BA0-34C527DDC2E2@inaugust.com> <10cb06508fa2146207462a9778253c22@incloudus.com> <40790667-B696-4CBC-9CD2-41A684D97D64@inaugust.com> Message-ID: > On Mar 2, 2020, at 9:27 AM, Adam Harwell wrote: > > I've heard this from members of the glance team for the past few (maybe 3?) summits at least, and every time I try to correct them, but it feels like talking to a brick wall. OSC is the future direction, it should be supported, and there's not even any ambiguity that I'm aware of, other than some strange refusal to accept reality on behalf of a few people... > > Yes, my tone here is definitely a little aggressive, but this has been an ongoing frustration of mine as I've been hearing this for a while and it's actively harmful and misleading, so I won't apologize for it. Some folks need to wake up. <_ I fully agree. Users should not use python-glanceclient, they should use OSC. If there are bugs in OSC, we will fix them. > --Adam > > On Mon, Mar 2, 2020, 19:00 Mark Goddard wrote: > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane wrote: > > > > Hi Gaëtan, > > > > Glance team doesn't recommend to use OSC anymore. > > I will recommend you to check the same behaviour using python-glanceclient. > > That's not cool - everyone has switched to OSC. It's also the first > time I've heard of it. > > > > > Thanks & Best Regards, > > > > Abhishek Kekane > > > > > > On Sat, Feb 29, 2020 at 3:54 AM Monty Taylor wrote: > >> > >> > >> > >> > On Feb 28, 2020, at 4:15 PM, gaetan.trellu at incloudus.com wrote: > >> > > >> > Hey Monty, > >> > > >> > If I download the image via the CLI, the checksum of the file matches the checksum from the image details. > >> > If I download the image via "curl", the "Content-Md5" header matches the image details but the file checksum doesn't. > >> > > >> > The files have the same size, this is really weird. > >> > >> WOW. > >> > >> I still don’t know the issue - but my unfounded hunch is that the curl command is likely not doing something it should be. If OSC is producing a file that matches the image details, that seems like the right choice for now. > >> > >> Seriously fascinating though. > >> > >> > Gaëtan > >> > > >> > On 2020-02-28 17:00, Monty Taylor wrote: > >> >>> On Feb 28, 2020, at 2:29 PM, gaetan.trellu at incloudus.com wrote: > >> >>> Hi guys, > >> >>> Does anyone know why the md5 checksum is different between the "openstack image save" CLI and "curl" commands? > >> >>> During the image creation a checksum is computed to check the image integrity, using the "openstack" CLI match the checksum generated but when "curl" is used by following the API documentation[1] the checksum change at every "download". > >> >>> Any idea? > >> >> That seems strange. I don’t know off the top of my head. I do know > >> >> Artem has patches up to switch OSC to using SDK for image operations. > >> >> https://review.opendev.org/#/c/699416/ > >> >> That said, I’d still expect current OSC checksums to be solid. Perhaps > >> >> there is some filtering/processing being done cloud-side in your > >> >> glance? If you download the image to a file and run a checksum on it - > >> >> does it match the checksum given by OSC on upload? Or the checksum > >> >> given by glance API on download? 
> >> >>> Thanks, > >> >>> Gaëtan > >> >>> [1] https://docs.openstack.org/api-ref/image/v2/index.html?expanded=download-binary-image-data-detail#download-binary-image-data > >> > > >> > >> >

From smooney at redhat.com Mon Mar 2 15:47:37 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 02 Mar 2020 15:47:37 +0000 Subject: [glance] Different checksum between CLI and curl In-Reply-To: <5594320.rlFBNPpyZN@whitebase.usersys.redhat.com> References: <5594320.rlFBNPpyZN@whitebase.usersys.redhat.com> Message-ID:

On Mon, 2020-03-02 at 16:25 +0100, Luigi Toscano wrote: > On Monday, 2 March 2020 10:54:03 CET Mark Goddard wrote: > > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane wrote: > > > Hi Gaëtan, > > > > > > Glance team doesn't recommend to use OSC anymore. > > > I will recommend you to check the same behaviour using > > > python-glanceclient. > > > > That's not cool - everyone has switched to OSC. It's also the first > > time I've heard of it. > > > > Do we have proper microversion support then? This is a blocker for cinder. osc supports microversions but not the auto negotiation. the microversion support is fully integrated with the help text generation and you can specify the desired microversion with the --os-<service>-api-version=X option when using osc. > > More generally I observed a disconnection between the needs of a few teams > (Cinder and Glance for sure) and OSC, with a real split on the community and > no apparent interest in trying to bridge the gap, which is very sad. i know that it was hoped that using the sdk in osc would allow osc to inherit the auto negotiation capability, but even without that it does fully support microversions, it just does not have the same behavior as the legacy project clients. i actually thought there was a cross project goal/resolution to move all projects to osc a few releases ago, so if glance are considering deprecating their osc support i think that is problematic and needs a much stronger justification than automatic microversion support. i know there has been an ongoing effort over multiple cycles to stop using the project clients in all documentation where it is possible to use osc instead, so it really feels like this would be a regression. >

From dtantsur at redhat.com Mon Mar 2 15:49:35 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 2 Mar 2020 16:49:35 +0100 Subject: [glance] Different checksum between CLI and curl In-Reply-To: <5594320.rlFBNPpyZN@whitebase.usersys.redhat.com> References: <5594320.rlFBNPpyZN@whitebase.usersys.redhat.com> Message-ID:

Hi,

On Mon, Mar 2, 2020 at 4:29 PM Luigi Toscano wrote: > On Monday, 2 March 2020 10:54:03 CET Mark Goddard wrote: > > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane wrote: > > > Hi Gaëtan, > > > > > > Glance team doesn't recommend to use OSC anymore. > > > I will recommend you to check the same behaviour using > > > python-glanceclient. > > > > That's not cool - everyone has switched to OSC. It's also the first > > time I've heard of it. > > > > Do we have proper microversion support then? This is a blocker for cinder. >

The ironic team has been successfully hacking around the absence of a native microversion support for a while. We use ironicclient instead of openstacksdk, which makes things harder. If you use openstacksdk, it's easier to teach it microversions. In any case, I can provide some guidance if you'd like to.
Dmitry > > More generally I observed a disconnection between the needs of a few teams > > (Cinder and Glance for sure) and OSC, with a real split on the community and > > no apparent interest in trying to bridge the gap, which is very sad. > > -- > > Luigi > >

From smooney at redhat.com Mon Mar 2 16:37:43 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 02 Mar 2020 16:37:43 +0000 Subject: [glance] Different checksum between CLI and curl In-Reply-To: References: <5594320.rlFBNPpyZN@whitebase.usersys.redhat.com> Message-ID:

On Mon, 2020-03-02 at 16:49 +0100, Dmitry Tantsur wrote: > Hi, > > On Mon, Mar 2, 2020 at 4:29 PM Luigi Toscano wrote: > > > On Monday, 2 March 2020 10:54:03 CET Mark Goddard wrote: > > > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane wrote: > > > > Hi Gaëtan, > > > > > > > > Glance team doesn't recommend to use OSC anymore. > > > > I will recommend you to check the same behaviour using > > > > python-glanceclient. > > > > > > That's not cool - everyone has switched to OSC. It's also the first > > > time I've heard of it. > > > > > > > Do we have proper microversion support then? This is a blocker for cinder. > > > > The ironic team has been successfully hacking around the absence of a > > native microversion support for a while. We use ironicclient instead of > > openstacksdk, which makes things harder. If you use openstacksdk, it's > > easier to teach it microversions. In any case, I can provide some guidance > > if you'd like to. > > > > Dmitry that is also problematic. by hacking around it it gives the ironic command a different behavior to the rest of osc. osc does support microversions, it just does not support the automatic version negotiation, which is what you are hacking in. i do agree that it would be nice to have support for version negotiation whereby you could do something like --os-compute-api-version=auto to opt in to it, but automatic microversion detection does make it harder to do help text generation unless you make "openstack --cloud=my-cloud --os-compute-api-version=auto help server create" call out to keystone, get the nova endpoint and then look up its max microversion when you render the help text. with that said, if adding --os-image-api-version=auto was enough to get the glance team to fully adopt osc then i think that would be better than partitioning the community between osc and legacy clients. osc should behave consistently for all projects however, so adding negotiation for ironic and not for other services is not a good thing imo, but i guess you were able to do that as ironic is integrated as a plugin, correct? > > > > > > > > > More generally I observed a disconnection between the needs of a few teams
> > > > -- > > Luigi > > > > > > > > From tim.bell at cern.ch Mon Mar 2 16:59:24 2020 From: tim.bell at cern.ch (Tim Bell) Date: Mon, 2 Mar 2020 17:59:24 +0100 Subject: [glance] Different checksum between CLI and curl In-Reply-To: References: <5594320.rlFBNPpyZN@whitebase.usersys.redhat.com> Message-ID: <17374BC0-3B5F-49F0-A747-B4D04ABD64C1@cern.ch> > On 2 Mar 2020, at 16:49, Dmitry Tantsur wrote: > > Hi, > > On Mon, Mar 2, 2020 at 4:29 PM Luigi Toscano > wrote: > On Monday, 2 March 2020 10:54:03 CET Mark Goddard wrote: > > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane > wrote: > > > Hi Gaëtan, > > > > > > Glance team doesn't recommend to use OSC anymore. > > > I will recommend you to check the same behaviour using > > > python-glanceclient. > > > > That's not cool - everyone has switched to OSC. It's also the first > > time I've heard of it. > > > From the end user perspective, we’ve had positive feedback on the convergence to OSC from our cloud consumers. There has been great progress with Manila to get shares included (https://review.opendev.org/#/c/642222/26/ ) and it would be a pity if we’re asking our end users to understand all of the different project names and inconsistent options/arguments/syntax. We had hoped for a project goal to get everyone aligned on OSC but there was not consensus on this, I’d still encourage it to simplify the experience for OpenStack cloud consumers. Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon Mar 2 17:07:00 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 2 Mar 2020 18:07:00 +0100 Subject: [glance] Different checksum between CLI and curl In-Reply-To: <17374BC0-3B5F-49F0-A747-B4D04ABD64C1@cern.ch> References: <5594320.rlFBNPpyZN@whitebase.usersys.redhat.com> <17374BC0-3B5F-49F0-A747-B4D04ABD64C1@cern.ch> Message-ID: Folks, sorry to interrupt but I think we have diverged a bit too much from the subject. Only last Gaetan message is on topic here. Please switch to new subject to discuss OSC future. -yoctozepto pon., 2 mar 2020 o 18:03 Tim Bell napisał(a): > > > > On 2 Mar 2020, at 16:49, Dmitry Tantsur wrote: > > Hi, > > On Mon, Mar 2, 2020 at 4:29 PM Luigi Toscano wrote: >> >> On Monday, 2 March 2020 10:54:03 CET Mark Goddard wrote: >> > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane wrote: >> > > Hi Gaëtan, >> > > >> > > Glance team doesn't recommend to use OSC anymore. >> > > I will recommend you to check the same behaviour using >> > > python-glanceclient. >> > >> > That's not cool - everyone has switched to OSC. It's also the first >> > time I've heard of it. >> > >> > > From the end user perspective, we’ve had positive feedback on the convergence to OSC from our cloud consumers. > > There has been great progress with Manila to get shares included (https://review.opendev.org/#/c/642222/26/) and it would be a pity if we’re asking our end users to understand all of the different project names and inconsistent options/arguments/syntax. > > We had hoped for a project goal to get everyone aligned on OSC but there was not consensus on this, I’d still encourage it to simplify the experience for OpenStack cloud consumers. > > Tim > > From cboylan at sapwetik.org Mon Mar 2 17:12:00 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 02 Mar 2020 09:12:00 -0800 Subject: Virtualenv (and Tox) broken when run under python<3.6 Message-ID: A recent release of importlib-resources (1.1.0) no longer works on python2.7 or python3.5. 
The issue is they import typing's ContextManager which didn't exist until python3.6 [0]. This means that python2 jobs and python3.5 jobs are currently unhappy if they need virtualenv. Unfortunately, many of our jobs use tox which uses virtualenv.

One workaround being investigated [1] is to install importlib-resources==1.0.2 which does not try to use typing's ContextManager. If this is confirmed to work we will want to consider adding this change to the base job so that all jobs don't have to fix it separately.

Note the version of python here is the one used to run virtualenv, not the version of python being installed into the virtualenv. This means python3.6 running virtualenv to create a python2 virtualenv should be fine. But python3.5 running virtualenv to create a python3.6 env would not be fine.

[0] https://gitlab.com/python-devs/importlib_resources/issues/83
[1] https://review.opendev.org/#/c/710729/

Clark

From cboylan at sapwetik.org Mon Mar 2 17:18:58 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 02 Mar 2020 09:18:58 -0800 Subject: Virtualenv (and Tox) broken when run under python<3.6 In-Reply-To: References: Message-ID:

On Mon, Mar 2, 2020, at 9:12 AM, Clark Boylan wrote: > A recent release of importlib-resources (1.1.0) no longer works on > python2.7 or python3.5. The issue is they import typing's > ContextManager which didn't exist until python3.6 [0]. This means that > python2 jobs and python3.5 jobs are currently unhappy if they need > virtualenv. Unfortunately, many of our jobs use tox which uses > virtualenv.

I noticed after sending the first email that it wasn't clear why tox and virtualenv are affected by an importlib-resources release. Virtualenv depends on importlib-resources [2] and tox uses virtualenv by default to create venvs.

[2] https://github.com/pypa/virtualenv/blob/20.0.7/setup.cfg#L48

From johnsomor at gmail.com Mon Mar 2 17:28:51 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 2 Mar 2020 09:28:51 -0800 Subject: [TaskFlow] running multiple engines on a shared thread In-Reply-To: References: Message-ID:

Hi Sachin,

Ok, I think I understand what you are trying better now. Do you have a link to your code for reference?

The parallelism of tasks inside a flow is dictated by three things:

1. Are you using the parallel action engine? (i.e. setting engine='parallel' when loading)
2. Is the flow of type "unordered flow".
3. The executor defined for the parallel engine.

Flows are graphs, so if the flow is defined as a linear flow, it will run each task in a serial manner. Likewise, if the engine is set to "serial" (the default), even the unordered flows will run sequentially.
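To make those three knobs concrete, here is a minimal sketch of an unordered flow on a parallel engine (the task and flow names are made up for the example; it assumes a plain taskflow install):

from concurrent import futures
import time

from taskflow import engines
from taskflow import task
from taskflow.patterns import unordered_flow

class Sleep(task.Task):
    def execute(self):
        time.sleep(1)  # stand-in for real work

# unordered flow + parallel engine + an executor => tasks may run concurrently
flow = unordered_flow.Flow('demo')
flow.add(Sleep('t1'), Sleep('t2'), Sleep('t3'))
with futures.ThreadPoolExecutor(max_workers=3) as executor:
    engines.run(flow, engine='parallel', executor=executor)

With the default engine='serial' the same flow would run the tasks one at a time.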
Michael

On Thu, Feb 27, 2020 at 1:19 AM Sachin Laddha wrote: > thanks Michael, > > I am aware that executor can be reused by different engines. > My query was regarding if multiple engines can share same thread for running the engines (and not the tasks of those engines). > > I tried run_iter which can be used to run multiple engines but the tasks of individual engines are run one after another. > Probably engine is holding on to the thread. > > This is limiting our ability run multiple workflows (i.e. engines) in parallel. > > My query is - is it possible to run multiple engines in parallel on same thread (using some asynchronous task execution) > > > On Thu, Feb 27, 2020 at 3:18 AM Michael Johnson wrote: >> >> Hi Sachin, >> >> I'm not 100% sure I understand your need, but I will attempt to answer >> and you can correct me if I am off base. >> >> Taskflow engines (you can create as many of these as you want) use an >> executor defined at flow load time. >> >> Here is a snippet from the Octavia code:

>> self.executor = concurrent.futures.ThreadPoolExecutor(
>>     max_workers=CONF.task_flow.max_workers)
>> eng = tf_engines.load(
>>     flow,
>>     engine=CONF.task_flow.engine,
>>     executor=self.executor,
>>     never_resolve=CONF.task_flow.disable_revert,
>>     **kwargs)

>> The parts you are likely interested in are:
>> 1. The executor. In this case we are using a concurrent.futures.ThreadPoolExecutor. We then set the max_workers setting to the number of threads we want in our taskflow engine thread pool.
>> 2. During flow load, we define the engine to be 'parallel' (note: 'serial' is the default). This means that unordered flows will run in parallel as opposed to serially.
>> 3. As noted in the documentation[1], you can share an executor between taskflow engines to share the thread pool.

>> Finally, you want to use "unordered" flows or sub-flows to execute tasks concurrently. >> >> [1] https://docs.openstack.org/taskflow/latest/user/engines.html#parallel >> >> Michael >> >> On Wed, Feb 26, 2020 at 7:19 AM Sachin Laddha wrote: >> > >> > Hi, >> > >> > We are using taskflow to execute workflows. Each workflow is executed by a separate thread (using engine.run() method). This is limiting our capability to execute maximum number of workflows that can run in parallel. It is limited by the number of threads there in the thread pool. >> > >> > Most of the time, the workflow tasks are run by agents which could take some time to complete. Each engine is alive and runs on a dedicated thread. >> > >> > Is there any way to reuse or run multiple engines on one thread. The individual tasks of these engines can run in parallel. >> > >> > I came across iter_run method of the engine class. But not sure if that can be used for this purpose. >> > >> > Any help is highly appreciated.

From Albert.Braden at synopsys.com Mon Mar 2 18:05:59 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Mon, 2 Mar 2020 18:05:59 +0000 Subject: OSC future (formerly [glance] Different checksum between CLI and curl) Message-ID:

As an openstack operator I was pretty ecstatic to hear that the assortment of clients would be replaced by a single client. I would be disappointed to find that a component would not integrate and would continue to use a separate client. This would be a step backward IMO.

The discussion about microversions goes over my head, but I would hope to see the developers get together and solve the issue and continue working toward integration.

-----Original Message-----
From: Radosław Piliszek
Sent: Monday, March 2, 2020 9:07 AM
To: openstack-discuss
Subject: Re: [glance] Different checksum between CLI and curl

Folks, sorry to interrupt but I think we have diverged a bit too much from the subject. Only last Gaetan message is on topic here. Please switch to new subject to discuss OSC future.

-yoctozepto

pon., 2 mar 2020 o 18:03 Tim Bell napisał(a): > > > > On 2 Mar 2020, at 16:49, Dmitry Tantsur wrote: > > Hi, > > On Mon, Mar 2, 2020 at 4:29 PM Luigi Toscano wrote: >> >> On Monday, 2 March 2020 10:54:03 CET Mark Goddard wrote: >> > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane wrote: >> > > Hi Gaëtan, >> > > >> > > Glance team doesn't recommend to use OSC anymore. >> > > I will recommend you to check the same behaviour using >> > > python-glanceclient. >> > >> > That's not cool - everyone has switched to OSC. It's also the first time I've heard of it.
>> > > From the end user perspective, we’ve had positive feedback on the convergence to OSC from our cloud consumers. > > There has been great progress with Manila to get shares included (https://urldefense.proofpoint.com/v2/url?u=https-3A__review.opendev.org_-23_c_642222_26_&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=gfnHFJM7fXXAlOxyUenF0xGqH3gNiec3LxN-Gd5Ey-o&s=SYi8yPy9Dz0CgrkT5P6rTzs3141Gj4K9zO4Ht3GTYAk&e= ) and it would be a pity if we’re asking our end users to understand all of the different project names and inconsistent options/arguments/syntax. > > We had hoped for a project goal to get everyone aligned on OSC but there was not consensus on this, I’d still encourage it to simplify the experience for OpenStack cloud consumers. > > Tim > >

From smooney at redhat.com Mon Mar 2 18:50:40 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 02 Mar 2020 18:50:40 +0000 Subject: OSC future (formerly [glance] Different checksum between CLI and curl) In-Reply-To: References: Message-ID: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com>

On Mon, 2020-03-02 at 18:05 +0000, Albert Braden wrote: > As an openstack operator I was pretty ecstatic to hear that the assortment of clients would be replaced by a single > client. I would be disappointed to find that a component would not integrate and would continue to use a separate > client. This would be a step backward IMO. > > The discussion about microversions goes over my head, but I would hope to see the developers get together and solve > the issue and continue working toward integration.

just to summarise it in a non technical way: the project specific cli had a convention where the client would ask the api what the newest microversion it supported was and default to that if the client supported it. that meant that the same command executed against two different clouds with different versions of openstack deployed could have different behavior and different responses. so from an interoperability point of view that is not great, but from a usability point of view the fact that end users don't have to care about microversions and the client would try to do the right thing made some things much simpler.

the unified client (osc) chose to prioritise interoperability by defaulting to the oldest microversion, so for nova that would be 2.0/2.1, meaning that if you execute the same command on two different clouds with different versions of nova it will behave the same, but if you want to use a feature introduced in a later microversion you have to explicitly request that via --os-compute-api-version or set that as an env var or in your clouds.yaml.

so really the difference is that osc requires the end user to be explicit about what microversion to use and therefore be explicit about the behavior of the api they expect (this is what we expect applications that use the api to do), whereas the project client tried to just work and use the latest microversion, which mostly worked except where we remove a feature in a later microversion. for example, we removed the force option on some move operations in nova because allowing forcing caused many harder to fix issues. i don't think the nova client would cap at the latest microversion that allowed forcing. so the project client generally did not guarantee that a command would work without specifying a microversion, it's just that we remove things a hell of a lot less often than we add them.
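to make that concrete, the explicit opt-in looks something like this (2.37 is just an arbitrary example value here; any microversion the cloud supports works the same way):

# interoperable default: the oldest supported microversion
openstack server list

# explicitly opt in to newer behaviour for a single call
openstack --os-compute-api-version 2.37 server list

# or for the whole session via an environment variable
export OS_COMPUTE_API_VERSION=2.37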
From dtantsur at redhat.com  Mon Mar  2 19:01:47 2020
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Mon, 2 Mar 2020 20:01:47 +0100
Subject: [glance] Different checksum between CLI and curl
In-Reply-To:
References: <5594320.rlFBNPpyZN@whitebase.usersys.redhat.com>
Message-ID:

On Mon, Mar 2, 2020 at 5:37 PM Sean Mooney wrote:
> On Mon, 2020-03-02 at 16:49 +0100, Dmitry Tantsur wrote:
> > Hi,
> >
> > On Mon, Mar 2, 2020 at 4:29 PM Luigi Toscano wrote:
> > > On Monday, 2 March 2020 10:54:03 CET Mark Goddard wrote:
> > > > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane wrote:
> > > > > Hi Gaëtan,
> > > > >
> > > > > Glance team doesn't recommend to use OSC anymore.
> > > > > I will recommend you to check the same behaviour using
> > > > > python-glanceclient.
> > > >
> > > > That's not cool - everyone has switched to OSC. It's also the
> > > > first time I've heard of it.
> > >
> > > Do we have proper microversion support then? This is a blocker for
> > > cinder.
> >
> > The ironic team has been successfully hacking around the absence of
> > native microversion support for a while. We use ironicclient instead
> > of openstacksdk, which makes things harder. If you use openstacksdk,
> > it's easier to teach it microversions. In any case, I can provide
> > some guidance if you'd like to.
> >
> > Dmitry
>
> That is also problematic. By hacking around it, it gives the ironic
> command a different behavior to the rest of osc. osc does support
> microversions; it just does not support automatic version negotiation,
> which is what you are hacking in.

Right, and it's a hard requirement for the CLI to be remotely usable.

> I do agree that it would be nice to have support for version
> negotiation whereby you could do something like
> --os-compute-api-version=auto to opt in to it, but automatic
> microversion detection does make it harder to do help text generation,
> unless you make "openstack --cloud=my-cloud
> --os-compute-api-version=auto help server create" call out to keystone,
> get the nova endpoint and then look up its max microversion when you
> render the help text.

The "auto" must be a default. This is what the users expect: the CLI just
working. Defaulting to anything else does them a huge disservice (been
there, done that).

> With that said, if adding --os-image-api-version=auto was enough to get
> the glance team to fully adopt osc, then I think that would be better
> than partitioning the community between osc and the legacy client. osc
> should behave consistently for all projects however, so adding
> negotiation for ironic and not for other services is not a good thing
> imo, but I guess you were able to do that as ironic is integrated as a
> plugin, correct?

Yep. We could not wait for OSC to implement it because the CLI is
borderline unusable without this negotiation in place. I don't recall what
prevented us from updating OSC, but I think there was a reason, probably
not entirely technical.

Dmitry

> > > More generally I observed a disconnection between the needs of a few
> > > teams (Cinder and Glance for sure) and OSC, with a real split in the
> > > community and no apparent interest in trying to bridge the gap,
> > > which is very sad.
> > >
> > > --
> > > Luigi
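A rough sketch of what the "auto" negotiation being discussed could look
like; this is illustrative pseudo-client code, not actual OSC or
ironicclient logic, and the shape of nova's version document is an
assumption from memory:

    def negotiate_microversion(session, endpoint, client_max='2.79'):
        # Ask the service root for its supported microversion range,
        # then cap it at the newest version this client understands.
        doc = session.get(endpoint).json()
        server_max = doc['version']['version']  # e.g. '2.72' on an older cloud

        def parse(ver):
            major, minor = ver.split('.')
            return (int(major), int(minor))

        negotiated = min(parse(server_max), parse(client_max))
        return '%d.%d' % negotiated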
From cboylan at sapwetik.org  Mon Mar  2 19:42:45 2020
From: cboylan at sapwetik.org (Clark Boylan)
Date: Mon, 02 Mar 2020 11:42:45 -0800
Subject: [glance] Different checksum between CLI and curl
In-Reply-To:
References: <5594320.rlFBNPpyZN@whitebase.usersys.redhat.com>
Message-ID: <70af048d-cf62-4f27-84fe-e6ed7b959837@www.fastmail.com>

On Mon, Mar 2, 2020, at 11:01 AM, Dmitry Tantsur wrote:
> [earlier quoted exchange trimmed; it appears in full above]
>
> The "auto" must be a default. This is what the users expect: the CLI
> just working. Defaulting to anything else does them a huge disservice
> (been there, done that).

As a user I strongly disagree. I don't want an API to magically start
acting differently because the cloud side has upgraded. Those changes are
opaque to me and I shouldn't need to know about them. Instead I should be
able to opt into using new features when I know I need them. This is
easily achieved by setting the desired microversion when you know you need
it.

> [rest of the quoted message trimmed]
From cboylan at sapwetik.org  Mon Mar  2 21:09:56 2020
From: cboylan at sapwetik.org (Clark Boylan)
Date: Mon, 02 Mar 2020 13:09:56 -0800
Subject: Virtualenv (and Tox) broken when run under python<3.6
In-Reply-To:
References:
Message-ID: <475d299c-ed47-44a3-a5b7-c286e7730134@www.fastmail.com>

On Mon, Mar 2, 2020, at 9:12 AM, Clark Boylan wrote:
> A recent release of importlib-resources (1.1.0) no longer works on
> python2.7 or python3.5. The issue is they import typing's
> ContextManager which didn't exist until python3.6 [0]. This means that
> python2 jobs and python3.5 jobs are currently unhappy if they need
> virtualenv. Unfortunately, many of our jobs use tox which uses
> virtualenv.
>
> One workaround being investigated [1] is to install
> importlib-resources==1.0.2 which does not try to use typing's
> ContextManager. If this is confirmed to work we will want to consider
> adding this change to the base job so that all jobs don't have to fix
> it separately.

We've landed a version of this workaround, https://review.opendev.org/710851,
to the base job in opendev/base-jobs. By default this is the base job that
all zuul jobs inherit from. This appears to fix use of virtualenv and tox
within jobs that use the globally installed versions of these tools. If
you run a nested version of the tools (DIB chroot, containers, etc) you'll
need to address this issue within that separate context.

> Note the version of python here is the one used to run virtualenv not
> the version of python being installed into the virtualenv. This means
> python3.6 running virtualenv to create a python2 virtualenv should be
> fine. But python3.5 running virtualenv to create a python3.6 env would
> not be fine.
>
> [0] https://gitlab.com/python-devs/importlib_resources/issues/83
> [1] https://review.opendev.org/#/c/710729/
>
> Clark
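The failure and the pin, condensed from Clark's description above:

    # importlib-resources 1.1.0 effectively does this at import time:
    #   from typing import ContextManager   <- ImportError on py2.7/py3.5
    # so environments that still run virtualenv under those interpreters
    # need the older release pinned:
    pip install 'importlib-resources==1.0.2'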
From mnaser at vexxhost.com  Mon Mar  2 21:45:47 2020
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Mon, 2 Mar 2020 16:45:47 -0500
Subject: [all][tc] Moving PTL role to "Maintainers"
Message-ID:

Hi everyone:

We're now in a spot where we have an increasing number of projects that
don't end up with a volunteer as PTL, even if the project has
contributors .. no one wants to hold that responsibility alone, for many
reasons.

With time, the PTL role has become far more overloaded with many extra
responsibilities than what we define in our charter:

https://governance.openstack.org/tc/reference/charter.html#project-team-leads

I think it's time to re-evaluate the project leadership model that we
have. I am thinking that perhaps it would make a lot of sense to move from
a single-PTL model to multiple maintainers. This would leave it up to the
maintainers to decide how they want to sort the different
requirements/liaisons/contact persons between them.

The above is just a very basic idea. I don't intend to dive much more in
depth for now, as I'd like to hear what the rest of the community thinks.

Thanks,
Mohammed

From james.denton at rackspace.com  Mon Mar  2 22:25:12 2020
From: james.denton at rackspace.com (James Denton)
Date: Mon, 2 Mar 2020 22:25:12 +0000
Subject: [neutron] security group list regression
In-Reply-To:
References:
Message-ID: <7DD0691D-19A3-4CDB-B377-F67829A86AD7@rackspace.com>

Rodolfo,

Thanks for continuing to push this on the ML and in the bug report.

Happy to report that the client and SDK patches you provided have
drastically reduced the SG list time from ~90-120s to ~12-14s within Stein
and Train lab environments.

One last thing... when you perform an 'openstack security group delete
', the initial lookup by name fails. In Train, the client falls back
to using the 'name' parameter (/security-groups?name=). This lookup
is quick and the security group is found and deleted. However, on
Rocky/Stein (e.g. client 3.18.1), instead of searching by parameter, the
client appears to perform a GET /security-groups without limiting the
fields, which takes a long time.

'openstack security group list' with patch:

REQ: curl -g -i -X GET "http://10.0.236.150:9696/v2.0/security-groups?fields=set%28%5B%27description%27%2C+%27project_id%27%2C+%27id%27%2C+%27tags%27%2C+%27name%27%5D%29" -H "Accept: application/json" -H "User-Agent: openstacksdk/0.27.0 keystoneauth1/3.13.1 python-requests/2.21.0 CPython/2.7.17" -H "X-Auth-Token: {SHA256}3e747da939e8c4befe72d5ca7105971508bd56cdf36208ba6b960d1aee6d19b6"

'openstack security group delete ':

Train (notice the name param):

REQ: curl -g -i -X GET http://10.20.0.11:9696/v2.0/security-groups/train-test-1755 -H "User-Agent: openstacksdk/0.36.0 keystoneauth1/3.17.1 python-requests/2.22.0 CPython/3.6.7" -H "X-Auth-Token: {SHA256}bf291d5f12903876fc69151db37d295da961ba684a575e77fb6f4829b55df1bf"
http://10.20.0.11:9696 "GET /v2.0/security-groups/train-test-1755 HTTP/1.1" 404 125
REQ: curl -g -i -X GET "http://10.20.0.11:9696/v2.0/security-groups?name=train-test-1755" -H "Accept: application/json" -H "User-Agent: openstacksdk/0.36.0 keystoneauth1/3.17.1 python-requests/2.22.0 CPython/3.6.7" -H "X-Auth-Token: {SHA256}bf291d5f12903876fc69151db37d295da961ba684a575e77fb6f4829b55df1bf"
http://10.20.0.11:9696 "GET /v2.0/security-groups?name=train-test-1755 HTTP/1.1" 200 1365

Stein & below (notice lack of fields):

REQ: curl -g -i -X GET http://10.0.236.150:9696/v2.0/security-groups/stein-test-5189 -H "User-Agent: openstacksdk/0.27.0 keystoneauth1/3.13.1 python-requests/2.21.0 CPython/2.7.17" -H "X-Auth-Token: {SHA256}e9f87afe851ff5380d8402ee81199c466be9c84fe67ed0302e8b178f33aa1fc2"
http://10.0.236.150:9696 "GET /v2.0/security-groups/stein-test-5189 HTTP/1.1" 404 125
REQ: curl -g -i -X GET http://10.0.236.150:9696/v2.0/security-groups -H "Accept: application/json" -H "User-Agent: openstacksdk/0.27.0 keystoneauth1/3.13.1 python-requests/2.21.0 CPython/2.7.17" -H "X-Auth-Token: {SHA256}e9f87afe851ff5380d8402ee81199c466be9c84fe67ed0302e8b178f33aa1fc2"

Haven't quite figured out where fields can be used to speed up the delete
process on the older client, or if the newer client would be
backwards-compatible (and how far back).

Thanks,
James
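The same lookup through openstacksdk, as a sketch (the cloud and group
names are illustrative): a recent SDK's find call first tries
GET /v2.0/security-groups/<name-or-id> and, on a 404, falls back to the
quick GET /v2.0/security-groups?name=<name> query rather than listing
everything:

    import openstack

    conn = openstack.connect(cloud='mycloud')
    sg = conn.network.find_security_group('train-test-1755',
                                          ignore_missing=False)
    conn.network.delete_security_group(sg)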
On 3/2/20, 9:31 AM, "James Denton" wrote:

Thanks, Rodolfo. I'll take a look at each of these after coffee and
clarify my position (if needed).

James

On 3/2/20, 6:27 AM, "Rodolfo Alonso" wrote:

Hello James:

Just to make a quick summary of the status of the bugs/regressions
discussed:

1) https://bugs.launchpad.net/neutron/+bug/1810563: adding rules to
security groups is slow
That was addressed in https://review.opendev.org/#/c/633145/ and
https://review.opendev.org/#/c/637407/, removing the O^2 check and using
lazy loading.

2) https://bugzilla.redhat.com/show_bug.cgi?id=1788749: Neutron List
networks API regression
The last reply was marked as private. I've undone this and you can now
read c#2. Testing with a similar scenario, I don't see any performance
degradation between Queens and Train.

3) https://bugzilla.redhat.com/show_bug.cgi?id=1721273: Neutron API List
Ports Performance regression
That problem was solved in https://review.opendev.org/#/c/667981/ and
https://review.opendev.org/#/c/667998/, by refactoring how the port QoS
extension was reading and applying the QoS info in the port dict.
4) https://bugs.launchpad.net/neutron/+bug/1865223: regression for
security group list between Newton and Rocky+

This is similar to https://bugs.launchpad.net/neutron/+bug/1863201. In
this case, the regression was detected from R to S. The performance
dropped from 3 secs to 110 secs (36x). That issue was addressed by
https://review.opendev.org/#/c/708695/.

But while 1865223 is talking about *SG list*, 1863201 is related to *SG
rule list*. I would like to make this differentiation, because the two
retrieval commands are not related.

In this bug (1863201), the performance degradation multiplies the initial
time by x3 (N->Q). This could be caused by the OVO integration (O->P:
https://review.opendev.org/#/c/284738/). Instead of using the DB object,
we now make this call using the OVO object containing the DB record
(something like a DB view). That's something I still need to check.

To be concrete: the patch 708695 improves the *SG rule* retrieval, not the
SG list command. Another point is that this patch helps in the case of
having a balance between SG rules and SGs: it retrieves from the DB only
those SG rules belonging to the project. If, as you state in
https://bugs.launchpad.net/neutron/+bug/1865223/comments/4, most of those
SG rules belong to the same project, there is little improvement there.

As commented, I'm still looking at improving the SG OVO performance.

Regards

On Mon, 2020-03-02 at 03:03 +0000, Erik Olof Gunnar Andersson wrote:
> [quoted text trimmed; Erik's message and the earlier thread it quotes
> appear in full earlier in this digest]
From whayutin at redhat.com  Tue Mar  3 00:07:54 2020
From: whayutin at redhat.com (Wesley Hayutin)
Date: Mon, 2 Mar 2020 17:07:54 -0700
Subject: [tripleo] centos-8 status
Message-ID:

Greetings everyone!

First off, apologies for using hackmd in this case for our status. We will
be moving to a fully non-hosted open source version asap.

We have been making significant progress with CentOS-8 based jobs in
TripleO Ussuri; our full status is here [1]. You will see
tripleo-ci-centos-8 standalone jobs hitting your reviews. If you find any
issues, please just treat them the same as any other bug and open a
launchpad bug with the "alert" tag set.

Thanks for everybody's hard work and patience as we get the rest of the
jobs running smoothly in the upstream.

[1] https://hackmd.io/HrQd03c9SxOMtFPFrq50tg?view

From gmann at ghanshyammann.com  Tue Mar  3 01:49:42 2020
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 02 Mar 2020 19:49:42 -0600
Subject: [goals][Drop Python 2.7 Support] Week R-10 Update
Message-ID: <1709e159fda.12941c762339759.7997748923601929729@ghanshyammann.com>

Hello Everyone,

Below is the progress on "Drop Python 2.7 Support" at the end of week R-10.

Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule

Highlights:
========
* We have already passed the deadline, but the work is still not complete.
* A few tempest plugins are failing.
* I request projects again to merge the passing patches asap.

Project wise status and need reviews:
============================
Phase-1 status: All the OpenStack services have dropped python 2.7.

Phase-2 status:
* A few Tempest plugins are still not merged. I am debugging a few failing
  plugins with the project team.
** Tempest plugins passing and ready to merge:
*** barbican-tempest-plugin: https://review.opendev.org/#/c/704083/
*** cyborg-tempest-plugin: https://review.opendev.org/#/c/704076/
*** magnum-tempest-plugin: https://review.opendev.org/#/c/704069/
*** congress-tempest-plugin: https://review.opendev.org/#/c/694437/
*** heat-tempest-plugin: https://review.opendev.org/#/c/704225/
*** kuryr-tempest-plugin: https://review.opendev.org/#/c/704072/
** Failing plugins which need debugging and more work:
*** trove-tempest-plugin: https://review.opendev.org/#/c/692041/
*** ironic-tempest-plugin: https://review.opendev.org/#/c/704093/
* Started pushing the required updates on deployment projects.
** Completed or no updates required:
*** Openstack-Chef - not required
*** Packaging-Rpm - Done
*** Puppet Openstack - Done
** In progress:
*** Openstack Charms
*** Openstackansible - In progress. centos7 jobs are failing on a few
    projects.
** Waiting on the project teams for status:
*** Openstack-Helm (Helm charts for OpenStack services)
*** Tripleo (Deployment service)
* Open reviews: https://review.opendev.org/#/q/topic:drop-py27-support+status:open

Phase-3 status: This is the audit and requirements-repo work, which has not
started yet. I will start it once all the phase-2 work mentioned above is
complete.

How you can help:
==============
- Review the patches. Push patches if I missed any repo.

-gmann
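For project teams still working through the list, the per-repo change is
typically a small setup.cfg edit along these lines (a sketch; the exact
classifiers vary per project):

    [metadata]
    ...
    python-requires = >=3.6
    classifier =
        Programming Language :: Python :: 3
        Programming Language :: Python :: 3 :: Only
        Programming Language :: Python :: 3.6
        Programming Language :: Python :: 3.7

plus dropping the py27 environments from tox.ini and switching to the
python3 Zuul job templates.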
From akekane at redhat.com  Tue Mar  3 05:00:45 2020
From: akekane at redhat.com (Abhishek Kekane)
Date: Tue, 3 Mar 2020 10:30:45 +0530
Subject: OSC future (formerly [glance] Different checksum between CLI and curl)
In-Reply-To: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com>
References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com>
Message-ID:

Hi All,

Thank you for making this a separate thread.

OSC is not up to date with the current glance features, nor has it shown
any interest in getting there. From the glance perspective we also didn't
have any bandwidth to work on adding this support to OSC.

There are some major feature gaps between the current OSC and Glance, and
that's the reason why glance does not recommend using OSC:

1. Support for the new image import workflow
2. Support for hidden images
3. Support for multihash
4. Support for multiple stores

If anyone is interested in taking up this work, it will be great.

Thanks & Best Regards,

Abhishek Kekane

On Tue, Mar 3, 2020 at 12:24 AM Sean Mooney wrote:
> [quoted text trimmed; Sean's message appears in full earlier in this
> thread]
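For reference on the first gap Abhishek lists, the interoperable image
import flow with python-glanceclient looks roughly like this (subcommand
names are from memory, so verify them against your client version):

    glance image-create --name cirros --disk-format qcow2 \
        --container-format bare
    glance image-stage --file ./cirros.qcow2 <image-id>
    glance image-import --import-method glance-direct <image-id>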
From artem.goncharov at gmail.com  Tue Mar  3 06:10:25 2020
From: artem.goncharov at gmail.com (Artem Goncharov)
Date: Tue, 3 Mar 2020 07:10:25 +0100
Subject: OSC future (formerly [glance] Different checksum between CLI and curl)
In-Reply-To:
References:
Message-ID:

On Tue, 3 Mar 2020, 06:08 Abhishek Kekane, wrote:

> Hi All,
>
> Thank you for making this a separate thread.
>
> OSC is not up to date with the current glance features, nor has it shown
> any interest in getting there. From the glance perspective we also
> didn't have any bandwidth to work on adding this support to OSC.

That's honestly not true these days.

> There are some major feature gaps between the current OSC and Glance,
> and that's the reason why glance does not recommend using OSC:

That's still no reason to say "please don't use it anymore".

> 1. Support for the new image import workflow

Partially implemented by me, and I continue working on that.

> 2. Support for hidden images

Implemented.

> 3. Support for multihash
> 4. Support for multiple stores

I am relying on OSC, and especially for the image service I am trying to
bring it into a more useful state, thus fixing huge parts of the SDK.

> If anyone is interested in taking up this work, it will be great.
>
> Thanks & Best Regards,
>
> Abhishek Kekane
>
> On Tue, Mar 3, 2020 at 12:24 AM Sean Mooney wrote:
>> [quoted text trimmed; Sean's message appears in full earlier in this
>> thread]
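As a sketch of the hidden-images support Artem mentions, through the
openstacksdk layer that current OSC builds on (assuming a reasonably new
SDK; the attribute is_hidden maps to the image-API field os_hidden, and
the cloud/image names are illustrative):

    import openstack

    conn = openstack.connect(cloud='mycloud')
    image = conn.image.find_image('cirros')
    # flip the os_hidden flag so the image no longer shows in default lists
    conn.image.update_image(image, is_hidden=True)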
From akekane at redhat.com  Tue Mar  3 06:17:32 2020
From: akekane at redhat.com (Abhishek Kekane)
Date: Tue, 3 Mar 2020 11:47:32 +0530
Subject: OSC future (formerly [glance] Different checksum between CLI and curl)
In-Reply-To:
References:
Message-ID:

Hi Artem,

Thanks for sharing the update.

The decision was collectively taken during the last cycle by the glance
team, as we don't have enough people/resources to work on this front. I
will be more than happy to change this if anyone comes forward and bridges
the gaps.

Thanks & Best Regards,

Abhishek Kekane

On Tue, Mar 3, 2020 at 11:40 AM Artem Goncharov wrote:
> [quoted text trimmed; Artem's reply appears in full earlier in this
> thread]
From ralonsoh at redhat.com  Tue Mar  3 09:48:57 2020
From: ralonsoh at redhat.com (Rodolfo Alonso)
Date: Tue, 03 Mar 2020 09:48:57 +0000
Subject: [neutron] security group list regression
In-Reply-To: <7DD0691D-19A3-4CDB-B377-F67829A86AD7@rackspace.com>
References: <7DD0691D-19A3-4CDB-B377-F67829A86AD7@rackspace.com>
Message-ID: <4740f4822e7b571b40aa5dc549e3c59a2ee659c4.camel@redhat.com>

Hello James:

Yes, this is a known issue in OSclient: most of the "objects" (networks,
subnets, routers, etc.) to be retrieved can usually be retrieved by ID and
by name. OSclient tries first to use the ID because it is unique and a DB
key. Then, instead of asking the server for a single record (filtered by
the name), the client retrieves the whole list and filters the results.

But this problem was resolved in Train: https://review.opendev.org/#/c/637238/.
Can you check, in openstacksdk, that you have this patch? At least in
Train, according to [1] and [2], "name" should be used as a filter in the
OSsdk "find" call.

Regards.

[1] https://review.opendev.org/#/c/637238/20/openstack/resource.py
[2] https://github.com/openstack/openstacksdk/blob/master/openstack/network/v2/security_group.py#L29

On Mon, 2020-03-02 at 22:25 +0000, James Denton wrote:
> [quoted text trimmed; James's message and the thread it quotes appear in
> full earlier in this digest]

From alfredo.deluca at gmail.com  Tue Mar  3 09:50:41 2020
From: alfredo.deluca at gmail.com (Alfredo De Luca)
Date: Tue, 3 Mar 2020 10:50:41 +0100
Subject: [CINDER] Snapshots export
Message-ID:

Hi all.
We have our env with Openstack (Train) and cinder with a CEPH (Nautilus)
backend. We are creating automatic volume snapshots and now we'd like to
export them as a backup/restore plan. After exporting the snapshots we
will use Acronis as the backup tool.

I couldn't find the right steps/commands to export the snapshots.
Any info?

Cheers

--
*Alfredo*
From amotoki at gmail.com  Tue Mar  3 10:11:04 2020
From: amotoki at gmail.com (Akihiro Motoki)
Date: Tue, 3 Mar 2020 19:11:04 +0900
Subject: [release][tc][horizon] xstatic repositories marked as deprecated
In-Reply-To:
References:
Message-ID:

Thanks Thierry for the detailed explanation.
The horizon team will update the corresponding repos for new minor
releases and follow the usual release process.

One question: we have passed milestone-2. Is it better to wait till the
Victoria dev cycle is open?

Thanks,
Akihiro

On Fri, Feb 28, 2020 at 1:47 AM Thierry Carrez wrote:
>
> Thierry Carrez wrote:
> > The way we've been handling this in the past was to ignore past
> > releases (since they are not signed by the release team), and push a
> > new one through the releases repository. It should replace the
> > unofficial one in PyPI and make sure all is in order.
>
> Clarification with a practical example:
>
> xstatic-hogan 2.0.0.2 is on PyPI, but has no tag in the
> openstack/xstatic-hogan repo, and no deliverable file in
> openstack/releases.
>
> Solution is to resync everything by proposing a 2.0.0.3 release that
> will have a tag, be in openstack/releases and have a matching upload on
> PyPI.
>
> This is done by:
>
> - bumping BUILD at
> https://opendev.org/openstack/xstatic-hogan/src/branch/master/xstatic/pkg/hogan/__init__.py#
>
> - adding a deliverables/_independent/xstatic-hogan.yaml file in
> openstack/releases defining a tag for 2.0.0.3
>
> - removing the "deprecated" line from
> https://opendev.org/openstack/governance/src/branch/master/reference/projects.yaml#L542
>
> Repeat for every affected package :)
>
> --
> Thierry Carrez (ttx)
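A sketch of what such a deliverable file could contain (field values are
indicative only; the openstack/releases validation jobs are authoritative):

    # openstack/releases: deliverables/_independent/xstatic-hogan.yaml
    launchpad: horizon
    team: horizon
    type: other
    repository-settings:
      openstack/xstatic-hogan: {}
    releases:
      - version: 2.0.0.3
        projects:
          - repo: openstack/xstatic-hogan
            hash: <commit sha to tag>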
From ignaziocassano at gmail.com  Tue Mar  3 10:11:37 2020
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 3 Mar 2020 11:11:37 +0100
Subject: [queens][neutron][fwaas_v2] PENDING_UPDATE
Message-ID:

Hello All,
I installed firewall v2 on queens based on centos 7.
I created a firewall group policy and a firewall group rule with that
policy.

The firewall group shows as ACTIVE and UP, and INACTIVE.

When I try to apply the firewall group to an instance port:

openstack firewall group set --port c7c8be58-35de-47fe-87db-39bbd681db8b fwg1

it does not work and goes into PENDING_UPDATE status.

The L3 agent log reports:

Could not load neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas_v2.IptablesFwaasDriver

But the file exists.

I am using firewall_driver = openvswitch

Please, what is wrong?

I read that support for L2 firewalling (VM ports) was planned for Ocata,
and I am on Queens.

Please, help me.

Ignazio

From donny at fortnebula.com  Tue Mar  3 12:33:29 2020
From: donny at fortnebula.com (Donny Davis)
Date: Tue, 3 Mar 2020 07:33:29 -0500
Subject: [queens][neutron][fwaas_v2] PENDING_UPDATE
In-Reply-To:
References:
Message-ID:

I have never really used fwaas, but I do believe it's targeted at routers.

Security groups already do firewalling for the vm ports.

Donny Davis
c: 805 814 6800

On Tue, Mar 3, 2020, 5:15 AM Ignazio Cassano wrote:
> [quoted text trimmed; Ignazio's message appears in full above]
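For reference, the pieces that usually have to line up for fwaas v2 on the
agents look like this. This is a sketch assembled from the log above and
the commonly documented option names; verify them against the Queens
neutron-fwaas release notes before applying:

    # l3_agent.ini -- firewalling of router ports
    [AGENT]
    extensions = fwaas_v2

    [fwaas]
    agent_version = v2
    driver = neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas_v2.IptablesFwaasDriver
    enabled = True

    # openvswitch_agent.ini -- firewalling of VM ports additionally needs
    # the native OVS firewall driver plus the L2 agent extension
    [agent]
    extensions = fwaas_v2

    [securitygroup]
    firewall_driver = openvswitch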
From guimalufb at gmail.com Mon Mar 2 20:44:15 2020
From: guimalufb at gmail.com (Gui Maluf)
Date: Mon, 2 Mar 2020 20:44:15 +0000
Subject: [Swift] Errno 13 Permission denied writing objects xattr while upgrading from Kilo to Queens
Message-ID: 

Hi all,
I'm struggling with something really weird. 3 weeks ago I started upgrading my Keystone + Swift Ubuntu environment from Kilo to Queens. So I moved from Ubuntu 14.04 to 18.04.
I can create new accounts and containers, but no objects. I think that between Mitaka and Newton my storage nodes started to throw a Permission Denied error while writing object metadata.
I saw that the piece of Python code where I was getting the problem had changed in the Rocky version, and in the hope of getting things fixed I upgraded the storage version. But the error persists.

http://paste.openstack.org/show/790217/

I've checked everything I could: user, permissions, mount options. But I'm still getting this error.
I wrote a Python script that creates files and writes metadata within the Swift mount as the swift user, and everything works fine.
I don't know what to do anymore. This is a "dev" environment with two storage nodes only and a few disks.
Since I'm planning to do this in the production environment, I'm quite scared of it happening again.
Thanks in advance

--

*guilherme* \n11 \t *maluf*

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com Tue Mar 3 12:41:11 2020
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Tue, 3 Mar 2020 13:41:11 +0100
Subject: [queens][neutron][fwaas_v2] PENDING_UPDATE
In-Reply-To: 
References: 
Message-ID: 

Hello Donny, please visit this link: it should work:
https://superuser.openstack.org/articles/firewall-service-openstack/

Il giorno mar 3 mar 2020 alle ore 13:33 Donny Davis ha scritto:

> I have never really used fwaas, but I do believe it's targeted at routers.
>
> Security groups already do firewalling for the VM ports.
>
> Donny Davis
> c: 805 814 6800
>
> On Tue, Mar 3, 2020, 5:15 AM Ignazio Cassano wrote:
>
>> Hello All, I installed firewall v2 on Queens based on CentOS 7.
>> I created a firewall group policy and a firewall group rule with that
>> policy.
>>
>> The firewall group is reported as ACTIVE and UP, but its status is INACTIVE.
>>
>> When I try to apply the firewall group to an instance port:
>> openstack firewall group set --port c7c8be58-35de-47fe-87db-39bbd681db8b
>> fwg1
>>
>> It does not work and the group goes into PENDING_UPDATE status.
>>
>> The L3 agent log reports:
>> Could not load
>> neutron_fwaas.services.firewall.drivers.linux.iptables_fwaas_v2.IptablesFwaasDriver
>>
>> But the file exists.
>>
>> I am using firewall_driver = openvswitch
>>
>> Please, what is wrong?
>>
>> I read that L2 firewalling support (VM ports) was planned for Ocata, and
>> I am on Queens.
>>
>> Please, help me.
>>
>> Ignazio
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arnaud.morin at gmail.com Tue Mar 3 13:01:04 2020
From: arnaud.morin at gmail.com (Arnaud Morin)
Date: Tue, 3 Mar 2020 13:01:04 +0000
Subject: [nova] [neutron] multiple fixed_ip
Message-ID: <20200303130104.GA29109@sync>

Hello all,

I was doing some tests to create a server using the nova API.
My objective is to create a server with one port but multiple IPs (one IPv4 and one IPv6).

If I understand the neutron API well, I can create a port using the fixed_ips array parameter [1].

Unfortunately, on the nova side, it seems to accept only a string with a single IP (fixed_ip) [2].

Is it mandatory for me to create the port with neutron?
Or is there any trick that I missed on the nova API side?

Thanks!

[1] https://docs.openstack.org/api-ref/network/v2/?expanded=create-port-detail#ports
[2] https://docs.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server

--
Arnaud Morin
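For the record, a minimal sketch of the two-step approach discussed here: create the dual-stack port with neutron, then hand it to nova at boot time. The network, subnet, flavor and image names below are placeholders:

# One port, two fixed IPs (one per address family)
openstack port create --network mynet \
    --fixed-ip subnet=mynet-subnet-v4 \
    --fixed-ip subnet=mynet-subnet-v6 \
    dual-stack-port

# Boot the server with the pre-built port
openstack server create --flavor m1.small --image cirros \
    --port dual-stack-port myserver

Against the bare compute API this corresponds to passing "networks": [{"port": "<port-uuid>"}] in the server create body, which sidesteps the single-address fixed_ip field.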
From radoslaw.piliszek at gmail.com Tue Mar 3 13:23:52 2020
From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=)
Date: Tue, 3 Mar 2020 14:23:52 +0100
Subject: [nova] [neutron] multiple fixed_ip
In-Reply-To: <20200303130104.GA29109@sync>
References: <20200303130104.GA29109@sync>
Message-ID: 

Hi Arnaud,

Non-core here. Last time I checked you had to decide on one and then update with neutron (or first create the port with neutron and then give it to nova :-) ).
Moreover, not sure if IPv6 goes through Nova directly or not (docs suggest still nah).

-yoctozepto

wt., 3 mar 2020 o 14:09 Arnaud Morin napisał(a):
>
> Hello all,
>
> I was doing some tests to create a server using the nova API.
> My objective is to create a server with one port but multiple IPs (one
> IPv4 and one IPv6).
>
> If I understand the neutron API well, I can create a port using the
> fixed_ips array parameter [1].
>
> Unfortunately, on the nova side, it seems to accept only a string with
> a single IP (fixed_ip) [2].
>
> Is it mandatory for me to create the port with neutron?
> Or is there any trick that I missed on the nova API side?
>
> Thanks!
>
> [1] https://docs.openstack.org/api-ref/network/v2/?expanded=create-port-detail#ports
> [2] https://docs.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server
>
> --
> Arnaud Morin
>

From dtantsur at redhat.com Tue Mar 3 13:35:28 2020
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Tue, 3 Mar 2020 14:35:28 +0100
Subject: [glance] Different checksum between CLI and curl
In-Reply-To: <70af048d-cf62-4f27-84fe-e6ed7b959837@www.fastmail.com>
References: <5594320.rlFBNPpyZN@whitebase.usersys.redhat.com> <70af048d-cf62-4f27-84fe-e6ed7b959837@www.fastmail.com>
Message-ID: 

On Mon, Mar 2, 2020 at 8:46 PM Clark Boylan wrote:

> On Mon, Mar 2, 2020, at 11:01 AM, Dmitry Tantsur wrote:
> >
> >
> > On Mon, Mar 2, 2020 at 5:37 PM Sean Mooney wrote:
> > > On Mon, 2020-03-02 at 16:49 +0100, Dmitry Tantsur wrote:
> > > > Hi,
> > > >
> > > > On Mon, Mar 2, 2020 at 4:29 PM Luigi Toscano wrote:
> > > >
> > > > > On Monday, 2 March 2020 10:54:03 CET Mark Goddard wrote:
> > > > > > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane <akekane at redhat.com> wrote:
> > > > > > > Hi Gaëtan,
> > > > > > >
> > > > > > > Glance team doesn't recommend to use OSC anymore.
> > > > > > > I will recommend you to check the same behaviour using
> > > > > > > python-glanceclient.
> > > > > >
> > > > > > That's not cool - everyone has switched to OSC. It's also the first
> > > > > > time I've heard of it.
> > > > > >
> > > > >
> > > > > Do we have proper microversion support then? This is a blocker for cinder.
> > > > >
> > > >
> > > osc does support microverions it just does not support automatic > versin negociation which is > > > what you are hacking in. > > > > Right, and it's a hard requirement for the CLI to be remotely usable. > > > > > > i do agree that it would be nice to have support for version > negociation where by you could do somehting like > > > --os-compute-api-version=auto to opt in to it but automatic > microverions detetion does make it harder to do help > > > text generation unless you make "openstack --cloud=my-cloud > --os-compute-api-version=auto help server create" call out > > > to keystone get the nova endpoint and then lookup its max > microversion when you render the help text. > > > > The "auto" must be a default. This is what the users expect: the CLI > > just working. Defaulting to anything else does them a huge disservice > > (been there, done that). > > As a user I strongly disagree. I don't want an API to magically start > acting differently because the cloud side has upgraded. Those changes are > opaque to me and I shouldn't need to know about them. Instead I should be > able to opt into using new features when I know I need them. This is easily > achieved by setting the desired microversion when you know you need it. > We're talking about CLI, not API. I agree with you when it comes to calling code, but CLI must just work. This is how all CLI in the world work: you either get a behavior or you get a clear failure. It's the other way around: if you want to fix the feature set, and you know what you're doing, you can set a specific version in your environment. And, Clark, you and I are not mere users even if we use our CLI regularly. Draw the border here: a regular user is someone who doesn't know what a microversion even IS, to say nothing about a way to find the required microversion for a feature. These are the users I've dealt with and they have all been frustrated by using microversions explicitly. For a probably clearer explanation let me refer you to the API SIG specification that covers how to expose microversions: https://specs.openstack.org/openstack/api-sig/guidelines/sdk-exposing-microversions.html (see specifically about high-level SDKs). Dmitry > > > > > > > with that said if adding --os-image-api-version=auto was enough to > get the glance team to fully adopt osc > > > then i think that would be better then partioning the community > between osc and legacy client. > > > osc should behave consistently for all projects however so adding > negocaiton for ironic and not for other services > > > is not a good thing imo but i guess you were able to do that as > ironic is integrated as a plugin correct? > > > > Yep. We could not wait for OSC to implement it because the CLI is > > borderline unusable without this negotiation in place. I don't recall > > what prevented us from updating OSC, but I think there was a reason, > > probably not entirely technical. > > > > Dmitry > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue Mar 3 13:47:40 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 3 Mar 2020 14:47:40 +0100 Subject: [glance] Different checksum between CLI and curl In-Reply-To: References: <5594320.rlFBNPpyZN@whitebase.usersys.redhat.com> <70af048d-cf62-4f27-84fe-e6ed7b959837@www.fastmail.com> Message-ID: Folks, this is the *wrong* topic to discuss it. PS (basking in offtop): Thanks, Dmitry, I was looking for such a doc for jslib. 
-yoctozepto wt., 3 mar 2020 o 14:38 Dmitry Tantsur napisał(a): > > > > On Mon, Mar 2, 2020 at 8:46 PM Clark Boylan wrote: >> >> On Mon, Mar 2, 2020, at 11:01 AM, Dmitry Tantsur wrote: >> > >> > >> > On Mon, Mar 2, 2020 at 5:37 PM Sean Mooney wrote: >> > > On Mon, 2020-03-02 at 16:49 +0100, Dmitry Tantsur wrote: >> > > > Hi, >> > > > >> > > > On Mon, Mar 2, 2020 at 4:29 PM Luigi Toscano wrote: >> > > > >> > > > > On Monday, 2 March 2020 10:54:03 CET Mark Goddard wrote: >> > > > > > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane wrote: >> > > > > > > Hi Gaëtan, >> > > > > > > >> > > > > > > Glance team doesn't recommend to use OSC anymore. >> > > > > > > I will recommend you to check the same behaviour using >> > > > > > > python-glanceclient. >> > > > > > >> > > > > > That's not cool - everyone has switched to OSC. It's also the first >> > > > > > time I've heard of it. >> > > > > > >> > > > > >> > > > > Do we have proper microversion support then? This is a blocker for cinder. >> > > > > >> > > > >> > > > The ironic team has been successfully hacking around the absence of a >> > > > native microversion support for a while. We use ironicclient instead of >> > > > openstacksdk, which makes things harder. If you use openstacksdk, it's >> > > > easier to teach it microversions. In any case, I can provide some guidance >> > > > if you'd like to. >> > > > >> > > > Dmitry >> > > that is also problematic. >> > > by harcking around it it gives the ironic command a different behavior to the rest of osc. >> > > osc does support microverions it just does not support automatic versin negociation which is >> > > what you are hacking in. >> > >> > Right, and it's a hard requirement for the CLI to be remotely usable. >> > > >> > > i do agree that it would be nice to have support for version negociation where by you could do somehting like >> > > --os-compute-api-version=auto to opt in to it but automatic microverions detetion does make it harder to do help >> > > text generation unless you make "openstack --cloud=my-cloud --os-compute-api-version=auto help server create" call out >> > > to keystone get the nova endpoint and then lookup its max microversion when you render the help text. >> > >> > The "auto" must be a default. This is what the users expect: the CLI >> > just working. Defaulting to anything else does them a huge disservice >> > (been there, done that). >> >> As a user I strongly disagree. I don't want an API to magically start acting differently because the cloud side has upgraded. Those changes are opaque to me and I shouldn't need to know about them. Instead I should be able to opt into using new features when I know I need them. This is easily achieved by setting the desired microversion when you know you need it. > > > We're talking about CLI, not API. I agree with you when it comes to calling code, but CLI must just work. This is how all CLI in the world work: you either get a behavior or you get a clear failure. It's the other way around: if you want to fix the feature set, and you know what you're doing, you can set a specific version in your environment. > > And, Clark, you and I are not mere users even if we use our CLI regularly. Draw the border here: a regular user is someone who doesn't know what a microversion even IS, to say nothing about a way to find the required microversion for a feature. These are the users I've dealt with and they have all been frustrated by using microversions explicitly. 
> > For a probably clearer explanation let me refer you to the API SIG specification that covers how to expose microversions: https://specs.openstack.org/openstack/api-sig/guidelines/sdk-exposing-microversions.html (see specifically about high-level SDKs). > > Dmitry > >> >> >> > > >> > > with that said if adding --os-image-api-version=auto was enough to get the glance team to fully adopt osc >> > > then i think that would be better then partioning the community between osc and legacy client. >> > > osc should behave consistently for all projects however so adding negocaiton for ironic and not for other services >> > > is not a good thing imo but i guess you were able to do that as ironic is integrated as a plugin correct? >> > >> > Yep. We could not wait for OSC to implement it because the CLI is >> > borderline unusable without this negotiation in place. I don't recall >> > what prevented us from updating OSC, but I think there was a reason, >> > probably not entirely technical. >> > >> > Dmitry >> From sean.mcginnis at gmx.com Tue Mar 3 13:54:41 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 3 Mar 2020 07:54:41 -0600 Subject: [release][tc][horizon] xstatic repositories marked as deprecated In-Reply-To: References: Message-ID: On 3/3/20 4:11 AM, Akihiro Motoki wrote: > Thanks Thierry for the detail explanation. > The horizon team will update the corresponding repos for new minor > releases and follow the usual release process. > One question: we have passed the milestone-2. Is it better to wait > till Victoria dev cycle is open? > > Thanks, > Akihiro We are past the deadline for inclusion in ussuri. But that said, these are things that are currently being used by the team, so I think it's a little misleading in its current state. I think we should get these new releases done in this cycle if possible. Part of this is also the assumption that these will be cycle based. I wonder if this are more appropriate as independent deliverables? That means they are not tied to a specific release cycle and can be released whenever there is something to be released. At least something to think about. https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary > On Fri, Feb 28, 2020 at 1:47 AM Thierry Carrez wrote: >> Thierry Carrez wrote: >>> The way we've been handling this in the past was to ignore past releases >>> (since they are not signed by the release team), and push a new one >>> through the releases repository. It should replace the unofficial one in >>> PyPI and make sure all is in order. >> Clarification with a practical example: >> >> xstatic-hogan 2.0.0.2 is on PyPI, but has no tag in the >> openstack/xstatic-hogan repo, and no deliverable file in openstack/releases. >> >> Solution is to resync everything by proposing a 2.0.0.3 release that >> will have tag, be in openstack/releases and have a matching upload on PyPI. 
>> >> This is done by: >> >> - bumping BUILD at >> https://opendev.org/openstack/xstatic-hogan/src/branch/master/xstatic/pkg/hogan/__init__.py# >> >> - adding a deliverables/_independent/xstatic-hogan.yaml file in >> openstack/releases defining a tag for 2.0.0.3 >> >> - removing the "deprecated" line from >> https://opendev.org/openstack/governance/src/branch/master/reference/projects.yaml#L542 >> >> Repeat for every affected package :) >> >> -- >> Thierry Carrez (ttx) >> From juliaashleykreger at gmail.com Tue Mar 3 15:06:54 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 3 Mar 2020 10:06:54 -0500 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: Message-ID: Thoughts below, thanks for bringing this up Mohammed! On Mon, Mar 2, 2020 at 4:47 PM Mohammed Naser wrote: > > Hi everyone: > > We're now in a spot where we have an increasing amount of projects > that don't end up with a volunteer as PTL, even if the project has > contributors .. no one wants to hold that responsibility alone for > many reasons. With time, the PTL role has become far more overloaded > with many extra responsibilities than what we define in our charter: > > https://governance.openstack.org/tc/reference/charter.html#project-team-leads > > I think it's time to re-evaluate the project leadership model that we > have. I am thinking that perhaps it would make a lot of sense to move > from a single PTL model to multiple maintainers. This would leave it > up to the maintainers to decide how they want to sort the different > requirements/liaisons/contact persons between them. > I think this is vital, however at the same time the projects need to reconsider what their commitments are. I feel like most of the liaison model was for us to handle community scale and relay information, and that essentially stopped being effective as teams began to scale back the pool of active contributors and time that can be focused on supporting projects. In other words, does it still make sense to have a release liaison? oslo liaison? etc. Can we not move to a collaborative model instead of putting single points of contact in place? See: https://wiki.openstack.org/wiki/CrossProjectLiaisons > The above is just a very basic idea, I don't intend to diving much > more in depth for now as I'd like to hear about what the rest of the > community thinks. > > Thanks, > Mohammed > Off hand, I feel like my initial mental response was "Noooo!". Upon thinking of this and talking to Mohammed some, I think it is a necessary evolutionary step. As a burned out PTL who cares, I wonder "who will step up after me" and carry what I perceive as the organizational and co-ordination overhead, standing on stage, and running meetings. Nothing prevents any contributor from running a community meeting, standing on a stage and giving a talk or project update! We are a community, and single points of contact just lead community members to burnout. Possibly what we are lacking is a "Time for a meeting!" bot. From skaplons at redhat.com Tue Mar 3 15:26:33 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 3 Mar 2020 16:26:33 +0100 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: Message-ID: <1A2372A7-6A75-4C89-80A6-8146F82B5413@redhat.com> Hi, > On 2 Mar 2020, at 22:45, Mohammed Naser wrote: > > Hi everyone: > > We're now in a spot where we have an increasing amount of projects > that don't end up with a volunteer as PTL, even if the project has > contributors .. 
no one wants to hold that responsibility alone for > many reasons. With time, the PTL role has become far more overloaded > with many extra responsibilities than what we define in our charter: > > https://governance.openstack.org/tc/reference/charter.html#project-team-leads > > I think it's time to re-evaluate the project leadership model that we > have. I am thinking that perhaps it would make a lot of sense to move > from a single PTL model to multiple maintainers. This would leave it > up to the maintainers to decide how they want to sort the different > requirements/liaisons/contact persons between them. I’m afraid that in such maintainers group there will be still a need to have some kind of leader who will propose/ask others to be liaisons or take some other roles. So it will be still some kind of PTL but maybe with different name and/or elected in different way. Otherwise it may end up that everyone will look for others to do something. If responsibility for something is on many people then in fact nobody is responsible for that. > > The above is just a very basic idea, I don't intend to diving much > more in depth for now as I'd like to hear about what the rest of the > community thinks. > > Thanks, > Mohammed > — Slawek Kaplonski Senior software engineer Red Hat From jean-philippe at evrard.me Tue Mar 3 15:44:58 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 03 Mar 2020 16:44:58 +0100 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: Message-ID: > Off hand, I feel like my initial mental response was "Noooo!". Upon > thinking of this and talking to Mohammed some, I think it is a > necessary evolutionary step. As a burned out PTL who cares, I wonder > "who will step up after me" and carry what I perceive as the > organizational and co-ordination overhead, standing on stage, and > running meetings. Nothing prevents any contributor from running a > community meeting, standing on a stage and giving a talk or project > update! We are a community, and single points of contact just lead > community members to burnout. > > Possibly what we are lacking is a "Time for a meeting!" bot. > I am not sure to understand what you are proposing. Wasn't the liaison's system meant for avoiding burnout by delegating tasks, while staying clear on duties? It avoids the back and forth of communication to some maintainer, solving the question "who is handling that?". It still allows delegation. IMO, there was never a limitation of the amount of liaisons for a single "kind" of liaison. You could have 2 ppl working on the releases, 2 on the bugs, etc. Don't get me wrong: on the "drop of the PTL" story, I was more in the "we should drop this" clan. When I discussed it last time with Mohammed (and others, but it was loooooong ago), I didn't focus on the liaisons. But before side-tracking this thread, I would like to understand what are the pain points in the current model (explicitly! examples!), and how moving away from PTLs and liaisons will help the team of maintainers. At first sight, it looks like team duties will be vague. There are various levels of success on self-organizing teams. Regards, JP From corvus at inaugust.com Tue Mar 3 16:04:28 2020 From: corvus at inaugust.com (James E. 
Blair) Date: Tue, 03 Mar 2020 08:04:28 -0800 Subject: [kuryr] Job running open resolver Message-ID: <87zhcxpgqb.fsf@meyer.lemoncheese.net> Hi, The openstack-infra team received a report from one of our infrastructure donors that a gate job run by Kuryr is running a DNS resolver open to the Internet. This is dangerous as, if discovered, it can be used as part of DNS reflection attacks. The community and our infrastructure donors share an interest in avoiding misuse of our resources. Would you please look into whether this job is perhaps opening its iptables ports too liberally, and whether that can be avoided? The job is kuryr-kubernetes-tempest-containerized-ovn, and the build which triggered the alerting system is this one: https://zuul.opendev.org/t/openstack/build/166301f57b21402d8d8443bb1e17f970 Thanks, Jim From openstack at nemebean.com Tue Mar 3 16:12:54 2020 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 3 Mar 2020 10:12:54 -0600 Subject: oslotest no longer pulls in stestr Message-ID: Hi all, We just released [0], which removes stestr as a requirement for oslotest. As a result, if you were relying on oslotest to pull in stestr you are most likely broken now. We did check for this situation before making the change, but it seems we missed at least one project so I'm sending this out in case anyone else is affected. The fix is to explicitly list stestr in your test-requirements (oslotest doesn't actually need stestr, so it's not the right place for the requirement to live). Reply here or ping us in #openstack-oslo with any questions. Thanks. -Ben 0: https://review.opendev.org/#/c/615826 From hberaud at redhat.com Tue Mar 3 16:47:20 2020 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 3 Mar 2020 17:47:20 +0100 Subject: [idea] voting procedure and the choice's engineering Message-ID: Hello openstacker, I proposed a new openstack/idea to open discussion about how we should decide, when, and for what purpose, and the methods and tools to help us in this task. https://review.opendev.org/#/c/710107/ This document is not a "well finished document" but more a draft to open the debate and to help us to make the decisions better. I referenced some methods and tools. If some of you are interested by this topic then do not hesitate to leave comments and push changes on this document. Communities are driven by choices and decisions, Openstack is a community, let's decide how to choose on Openstack. -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balazs.gibizer at est.tech Tue Mar 3 16:56:28 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Tue, 3 Mar 2020 16:56:28 +0000 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: Message-ID: <1583254584.12170.11@est.tech> On Tue, Mar 3, 2020 at 16:44, Jean-Philippe Evrard wrote: >> Off hand, I feel like my initial mental response was "Noooo!". Upon >> thinking of this and talking to Mohammed some, I think it is a >> necessary evolutionary step. As a burned out PTL who cares, I >> wonder >> "who will step up after me" and carry what I perceive as the >> organizational and co-ordination overhead, standing on stage, and >> running meetings. Nothing prevents any contributor from running a >> community meeting, standing on a stage and giving a talk or project >> update! We are a community, and single points of contact just lead >> community members to burnout. >> >> Possibly what we are lacking is a "Time for a meeting!" bot. >> > > I am not sure to understand what you are proposing. > > Wasn't the liaison's system meant for avoiding burnout by delegating > tasks, while staying clear on duties? It avoids the back and forth of > communication to some maintainer, solving the question "who is > handling > that?". It still allows delegation. IMO, there was never a limitation > of the amount of liaisons for a single "kind" of liaison. You could > have 2 ppl working on the releases, 2 on the bugs, etc. > > Don't get me wrong: on the "drop of the PTL" story, I was more in the > "we should drop this" clan. When I discussed it last time with > Mohammed > (and others, but it was loooooong ago), I didn't focus on the > liaisons. > But before side-tracking this thread, I would like to understand what > are the pain points in the current model (explicitly! examples!), and > how moving away from PTLs and liaisons will help the team of > maintainers. At first sight, it looks like team duties will be vague. > There are various levels of success on self-organizing teams. My context: We have a shortage of PTL candidates in Nova but we still have a core team. I think the real problem is that contributors think that being a PTL is a huge extra burden. I haven't been a PTL yet but I share this view. I think being a Nova PTL is a sizable amount of work. E.g. the PLT is the liaison by default if nobody steps up. And in Nova, according to the wiki, most of the liaison spots are filled by people who already left the community. So a nova PTL has a lot of hats by default. It could be that those hats does not need real work to be fulfilled. Still the list is long. So for me a better solution would be to rationalize (review, clarify) the list of expectations on the project teams. Then let the project teams commit to it either in a single person (a PTL) or by the whole team sharing the responsibilities between each other some explicit way. I can even accept that the project team explicitly rejects some of the responsibilities due to shortage of bandwidth in the team. For me explicitly not doing something is better than simply ignoring that such responsibility exists. I think Mohammed's proposal helps in a sense that removes the need to _find a single person as PTL_ in a situation where nobody wants to be a PTL. Basically removes the Nova core team from the wait-for-a-PTL-candidate state where we are in now. And in the same time it allows the core team to start discussing how to fulfill every responsibilities as a team. 
Cheers,
gibi

>
> Regards,
> JP
>

From Albert.Braden at synopsys.com Tue Mar 3 17:07:55 2020
From: Albert.Braden at synopsys.com (Albert Braden)
Date: Tue, 3 Mar 2020 17:07:55 +0000
Subject: OSC future (formerly [glance] Different checksum between CLI and curl)
In-Reply-To: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com>
References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com>
Message-ID: 

Sean, thank you for explaining this. I think I get it now.

-----Original Message-----
From: Sean Mooney 
Sent: Monday, March 2, 2020 10:51 AM
To: Albert Braden ; openstack-discuss 
Subject: Re: OSC future (formerly [glance] Different checksum between CLI and curl)

On Mon, 2020-03-02 at 18:05 +0000, Albert Braden wrote:
> As an openstack operator I was pretty ecstatic to hear that the assortment of clients would be replaced by a single
> client. I would be disappointed to find that a component would not integrate and would continue to use a separate
> client. This would be a step backward IMO.
>
> The discussion about microversions goes over my head, but I would hope to see the developers get together and solve
> the issue and continue working toward integration.

Just to summarise it in a non-technical way:

The project-specific CLIs had a convention where the client would ask the API for the newest microversion it supported and default to that if the client supported it. That meant that the same command executed against two different clouds, with different versions of OpenStack deployed, could have different behavior and different responses. So from an interoperability point of view that is not great, but from a usability point of view the fact that end users don't have to care about microversions, and that the client would try to do the right thing, made some things much simpler.

The unified client (OSC) chose to prioritise interoperability by defaulting to the oldest microversion, so for nova that would be 2.0/2.1, meaning that if you execute the same command on two different clouds with different versions of nova it will behave the same. But if you want to use a feature introduced in a later microversion, you have to explicitly request that via --os-compute-api-version, or set it as an env var or in your clouds.yaml.

So really the difference is that OSC requires the end user to be explicit about which microversion to use, and therefore explicit about the behavior of the API they expect (this is what we expect applications that use the API to do), whereas the project clients tried to just work and use the latest microversion. That mostly worked, except where we removed a feature in a later microversion. For example, we removed the force option on some move operations in nova because allowing forcing caused many harder-to-fix issues; I don't think the nova client would cap at the latest microversion that allowed forcing. So the project clients generally did not guarantee that a command would work without specifying a microversion; it is just that we remove things a hell of a lot less often than we add them.

So as an end user, that is the main difference between using OSC vs the glance client, other than the fact that I believe there is a bunch of stuff you can do with the glance client that is missing in OSC. Parity is a separate discussion, but it is a valid concern.
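To make the trade-off described above concrete, here is a short sketch of how explicit microversions look in practice with OSC. The feature and version shown (booting with no network, added in compute API 2.37) are just an example, and the flavor/image names are placeholders:

# Default: OSC speaks the oldest supported compute microversion (2.1),
# so this behaves the same against any cloud
openstack server list

# Opting in to a newer feature explicitly
openstack --os-compute-api-version 2.37 server create \
    --flavor m1.small --image cirros --nic none demo-server

# The pin can also live in the environment or in clouds.yaml
export OS_COMPUTE_API_VERSION=2.37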
From gr at ham.ie Tue Mar 3 17:11:03 2020 From: gr at ham.ie (Graham Hayes) Date: Tue, 3 Mar 2020 17:11:03 +0000 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: Message-ID: <2ae9b18a-0366-af33-38bb-eb1e2c789928@ham.ie> On 02/03/2020 21:45, Mohammed Naser wrote: > Hi everyone: > > We're now in a spot where we have an increasing amount of projects > that don't end up with a volunteer as PTL, even if the project has > contributors .. no one wants to hold that responsibility alone for > many reasons. With time, the PTL role has become far more overloaded > with many extra responsibilities than what we define in our charter: > > https://governance.openstack.org/tc/reference/charter.html#project-team-leads > > I think it's time to re-evaluate the project leadership model that we > have. I am thinking that perhaps it would make a lot of sense to move > from a single PTL model to multiple maintainers. This would leave it > up to the maintainers to decide how they want to sort the different > requirements/liaisons/contact persons between them. > > The above is just a very basic idea, I don't intend to diving much > more in depth for now as I'd like to hear about what the rest of the > community thinks. > > Thanks, > Mohammed > Yeah, this is a tough spot. When we have talked about this in the past, we have theorized the role could be stripped back to "Project Liaison to the TC". As noted in other replies, the worry is that there is a lot of work that goes to the PTL by default currently. We should look at this work, and if is it not bringing value, just remove it. If it is bringing value, how do we ensure that someone does it? My consistent worry with the removal of the PTL single point of contact, is that without it, this work will get missed. From mdemaced at redhat.com Tue Mar 3 17:14:28 2020 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Tue, 3 Mar 2020 18:14:28 +0100 Subject: [kuryr] Job running open resolver In-Reply-To: <87zhcxpgqb.fsf@meyer.lemoncheese.net> References: <87zhcxpgqb.fsf@meyer.lemoncheese.net> Message-ID: Hi James, Thank you for reporting it. We will take a look at it. Best, Maysa. On Tue, Mar 3, 2020 at 5:11 PM James E. Blair wrote: > Hi, > > The openstack-infra team received a report from one of our > infrastructure donors that a gate job run by Kuryr is running a DNS > resolver open to the Internet. This is dangerous as, if discovered, it > can be used as part of DNS reflection attacks. The community and our > infrastructure donors share an interest in avoiding misuse of our > resources. > > Would you please look into whether this job is perhaps opening its > iptables ports too liberally, and whether that can be avoided? > > The job is kuryr-kubernetes-tempest-containerized-ovn, and the build > which triggered the alerting system is this one: > > https://zuul.opendev.org/t/openstack/build/166301f57b21402d8d8443bb1e17f970 > > Thanks, > > Jim > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Tue Mar 3 15:05:26 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 03 Mar 2020 15:05:26 +0000 Subject: [all] oslotest 4.0 drops the moxstubout and functional modules, stestr dependency Message-ID: Per $subject, the 'oslotest.moxstubout' modules has been removed in oslotest 4.0 and we no longer include 'stestr' in our list of dependencies. 
I think I've resolved all issues in the 'openstack/' namespaced projects, but if not the remedies are to either switch to mock or explicitly included 'stestr' in your 'test-requirements.txt' file, respectively. We've also removed the 'oslotest.functional' module, but there do not appear to have been any users of this module. Stephen From hberaud at redhat.com Tue Mar 3 17:19:27 2020 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 3 Mar 2020 18:19:27 +0100 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: <2ae9b18a-0366-af33-38bb-eb1e2c789928@ham.ie> References: <2ae9b18a-0366-af33-38bb-eb1e2c789928@ham.ie> Message-ID: Le mar. 3 mars 2020 à 18:13, Graham Hayes a écrit : > On 02/03/2020 21:45, Mohammed Naser wrote: > > Hi everyone: > > > > We're now in a spot where we have an increasing amount of projects > > that don't end up with a volunteer as PTL, even if the project has > > contributors .. no one wants to hold that responsibility alone for > > many reasons. With time, the PTL role has become far more overloaded > > with many extra responsibilities than what we define in our charter: > > > > > https://governance.openstack.org/tc/reference/charter.html#project-team-leads > > > > I think it's time to re-evaluate the project leadership model that we > > have. I am thinking that perhaps it would make a lot of sense to move > > from a single PTL model to multiple maintainers. This would leave it > > up to the maintainers to decide how they want to sort the different > > requirements/liaisons/contact persons between them. > > > > The above is just a very basic idea, I don't intend to diving much > > more in depth for now as I'd like to hear about what the rest of the > > community thinks. > > > > Thanks, > > Mohammed > > > > Yeah, this is a tough spot. > > When we have talked about this in the past, we have theorized the role > could be stripped back to "Project Liaison to the TC". As noted in other > replies, the worry is that there is a lot of work that goes to the PTL > by default currently. > > We should look at this work, and if is it not bringing value, just > remove it. > > If it is bringing value, how do we ensure that someone does it? > > My consistent worry with the removal of the PTL single point > of contact, is that without it, this work will get missed. > I agree the best way to miss something is to spread responsibility between members, everybody thinks that others are watchful. -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From clay.gerrard at gmail.com Tue Mar 3 17:19:39 2020 From: clay.gerrard at gmail.com (Clay Gerrard) Date: Tue, 3 Mar 2020 11:19:39 -0600 Subject: [Swift] Errno 13 Permission denied writing objects xattr while upgrading from Kilo to Queens In-Reply-To: References: Message-ID: When we were debugging this issue in IRC it appeared that the issue was related to the introduction of O_TMPFILE support in Swift. Can you confirm that everything is still working properly when you force swift not to use o_tmpfile/linkat? The linkat detection has improved in subsequent releases of Swift, but it's still not clear the latest version would be properly workaround whatever issue your setup is having. When I google for O_TMPFILE and EPERM I see that there was bugs filed against glibc shortly after O_TMPFILE support was introduced to xfs. Can you share the output of "uname -a" and "ldd --version"? Perhaps we could discover or develop a base box image that can reproduce the error. On Tue, Mar 3, 2020 at 6:39 AM Gui Maluf wrote: > Hi all, > > I'm struggling with something really wierd. 3 weeks ago I started > upgrading my Keystone + Swift Ubuntu environment from Kilo to Queens. So I > moved from Ubuntu 14.04 to 18.04. > > I can create new accounts and containers. But no objects. I think between > Mitaka and Newton my storages started to throw Permission Denied error > while writing object metadata. > > I saw that the piece of python code where I getting problem was changed in > Rocky version and in hope of getting things fixed I've upgraded storage > version. But the error persists. > > http://paste.openstack.org/show/790217/ > > I've check everything I could, user, permissions, mount options. But I > still getting this error. > > I wrote a python script for creating files and writing metadata within the > swift mount with swift user and everything works fines. > > Don't know what to do anymore. This is a "dev" environment with two > storages only and a few disks. > Since I'm planning to do in the production environment I'm quite scared if > this happens again. > > Thanks in advance > > -- > *guilherme* \n11 > \t *maluf* > -- Clay Gerrard Wyatt Elementary Dad's Club Chair 2019-20 210 788 9431 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Tue Mar 3 17:23:50 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Tue, 03 Mar 2020 18:23:50 +0100 Subject: [kuryr] Job running open resolver In-Reply-To: <87zhcxpgqb.fsf@meyer.lemoncheese.net> References: <87zhcxpgqb.fsf@meyer.lemoncheese.net> Message-ID: <3833a1f53e5db846d6af212784050c60c81e4e42.camel@redhat.com> On Tue, 2020-03-03 at 08:04 -0800, James E. Blair wrote: > Hi, > > The openstack-infra team received a report from one of our > infrastructure donors that a gate job run by Kuryr is running a DNS > resolver open to the Internet. This is dangerous as, if discovered, it > can be used as part of DNS reflection attacks. The community and our > infrastructure donors share an interest in avoiding misuse of our > resources. > > Would you please look into whether this job is perhaps opening its > iptables ports too liberally, and whether that can be avoided? > > The job is kuryr-kubernetes-tempest-containerized-ovn, and the build > which triggered the alerting system is this one: > > https://zuul.opendev.org/t/openstack/build/166301f57b21402d8d8443bb1e17f970 Hi, The patch that disables the DNS is in review [1]. 
We'll come up with a way to run it locally, at the moment it should be safe for us to just disable it. [1] https://review.opendev.org/#/c/711069/ Thanks, Michał > Thanks, > > Jim > From Albert.Braden at synopsys.com Tue Mar 3 17:28:36 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Tue, 3 Mar 2020 17:28:36 +0000 Subject: OSC future (formerly [glance] Different checksum between CLI and curl) In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> Message-ID: Am I understanding correctly that the Openstack community decided to focus on the unified client, and to deprecate the individual clients, and that the Glance team did not agree with this decision, and that the Glance team is now having a pissing match with the rest of the community, and is unilaterally deciding to continue developing the Glance client and refusing to work on the unified client, or is something different going on? I would ask everyone involved to remember that we operators are down here, and the yellow rain falling on our heads does not smell very good. I think I agree with the suggestion that a --os-compute-api-version=auto option might be a good solution to this conflict. Does anyone want to explain why this isn’t a good idea? From: Abhishek Kekane Sent: Monday, March 2, 2020 10:18 PM To: Artem Goncharov Cc: Sean Mooney ; Albert Braden ; openstack-discuss Subject: Re: OSC future (formerly [glance] Different checksum between CLI and curl) Hi Artem, Thanks for sharing the update. The decision was collectively taken during last cycle by glance team, as we don't have enough people/resources to work on this front. I will be more than happy to change this if anyone comes forward and bridge the gaps. Thanks & Best Regards, Abhishek Kekane On Tue, Mar 3, 2020 at 11:40 AM Artem Goncharov > wrote: On Tue, 3 Mar 2020, 06:08 Abhishek Kekane, > wrote: Hi All, Thank you for making this different thread, OSC is not up to date with the current glance features and neither it has shown any interest in doing so. From glance prospective we also didn't have any bandwidth to work on adding these support to OSC. That's honestly not true this days There is some major feature gap between current OSC and Glance and that's the reason why glance does not recommend to use OSC. That's still not reason to say please don't use it anymore. 1. Support for new image import workflow Partially implemented by me and I continue working on that 2. Support for hidden images Implemented 3. Support for multihash 4. Support for multiple stores I am relying on OSC and especially for image service trying to bring it in a more useful state, thus fixing huge parts in SDK. If anyone is interested to take up this work it will be great. Thanks & Best Regards, Abhishek Kekane On Tue, Mar 3, 2020 at 12:24 AM Sean Mooney > wrote: On Mon, 2020-03-02 at 18:05 +0000, Albert Braden wrote: > As an openstack operator I was pretty ecstatic to hear that the assortment of clients would be replaced by a single > client. I would be disappointed to find that a component would not integrate and would continue to use a separate > client. This would be a step backward IMO. > > The discussion about microversions goes over my head, but I would hope to see the developers get together and solve > the issue and continue working toward integration. just to summerisie it in a non technical way. 
the project specific cli had a convention where the client would ask the api what the newest micoverion it supported and defualt to that if the clinet suported it. that meant that the same command executed against two different clouds with different versions of openstakc deploy could have different behavior and different responces. so from an interoperablity point of view that is not great but from a usablity point of view the fact enduser dont have to care about microverions and the client would try to do the right thing made some things much simpler. the unifeid client (osc) chose to priorities interoperablity by defaulting to the oldest micorverions, so for nova that would be 2.0/2.1 meaning that if you execute the same command on two different cloud with different version of nova it will behave the same but if you want to use a feature intoduced in a later micorverion you have to explcitly request that via --os-compute-api-version or set that as a env var or in you cloud.yaml so really the difference is that osc requires the end user to be explictl about what micoversion to use and therefor be explict about the behavior of the api they expect (this is what we expect application that use the the api should do) where as the project client tried to just work and use the latest microverion which mostly workd excpet where we remove a feature in a later micorverions. for example we removed the force option on some move operation in nova because allowing forcing caused many harder to fix issues. i dont thnk the nova clinet would cap at the latest micorvierion that allowed forcing. so the poject client genreally did not guarantee that a command would work without specifcing a new micorverison it just that we remove things a hell of a lot less often then we add them. so as an end user that is the main difference between using osc vs glance clinet other then the fact i belive there is a bunch of stuff you can do with glance client that is missing in osc. parity is a spereate disucssion but it is vaild concern. -----Original Message----- > From: Radosław Piliszek > > Sent: Monday, March 2, 2020 9:07 AM > To: openstack-discuss > > Subject: Re: [glance] Different checksum between CLI and curl > > Folks, > > sorry to interrupt but I think we have diverged a bit too much from the subject. > Only last Gaetan message is on topic here. > Please switch to new subject to discuss OSC future. > > -yoctozepto > > pon., 2 mar 2020 o 18:03 Tim Bell > napisał(a): > > > > > > > > On 2 Mar 2020, at 16:49, Dmitry Tantsur > wrote: > > > > Hi, > > > > On Mon, Mar 2, 2020 at 4:29 PM Luigi Toscano > wrote: > > > > > > On Monday, 2 March 2020 10:54:03 CET Mark Goddard wrote: > > > > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane > wrote: > > > > > Hi Gaëtan, > > > > > > > > > > Glance team doesn't recommend to use OSC anymore. > > > > > I will recommend you to check the same behaviour using > > > > > python-glanceclient. > > > > > > > > That's not cool - everyone has switched to OSC. It's also the first > > > > time I've heard of it. > > > > > > > > From the end user perspective, we’ve had positive feedback on the convergence to OSC from our cloud consumers. 
> > > > There has been great progress with Manila to get shares included ( > > https://urldefense.proofpoint.com/v2/url?u=https-3A__review.opendev.org_-23_c_642222_26_&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=gfnHFJM7fXXAlOxyUenF0xGqH3gNiec3LxN-Gd5Ey-o&s=SYi8yPy9Dz0CgrkT5P6rTzs3141Gj4K9zO4Ht3GTYAk&e= > > ) and it would be a pity if we’re asking our end users to understand all of the different project names and > > inconsistent options/arguments/syntax. > > > > We had hoped for a project goal to get everyone aligned on OSC but there was not consensus on this, I’d still > > encourage it to simplify the experience for OpenStack cloud consumers. > > > > Tim > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Mar 3 17:49:53 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 3 Mar 2020 11:49:53 -0600 Subject: OSC future (formerly [glance] Different checksum between CLI and curl) In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> Message-ID: <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com> On 3/3/20 11:28 AM, Albert Braden wrote: > > Am I understanding correctly that the Openstack community decided to > focus on the unified client, and to deprecate the individual clients, > and that the Glance team did not agree with this decision, and that > the Glance team is now having a pissing match with the rest of the > community, and is unilaterally deciding to continue developing the > Glance client and refusing to work on the unified client, or is > something different going on? I would ask everyone involved to > remember that we operators are down here, and the yellow rain falling > on our heads does not smell very good. > I definitely would not characterize it that way. With trying not to put too much personal bias into it, here's what I would say the situation is: - Some part of the community has said OSC should be the only CLI and that individual CLIs should go away - Glance is a very small team with very, very limited resources - The OSC team is a very small team with very, very limited resources - CLI capabilities need to be exposed for Glance changes and the easiest way to get them out for the is by updating the Glance CLI - No one from the OSC team has been able to proactively help to make sure these changes make it into the OSC client (see bullet 3) - There exists a sizable functionality gap between per-project CLIs and what OSC provides, and although a few people have done a lot of great work to close that gap, there is still a lot to be done and does not appear the gap will close at any point in the near future based on the current trends -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekuvaja at redhat.com Tue Mar 3 18:04:31 2020 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Tue, 3 Mar 2020 18:04:31 +0000 Subject: OSC future (formerly [glance] Different checksum between CLI and curl) In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> Message-ID: On Tue, Mar 3, 2020 at 6:14 AM Artem Goncharov wrote: > > > On Tue, 3 Mar 2020, 06:08 Abhishek Kekane, wrote: > >> Hi All, >> >> Thank you for making this different thread, >> >> OSC is not up to date with the current glance features and neither it has >> shown any interest in doing so. >> From glance prospective we also didn't have any bandwidth to work on >> adding these support to OSC. 
>> > > > That's honestly not true this days > It's very much true that we do not have cycles for it. If you have found the time now after we've been complaining about the issues without any concrete actions for cycles, great for those who wants to use it. > >> There is some major feature gap between current OSC and Glance and that's >> the reason why glance does not recommend to use OSC. >> > > That's still not reason to say please don't use it anymore. > But it very much is. Tells quite a bit about the communication within the community that this is the first time we hear you actively working on those bits and making progress. Yet the osc is still lacking good year+ behind the feature parity and if the _demand_ is to use osc, "I'm just one person and have only so much time for this" is not good enough. Don't get me wrong, kudos to you to actually taking it on, but too little too late I guess. If 95-100% of user issues with client gets resolved by "Have you tried to use the native glanceclient instead?" and the response is "Yes, it works, thanks." it very much tells that we should not be supporting and promoting the tooling that is not under our control and just does not work. (BTW we do encourage all those users to take their osc issues to the osc team to get fixed, yet we get these raised to us every so often.) This really got to the point where we had that very same discussion in multiple OpenStack summits in a row after the call was made that everything should move to osc and every time we got the same response "We know there are problems and we will look into it." After so many cycles and the gap growing not shrinking just got us to the point of moving forwards (or reverting back to something that actually works for our users). BTW we did announce this and it was discussed in PTG. > > 1. Support for new image import workflow >> > Partially implemented by me and I continue working on that > > 2. Support for hidden images >> > Implemented > > 3. Support for multihash >> > 4. Support for multiple stores >> > > I am relying on OSC and especially for image service trying to bring it in > a more useful state, thus fixing huge parts in SDK. > That's great and obviously you have the freedom to choose the client you prefer to use. Just like we have a moral responsibility to our users to provide them reference client that is up to date, works and the issues raised gets attention. This is all beyond the personal preference, which I get very mixed feedback of depending to whom I talk to. If I send the mail to the mailing list I get those same handful of people yelling right away how unified client is the only way to go and even thinking anything else is heresy. When I talk with people in the field, customers and users in the hallway tracks the message is much more mixed. The osc target audience prefers or just uses GUI instead, then there is a good portion of people who really don't care as they use some automation suite anyways, there is the old school guys who prefers a tool for a job as in the dedicated clients(I have to admit for disclaimer I belong to this group myself) and then there is a group of people who really don't care as long as the client they use every now and then just works. So honestly outside of those few voices in this mailing list I very rarely hear the demand of unified client and much more get the request to provide something that works, which was the major driver for our decision. Harsh, absolutely; justified, I'd like to think so. 
And this is just my personal experience with Glance; we're in the blessed situation of never having jumped on the microversions bandwagon, which in this topic seems to be a whole hidden can of worms.

For all the rest of you interested in the topic: next time you start demanding that we go to OSC again, please put your money where your mouth is first and help those guys to deliver.

Best,
Erno "jokke" Kuvaja

>
>> If anyone is interested to take up this work it will be great.
>>
>> Thanks & Best Regards,
>>
>> Abhishek Kekane
>>
>>
>> On Tue, Mar 3, 2020 at 12:24 AM Sean Mooney wrote:
>>
>>> On Mon, 2020-03-02 at 18:05 +0000, Albert Braden wrote:
>>> > As an openstack operator I was pretty ecstatic to hear that the assortment of clients would be replaced by a single
>>> > client. I would be disappointed to find that a component would not integrate and would continue to use a separate
>>> > client. This would be a step backward IMO.
>>> >
>>> > The discussion about microversions goes over my head, but I would hope to see the developers get together and solve
>>> > the issue and continue working toward integration.
>>> Just to summarize it in a non-technical way:
>>> the project-specific CLI had a convention where the client would ask the API for the newest microversion it supported
>>> and default to that if the client supported it. That meant that the same command executed against two different clouds
>>> with different versions of OpenStack deployed could have different behavior and different responses. So from an
>>> interoperability point of view that is not great, but from a usability point of view, the fact that end users don't have to care
>>> about microversions and the client would try to do the right thing made some things much simpler.
>>>
>>> The unified client (OSC) chose to prioritize interoperability by defaulting to the oldest microversion, so for nova that
>>> would be 2.0/2.1, meaning that if you execute the same command on two different clouds with different versions of nova it
>>> will behave the same, but if you want to use a feature introduced in a later microversion you have to explicitly request
>>> that via --os-compute-api-version, or set it as an env var or in your clouds.yaml.
>>>
>>> So really the difference is that OSC requires the end user to be explicit about what microversion to use and therefore be
>>> explicit about the behavior of the API they expect (this is what we expect applications that use the API to do),
>>> whereas the project client tried to just work and use the latest microversion, which mostly worked except where we removed
>>> a feature in a later microversion. For example, we removed the force option on some move operations in nova because
>>> allowing forcing caused many hard-to-fix issues. I don't think the nova client would cap at the latest microversion that
>>> allowed forcing. So the project client generally did not guarantee that a command would work without specifying a
>>> microversion; it's just that we remove things a hell of a lot less often than we add them.
>>>
>>> So as an end user, that is the main difference between using OSC vs the glance client, other than the fact that I believe there is a
>>> bunch of stuff you can do with the glance client that is missing in OSC. Parity is a separate discussion, but it is a valid
>>> concern.
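To make that microversion difference concrete, here is a minimal sketch (--os-compute-api-version and OS_COMPUTE_API_VERSION are real OSC knobs; the microversion value and server name are made up for illustration):

    # interoperable default: OSC talks to nova at the oldest supported microversion (2.1)
    openstack server show demo-vm

    # explicitly opt in to newer API behavior for one command
    openstack --os-compute-api-version 2.37 server show demo-vm

    # or for the whole session
    export OS_COMPUTE_API_VERSION=2.37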
>>>
>>> -----Original Message-----
>>> > From: Radosław Piliszek
>>> > Sent: Monday, March 2, 2020 9:07 AM
>>> > To: openstack-discuss
>>> > Subject: Re: [glance] Different checksum between CLI and curl
>>> >
>>> > Folks,
>>> >
>>> > sorry to interrupt, but I think we have diverged a bit too much from the subject.
>>> > Only the last Gaetan message is on topic here.
>>> > Please switch to a new subject to discuss the OSC future.
>>> >
>>> > -yoctozepto
>>> >
>>> > On Mon, 2 Mar 2020 at 18:03, Tim Bell wrote:
>>> > >
>>> > > On 2 Mar 2020, at 16:49, Dmitry Tantsur wrote:
>>> > >
>>> > > Hi,
>>> > >
>>> > > On Mon, Mar 2, 2020 at 4:29 PM Luigi Toscano wrote:
>>> > > >
>>> > > > On Monday, 2 March 2020 10:54:03 CET Mark Goddard wrote:
>>> > > > > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane wrote:
>>> > > > > > Hi Gaëtan,
>>> > > > > >
>>> > > > > > Glance team doesn't recommend to use OSC anymore.
>>> > > > > > I will recommend you to check the same behaviour using
>>> > > > > > python-glanceclient.
>>> > > > >
>>> > > > > That's not cool - everyone has switched to OSC. It's also the first
>>> > > > > time I've heard of it.
>>> > > > >
>>> > >
>>> > > From the end user perspective, we’ve had positive feedback on the convergence to OSC from our cloud consumers.
>>> > >
>>> > > There has been great progress with Manila to get shares included (
>>> > > https://review.opendev.org/#/c/642222/26/
>>> > > ) and it would be a pity if we’re asking our end users to understand all of the different project names and
>>> > > inconsistent options/arguments/syntax.
>>> > >
>>> > > We had hoped for a project goal to get everyone aligned on OSC but there was not consensus on this; I’d still
>>> > > encourage it to simplify the experience for OpenStack cloud consumers.
>>> > >
>>> > > Tim

From Albert.Braden at synopsys.com  Tue Mar  3 18:20:59 2020
From: Albert.Braden at synopsys.com (Albert Braden)
Date: Tue, 3 Mar 2020 18:20:59 +0000
Subject: OSC future (formerly [glance] Different checksum between CLI and curl)
In-Reply-To: <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com>
References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com>
Message-ID:

Sean, thank you for clarifying that.

Was my understanding that the community decided to focus on the unified client incorrect? Is the unified/individual client debate still a matter of controversy? Is it possible that the unified client will be deprecated in favor of individual clients after more discussion? I haven’t looked at any of the individual clients since 2018 (except for osc-placement, which is kind of a special case), because I thought they were all going away and could be safely ignored until they did, and I haven’t included any information about the individual clients in the documentation that I write for our users, and if they ask I have been telling them not to use the individual clients. Do I need to start looking at individual clients again, and telling our users to use them in some cases?

We are on Rocky now but I expect that we will upgrade as necessary to stay on supported versions.
From: Sean McGinnis
Sent: Tuesday, March 3, 2020 9:50 AM
To: openstack-discuss at lists.openstack.org
Subject: Re: OSC future (formerly [glance] Different checksum between CLI and curl)

[snip]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From johnsomor at gmail.com  Tue Mar  3 18:34:16 2020
From: johnsomor at gmail.com (Michael Johnson)
Date: Tue, 3 Mar 2020 10:34:16 -0800
Subject: [all][tc] Moving PTL role to "Maintainers"
In-Reply-To: References: <2ae9b18a-0366-af33-38bb-eb1e2c789928@ham.ie>
Message-ID:

Putting my former PTL hat on:

I agree with Slawek. I think the PTL role is still important.

That said, the list of responsibilities can get daunting.

I would lean toward re-affirming the "Project Team Leads" statement you linked and highlighting, in the new contributor guides, that the other tasks can be delegated. Maybe we should also re-word that statement to clarify or soften the "manage day-to-day operations" part.

I think over the history of OpenStack we have had "hands on" PTLs and more "delegate" PTLs, both supporting healthy projects.

The lack of a clear "boundary" for the PTL role has probably led to Fear-Uncertainty-and-Doubt. My hope is that the new contributor guide goal will help clarify the role.
This may also highlight tasks that we can remove (deprecate ask.openstack.org, anyone?).

Michael

On Tue, Mar 3, 2020 at 9:22 AM Herve Beraud wrote:
>
> On Tue, 3 Mar 2020 at 18:13, Graham Hayes wrote:
>>
>> On 02/03/2020 21:45, Mohammed Naser wrote:
>> > Hi everyone:
>> >
>> > We're now in a spot where we have an increasing number of projects
>> > that don't end up with a volunteer as PTL, even if the project has
>> > contributors .. no one wants to hold that responsibility alone for
>> > many reasons.
>> > With time, the PTL role has become far more overloaded
>> > with many extra responsibilities than what we define in our charter:
>> >
>> > https://governance.openstack.org/tc/reference/charter.html#project-team-leads
>> >
>> > I think it's time to re-evaluate the project leadership model that we
>> > have. I am thinking that perhaps it would make a lot of sense to move
>> > from a single-PTL model to multiple maintainers. This would leave it
>> > up to the maintainers to decide how they want to sort the different
>> > requirements/liaisons/contact persons between them.
>> >
>> > The above is just a very basic idea, I don't intend to dive much
>> > more in depth for now as I'd like to hear about what the rest of the
>> > community thinks.
>> >
>> > Thanks,
>> > Mohammed
>> >
>>
>> Yeah, this is a tough spot.
>>
>> When we have talked about this in the past, we have theorized the role
>> could be stripped back to "Project Liaison to the TC". As noted in other
>> replies, the worry is that there is a lot of work that goes to the PTL
>> by default currently.
>>
>> We should look at this work, and if it is not bringing value, just
>> remove it.
>>
>> If it is bringing value, how do we ensure that someone does it?
>>
>> My consistent worry with the removal of the PTL single point
>> of contact is that without it, this work will get missed.
>
> I agree: the best way to miss something is to spread responsibility between members; everybody thinks that the others are keeping watch.
>
> --
> Hervé Beraud
> Senior Software Engineer
> Red Hat - Openstack Oslo
> irc: hberaud

From openstack at nemebean.com  Tue Mar  3 18:48:40 2020
From: openstack at nemebean.com (Ben Nemec)
Date: Tue, 3 Mar 2020 12:48:40 -0600
Subject: [all][tc] Moving PTL role to "Maintainers"
In-Reply-To: References: Message-ID:

On 3/3/20 9:44 AM, Jean-Philippe Evrard wrote:
>> Off hand, I feel like my initial mental response was "Noooo!". Upon
>> thinking of this and talking to Mohammed some, I think it is a
>> necessary evolutionary step. As a burned-out PTL who cares, I wonder
>> "who will step up after me" and carry what I perceive as the
>> organizational and co-ordination overhead, standing on stage, and
>> running meetings. Nothing prevents any contributor from running a
>> community meeting, standing on a stage and giving a talk or project
>> update! We are a community, and single points of contact just lead
>> community members to burnout.
>>
>> Possibly what we are lacking is a "Time for a meeting!" bot.
>>
>
> I am not sure I understand what you are proposing.
>
> Wasn't the liaison system meant to avoid burnout by delegating
> tasks while staying clear on duties?
> It avoids the back and forth of
> communication with some maintainer, solving the question "who is handling
> that?". It still allows delegation. IMO, there was never a limitation
> on the number of liaisons for a single "kind" of liaison. You could
> have 2 ppl working on the releases, 2 on the bugs, etc.

Yeah, I always saw the liaison system as a way to spread the PTL workload, not as something to increase it. It was also a way to distribute work across the team without falling prey to the "someone else will do it" problem, because you still have specific people assigned to specific tasks. I see the Neutron bug deputy and Keystone L1 positions similarly, and I think they're good things.

As to whether all of the liaisons are still needed, I will admit there are only a handful of projects that really still have Oslo liaisons. I'm not sure eliminating the role entirely benefits anyone though; it just means now I have to ping the PTL when I need to coordinate something with a project. Or, if we eliminate the PTL, then I have to throw issues into the IRC void and hope someone responds. When we have cross-project changes that involve a lot of projects, that means inevitably someone won't respond and we have to start delivering ultimatums. Maybe that's okay, but it slows things down (we have to wait for a response, even if we don't get one) and always feels kind of unfriendly.

I guess what I'm saying is that eliminating the liaison position doesn't mean we stop needing a point of contact from a project. I suspect the same thing applies to the release team.

>
> Don't get me wrong: on the "drop the PTL" story, I was more in the
> "we should drop this" clan. When I discussed it last time with Mohammed
> (and others, but it was loooooong ago), I didn't focus on the liaisons.
> But before side-tracking this thread, I would like to understand what
> the pain points in the current model are (explicitly! examples!), and
> how moving away from PTLs and liaisons will help the team of
> maintainers. At first sight, it looks like team duties will be vague.
> There are various levels of success with self-organizing teams.

I've been skeptical of this before and I won't reiterate all of those arguments again unless it proves necessary, but I'm also curious what this would solve that PTL delegation can't already. Is it just a perception thing? Maybe we re-brand the PTL position "Project Delegator" or something to make it clear to potential candidates that they don't have to be responsible for everything that goes on in a project? Unless we feel there are things PTLs are doing that don't need to be done, in which case we should clearly stop doing those things, the workload remains the same no matter what we call it, "maintainers" or "PTL and liaisons".

>
>
> Regards,
> JP
>
>

From radoslaw.piliszek at gmail.com  Tue Mar  3 18:52:49 2020
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Tue, 3 Mar 2020 19:52:49 +0100
Subject: [all][tc] Moving PTL role to "Maintainers"
In-Reply-To: References: <2ae9b18a-0366-af33-38bb-eb1e2c789928@ham.ie>
Message-ID:

I agree with the folks before me.
As my predecessors have stated, throwing all folks into a "maintainers" group/team would kill the responsibility relation.
I believe we could get away with splitting the PTL role into multiple subroles and actually let each project decide on what these would be to satisfy that project's needs (but give some guidelines too, maybe?).
Also, it would make sense to allow assigning these on primary and secondary terms.
Both would get "privileges"; everyone is obviously responsible for themselves, but the primary is responsible whenever they are available, and the role is POC-like. This way we could both lessen the load on the PTL and officially allow vices (deputies).
That said, these would need to get assigned, and /me is not sure how to best approach this atm.

PS: Deprecate ask.o.o all the way. :-)

-yoctozepto

On Tue, 3 Mar 2020 at 19:42, Michael Johnson wrote:
>
> Putting my former PTL hat on:
>
> I agree with Slawek. I think the PTL role is still important.
>
> That said, the list of responsibilities can get daunting.
>
> I would lean toward re-affirming the "Project Team Leads" statement
> you linked and highlighting, in the new contributor guides, that the
> other tasks can be delegated. Maybe we should also re-word that
> statement to clarify or soften the "manage day-to-day operations"
> part.
>
> I think over the history of OpenStack we have had "hands on" PTLs and
> more "delegate" PTLs, both supporting healthy projects.
>
> The lack of a clear "boundary" for the PTL role has probably led to
> Fear-Uncertainty-and-Doubt. My hope is that the new contributor guide
> goal will help clarify the role.
> This may also highlight tasks that we can remove (deprecate
> ask.openstack.org, anyone?).
>
> Michael

From tim.bell at cern.ch  Tue Mar  3 18:55:38 2020
From: tim.bell at cern.ch (Tim Bell)
Date: Tue, 3 Mar 2020 19:55:38 +0100
Subject: OSC future (formerly [glance] Different checksum between CLI and curl)
In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com>
Message-ID: <36FB0C7D-C3E1-4C3A-B923-1F68764D44A8@cern.ch>

> On 3 Mar 2020, at 19:20, Albert Braden wrote:
>
> Sean, thank you for clarifying that.
>
> Was my understanding that the community decided to focus on the unified client incorrect? Is the unified/individual client debate still a matter of controversy? Is it possible that the unified client will be deprecated in favor of individual clients after more discussion? I haven’t looked at any of the individual clients since 2018 (except for osc-placement, which is kind of a special case), because I thought they were all going away and could be safely ignored until they did, and I haven’t included any information about the individual clients in the documentation that I write for our users, and if they ask I have been telling them not to use the individual clients. Do I need to start looking at individual clients again, and telling our users to use them in some cases?
>
>

I remember a forum discussion where a community goal was proposed to focus on OSC rather than individual project CLIs (I think Matt and I were the proposers). There were concerns about the effort to do this and that it would potentially be multi-cycle.

My experience in discussions with the CERN user community and other OpenStack operators is that OSC is felt to be the right solution for the end-user-facing parts of the cloud (admin commands could be another discussion if necessary). Experienced admin operators can remember that glance looks after images and nova looks after instances. Our average user can get very confused, especially given that OSC supports additional options for authentication (such as Kerberos and certificates along with clouds.yaml), so users need to re-authenticate with a different openrc to work on their project.
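For instance, a minimal clouds.yaml along these lines (illustrative names and endpoint; the format is the standard one read by OSC and openstacksdk) lets a user switch projects with a flag instead of sourcing a different openrc:

    clouds:
      project-a:
        auth:
          auth_url: https://keystone.example.com:5000/v3
          username: alice
          project_name: project-a
          user_domain_name: Default
          project_domain_name: Default
      project-b:
        auth:
          auth_url: https://keystone.example.com:5000/v3
          username: alice
          project_name: project-b
          user_domain_name: Default
          project_domain_name: Default

    # then, for example:
    openstack --os-cloud project-b server list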
While I understand there are limited resources all round, I would prefer that we focus on adding new project functions to OSC, which will eventually lead to feature parity.

Attracting ‘drive-by’ contributions from operations staff for OSC work is more likely to be achieved if it makes the operations work less (e.g. saving on special end-user documentation by contributing code). This is demonstrated by the CERN team's contributions to the OSC ‘coe' and ‘share' functionality, along with lots of random OSC updates, as listed at https://www.stackalytics.com/?company=cern&metric=commits&module=python-openstackclient

BTW, I also would vote for =auto as the default.

Tim

> We are on Rocky now but I expect that we will upgrade as necessary to stay on supported versions.
>
> From: Sean McGinnis
> Sent: Tuesday, March 3, 2020 9:50 AM
> To: openstack-discuss at lists.openstack.org
> Subject: Re: OSC future (formerly [glance] Different checksum between CLI and curl)
>
> [snip]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tim.bell at cern.ch  Tue Mar  3 19:00:35 2020
From: tim.bell at cern.ch (Tim Bell)
Date: Tue, 3 Mar 2020 20:00:35 +0100
Subject: OSC future (formerly [glance] Different checksum between CLI and curl)
In-Reply-To: <36FB0C7D-C3E1-4C3A-B923-1F68764D44A8@cern.ch>
References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com> <36FB0C7D-C3E1-4C3A-B923-1F68764D44A8@cern.ch>
Message-ID:

> On 3 Mar 2020, at 19:55, Tim Bell wrote:
>
>> [snip]
> I remember a forum discussion where a community goal was proposed to focus on OSC rather than individual project CLIs (I think Matt and I were the proposers). There were concerns about the effort to do this and that it would potentially be multi-cycle.

BTW, I found the etherpad from Berlin (https://etherpad.openstack.org/p/BER-t-series-goals) and the associated mailing list discussion at http://lists.openstack.org/pipermail/openstack-dev/2018-September/135107.html

> [snip]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From radoslaw.piliszek at gmail.com  Tue Mar  3 19:00:49 2020
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Tue, 3 Mar 2020 20:00:49 +0100
Subject: OSC future (formerly [glance] Different checksum between CLI and curl)
In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com>
Message-ID:

Folks,

I think we should clarify the problem we are discussing.
The discussion started when the response to "there could be a bug in OSC" was "we are not recommending OSC".
I don't feel like OSC lagging behind the per-project clients in terms of features is a bad thing, considering the workload.
On the other hand, suddenly making OSC "not recommended" for a project already supported by / supporting OSC is a <>. (TM)

I feel like there should be an official stance on how we are going to handle the clients situation, and we should just document it.

-yoctozepto

On Tue, 3 Mar 2020 at 19:11, Erno Kuvaja wrote:
>
> [snip]
From johnsomor at gmail.com  Tue Mar  3 19:06:47 2020
From: johnsomor at gmail.com (Michael Johnson)
Date: Tue, 3 Mar 2020 11:06:47 -0800
Subject: OSC future (formerly [glance] Different checksum between CLI and curl)
In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com>
Message-ID:

Erno and I have had this discussion in the hallway at the PTGs before, so my response should be no surprise.

The Octavia client is exclusively an OpenStack Client (OSC) plugin. This is partly because python-neutronclient (Octavia was a neutron sub-project at the time) was already deprecated, but also because we saw the advantages and much improved user experience with OSC.

We also exclusively use OSC for our devstack plugin scripts, etc. This includes interacting with glance[1]. Personally, I have also moved to exclusively using OSC for my development work. I load/delete/show/tag images in glance on a daily basis. From my perspective, the basic features of glance work well, if not better, due to the standardized output filtering/formatting support.

So, I am an advocate for the OpenStack Client work and have contributed to it[2]. I also understand that glance has some development resource constraints.

So, I have a few questions for the glance team:
1. Do we have RFEs for the feature gaps between python-glanceclient and OSC?
2. Do we have a worklist that prioritizes these RFEs in order of use?
3. Do we have open stories for any OSC issues that may impact the above RFEs?

If so, can you reply to this list with those links? I think there are folks here offering to help or enlist help to resolve these issues.

Michael

[1] https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L48
[2] https://review.opendev.org/#/c/662859/

On Tue, Mar 3, 2020 at 10:24 AM Albert Braden wrote:
>
> [snip]
From Albert.Braden at synopsys.com  Tue Mar  3 19:16:35 2020
From: Albert.Braden at synopsys.com (Albert Braden)
Date: Tue, 3 Mar 2020 19:16:35 +0000
Subject: Trouble with large RAM VMs - (formerly RE: Virtio memory balloon driver)
Message-ID:

I successfully blacklisted the "memory balloon" driver in a VM image and found that another driver is choking on the large memory. The guys in #centos think it might be the USB or PS/2 drivers. Since these are kernel drivers they cannot be blacklisted; the only way to not load them is to not call them.

On the hypervisor, in /etc/libvirt/qemu/<instance>.xml, I see this:

<input type='tablet' bus='usb'/>
<input type='mouse' bus='ps2'/>
I tried removing those lines, but it looks like the XML file is re-created whenever I stop and start the VM.

In nova.conf I see "#pointer_model = usbtablet"

If I set "pointer_model = " on the HV and then restart nova, I see this in the log:

2020-03-03 11:05:17.233 228915 ERROR nova ConfigFileValueError: Value for option pointer_model is not valid: Valid values are [None, ps2mouse, usbtablet], but found ''

If I set "pointer_model = None" then I see this:

2020-03-03 11:06:24.761 229290 ERROR nova ConfigFileValueError: Value for option pointer_model is not valid: Valid values are [None, ps2mouse, usbtablet], but found 'None'

What am I missing?

#
# Generic property to specify the pointer type.
#
# Input devices allow interaction with a graphical framebuffer. For
# example to provide a graphic tablet for absolute cursor movement.
#
# If set, the 'hw_pointer_model' image property takes precedence over
# this configuration option.
#
# Possible values:
#
# * None: Uses default behavior provided by drivers (mouse on PS2 for
# libvirt x86)
# * ps2mouse: Uses relative movement. Mouse connected by PS2
# * usbtablet: Uses absolute movement. Tablet connect by USB
#
# Related options:
#
# * usbtablet must be configured with VNC enabled or SPICE enabled and SPICE
# agent disabled. When used with libvirt the instance mode should be
# configured as HVM.
# (string value)
# Possible values:
# <None> -
# ps2mouse -
# usbtablet -
pointer_model = None

-----Original Message-----
From: Albert Braden
Sent: Wednesday, February 5, 2020 9:34 AM
To: openstack-discuss at lists.openstack.org
Subject: RE: Virtio memory balloon driver

When I start and stop the giant VM I don't see any evidence of OOM errors. I suspect that the #centos guys may be correct when they say that the "Virtio memory balloon" device is not capable of addressing that much memory, and that I must disable it if I want to create VMs with 1.4T RAM. Setting "mem_stats_period_seconds = 0" doesn't seem to disable it. How are others working around this? Is anyone else creating CentOS 6 VMs with 1.4T or more RAM?

From fungi at yuggoth.org  Tue Mar  3 19:44:27 2020
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 3 Mar 2020 19:44:27 +0000
Subject: OSC future (formerly [glance] Different checksum between CLI and curl)
In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com>
Message-ID: <20200303194427.4cmiy6rgaxfrlp6b@yuggoth.org>

On 2020-03-03 18:20:59 +0000 (+0000), Albert Braden wrote:
[...]
> Was my understanding that the community decided to focus on the
> unified client incorrect? Is the unified/individual client debate
> still a matter of controversy?

I'll try to avoid revisiting points other folks have made in this thread, but feel obliged to highlight that "the community" does not possess a hive mind, and the idea that it "decides" things is somewhat of a misunderstanding. I would characterize the focus on the unified client as a consensus choice, but that doesn't mean that everyone who has a stake in that choice is in agreement with the direction (and with a community the size of OpenStack's, it's unlikely everyone will ever agree unanimously on a topic).

> Is it possible that the unified client will be deprecated in favor
> of individual clients after more discussion?
[...]

Anything is possible. I don't personally consider that a likely outcome, but I don't think anyone has a crystal ball which can tell us that for certain.
> Do I need to start looking at individual clients again, and
> telling our users to use them in some cases?
[...]

I would take it as a sign not to ask the Glance contributors if you run into a problem using the unified OpenStack client or SDK, and to try reproducing a problem with direct API interactions before reporting a bug against the Glance service (to avoid being told that they won't even look at it if your reproducer relies on OSC).
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From mordred at inaugust.com  Tue Mar  3 19:55:26 2020
From: mordred at inaugust.com (Monty Taylor)
Date: Tue, 3 Mar 2020 13:55:26 -0600
Subject: OSC future (formerly [glance] Different checksum between CLI and curl)
In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com>
Message-ID:

> On Mar 3, 2020, at 12:20 PM, Albert Braden wrote:
>
> Sean, thank you for clarifying that.
>
> Was my understanding that the community decided to focus on the unified client incorrect? Is the unified/individual client debate still a matter of controversy? Is it possible that the unified client will be deprecated in favor of individual clients after more discussion?

Nope. Several of them even already don't exist or are deprecated. Additionally, several surrounding tools have explicit policies to NEVER touch python-*client libraries. Specifically Ansible - but I believe Salt has also migrated to SDK - and then any app developer who wants to be able to sanely target multiple clouds uses SDK instead of python-*client.

So I can't do anything about people preferring individual projects - but the unified stuff is DEFINITELY not getting deprecated or going away - quite simply because it cannot. And I hope that we can continue to convince more people of the power inherent in doing work to support their service in SDK/OSC instead of off in their own corner - but as I said, that I can't do anything about.

> [snip]
From mordred at inaugust.com  Tue Mar  3 20:12:30 2020
From: mordred at inaugust.com (Monty Taylor)
Date: Tue, 3 Mar 2020 14:12:30 -0600
Subject: OSC future (formerly [glance] Different checksum between CLI and curl)
In-Reply-To: <36FB0C7D-C3E1-4C3A-B923-1F68764D44A8@cern.ch>
References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com> <36FB0C7D-C3E1-4C3A-B923-1F68764D44A8@cern.ch>
Message-ID:

> On Mar 3, 2020, at 12:55 PM, Tim Bell wrote:
>
>> [snip]
>
> While I understand there are limited resources all round, I would prefer that we focus on adding new project functions to OSC, which will eventually lead to feature parity.
>
> Attracting ‘drive-by’ contributions from operations staff for OSC work is more likely to be achieved if it makes the operations work less (e.g.
saving on special end-user documentation by contributing code). This is demonstrated by the CERN team's contributions to the OSC ‘coe' and ‘share' functionality, along with lots of random OSC updates, as listed at https://www.stackalytics.com/?company=cern&metric=commits&module=python-openstackclient

We’ve been working in SDK also to empower more people directly, by being a bit more liberal with core. I think it’s time to start applying this approach to OSC as well. It’s never going to work to require the OSC team to implement everything, but neither is it super awesome to completely decentralize, as the plugin/entrypoints issues have shown. I think SDK has been happy with blessing service humans rather quickly.

>
> BTW, I also would vote for =auto as the default.

This is what will be the case as we move towards replacing more and more of OSC’s guts with SDK. But let me describe it slightly differently:

The way this works in SDK is that there is ONE user interface, which wants to track the latest as best as it can. But we can’t just do “auto” - because microversions can introduce breaking changes, so we need to add support to SDK for the most recent microversion we’re aware of. Then SDK negotiates to find the best microversion that it understands, and it always uses that. SDK has the POV that an end user should almost never need to care about a microversion - if a user cares, they are either in nova-core or we’ve done something wrong.

Case in point is this:

https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/compute/v2/server.py#L457-L474

The nova team rightfully changed the semantics of live migrate because of safety. Mriedem put together the logic to express what the appropriate behavior would be, given a set of inputs, across the range of versions, so that a user can do things and they’ll work. The end result is a live_migrate call that works across versions as safely as it can.

I mention all of this because getting this work done was one of the key things we wanted to get right before we started transitioning OSC in earnest. It’s there - it works, and it’s being used across nova and ironic.

So - I hear what people want from OSC - they want a thing that behaves like auto does. We agree - and the mechanism that makes us able to do that _safely_ is things like the above.

> Tim
>
>> [snip]
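For readers curious what that looks like from the SDK side, here is a rough sketch (cloud and server names are made up; the proxy method follows the server.py region linked above, but check your openstacksdk release for the exact signature):

    import openstack

    # Credentials come from clouds.yaml; no microversion is pinned by the caller.
    conn = openstack.connect(cloud='mycloud')

    # The SDK negotiates the newest compute microversion both sides understand
    # and adapts the request it sends accordingly (the live_migrate logic in
    # the linked server.py is one example of that adaptation).
    server = conn.compute.find_server('demo-vm')
    conn.compute.live_migrate_server(server, host=None)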
>> >> With trying not to put too much personal bias into it, here's what I would say the situation is: >> >> - Some part of the community has said OSC should be the only CLI and that individual CLIs should go away >> - Glance is a very small team with very, very limited resources >> - The OSC team is a very small team with very, very limited resources >> - CLI capabilities need to be exposed for Glance changes and the easiest way to get them out for the is by updating the Glance CLI >> - No one from the OSC team has been able to proactively help to make sure these changes make it into the OSC client (see bullet 3) >> - There exists a sizable functionality gap between per-project CLIs and what OSC provides, and although a few people have done a lot of great work to close that gap, there is still a lot to be done and does not appear the gap will close at any point in the near future based on the current trends >> From gagehugo at gmail.com Tue Mar 3 20:29:49 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 3 Mar 2020 14:29:49 -0600 Subject: [openstack-helm] Virtual Midcycle - March 2020 Message-ID: Hello everyone, The openstack-helm team is looking to host a virtual midcycle this month to discuss a variety of topics in a more focused group via video conferencing. We have started an etherpad to track anyone who wishes to attend as well as any topics that want to be discussed. https://etherpad.openstack.org/p/osh-virtual-ptg-2020-03 Also there is a doodle poll to determine a time-slot in the next few weeks for the best time, we will try to choose the best time-slot based on the number of available people, please check it out here: https://doodle.com/poll/g6uvdb4rbad9s8gb We will update the etherpad to track any changes in meeting schedule, as well as when we find a web conference tool that works best. - Gage -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Mar 3 20:34:27 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 3 Mar 2020 21:34:27 +0100 Subject: [nova] [neutron] multiple fixed_ip In-Reply-To: References: <20200303130104.GA29109@sync> Message-ID: Hi, If Your network has got IPv4 and IPv6 subnet, it should be done by default that created port will have one IPv4 and one IPv6 allocated. I just did it with nova client: nova boot --flavor m1.micro --image cirros-0.4.0 --nic net-name=private test-vm And my vm has IPs like: +--------------------------------------+---------+--------+------------+-------------+---------------------------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+---------+--------+------------+-------------+---------------------------------------------------------+ | 92385f1f-7899-40b7-94ec-bbceb6749722 | test-vm | ACTIVE | - | Running | private=fdc8:d3a9:de7b:0:f816:3eff:fe0d:16f5, 10.0.0.31 | +--------------------------------------+---------+--------+------------+-------------+————————————————————————————+ Also from novaclient help message it seems that You should be able to specify such IPv4 and IPv6 addresses: nova help boot | grep nic [--nic ] But that I didn’t try actually. > On 3 Mar 2020, at 14:23, Radosław Piliszek wrote: > > Hi Arnaud, > > Non-core here. > Last time I checked you had to decide on one and then update with > neutron (or first create the port with neutron and then give it to > nova :-) ). > Moreover, not sure if IPv6 goes through Nova directly or not (docs > suggest still nah). 
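If You need specific addresses on both subnets, the trick Radosław mentions (create the port with neutron first, then give it to nova) can be scripted. A rough, untested sketch with openstacksdk - the cloud name, subnet IDs, flavor and image names are placeholders, and the two addresses are just the ones from my example above:

import openstack

conn = openstack.connect(cloud="mycloud")  # assumes a clouds.yaml entry

network = conn.network.find_network("private")
port = conn.network.create_port(
    network_id=network.id,
    fixed_ips=[
        {"subnet_id": "<ipv4-subnet-id>", "ip_address": "10.0.0.31"},
        {"subnet_id": "<ipv6-subnet-id>",
         "ip_address": "fdc8:d3a9:de7b:0:f816:3eff:fe0d:16f5"},
    ],
)

# Boot on the pre-created port instead of letting nova allocate one.
server = conn.compute.create_server(
    name="test-vm",
    flavor_id=conn.compute.find_flavor("m1.micro").id,
    image_id=conn.compute.find_image("cirros-0.4.0").id,
    networks=[{"port": port.id}],
)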
> > -yoctozepto
> >
> > On Tue, 3 Mar 2020 at 14:09, Arnaud Morin wrote:
> >>
> >>
> >> Hello all,
> >>
> >> I was doing some tests to create a server using the nova API.
> >> My objective is to create a server with one port but multiple IPs (one
> >> IPv4 and one IPv6).
> >>
> >> If I understand the neutron API correctly, I can create a port using the
> >> fixed_ips array parameter [1]
> >>
> >> Unfortunately, on the nova side, it seems to only accept a string with only
> >> one ip (fixed_ip) [2]
> >>
> >> Is it mandatory for me to create the port with neutron?
> >> Or is there any trick that I missed on the nova API side?
> >>
> >> Thanks!
> >>
> >>
> >> [1] https://docs.openstack.org/api-ref/network/v2/?expanded=create-port-detail#ports
> >> [2] https://docs.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server
> >>
> >>
> >>
> >> --
> >> Arnaud Morin
> >>
> >>

From pierre at stackhpc.com Tue Mar 3 21:36:59 2020
From: pierre at stackhpc.com (Pierre Riteau)
Date: Tue, 3 Mar 2020 21:36:59 +0000
Subject: [glance] Different checksum between CLI and curl
In-Reply-To: 
References: <5AC5FCDE-4F8E-478B-9BA0-34C527DDC2E2@inaugust.com> <10cb06508fa2146207462a9778253c22@incloudus.com> <40790667-B696-4CBC-9CD2-41A684D97D64@inaugust.com>
Message-ID: 

Hi Gaëtan,

Going back to your original question: have you tried opening the file downloaded with curl in a text editor? I expect that it is actually slightly bigger than the correct file downloaded with OSC, because it would include the HTTP headers at the top. You should drop the `-i` option from your curl command line and try again:

-i, --include
(HTTP) Include the HTTP-header in the output. The HTTP-header includes things like server-name, date of the document, HTTP-version and more...

Best wishes,
Pierre Riteau (priteau)

On Mon, 2 Mar 2020 at 15:21, wrote:
>
> Abhishek,
>
> Thanks for your answer, I tried both CLIs (Train release) and the issue is
> still the same.
>
> Paste of the "curl" command: http://paste.openstack.org/show/790197/
>
> Result of the "md5sum" on the file created by the "curl":
> $ md5sum /tmp/kernel.glance
> c3726de8e03158305453f328d85e9957 /tmp/kernel.glance
>
> Like Mark and Radoslaw, I'm quite surprised about OSC being deprecated.
> Do you have any release note about this?
>
> Thanks for your help.
>
> Gaëtan
>
> curl -g -i -X GET
> http://10.0.0.11:9292/v2/images/de39fc9c-b943-47e3-82c4-bd6d577a9577/file
> -H "Content-Type: application/octet-stream" -H "User-Agent:
> python-glanceclient" -H "X-Auth-Token: $token" --output
> /tmp/kernel.glance -v
> Note: Unnecessary use of -X or --request, GET is already inferred.
> * Expire in 0 ms for 6 (transfer 0x557679b1de80)
> * Trying 10.0.0.11...
> * TCP_NODELAY set > * Expire in 200 ms for 4 (transfer 0x557679b1de80) > % Total % Received % Xferd Average Speed Time Time Time > Current > Dload Upload Total Spent Left > Speed > 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- > 0* Connected to 10.0.0.11 (10.0.0.11) port 9292 (#0) > > GET /v2/images/de39fc9c-b943-47e3-82c4-bd6d577a9577/file HTTP/1.1 > > Host: 10.0.0.11:9292 > > Accept: */* > > Content-Type: application/octet-stream > > User-Agent: python-glanceclient > > X-Auth-Token: > > gAAAAABeXRzKVS3uQIIv9t-wV7njIV-T9HIvcwFqcHNivrpyBlesDtgAj1kpWk59a20EJLUo8IeHpTdKgVFwhnAVvbSWHY-HQvxu5dwSFsK4A-7CzAOwdp3svSqxB-FdwWhsY_PElftMX4geA-y_YFZJamefZapiAv6g1gSm-BSv5GYQ0hj3yGY > > > 0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- > 0< HTTP/1.1 200 OK > < Content-Type: application/octet-stream > < Content-Md5: 26c6d5c3d8ba9fd4bc4d1ee5959a827c > < Content-Length: 5631728 > < X-Openstack-Request-Id: req-e7ba2455-780f-48a8-b6a2-1c6741d0e368 > < Date: Mon, 02 Mar 2020 15:03:53 GMT > < > { [32768 bytes data] > 100 5499k 100 5499k 0 0 4269k 0 0:00:01 0:00:01 --:--:-- > 4269k > * Connection #0 to host 10.0.0.11 left intact > > > On 2020-03-02 04:54, Mark Goddard wrote: > > On Mon, 2 Mar 2020 at 06:28, Abhishek Kekane > > wrote: > >> > >> Hi Gaëtan, > >> > >> Glance team doesn't recommend to use OSC anymore. > >> I will recommend you to check the same behaviour using > >> python-glanceclient. > > > > That's not cool - everyone has switched to OSC. It's also the first > > time I've heard of it. > > > >> > >> Thanks & Best Regards, > >> > >> Abhishek Kekane > >> > >> > >> On Sat, Feb 29, 2020 at 3:54 AM Monty Taylor > >> wrote: > >>> > >>> > >>> > >>> > On Feb 28, 2020, at 4:15 PM, gaetan.trellu at incloudus.com wrote: > >>> > > >>> > Hey Monty, > >>> > > >>> > If I download the image via the CLI, the checksum of the file matches the checksum from the image details. > >>> > If I download the image via "curl", the "Content-Md5" header matches the image details but the file checksum doesn't. > >>> > > >>> > The files have the same size, this is really weird. > >>> > >>> WOW. > >>> > >>> I still don’t know the issue - but my unfounded hunch is that the > >>> curl command is likely not doing something it should be. If OSC is > >>> producing a file that matches the image details, that seems like the > >>> right choice for now. > >>> > >>> Seriously fascinating though. > >>> > >>> > Gaëtan > >>> > > >>> > On 2020-02-28 17:00, Monty Taylor wrote: > >>> >>> On Feb 28, 2020, at 2:29 PM, gaetan.trellu at incloudus.com wrote: > >>> >>> Hi guys, > >>> >>> Does anyone know why the md5 checksum is different between the "openstack image save" CLI and "curl" commands? > >>> >>> During the image creation a checksum is computed to check the image integrity, using the "openstack" CLI match the checksum generated but when "curl" is used by following the API documentation[1] the checksum change at every "download". > >>> >>> Any idea? > >>> >> That seems strange. I don’t know off the top of my head. I do know > >>> >> Artem has patches up to switch OSC to using SDK for image operations. > >>> >> https://review.opendev.org/#/c/699416/ > >>> >> That said, I’d still expect current OSC checksums to be solid. Perhaps > >>> >> there is some filtering/processing being done cloud-side in your > >>> >> glance? If you download the image to a file and run a checksum on it - > >>> >> does it match the checksum given by OSC on upload? Or the checksum > >>> >> given by glance API on download? 
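For what it's worth, the same download can be checked from Python without curl at all. A minimal, untested sketch (requests + hashlib; the token value is a placeholder, the endpoint and image UUID are the ones from my curl command above):

import hashlib
import requests

URL = ("http://10.0.0.11:9292/v2/images/"
       "de39fc9c-b943-47e3-82c4-bd6d577a9577/file")
resp = requests.get(URL, headers={"X-Auth-Token": "<token>"}, stream=True)
resp.raise_for_status()

md5 = hashlib.md5()
with open("/tmp/kernel.glance", "wb") as f:
    for chunk in resp.iter_content(chunk_size=65536):
        f.write(chunk)
        md5.update(chunk)

# requests never mixes response headers into the body, so unlike
# `curl -i` these two values should match.
print(md5.hexdigest(), resp.headers.get("Content-Md5"))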
> >>> >>> Thanks, > >>> >>> Gaëtan > >>> >>> [1] https://docs.openstack.org/api-ref/image/v2/index.html?expanded=download-binary-image-data-detail#download-binary-image-data > >>> > > >>> > >>> > From openstack at nemebean.com Tue Mar 3 23:05:53 2020 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 3 Mar 2020 17:05:53 -0600 Subject: [oslo][infra] OpenDev git repo for oslo.policy missing commit Message-ID: <1129d4b2-0a8d-d034-5ded-7e49e6e49a77@nemebean.com> Found a weird thing today. The OpenDev oslo.policy repo[0] is missing [1]. Even stranger, I see it on the Github mirror[2]. Any idea what happened here? -Ben 0: https://opendev.org/openstack/oslo.policy/commits/branch/master 1: https://review.opendev.org/#/c/708212/ 2: https://github.com/openstack/oslo.policy/commits/master From cboylan at sapwetik.org Tue Mar 3 23:42:53 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 03 Mar 2020 15:42:53 -0800 Subject: [oslo][infra] OpenDev git repo for oslo.policy missing commit In-Reply-To: <1129d4b2-0a8d-d034-5ded-7e49e6e49a77@nemebean.com> References: <1129d4b2-0a8d-d034-5ded-7e49e6e49a77@nemebean.com> Message-ID: On Tue, Mar 3, 2020, at 3:05 PM, Ben Nemec wrote: > Found a weird thing today. The OpenDev oslo.policy repo[0] is missing > [1]. Even stranger, I see it on the Github mirror[2]. Any idea what > happened here? Some other readers may notice that the commit actually does show up for them. The reason for this is the commit is only missing from one of eight backend gitea servers. You can observe this by visiting https://gitea0X.opendev.org:3000/openstack/oslo.policy/commits/branch/master and replacing the X with 1 through 8. Number 5 is the lucky server. My hunch is that this commit merging and subsequently being replicated coincided with a restart of gitea (or related service) on gitea05. And the replication event was missed. We've tried to ensure we replicate to catch up after explicit upgrades, which implies to me that maybe the db container updated. Note that https://review.opendev.org/#/c/705804/ merged on the same day but after the missing commit. In any case I've triggered a full rereplication to gitea05 to make sure we are caught up and will work through the others as well to ensure none are missed. You should be able to confirm that the commit is present in about 20 minutes. Longer term the plan here is to run a single Gitea cluster which will allow us to do rolling restarts of services without impacting replication. Unfortunately, this requires updates to Gitea to support that. > > -Ben > > 0: https://opendev.org/openstack/oslo.policy/commits/branch/master > 1: https://review.opendev.org/#/c/708212/ > 2: https://github.com/openstack/oslo.policy/commits/master > > From kennelson11 at gmail.com Tue Mar 3 23:43:46 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 3 Mar 2020 15:43:46 -0800 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #4 Message-ID: Hello! The decision has been made and the template has been merged[1] with the actual governance update well on its way to being accepted as well[2][3]. With that in mind, its time to get down to business! If You Haven't Started ================= You're in luck! Nothing has to change, just run the cookie cutter[8] and make whatever changes you need to the template and get it added to the repos for your project. Also, please keep the task tracker in StoryBoard up to date[4] as you work. 
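If you prefer driving the template from Python instead of the command line, something like this should work (a sketch, not part of the official goal docs; 'my-project' stands in for your repo, and repo_name is the template variable you can see in the template URL in [8]):

from cookiecutter.main import cookiecutter

# Renders the OpenStack cookiecutter template non-interactively;
# variables not overridden here fall back to the template defaults.
cookiecutter(
    "https://opendev.org/openstack/cookiecutter",
    no_input=True,
    extra_context={"repo_name": "my-project"},
)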
If You Had 'Completed' it before Update #2 ================================ Go and check out the governance change[2] and the template[1] to make sure there isn't anything you did differently-- depending on when you 'completed' the goal, you might have some changes to make to be in line with the current (and presumably final) completion criteria. And, as always, please make sure to keep the tasks in StoryBoard up to date[4]. Previous updates if you missed them[5][6][7]. If you have any questions, please let me know. If you need help with reviewing, please feel free to add me as a reviewer. We can still get this done in time! Thanks, -Kendall (diablo_rojo) [1] Cookie Cutter Change https://review.opendev.org/#/c/708672/ [2] Governance Goal Change https://review.opendev.org/#/c/709617/ [3] Governance Goal Change Typo Fix https://review.opendev.org/#/c/710332 [4] StoryBoard Tracking: https://storyboard.openstack.org/#!/story/2007236 [5]Update #1: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012364.html [6] Update #2: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012570.html [7] Update 3: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012784.html [8] Cookiecutter Template: https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetan.trellu at incloudus.com Wed Mar 4 00:06:37 2020 From: gaetan.trellu at incloudus.com (gaetan.trellu at incloudus.com) Date: Tue, 03 Mar 2020 19:06:37 -0500 Subject: [glance] Different checksum between CLI and curl In-Reply-To: Message-ID: <6077cc75-915c-4ed0-a973-ebefda5fd8cd@email.android.com> An HTML attachment was scrubbed... URL: From Albert.Braden at synopsys.com Wed Mar 4 00:17:28 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 4 Mar 2020 00:17:28 +0000 Subject: OSC future (formerly [glance] Different checksum between CLI and curl) In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com> <36FB0C7D-C3E1-4C3A-B923-1F68764D44A8@cern.ch> Message-ID: Thanks everyone for the helpful answers. I think I understand the situation now, and I know what to expect. I'll continue attempting to get signed up as a developer as time permits. Hopefully I can help fix some of the OSC issues. -----Original Message----- From: Monty Taylor Sent: Tuesday, March 3, 2020 12:13 PM To: Tim Bell Cc: openstack-discuss at lists.openstack.org; Sean McGinnis ; Albert Braden Subject: Re: OSC future (formerly [glance] Different checksum between CLI and curl) > On Mar 3, 2020, at 12:55 PM, Tim Bell wrote: > > > >> On 3 Mar 2020, at 19:20, Albert Braden wrote: >> >> Sean, thank you for clarifying that. >> >> Was my understanding that the community decided to focus on the unified client incorrect? Is the unified/individual client debate still a matter of controversy? Is it possible that the unified client will be deprecated in favor of individual clients after more discussion? I haven’t looked at any of the individual clients since 2018 (except for osc-placement which is kind of a special case), because I thought they were all going away and could be safely ignored until they did, and I haven’t included any information about the individual clients in the documentation that I write for our users, and if they ask I have been telling them to not use the individual clients. 
Do I need to start looking at individual clients again, and telling our users to use them in some cases? >> >> > > I remember a forum discussion where a community goal was proposed to focus on OSC rather than individual project CLIs (I think Matt and I were proposers). There were concerns on the effort to do this and that it would potentially be multi-cycle. > > My experience in discussion with the CERN user community and other OpenStack operators is that OSC is felt to be the right solution for the end user facing parts of the cloud (admin commands could be another discussion if necessary). Experienced admin operators can remember that glance looks after images and nova looks after instances. Our average user can get very confused, especially given that OSC supports additional options for authentication (such as Kerberos and Certificates along with clouds.yaml) so users need to re-authenticate with a different openrc to work on their project. > > While I understand there are limited resources all round, I would prefer that we focus on adding new project functions to OSC which will eventually lead to feature parity. > > Attracting ‘drive-by’ contributions from operations staff for OSC work (it's more likely to be achieved if it makes the operations work less e.g. save on special end user documentation by contributing code). This is demonstrated from the CERN team contribution to the OSC ‘coe' and ‘share' functionality along with lots of random OSC updates as listed hat https://urldefense.proofpoint.com/v2/url?u=https-3A__www.stackalytics.com_-3Fcompany-3Dcern-26metric-3Dcommits-26module-3Dpython-2Dopenstackclient&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=PXaDUXdLASc5aN6yB6-2EAZhPajJl-7Ue1eZOWhBS-s&s=r14Sy3wGjaak4CcfQegBje22E5rxQKgxMq_x9dXcDH0&e= ) We’ve been working in SDK also to empower more people directly by being a bit more liberal with core. I think it’s time to start applying this approach to OSC as well. It’s never going to work to require the OSC team to implement everything, but neither is it super awesome to completely decentralize as the plugin/entrypoints issues have shown. I think SDK has been happy with blessing service humans rather quickly. > > BTW, I also would vote for =auto as the default This is what the case will be as we move towards replacing more and more of OSC’s guts with SDK. But let me describe it slightly differently: The way this works in SDK is that there is ONE user interface, which wants to track the latest as best as it can. But we can’t just do “auto” - because microversions can introduce breaking changes, so we need to add support to SDK for the most recent microversion we’re aware of. Then SDK negotiates to find the best microversion that is understands, and it always uses that. SDK has the POV that an end-user should almost never need to care about a micro version - if a user cares they are either in nova-core, or we’ve done something wrong. Case in point is this: https://urldefense.proofpoint.com/v2/url?u=https-3A__opendev.org_openstack_openstacksdk_src_branch_master_openstack_compute_v2_server.py-23L457-2DL474&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=PXaDUXdLASc5aN6yB6-2EAZhPajJl-7Ue1eZOWhBS-s&s=DIR_Qd19fwvb18RV_tqnnwOwFjffzojLOLIeE1oKLtU&e= The nova team rightfully changed the semantics of live migrate because of safety. 
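To make that concrete, here is a rough sketch (illustrative only - not the actual SDK code linked above) of the kind of version-gated parameter shaping this takes. It assumes 2.25 is where block_migration grew the 'auto' value and 2.30 is where 'force' arrived; check the linked server.py for the real logic:

def live_migrate_body(negotiated, host=None, block_migration=None, force=False):
    # Compare dotted microversions numerically, not lexically.
    def at_least(version):
        want = tuple(int(x) for x in version.split("."))
        have = tuple(int(x) for x in negotiated.split("."))
        return have >= want

    migration = {"host": host}
    if at_least("2.25"):
        # Newer API: block_migration defaults to 'auto'.
        migration["block_migration"] = (
            "auto" if block_migration is None else block_migration)
    else:
        # Older API: a boolean, plus disk_over_commit, is expected.
        migration["block_migration"] = bool(block_migration)
        migration["disk_over_commit"] = False
    if force and at_least("2.30"):
        migration["force"] = True
    return {"os-migrateLive": migration}

print(live_migrate_body("2.20", host="dest-host"))
print(live_migrate_body("2.30", host="dest-host", force=True))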
Mriedem put together the logic to express what the appropriate behavior would be, given a set of inputs, across the range of versions so that a user can do things and they’ll work. The end result is a live_migrate call that works across versions as safely as it can. I mention all of this because getting this work done was one of the key things we wanted to get right before we started transitioning OSC in earnest. It’s there - it works, and it’s being used across nova and ironic. So - I hear what people want from OSC - they want a thing that behaves like auto does. We agree - and the mechanism that makes us able to do that _safely_ is things like the above. > Tim > >> We are on Rocky now but I expect that we will upgrade as necessary to stay on supported versions. >> >> From: Sean McGinnis >> Sent: Tuesday, March 3, 2020 9:50 AM >> To: openstack-discuss at lists.openstack.org >> Subject: Re: OSC future (formerly [glance] Different checksum between CLI and curl) >> >> On 3/3/20 11:28 AM, Albert Braden wrote: >> Am I understanding correctly that the Openstack community decided to focus on the unified client, and to deprecate the individual clients, and that the Glance team did not agree with this decision, and that the Glance team is now having a pissing match with the rest of the community, and is unilaterally deciding to continue developing the Glance client and refusing to work on the unified client, or is something different going on? I would ask everyone involved to remember that we operators are down here, and the yellow rain falling on our heads does not smell very good. >> I definitely would not characterize it that way. >> >> With trying not to put too much personal bias into it, here's what I would say the situation is: >> >> - Some part of the community has said OSC should be the only CLI and that individual CLIs should go away >> - Glance is a very small team with very, very limited resources >> - The OSC team is a very small team with very, very limited resources >> - CLI capabilities need to be exposed for Glance changes and the easiest way to get them out for the is by updating the Glance CLI >> - No one from the OSC team has been able to proactively help to make sure these changes make it into the OSC client (see bullet 3) >> - There exists a sizable functionality gap between per-project CLIs and what OSC provides, and although a few people have done a lot of great work to close that gap, there is still a lot to be done and does not appear the gap will close at any point in the near future based on the current trends >> From sean.mcginnis at gmx.com Wed Mar 4 01:11:24 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 4 Mar 2020 02:11:24 +0100 Subject: [all] Announcing OpenStack Wallaby! Message-ID: That's right - we have the W name already. No months on end of referring to "the W release" as we start making long term plans. Wallaby https://en.wikipedia.org/wiki/Wallaby Wallabies are native to Australia, which at the start of this naming period was experiencing unprecedented wild fires. This name has passed the legal vetting phase and we are good to go. Full results of the naming poll can be found here: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_60b98f102c5d08d5 As a reminder, the way we choose a release name had several changes this time around. Full details of the current release process can be found here: https://governance.openstack.org/tc/reference/release-naming.html There were a lot of great names proposed. 
I think removing the geographical restriction has been a good move. Thank you to everyone who proposed ideas for this release cycle. Happy stacking! Sean From gmann at ghanshyammann.com Wed Mar 4 01:15:49 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 03 Mar 2020 19:15:49 -0600 Subject: OSC future (formerly [glance] Different checksum between CLI and curl) In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com> <36FB0C7D-C3E1-4C3A-B923-1F68764D44A8@cern.ch> Message-ID: <170a31cf7da.c168b6b0389449.3073076279707922843@ghanshyammann.com> ---- On Tue, 03 Mar 2020 13:00:35 -0600 Tim Bell wrote ---- > > > On 3 Mar 2020, at 19:55, Tim Bell wrote: > > > On 3 Mar 2020, at 19:20, Albert Braden wrote: > Sean, thank you for clarifying that. > > Was my understanding that the community decided to focus on the unified client incorrect? Is the unified/individual client debate still a matter of controversy? Is it possible that the unified client will be deprecated in favor of individual clients after more discussion? I haven’t looked at any of the individual clients since 2018 (except for osc-placement which is kind of a special case), because I thought they were all going away and could be safely ignored until they did, and I haven’t included any information about the individual clients in the documentation that I write for our users, and if they ask I have been telling them to not use the individual clients. Do I need to start looking at individual clients again, and telling our users to use them in some cases? > > > > I remember a forum discussion where a community goal was proposed to focus on OSC rather than individual project CLIs (I think Matt and I were proposers). There were concerns on the effort to do this and that it would potentially be multi-cycle. > BTW, I found the etherpad from Berlin (https://etherpad.openstack.org/p/BER-t-series-goals) and the associated mailing list discussion at http://lists.openstack.org/pipermail/openstack-dev/2018-September/135107.html Yeah, we are in process of selecting the Victoria cycle community-wide goal and this can be good candidate. I agree with the idea/requirement of a multi-cycle goal. Another option is to build a pop-up team for the Victoria cycle to start burning down the keys issues/work. For both ways (either goal or pop-up team), we need some set of people to drive it. If anyone would like to volunteer for this, we can start discussing the details. [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012866.html -gmann > > My experience in discussion with the CERN user community and other OpenStack operators is that OSC is felt to be the right solution for the end user facing parts of the cloud (admin commands could be another discussion if necessary). Experienced admin operators can remember that glance looks after images and nova looks after instances. Our average user can get very confused, especially given that OSC supports additional options for authentication (such as Kerberos and Certificates along with clouds.yaml) so users need to re-authenticate with a different openrc to work on their project. > While I understand there are limited resources all round, I would prefer that we focus on adding new project functions to OSC which will eventually lead to feature parity. > Attracting ‘drive-by’ contributions from operations staff for OSC work (it's more likely to be achieved if it makes the operations work less e.g. 
save on special end user documentation by contributing code). This is demonstrated from the CERN team contribution to the OSC ‘coe' and ‘share' functionality along with lots of random OSC updates as listed hat https://www.stackalytics.com/?company=cern&metric=commits&module=python-openstackclient) > BTW, I also would vote for =auto as the default. > Tim > We are on Rocky now but I expect that we will upgrade as necessary to stay on supported versions. > > From: Sean McGinnis > Sent: Tuesday, March 3, 2020 9:50 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: OSC future (formerly [glance] Different checksum between CLI and curl) > > On 3/3/20 11:28 AM, Albert Braden wrote: > Am I understanding correctly that the Openstack community decided to focus on the unified client, and to deprecate the individual clients, and that the Glance team did not agree with this decision, and that the Glance team is now having a pissing match with the rest of the community, and is unilaterally deciding to continue developing the Glance client and refusing to work on the unified client, or is something different going on? I would ask everyone involved to remember that we operators are down here, and the yellow rain falling on our heads does not smell very good. > I definitely would not characterize it that way. > With trying not to put too much personal bias into it, here's what I would say the situation is: > - Some part of the community has said OSC should be the only CLI and that individual CLIs should go away > - Glance is a very small team with very, very limited resources > - The OSC team is a very small team with very, very limited resources > - CLI capabilities need to be exposed for Glance changes and the easiest way to get them out for the is by updating the Glance CLI > - No one from the OSC team has been able to proactively help to make sure these changes make it into the OSC client (see bullet 3) > - There exists a sizable functionality gap between per-project CLIs and what OSC provides, and although a few people have done a lot of great work to close that gap, there is still a lot to be done and does not appear the gap will close at any point in the near future based on the current trends > > > > > From zhipengh512 at gmail.com Wed Mar 4 04:06:39 2020 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 4 Mar 2020 12:06:39 +0800 Subject: [all] Announcing OpenStack Wallaby! In-Reply-To: References: Message-ID: Cuuute ! On Wed, Mar 4, 2020 at 9:15 AM Sean McGinnis wrote: > That's right - we have the W name already. No months on end of referring > to "the W release" as we start making long term plans. > > Wallaby > https://en.wikipedia.org/wiki/Wallaby > Wallabies are native to Australia, which at the start of this naming > period was experiencing unprecedented wild fires. > > This name has passed the legal vetting phase and we are good to go. Full > results of the naming poll can be found here: > > https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_60b98f102c5d08d5 > > As a reminder, the way we choose a release name had several changes this > time around. Full details of the current release process can be found here: > > https://governance.openstack.org/tc/reference/release-naming.html > > There were a lot of great names proposed. I think removing the > geographical restriction has been a good move. Thank you to everyone who > proposed ideas for this release cycle. > > Happy stacking! 
> > Sean
> >

-- Zhipeng (Howard) Huang Principal Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arnaud.morin at gmail.com Wed Mar 4 06:59:46 2020
From: arnaud.morin at gmail.com (Arnaud Morin)
Date: Wed, 4 Mar 2020 06:59:46 +0000
Subject: [nova] [neutron] multiple fixed_ip
In-Reply-To: 
References: <20200303130104.GA29109@sync>
Message-ID: <20200304065946.GB29109@sync>

Hello Slawek and all,

You are right, I forgot to mention that I want to set a specific IP, so yes I am using the parameters you describe (v4-fixed-ip). However, when trying to use both v4-fixed-ip and v6-fixed-ip, it does not work. It seems that those params are mutually exclusive [1] on the client side.

Does anyone know if the server would accept more than one fixed IP? My attempts were not successful in this regard.

Another question is about the accessIPv4 (and v6) params: does anyone know what they are used for? They seem to be completely ignored by neutron when used on the nova client side [2].

[1] https://github.com/openstack/python-novaclient/blob/b9a7e03074cbaacc3f270b2b8228a5b85350a2de/novaclient/v2/servers.py#L798
[2] https://github.com/openstack/python-novaclient/blob/b9a7e03074cbaacc3f270b2b8228a5b85350a2de/novaclient/v2/servers.py#L815

--
Arnaud Morin

On 03.03.20 - 21:34, Slawek Kaplonski wrote:
> Hi,
>
> If Your network has got IPv4 and IPv6 subnets, by default the created port will have one IPv4 and one IPv6 address allocated. I just did it with the nova client:
>
> nova boot --flavor m1.micro --image cirros-0.4.0 --nic net-name=private test-vm
>
> And my vm has IPs like:
>
> +--------------------------------------+---------+--------+------------+-------------+---------------------------------------------------------+
> | ID | Name | Status | Task State | Power State | Networks |
> +--------------------------------------+---------+--------+------------+-------------+---------------------------------------------------------+
> | 92385f1f-7899-40b7-94ec-bbceb6749722 | test-vm | ACTIVE | - | Running | private=fdc8:d3a9:de7b:0:f816:3eff:fe0d:16f5, 10.0.0.31 |
> +--------------------------------------+---------+--------+------------+-------------+---------------------------------------------------------+
>
> Also, from the novaclient help message it seems that You should be able to specify such IPv4 and IPv6 addresses:
>
> nova help boot | grep nic
> [--nic ]
>
> But I didn't actually try that.
>
> > On 3 Mar 2020, at 14:23, Radosław Piliszek wrote:
> >
> > Hi Arnaud,
> >
> > Non-core here.
> > Last time I checked you had to decide on one and then update with
> > neutron (or first create the port with neutron and then give it to
> > nova :-) ).
> > Moreover, not sure if IPv6 goes through Nova directly or not (docs
> > suggest still nah).
> >
> > -yoctozepto
> >
> > On Tue, 3 Mar 2020 at 14:09, Arnaud Morin wrote:
> >>
> >>
> >> Hello all,
> >>
> >> I was doing some tests to create a server using the nova API.
> >> My objective is to create a server with one port but multiple IPs (one
> >> IPv4 and one IPv6).
> >>
> >> If I understand the neutron API correctly, I can create a port using the
> >> fixed_ips array parameter [1]
> >>
> >> Unfortunately, on the nova side, it seems to only accept a string with only
> >> one ip (fixed_ip) [2]
> >>
> >> Is it mandatory for me to create the port with neutron?
> >> Or is there any trick that I missed on the nova API side?
> >>
> >> Thanks!
> >> > >> > >> [1] https://docs.openstack.org/api-ref/network/v2/?expanded=create-port-detail#ports > >> [2] https://docs.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server > >> > >> > >> > >> -- > >> Arnaud Morin > >> > >> > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > From lijie at unitedstack.com Wed Mar 4 07:44:17 2020 From: lijie at unitedstack.com (=?utf-8?B?UmFtYm8=?=) Date: Wed, 4 Mar 2020 15:44:17 +0800 Subject: [nova] ask some questions about flavor Message-ID: Hi,all:         I have two questions about the flavor. One is the property "OS-FLV-DISABLED:disabled" defined when we created the flavor, but there is no method to change the property. And I see a patch about this [1], but it is abandoned. So I want to know our community how to consider this function.          Another one is when we call the "list-migrations" api, why we receive the new_instance_type_id and old_instance_type_id in response of list_migrations should be internal value for the migration-type is "resize"[2]? Maybe the value should be exposed on REST API, so that we can know which is the old flavor.          Can you tell me more about this? Thank you very much. Ref: [1]:https://review.opendev.org/#/c/61291/ [2]:https://review.opendev.org/#/c/588481/ Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Wed Mar 4 09:09:27 2020 From: zhangbailin at inspur.com (=?utf-8?B?QnJpbiBaaGFuZyjlvKDnmb7mnpcp?=) Date: Wed, 4 Mar 2020 09:09:27 +0000 Subject: =?utf-8?B?UmU6IFtsaXN0cy5vcGVuc3RhY2sub3Jn5Luj5Y+RXVtub3ZhXSBhc2sgc29t?= =?utf-8?Q?e_questions_about_flavor?= Message-ID: <28518e7a5cbb40f2b208f3a4482b3b94@inspur.com> Items: [lists.openstack.org代发][nova] ask some questions about flavor Hi,all: > I have two questions about the flavor. One is the property "OS-FLV-DISABLED:disabled" defined when we created the flavor, but there is no method to change the property. And I see a patch about this [1], but it is abandoned. So I want to know our community how > to consider this function. "OS-FLV-DISABLED:disabled", this is typically only visible to administrative users. I'm not sure if it is worth supporting the modification/update. I think it should be planned in advance for managers. If the specific extra spec bound to flavor is only used as a dedicated flavor. > Another one is when we call the "list-migrations" api, why we receive the new_instance_type_id and old_instance_type_id in response of list_migrations should be internal value for the migration-type is "resize"[2]? Maybe the value should be exposed on REST > API, so that we can know which is the old flavor. In microversion 2.23 we support to show “migration-type” in the List Migrations API [1][2], I think you call List Migrations REST API is not add the “OpenStack-API-Version” or “X-OpenStack-Nova-API-Version”, right? And you can get the list migrations filter by “migration-type” (enum in: evacuation, live-migration, migration (cold), resize), as the call such as: http://192.168.2.11/compute/v2.1/os-migrations?migration_type=resize that show what migration type what you want. [1] API changes: https://docs.openstack.org/api-ref/compute/?expanded=list-migrations-detail,show-migration-details-detail,id320-detail#list-migrations [2] microverion 2.3: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id21 > Can you tell me more about this? Thank you very much. 
Ref: [1]:https://review.opendev.org/#/c/61291/ [2]:https://review.opendev.org/#/c/588481/ Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at binero.se Wed Mar 4 09:19:59 2020 From: tobias.urdin at binero.se (Tobias Urdin) Date: Wed, 4 Mar 2020 10:19:59 +0100 Subject: OSC future (formerly [glance] Different checksum between CLI and curl) In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com> Message-ID: Can shime in here that Puppet OpenStack has almost completely migrated to OSC since a pretty long time ago, including Glance. I think we only have some usage to the neutron CLI that needs to be replaced. (Even though I would like to talk directly to the APIs, but hey, Ruby isn't my strong suit). Best regards On 3/3/20 8:59 PM, Monty Taylor wrote: > >> On Mar 3, 2020, at 12:20 PM, Albert Braden wrote: >> >> Sean, thank you for clarifying that. >> >> Was my understanding that the community decided to focus on the unified client incorrect? Is the unified/individual client debate still a matter of controversy? Is it possible that the unified client will be deprecated in favor of individual clients after more discussion? > Nope. Several of them even already don’t exist or are deprecated. > > Additiontally, several surrounding tools have explicit policies to NEVER touch python-*client libraries. Specifically Ansible - but I believe Salt has also migrated to SDK - and then any app developer who wants to be able to sanely target multiple clouds uses SDK instead of python-*client. > > So I can’t do anything about people preferring individual projects - but the unified stuff is DEFINITELY not getting deprecated or going away - quite simply because it cannot. And I hope that we can continue to convince more people of the power inherent in doing work to support their service in SDK/OSC instead of off in their own corner - but as I said, that I can’t do anything about. > >> I haven’t looked at any of the individual clients since 2018 (except for osc-placement which is kind of a special case), because I thought they were all going away and could be safely ignored until they did, and I haven’t included any information about the individual clients in the documentation that I write for our users, and if they ask I have been telling them to not use the individual clients. Do I need to start looking at individual clients again, and telling our users to use them in some cases? >> >> We are on Rocky now but I expect that we will upgrade as necessary to stay on supported versions. >> From: Sean McGinnis >> Sent: Tuesday, March 3, 2020 9:50 AM >> To: openstack-discuss at lists.openstack.org >> Subject: Re: OSC future (formerly [glance] Different checksum between CLI and curl) >> >> On 3/3/20 11:28 AM, Albert Braden wrote: >> Am I understanding correctly that the Openstack community decided to focus on the unified client, and to deprecate the individual clients, and that the Glance team did not agree with this decision, and that the Glance team is now having a pissing match with the rest of the community, and is unilaterally deciding to continue developing the Glance client and refusing to work on the unified client, or is something different going on? I would ask everyone involved to remember that we operators are down here, and the yellow rain falling on our heads does not smell very good. >> I definitely would not characterize it that way. 
>> >> With trying not to put too much personal bias into it, here's what I would say the situation is:
>>
>> - Some part of the community has said OSC should be the only CLI and that individual CLIs should go away
>> - Glance is a very small team with very, very limited resources
>> - The OSC team is a very small team with very, very limited resources
>> - CLI capabilities need to be exposed for Glance changes and the easiest way to get them out for now is by updating the Glance CLI
>> - No one from the OSC team has been able to proactively help to make sure these changes make it into the OSC client (see bullet 3)
>> - There exists a sizable functionality gap between per-project CLIs and what OSC provides, and although a few people have done a lot of great work to close that gap, there is still a lot to be done and it does not appear the gap will close at any point in the near future based on the current trends
>

From lijie at unitedstack.com Wed Mar 4 09:31:34 2020
From: lijie at unitedstack.com (Rambo)
Date: Wed, 4 Mar 2020 17:31:34 +0800
Subject: Re:Re: [lists.openstack.org代发][nova] ask some questions about flavor
In-Reply-To: <28518e7a5cbb40f2b208f3a4482b3b94@inspur.com>
References: <28518e7a5cbb40f2b208f3a4482b3b94@inspur.com>
Message-ID: 

Oh, no. I know we can filter the migration list by "migration-type". Actually, I just want to know why the "new_instance_type_id" and "old_instance_type_id" in the response of list_migrations should be *internal* values.

------------------ Original ------------------
From: "Brin Zhang(张百林)";
Date: Wed, 4 Mar 2020 (Wednesday) 17:09
To: "lijie at unitedstack.com"; "openstack-discuss at lists.openstack.org";
Subject: Re: [lists.openstack.org代发][nova] ask some questions about flavor

Items: [lists.openstack.org代发][nova] ask some questions about flavor

Hi all,
> I have two questions about flavors. One is about the property "OS-FLV-DISABLED:disabled", which is defined when we create the flavor, but there is no method to change the property afterwards. I see a patch about this [1], but it is abandoned. So I want to know how the community intends to handle this function.

"OS-FLV-DISABLED:disabled" is typically only visible to administrative users. I'm not sure it is worth supporting modification/updates; I think it should be planned in advance by administrators, if the specific extra spec bound to the flavor is only used for a dedicated flavor.

> Another one is: when we call the "list-migrations" API, why are the new_instance_type_id and old_instance_type_id we receive in the response of list_migrations internal values when the migration-type is "resize"[2]? Maybe the value should be exposed on the REST API, so that we can know which was the old flavor.

In microversion 2.23 we added support for showing "migration-type" in the List Migrations API [1][2]; I think when you called the List Migrations REST API you did not set the "OpenStack-API-Version" or "X-OpenStack-Nova-API-Version" header, right?

You can also filter the migration list by "migration-type" (enum in: evacuation, live-migration, migration (cold), resize) with a call such as: http://192.168.2.11/compute/v2.1/os-migrations?migration_type=resize, which shows only the migration type you want.
[1] API changes: https://docs.openstack.org/api-ref/compute/?expanded=list-migrations-detail,show-migration-details-detail,id320-detail#list-migrations [2] microverion 2.3: https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id21   > Can you tell me more about this? Thank you very much.         Ref: [1]:https://review.opendev.org/#/c/61291/ [2]:https://review.opendev.org/#/c/588481/     Best Regards Rambo -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Wed Mar 4 09:57:52 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 4 Mar 2020 09:57:52 +0000 Subject: OSC future (formerly [glance] Different checksum between CLI and curl) In-Reply-To: <170a31cf7da.c168b6b0389449.3073076279707922843@ghanshyammann.com> References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com> <36FB0C7D-C3E1-4C3A-B923-1F68764D44A8@cern.ch> <170a31cf7da.c168b6b0389449.3073076279707922843@ghanshyammann.com> Message-ID: On Wed, 4 Mar 2020 at 01:16, Ghanshyam Mann wrote: > > ---- On Tue, 03 Mar 2020 13:00:35 -0600 Tim Bell wrote ---- > > > > > > On 3 Mar 2020, at 19:55, Tim Bell wrote: > > > > > > On 3 Mar 2020, at 19:20, Albert Braden wrote: > > Sean, thank you for clarifying that. > > > > Was my understanding that the community decided to focus on the unified client incorrect? Is the unified/individual client debate still a matter of controversy? Is it possible that the unified client will be deprecated in favor of individual clients after more discussion? I haven’t looked at any of the individual clients since 2018 (except for osc-placement which is kind of a special case), because I thought they were all going away and could be safely ignored until they did, and I haven’t included any information about the individual clients in the documentation that I write for our users, and if they ask I have been telling them to not use the individual clients. Do I need to start looking at individual clients again, and telling our users to use them in some cases? > > > > > > > > I remember a forum discussion where a community goal was proposed to focus on OSC rather than individual project CLIs (I think Matt and I were proposers). There were concerns on the effort to do this and that it would potentially be multi-cycle. > > BTW, I found the etherpad from Berlin (https://etherpad.openstack.org/p/BER-t-series-goals) and the associated mailing list discussion at http://lists.openstack.org/pipermail/openstack-dev/2018-September/135107.html > > Yeah, we are in process of selecting the Victoria cycle community-wide goal and this can be good candidate. I agree with the idea/requirement of a multi-cycle goal. > Another option is to build a pop-up team for the Victoria cycle to start burning down the keys issues/work. For both ways (either goal or pop-up team), we need > some set of people to drive it. If anyone would like to volunteer for this, we can start discussing the details. > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012866.html > > -gmann This seems like quite an important issue for OpenStack usability. Clearly there are resourcing issues within the glance team (and possibly also some personal preferences) that have prevented OSC gaining feature parity with the glance client. Having all of the core projects able to recommend using OSC seems to me like it should be quite a high priority - more so than having support for every project out there. 
Would cross-project goal effort be better spent swarming on filling these gaps first? Do we have any mechanisms to help drive that? I know we have the help most-wanted list. > > > > > My experience in discussion with the CERN user community and other OpenStack operators is that OSC is felt to be the right solution for the end user facing parts of the cloud (admin commands could be another discussion if necessary). Experienced admin operators can remember that glance looks after images and nova looks after instances. Our average user can get very confused, especially given that OSC supports additional options for authentication (such as Kerberos and Certificates along with clouds.yaml) so users need to re-authenticate with a different openrc to work on their project. > > While I understand there are limited resources all round, I would prefer that we focus on adding new project functions to OSC which will eventually lead to feature parity. > > Attracting ‘drive-by’ contributions from operations staff for OSC work (it's more likely to be achieved if it makes the operations work less e.g. save on special end user documentation by contributing code). This is demonstrated from the CERN team contribution to the OSC ‘coe' and ‘share' functionality along with lots of random OSC updates as listed hat https://www.stackalytics.com/?company=cern&metric=commits&module=python-openstackclient) > > BTW, I also would vote for =auto as the default. > > Tim > > We are on Rocky now but I expect that we will upgrade as necessary to stay on supported versions. > > > > From: Sean McGinnis > > Sent: Tuesday, March 3, 2020 9:50 AM > > To: openstack-discuss at lists.openstack.org > > Subject: Re: OSC future (formerly [glance] Different checksum between CLI and curl) > > > > On 3/3/20 11:28 AM, Albert Braden wrote: > > Am I understanding correctly that the Openstack community decided to focus on the unified client, and to deprecate the individual clients, and that the Glance team did not agree with this decision, and that the Glance team is now having a pissing match with the rest of the community, and is unilaterally deciding to continue developing the Glance client and refusing to work on the unified client, or is something different going on? I would ask everyone involved to remember that we operators are down here, and the yellow rain falling on our heads does not smell very good. > > I definitely would not characterize it that way. 
> > With trying not to put too much personal bias into it, here's what I would say the situation is:
> > - Some part of the community has said OSC should be the only CLI and that individual CLIs should go away
> > - Glance is a very small team with very, very limited resources
> > - The OSC team is a very small team with very, very limited resources
> > - CLI capabilities need to be exposed for Glance changes and the easiest way to get them out for now is by updating the Glance CLI
> > - No one from the OSC team has been able to proactively help to make sure these changes make it into the OSC client (see bullet 3)
> > - There exists a sizable functionality gap between per-project CLIs and what OSC provides, and although a few people have done a lot of great work to close that gap, there is still a lot to be done and it does not appear the gap will close at any point in the near future based on the current trends
> >

From skaplons at redhat.com Wed Mar 4 10:15:40 2020
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Wed, 4 Mar 2020 11:15:40 +0100
Subject: [neutron] Review priorities for RFEs
Message-ID: <216E1F95-06AB-413B-BC9E-0509FEDF76F8@redhat.com>

Hi neutrinos,

I just went through patches related to our RFEs which we want to merge before Ussuri-3 and set the review priority flag on them. You can find them at [1]. If I missed any patch related to any of those RFEs, please let me know by email or on IRC. And please try to review those patches if possible ;) Thx in advance.

[1] https://tinyurl.com/vezk6n6

—
Slawek Kaplonski
Senior software engineer
Red Hat

From arnaud.morin at gmail.com Wed Mar 4 11:02:58 2020
From: arnaud.morin at gmail.com (Arnaud Morin)
Date: Wed, 4 Mar 2020 11:02:58 +0000
Subject: [nova] [neutron] multiple fixed_ip
In-Reply-To: 
References: <20200303130104.GA29109@sync>
Message-ID: <20200304110258.GC29109@sync>

Hello,

Thanks for your answer, that's exactly what we are doing right now, I just wanted to know if there is a more direct solution :p

Thanks!

--
Arnaud Morin

On 03.03.20 - 14:23, Radosław Piliszek wrote:
> Hi Arnaud,
>
> Non-core here.
> Last time I checked you had to decide on one and then update with
> neutron (or first create the port with neutron and then give it to
> nova :-) ).
> Moreover, not sure if IPv6 goes through Nova directly or not (docs
> suggest still nah).
>
> -yoctozepto
>
> On Tue, 3 Mar 2020 at 14:09, Arnaud Morin wrote:
> >
> >
> > Hello all,
> >
> > I was doing some tests to create a server using the nova API.
> > My objective is to create a server with one port but multiple IPs (one
> > IPv4 and one IPv6).
> >
> > If I understand the neutron API correctly, I can create a port using the
> > fixed_ips array parameter [1]
> >
> > Unfortunately, on the nova side, it seems to only accept a string with only
> > one ip (fixed_ip) [2]
> >
> > Is it mandatory for me to create the port with neutron?
> > Or is there any trick that I missed on the nova API side?
> >
> > Thanks!
> >
> > [1] https://docs.openstack.org/api-ref/network/v2/?expanded=create-port-detail#ports
> > [2] https://docs.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server
> >
> >
> >
> > --
> > Arnaud Morin
> >
> >

From radoslaw.piliszek at gmail.com Wed Mar 4 11:17:12 2020
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Wed, 4 Mar 2020 12:17:12 +0100
Subject: oslo.cache 2.1.0 breaks oslo_cache.memcache_pool
Message-ID: 

Please be informed that oslo.cache 2.1.0 breaks oslo_cache.memcache_pool.

Kolla-Ansible gate is already RED and a quick codesearch revealed other deployment methods might be in trouble soon as well.

This does not affect devstack/tempest as they use dogpile.cache.memcached instead.

The error is TypeError: __init__() got an unexpected keyword argument 'dead_retry'

For details see: https://bugs.launchpad.net/oslo.cache/+bug/1866008

-yoctozepto

From hberaud at redhat.com Wed Mar 4 12:20:46 2020
From: hberaud at redhat.com (Herve Beraud)
Date: Wed, 4 Mar 2020 13:20:46 +0100
Subject: oslo.cache 2.1.0 breaks oslo_cache.memcache_pool
In-Reply-To: 
References: 
Message-ID: 

I think our issue is due to the fact that python-memcached accepts a param named `dead_retry` [1] which is not defined in pymemcache.

We just need to define it in our oslo.cache mapping [2]. During testing we faced the same kind of issue with the connection timeout.

[1] https://github.com/linsomniac/python-memcached/blob/bad41222379102e3f18f6f2f7be3ee608de6fbff/memcache.py#L183
[2] https://github.com/openstack/oslo.cache/blob/8a8248d764bbb1db6c0089a58745803c03e38fdb/oslo_cache/_memcache_pool.py#L193,L201

On Wed, 4 Mar 2020 at 12:21, Radosław Piliszek <radoslaw.piliszek at gmail.com> wrote:

> Please be informed that oslo.cache 2.1.0 breaks oslo_cache.memcache_pool
>
> Kolla-Ansible gate is already RED and a quick codesearch revealed
> other deployment methods might be in trouble soon as well.
>
> This does not affect devstack/tempest as they use
> dogpile.cache.memcached instead.
>
> The error is TypeError: __init__() got an unexpected keyword argument
> 'dead_retry'
>
> For details see: https://bugs.launchpad.net/oslo.cache/+bug/1866008
>
> -yoctozepto

--
Hervé Beraud
Senior Software Engineer
Red Hat - Openstack Oslo
irc: hberaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hberaud at redhat.com Wed Mar 4 12:28:58 2020
From: hberaud at redhat.com (Herve Beraud)
Date: Wed, 4 Mar 2020 13:28:58 +0100
Subject: oslo.cache 2.1.0 breaks oslo_cache.memcache_pool
In-Reply-To: 
References: 
Message-ID: 

What do you think about adding a mapping between `retry_timeout` [1] and `dead_retry` [2]?
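To illustrate what I mean, a rough, untested sketch (not a patch - and whether retry_timeout or dead_timeout is the right target is exactly my question):

from pymemcache.client.hash import HashClient

def make_hash_client(servers, dead_retry=30, socket_timeout=3.0):
    # python-memcached's dead_retry is the number of seconds to wait
    # before retrying a dead server; pymemcache splits that idea into
    # retry_timeout and dead_timeout on HashClient (see [1] below).
    return HashClient(
        servers,
        connect_timeout=socket_timeout,
        timeout=socket_timeout,
        retry_timeout=dead_retry,
        dead_timeout=dead_retry,
    )

client = make_hash_client([("127.0.0.1", 11211)])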
From hberaud at redhat.com  Wed Mar 4 12:20:46 2020
From: hberaud at redhat.com (Herve Beraud)
Date: Wed, 4 Mar 2020 13:20:46 +0100
Subject: oslo.cache 2.1.0 breaks oslo_cache.memcache_pool

I think our issue is due to the fact that python-memcached accepts a param named `dead_retry` [1] which is not defined in pymemcache.

We just need to define it in our oslo.cache mapping [2]. During testing we faced the same kind of issue with the connection timeout.

[1] https://github.com/linsomniac/python-memcached/blob/bad41222379102e3f18f6f2f7be3ee608de6fbff/memcache.py#L183
[2] https://github.com/openstack/oslo.cache/blob/8a8248d764bbb1db6c0089a58745803c03e38fdb/oslo_cache/_memcache_pool.py#L193,L201

On Wed, 4 Mar 2020 at 12:21, Radosław Piliszek wrote:
> [snip]

--
Hervé Beraud
Senior Software Engineer
Red Hat - Openstack Oslo
irc: hberaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hberaud at redhat.com  Wed Mar 4 12:28:58 2020
From: hberaud at redhat.com (Herve Beraud)
Date: Wed, 4 Mar 2020 13:28:58 +0100
Subject: oslo.cache 2.1.0 breaks oslo_cache.memcache_pool

What do you think about adding a mapping between `retry_timeout` [1] and `dead_retry` [2]?

[1] https://github.com/pinterest/pymemcache/blob/master/pymemcache/client/hash.py#L56
[2] https://github.com/linsomniac/python-memcached/blob/bad41222379102e3f18f6f2f7be3ee608de6fbff/memcache.py#L183

On Wed, 4 Mar 2020 at 13:20, Herve Beraud wrote:
> [snip]

--
Hervé Beraud
Senior Software Engineer
Red Hat - Openstack Oslo
irc: hberaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hberaud at redhat.com  Wed Mar 4 12:35:37 2020
From: hberaud at redhat.com (Herve Beraud)
Date: Wed, 4 Mar 2020 13:35:37 +0100
Subject: oslo.cache 2.1.0 breaks oslo_cache.memcache_pool

`dead_timeout` [1] looks more appropriate in this case.

[1] https://github.com/pinterest/pymemcache/blob/master/pymemcache/client/hash.py#L58

On Wed, 4 Mar 2020 at 13:28, Herve Beraud wrote:
> [snip]

--
Hervé Beraud
Senior Software Engineer
Red Hat - Openstack Oslo
irc: hberaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
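[To make the proposed mapping concrete, here is a rough illustrative sketch of translating the python-memcached style options into their pymemcache HashClient equivalents. This is just the idea, not the actual oslo.cache patch (that review is linked below); the helper name is made up:

    # python-memcached option -> pymemcache HashClient equivalent.
    # dead_retry and dead_timeout both mean "seconds to keep a dead
    # server blacklisted before retrying it".
    _PYMEMCACHE_EQUIVALENTS = {
        'dead_retry': 'dead_timeout',
        'socket_timeout': 'timeout',
    }

    def translate_memcache_kwargs(kwargs):
        """Rename legacy kwargs so pymemcache's HashClient accepts them."""
        return {_PYMEMCACHE_EQUIVALENTS.get(k, k): v
                for k, v in kwargs.items()}
]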
From moguimar at redhat.com  Wed Mar 4 12:41:44 2020
From: moguimar at redhat.com (Moises Guimaraes de Medeiros)
Date: Wed, 4 Mar 2020 13:41:44 +0100
Subject: oslo.cache 2.1.0 breaks oslo_cache.memcache_pool

`dead_timeout`++

On Wed, Mar 4, 2020 at 1:36 PM Herve Beraud wrote:
> `dead_timeout` [1] looks more appropriate in this case.
>
> [1] https://github.com/pinterest/pymemcache/blob/master/pymemcache/client/hash.py#L58
> [snip]

--
Moisés Guimarães
Software Engineer
Red Hat
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hberaud at redhat.com  Wed Mar 4 13:16:51 2020
From: hberaud at redhat.com (Herve Beraud)
Date: Wed, 4 Mar 2020 14:16:51 +0100
Subject: oslo.cache 2.1.0 breaks oslo_cache.memcache_pool

Fix proposed https://review.opendev.org/#/c/711220/

On Wed, 4 Mar 2020 at 13:42, Moises Guimaraes de Medeiros wrote:
> `dead_timeout`++
> [snip]

--
Hervé Beraud
Senior Software Engineer
Red Hat - Openstack Oslo
irc: hberaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rdopiera at redhat.com  Wed Mar 4 13:37:40 2020
From: rdopiera at redhat.com (Radomir Dopieralski)
Date: Wed, 4 Mar 2020 14:37:40 +0100
Subject: [release][tc][horizon] xstatic repositories marked as deprecated

I think it makes a lot of sense to make them independent, especially since the releases often contain security fixes, so they should be included in any older supported releases of Horizon as well.

On Tue, Mar 3, 2020 at 3:02 PM Sean McGinnis wrote:
> On 3/3/20 4:11 AM, Akihiro Motoki wrote:
> > Thanks Thierry for the detailed explanation.
> > The horizon team will update the corresponding repos for new minor releases and follow the usual release process.
> > One question: we have passed milestone-2. Is it better to wait till the Victoria dev cycle is open?
> >
> > Thanks,
> > Akihiro
>
> We are past the deadline for inclusion in ussuri. But that said, these are things that are currently being used by the team, so I think it's a little misleading in its current state. I think we should get these new releases done in this cycle if possible.
>
> Part of this is also the assumption that these will be cycle-based. I wonder if these are more appropriate as independent deliverables? That means they are not tied to a specific release cycle and can be released whenever there is something to be released. At least something to think about.
>
> https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary
>
> On Fri, Feb 28, 2020 at 1:47 AM Thierry Carrez wrote:
>> Thierry Carrez wrote:
>>> The way we've been handling this in the past was to ignore past releases (since they are not signed by the release team), and push a new one through the releases repository. It should replace the unofficial one on PyPI and make sure all is in order.
>>
>> Clarification with a practical example:
>>
>> xstatic-hogan 2.0.0.2 is on PyPI, but has no tag in the openstack/xstatic-hogan repo, and no deliverable file in openstack/releases.
>>
>> The solution is to resync everything by proposing a 2.0.0.3 release that will have a tag, be in openstack/releases, and have a matching upload on PyPI.
>>
>> This is done by:
>>
>> - bumping BUILD at https://opendev.org/openstack/xstatic-hogan/src/branch/master/xstatic/pkg/hogan/__init__.py#
>>
>> - adding a deliverables/_independent/xstatic-hogan.yaml file in openstack/releases defining a tag for 2.0.0.3
>>
>> - removing the "deprecated" line from https://opendev.org/openstack/governance/src/branch/master/reference/projects.yaml#L542
>>
>> Repeat for every affected package :)
>>
>> --
>> Thierry Carrez (ttx)

--
Radomir Dopieralski

From geguileo at redhat.com  Wed Mar 4 15:58:50 2020
From: geguileo at redhat.com (Gorka Eguileor)
Date: Wed, 4 Mar 2020 16:58:50 +0100
Subject: [CINDER] Snapshots export
Message-ID: <20200304155850.b4ydu4vfxthih7we@localhost>

On 03/03, Alfredo De Luca wrote:
> Hi all.
> We have our env with OpenStack (Train) and Cinder with a Ceph (Nautilus) backend.
> We are creating automatic volume snapshots and now we'd like to export them as a backup/restore plan. After exporting the snapshots we will use Acronis as the backup tool.
>
> I couldn't find the right steps/commands to export the snapshots.
> Any info?
> Cheers
>
> --
> *Alfredo*

Hi Alfredo,

What kind of backup/restore plan do you have in mind? Snapshots are not meant to be used in a disaster-recovery backup/restore plan, so the only things available are the manage/unmanage commands.

These commands are meant to add existing volumes/snapshots into Cinder together, not to manage/unmanage them independently. For example, you wouldn't be able to manage a snapshot if the volume is not already managed. Also, unmanaging the snapshot would block the deletion of the RBD volume itself.

Cheers,
Gorka.
From mordred at inaugust.com  Wed Mar 4 16:19:29 2020
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 4 Mar 2020 10:19:29 -0600
Subject: [osc][sdk] Merging OpenStack SDK and OpenStack Client teams

Hey everybody,

I'd like to propose merging the SDK and OSC teams. We already share an IRC channel, and already share a purpose in life. In OSC we have a current goal of swapping out the client implementation for SDK, and we're already ensuring that SDK does what it needs to do to facilitate that goal. We also already share PTG space, and have requested a shared set of time at the upcoming Denver PTG. So really the separation is historical, not practical, and these days having additional layers of governance is not super useful.

I propose that we do a simple merge of the teams. This means the current SDK cores will become cores on OSC, and as most of the OSC cores are already SDK cores, it means the SDK team gains amotoki - which is always a positive.

Dean hasn't had time to spend on OSC for quite a while, sadly, and while we remain hopeful that this will change, we're slowly coming to terms with the possibility that it might not. With that in mind, I'll serve as the PTL for the new combined team until the next election.

Monty

From artem.goncharov at gmail.com  Wed Mar 4 16:28:14 2020
From: artem.goncharov at gmail.com (Artem Goncharov)
Date: Wed, 4 Mar 2020 17:28:14 +0100
Subject: [osc][sdk] Merging OpenStack SDK and OpenStack Client teams
Message-ID: <9709C274-FCF2-4FAB-8D3B-86EB5FDBFAA9@gmail.com>

I would definitely vote for that, since it will help address one of our biggest problems - changes in OSC take very long to get in due to lack of resources (not to blame anybody).

Artem

> On 4. Mar 2020, at 17:19, Monty Taylor wrote:
> [snip]

From amotoki at gmail.com  Wed Mar 4 16:36:16 2020
From: amotoki at gmail.com (Akihiro Motoki)
Date: Thu, 5 Mar 2020 01:36:16 +0900
Subject: [release][tc][horizon] xstatic repositories marked as deprecated

I totally forgot that the xstatic deliverables adopt the "independent" release model when I sent the mail. I know the policy only applies to cycle-based deliverables, but I totally forgot it... The xstatic deliverables I am talking about fit into "independent deliverables".

On Wed, Mar 4, 2020 at 10:42 PM Radomir Dopieralski wrote:
> [snip]

From amy at demarco.com  Wed Mar 4 16:38:43 2020
From: amy at demarco.com (Amy Marrich)
Date: Wed, 4 Mar 2020 10:38:43 -0600
Subject: [osc][sdk] Merging OpenStack SDK and OpenStack Client teams

Monty,

That sounds like a good plan, thanks for proposing it.

Amy (spotz)

On Wed, Mar 4, 2020 at 10:22 AM Monty Taylor wrote:
> [snip]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gr at ham.ie  Wed Mar 4 16:49:36 2020
From: gr at ham.ie (Graham Hayes)
Date: Wed, 4 Mar 2020 16:49:36 +0000
Subject: [osc][sdk] Merging OpenStack SDK and OpenStack Client teams
Message-ID: <9be364bd-49dd-ad97-3a21-ee0cc87a9298@ham.ie>

On 04/03/2020 16:19, Monty Taylor wrote:
> Hey everybody,
> I'd like to propose merging the SDK and OSC teams. We already share an IRC channel, and already share a purpose in life. In OSC we have a current goal of swapping out the client implementation for SDK, and we're already ensuring that SDK does what it needs to do to facilitate that goal. We also already share PTG space, and have requested a shared set of time at the upcoming Denver PTG. So really the separation is historical, not practical, and these days having additional layers of governance is not super useful.

This makes sense.

> I propose that we do a simple merge of the teams. This means the current SDK cores will become cores on OSC, and as most of the OSC cores are already SDK cores, it means the SDK team gains amotoki - which is always a positive.

Yeah - projects were supposed to be mainly about common groups of people working on stuff, so if the overlap is so close already, it seems like a no-brainer.

> Dean hasn't had time to spend on OSC for quite a while, sadly, and while we remain hopeful that this will change, we're slowly coming to terms with the possibility that it might not. With that in mind, I'll serve as the PTL for the new combined team until the next election.

If this is good with the two teams, this is good with me :)
Hopefully this can help with project teams' issues with OSC/SDK response times.

> Monty

From mordred at inaugust.com  Wed Mar 4 16:56:13 2020
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 4 Mar 2020 10:56:13 -0600
Subject: [sdk] Additions and subtractions from core team

Heya,

With the previous email about merging OSC and SDK teams, I'd also like to propose the following changes to the SDK core team (keeping in mind that likely means the core team of both OSC and SDK real soon now)

Adds:

Akihiro Motoki - The only OSC core not in sdk-core. amotoki should really be a core in all projects anyway
Sean McGinnis - Sean has been reviewing things as a stable branch maint in both SDK and OSC, and as such has shown a good tendency to help things along when needed and to not approve things when he doesn't know what's up.

Subtractions:

All of these people are awesome, but they're all long gone:

Brian Curtin
Clint Byrum
Everett Toews
Jamie Lennox
Jesse Noller
Ricardo Carillo Cruz
Richard Theis
Rosario Di Somma
Sam Yaple
Terry Howe

Monty

From artem.goncharov at gmail.com  Wed Mar 4 17:02:01 2020
From: artem.goncharov at gmail.com (Artem Goncharov)
Date: Wed, 4 Mar 2020 18:02:01 +0100
Subject: [sdk] Additions and subtractions from core team

+1 from me

---- typed from mobile, auto-correct typos assumed ----

On Wed, 4 Mar 2020, 18:00 Monty Taylor, wrote:
> [snip]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From openstack at fried.cc  Wed Mar 4 17:07:48 2020
From: openstack at fried.cc (Eric Fried)
Date: Wed, 4 Mar 2020 11:07:48 -0600
Subject: Re: [osc][sdk] Merging OpenStack SDK and OpenStack Client teams

+1

On 3/4/20 10:19 AM, Monty Taylor wrote:
> [snip]

From openstack at fried.cc  Wed Mar 4 17:07:59 2020
From: openstack at fried.cc (Eric Fried)
Date: Wed, 4 Mar 2020 11:07:59 -0600
Subject: Re: [sdk] Additions and subtractions from core team
Message-ID: <304f4dc4-a949-0e43-9abe-7e9dd03ca217@fried.cc>

+1

On 3/4/20 10:56 AM, Monty Taylor wrote:
> [snip]

From dtantsur at redhat.com  Wed Mar 4 17:11:53 2020
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Wed, 4 Mar 2020 18:11:53 +0100
Subject: Re: [sdk] Additions and subtractions from core team

Hi,

On Wed, Mar 4, 2020 at 5:58 PM Monty Taylor wrote:
> Adds:
>
> Akihiro Motoki - The only OSC core not in sdk-core. amotoki should really be a core in all projects anyway
> Sean McGinnis - Sean has been reviewing things as a stable branch maint in both SDK and OSC, and as such has shown a good tendency to help things along when needed and to not approve things when he doesn't know what's up.

A confident +2.

> Subtractions:
>
> All of these people are awesome, but they're all long gone:
> [snip]

Mmmm, can I refuse? No? Okay, okay...

Dmitry
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From satish.txt at gmail.com  Wed Mar 4 17:22:26 2020
From: satish.txt at gmail.com (Satish Patel)
Date: Wed, 4 Mar 2020 12:22:26 -0500
Subject: CPU Topology confusion

Folks,

We are running OpenStack with KVM and I have noticed KVM presenting the wrong CPU topology to VMs, and because of that we are seeing bad performance in our application.

This is the openstack compute node:

# lstopo-no-graphics --no-io
Machine (64GB total)
  NUMANode L#0 (P#0 32GB) + Package L#0 + L3 L#0 (25MB)
    L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
      PU L#0 (P#0)
      PU L#1 (P#20)
    L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
      PU L#2 (P#1)
      PU L#3 (P#21)

This is a VM running on the above compute node:

# lstopo-no-graphics --no-io
Machine (59GB total)
  NUMANode L#0 (P#0 29GB) + Package L#0 + L3 L#0 (16MB)
    L2 L#0 (4096KB) + Core L#0
      L1d L#0 (32KB) + L1i L#0 (32KB) + PU L#0 (P#0)
      L1d L#1 (32KB) + L1i L#1 (32KB) + PU L#1 (P#1)
    L2 L#1 (4096KB) + Core L#1
      L1d L#2 (32KB) + L1i L#2 (32KB) + PU L#2 (P#2)
      L1d L#3 (32KB) + L1i L#3 (32KB) + PU L#3 (P#3)

If you look closely, P#0 and P#1 each have their own (32KB) L1 cache per thread, which is the wrong presentation if you compare it with the physical CPU.

This is a screenshot of the AWS vs OpenStack CPU topology, and the OpenStack presentation looks a little odd - is that normal? https://imgur.com/a/2sPwJVC

I am running CentOS 7.6 with KVM 2.12.
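[For anyone hitting the same confusion: nova's default guest topology is one socket per vCPU, and the thread/cache layout the guest sees follows from that. The usual knob is the hw:cpu_* flavor extra specs. A rough sketch with python-novaclient; the flavor name and the pre-existing keystoneauth session are assumptions, and instances need a rebuild/resize to pick the change up:

    from novaclient import client

    # 'session' is assumed to be an existing keystoneauth1 session.
    nova = client.Client('2.1', session=session)

    flavor = nova.flavors.find(name='m1.large')
    flavor.set_keys({
        'hw:cpu_sockets': '1',  # one virtual socket
        'hw:cpu_cores': '2',    # two cores per socket
        'hw:cpu_threads': '2',  # two SMT threads per core, like the host
    })

Note this only controls the sockets/cores/threads layout; the cache sizes the guest reports are a separate QEMU/libvirt detail (e.g. CPU mode host-passthrough), so they may still differ from the host.]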
From gmann at ghanshyammann.com  Wed Mar 4 17:22:37 2020
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 04 Mar 2020 11:22:37 -0600
Subject: OSC future (formerly [glance] Different checksum between CLI and curl)
References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com> <36FB0C7D-C3E1-4C3A-B923-1F68764D44A8@cern.ch> <170a31cf7da.c168b6b0389449.3073076279707922843@ghanshyammann.com>
Message-ID: <170a6921b22.118a63397430600.1168793341050830689@ghanshyammann.com>

 ---- On Wed, 04 Mar 2020 03:57:52 -0600 Mark Goddard wrote ----
 > On Wed, 4 Mar 2020 at 01:16, Ghanshyam Mann wrote:
 > >
 > > ---- On Tue, 03 Mar 2020 13:00:35 -0600 Tim Bell wrote ----
 > > > > On 3 Mar 2020, at 19:55, Tim Bell wrote:
 > > > > On 3 Mar 2020, at 19:20, Albert Braden wrote:
 > > > Sean, thank you for clarifying that.
 > > >
 > > > Was my understanding that the community decided to focus on the unified client incorrect? Is the unified/individual client debate still a matter of controversy? Is it possible that the unified client will be deprecated in favor of individual clients after more discussion? I haven't looked at any of the individual clients since 2018 (except for osc-placement, which is kind of a special case), because I thought they were all going away and could be safely ignored until they did, and I haven't included any information about the individual clients in the documentation that I write for our users, and if they ask I have been telling them not to use the individual clients. Do I need to start looking at individual clients again, and telling our users to use them in some cases?
 > > >
 > > > I remember a forum discussion where a community goal was proposed to focus on OSC rather than individual project CLIs (I think Matt and I were proposers). There were concerns about the effort to do this and that it would potentially be multi-cycle.
 > > > BTW, I found the etherpad from Berlin (https://etherpad.openstack.org/p/BER-t-series-goals) and the associated mailing list discussion at http://lists.openstack.org/pipermail/openstack-dev/2018-September/135107.html
 > >
 > > Yeah, we are in the process of selecting the Victoria cycle community-wide goal and this can be a good candidate. I agree with the idea/requirement of a multi-cycle goal.
 > > Another option is to build a pop-up team for the Victoria cycle to start burning down the key issues/work. For both ways (either a goal or a pop-up team), we need some set of people to drive it. If anyone would like to volunteer for this, we can start discussing the details.
 > >
 > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012866.html
 > >
 > > -gmann
 >
 > This seems like quite an important issue for OpenStack usability.
 > Clearly there are resourcing issues within the glance team (and possibly also some personal preferences) that have prevented OSC gaining feature parity with the glance client. Having all of the core projects able to recommend using OSC seems to me like it should be quite a high priority - more so than having support for every project out there. Would cross-project goal effort be better spent swarming on filling these gaps first? Do we have any mechanisms to help drive that? I know we have the help most-wanted list.

That is a good idea, to first target big projects. We can finish this for nova, glance, cinder, keystone, swift at first.
Apart from the Upstream Opportunity (help-most-wanted) list [2], another good way is a pop-up team [1]. For that, we need a set of people from these projects, or any developer, to start working. Also, we can add this as an upstream opportunity for 2020 and see if we get any help.

[1] https://governance.openstack.org/tc/reference/popup-teams.html
[2] https://governance.openstack.org/tc/reference/upstream-investment-opportunities/2020/index.html

-gmann

 > > > My experience in discussion with the CERN user community and other OpenStack operators is that OSC is felt to be the right solution for the end-user-facing parts of the cloud (admin commands could be another discussion if necessary). Experienced admin operators can remember that glance looks after images and nova looks after instances. Our average user can get very confused, especially given that OSC supports additional options for authentication (such as Kerberos and Certificates along with clouds.yaml), so users need to re-authenticate with a different openrc to work on their project.
 > > > While I understand there are limited resources all round, I would prefer that we focus on adding new project functions to OSC, which will eventually lead to feature parity.
 > > > Attracting 'drive-by' contributions from operations staff for OSC work is more likely to be achieved if it makes the operations work less (e.g. save on special end-user documentation by contributing code). This is demonstrated by the CERN team's contribution to the OSC 'coe' and 'share' functionality along with lots of random OSC updates, as listed at https://www.stackalytics.com/?company=cern&metric=commits&module=python-openstackclient
 > > > BTW, I also would vote for =auto as the default.
 > > > Tim
 > > > We are on Rocky now but I expect that we will upgrade as necessary to stay on supported versions.
 > > >
 > > > From: Sean McGinnis
 > > > Sent: Tuesday, March 3, 2020 9:50 AM
 > > > To: openstack-discuss at lists.openstack.org
 > > > Subject: Re: OSC future (formerly [glance] Different checksum between CLI and curl)
 > > >
 > > > On 3/3/20 11:28 AM, Albert Braden wrote:
 > > > Am I understanding correctly that the Openstack community decided to focus on the unified client, and to deprecate the individual clients, and that the Glance team did not agree with this decision, and that the Glance team is now having a pissing match with the rest of the community, and is unilaterally deciding to continue developing the Glance client and refusing to work on the unified client, or is something different going on? I would ask everyone involved to remember that we operators are down here, and the yellow rain falling on our heads does not smell very good.
 > > > I definitely would not characterize it that way.
 > > > With trying not to put too much personal bias into it, here's what I would say the situation is:
 > > > [snip]

From mordred at inaugust.com  Wed Mar 4 17:38:54 2020
From: mordred at inaugust.com (Monty Taylor)
Date: Wed, 4 Mar 2020 11:38:54 -0600
Subject: [osc][sdk] Merging OpenStack SDK and OpenStack Client teams
References: <9be364bd-49dd-ad97-3a21-ee0cc87a9298@ham.ie>
Message-ID: <150614DB-C9BD-413C-9790-C419635A2AFC@inaugust.com>

> On Mar 4, 2020, at 10:49 AM, Graham Hayes wrote:
> [snip]
> If this is good with the two teams, this is good with me :)
> Hopefully this can help with project teams' issues with OSC/SDK response times.

I think it can. I've had some chats with some folks on the team and I think we all think this will help streamline and enable us to respond more quickly.

From m2elsakha at gmail.com  Wed Mar 4 18:06:43 2020
From: m2elsakha at gmail.com (Mohamed Elsakhawy)
Date: Wed, 4 Mar 2020 13:06:43 -0500
Subject: Upcoming UC meeting: March 5th 2020

Good day everyone,

As you may have heard, there is an ongoing discussion about having a single body encompass both the UC and TC responsibilities, potentially by merging the two bodies.
This is still at an early stage, and so to facilitate this discussion we will hold tomorrow's UC meeting using video conferencing, to allow the whole audience to easily share their thoughts and input.

The meeting will happen at its normal time, "13:30 EST". If you are interested in joining, please use the link below:

https://hangouts.google.com/u/1/call/q_bcZpCRXgB2_Cak4ycyAEEI

Thanks

Mohamed
--melsakhawy
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From allison at openstack.org  Wed Mar 4 18:22:30 2020
From: allison at openstack.org (Allison Price)
Date: Wed, 4 Mar 2020 12:22:30 -0600
Subject: Upcoming UC meeting: March 5th 2020
Message-ID: <341F5F74-BAB4-463E-A69B-4E3F356AD1F9@openstack.org>

Hi Mohamed,

Did you mean CET? I believe the recurring meeting time is at 8:30 Eastern Standard Time (EST)?

Thanks,
Allison

> On Mar 4, 2020, at 12:06 PM, Mohamed Elsakhawy wrote:
> [snip]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From hberaud at redhat.com  Wed Mar 4 18:23:37 2020
From: hberaud at redhat.com (Herve Beraud)
Date: Wed, 4 Mar 2020 19:23:37 +0100
Subject: oslo.cache 2.1.0 breaks oslo_cache.memcache_pool

I proposed the following two patches to address the issue and improve this module beyond the current issue:

- https://review.opendev.org/711220 (the fix)
- https://review.opendev.org/711247 (the improvements)

Once these patches are merged and the issue is fixed, we will blacklist version 2.1.0 of oslo.cache and propose a new release with the previous fixes embedded.

Do not hesitate to review them and leave comments. Thanks for reading.

On Wed, 4 Mar 2020 at 14:16, Herve Beraud wrote:
> Fix proposed https://review.opendev.org/#/c/711220/
> [snip]

--
Hervé Beraud
Senior Software Engineer
Red Hat - Openstack Oslo
irc: hberaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From m2elsakha at gmail.com Wed Mar 4 18:25:52 2020 From: m2elsakha at gmail.com (Mohamed Elsakhawy) Date: Wed, 4 Mar 2020 13:25:52 -0500 Subject: Upcoming UC meeting: March 5th 2020 In-Reply-To: <341F5F74-BAB4-463E-A69B-4E3F356AD1F9@openstack.org> References: <341F5F74-BAB4-463E-A69B-4E3F356AD1F9@openstack.org> Message-ID: Apologies, 13:30 is actually *UTC* . The meeting will happen *13:30 UTC - 8:30 EST* On Wed, Mar 4, 2020 at 1:22 PM Allison Price wrote: > Hi Mohamed, > > Did you mean CET? I believe the recurring meeting time is at 8:30 Eastern > Standard Time (EST)? > > Thanks, > Allison > > > On Mar 4, 2020, at 12:06 PM, Mohamed Elsakhawy > wrote: > > Good day everyone, > > As you may have heard, there is an ongoing discussion for having a single > body to encompass both the UC and TC responsibilities, potentially by > merging the two bodies. This is still at early stages, and thus to > facilitate this discussion we will hold tomorrow's UC meeting using video > conferencing to allow all audience to easily share their thoughts and input. > > The meeting will happen at its normal time "13:30 EST", If you are > interested in joining, please use the link below > > https://hangouts.google.com/u/1/call/q_bcZpCRXgB2_Cak4ycyAEEI > > Thanks > > Mohamed > --melsakhawy > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m2elsakha at gmail.com Wed Mar 4 18:38:14 2020 From: m2elsakha at gmail.com (Mohamed Elsakhawy) Date: Wed, 4 Mar 2020 13:38:14 -0500 Subject: Upcoming UC meeting: March 5th 2020 In-Reply-To: <5E5FF35B.2040806@openstack.org> References: <5E5FF35B.2040806@openstack.org> Message-ID: Yes, it will be recorded and available for later viewing by the community Thanks Mohamed On Wed, Mar 4, 2020 at 1:28 PM Jimmy McArthur wrote: > Will the hangout be recorded and available for the rest of the community > to view if they aren't able to make the meeting? > > Thank you, > Jimmy > > Mohamed Elsakhawy > March 4, 2020 at 12:06 PM > Good day everyone, > > As you may have heard, there is an ongoing discussion for having a single > body to encompass both the UC and TC responsibilities, potentially by > merging the two bodies. This is still at early stages, and thus to > facilitate this discussion we will hold tomorrow's UC meeting using video > conferencing to allow all audience to easily share their thoughts and input. > > The meeting will happen at its normal time "13:30 EST", If you are > interested in joining, please use the link below > > https://hangouts.google.com/u/1/call/q_bcZpCRXgB2_Cak4ycyAEEI > > Thanks > > Mohamed > --melsakhawy > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Wed Mar 4 18:57:38 2020 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 4 Mar 2020 13:57:38 -0500 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: Message-ID: On 2/03/20 4:45 pm, Mohammed Naser wrote: > Hi everyone: > > We're now in a spot where we have an increasing amount of projects > that don't end up with a volunteer as PTL, even if the project has > contributors .. no one wants to hold that responsibility alone for > many reasons. With time, the PTL role has become far more overloaded > with many extra responsibilities than what we define in our charter: > > https://governance.openstack.org/tc/reference/charter.html#project-team-leads > > I think it's time to re-evaluate the project leadership model that we > have. 
I am thinking that perhaps it would make a lot of sense to move > from a single PTL model to multiple maintainers. This would leave it > up to the maintainers to decide how they want to sort the different > requirements/liaisons/contact persons between them. Just for fun I had a read through the thread from when I last proposed getting rid of PTLs, 5.5 years ago: http://lists.openstack.org/pipermail/openstack-dev/2014-August/043826.html I wrote that when I was a PTL. Now that I have been on all sides of it (Core team member, PTL, ex-PTL, TC member), let's see how well this has aged :D > First off, the PTL is not responsible for everything in a project. > *Everyone* is responsible for everything in a project. > > The PTL is *accountable* for everything in a project. PTLs are the > mechanism the TC uses to ensure that programs remain accountable to the > wider community. I still think this is true. But it's also true that if everyone is responsible then nobody is really responsible. Somebody has to be responsible for knowing all of the things that somebody needs to be responsible for and making sure that somebody is responsible for each. That can be done without a PTL as such, but the PTL system does provide a way of externally bootstrapping it in every project. > We have a heavyweight election process for PTLs once every > cycle because that used to be the process for electing the TC. Now that > it no longer serves this dual purpose, PTL elections have outlived their > usefulness. I had completely forgotten about this. From a TC perspective, we don't have a lot of visibility on internal ructions that may be going on in any particular project. The election does at least assure us that there is an outlet valve for any issues, and the fact that it is completely normalised across all of OpenStack makes it more likely that someone will actually challenge the PTL if there is a problem. > there's no need to impose that process on every project. If > they want to rotate the tech lead every week instead of every 6 months, > why not let them? We'll soon see from experimentation which models work. One cannot help wondering if we might get more Nova cores willing to sign up for a 1-week commitment to be the "PTL" than we're getting for a 6-months-and-maybe-indefinitely commitment. >> We also >> still need someone to have the final say in case of deadlocked issues. > > -1 we really don't. I still think I am mostly right about this (and I know Thierry still thinks he is right and I am wrong ;) IMHO it's never the job of the PTL to have a casting vote. It *is* the job of the PTL - and all leaders in the project - to ensure that consensus is eventually reached somehow; that discussion is not just allowed to continue forever without a resolution when people disagree. While all leaders should be doing this, I can see some benefit in having one person who sees it as specifically their responsibility, and as noted above the PTL election process ensures that this happens in every project. In summary, I still think that in a healthy project the requirement to have a PTL is probably mildly unhelpful. One thing that didn't come up in that thread but that I have mentioned elsewhere, was that when I became a PTL I very quickly learned to be very careful about what I expressed an opinion on and how, lest I accidentally close down a conversation that I was intending to open up. Because *overnight* people developed this sudden tendency to be like "HE HATH SPOKEN" whenever I weighed in. 
(This is very unnerving BTW, and one reason I feel like I can be more helpful by *not* running for PTL.) So having a PTL means giving up a core team member in some senses.

Ultimately, from the TC perspective it's a tool for reducing the variance in outcomes compared to letting every team decide their own leadership structure. As with all interventions that act by reducing variance (rather than increasing the average), this will tend to be a burden on higher-performing teams while raising the floor for lower-performing ones. So that's the trade-off we have to make.

cheers,
Zane.

From fungi at yuggoth.org Wed Mar 4 19:06:43 2020
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 4 Mar 2020 19:06:43 +0000
Subject: Upcoming UC meeting: March 5th 2020
In-Reply-To: References: <5E5FF35B.2040806 at openstack.org>
Message-ID: <20200304190643.uqztyrgwkotkqyrg at yuggoth.org>

On 2020-03-04 13:38:14 -0500 (-0500), Mohamed Elsakhawy wrote:
> Yes, it will be recorded and available for later viewing by the
> community
[...]

Since you're hosting the discussion on a meeting platform which is not reachable from China, at least make sure to publish the recording somewhere which our Chinese community members have some hope of accessing.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From james.denton at rackspace.com Wed Mar 4 19:26:54 2020
From: james.denton at rackspace.com (James Denton)
Date: Wed, 4 Mar 2020 19:26:54 +0000
Subject: [neutron] security group list regression
In-Reply-To: <4740f4822e7b571b40aa5dc549e3c59a2ee659c4.camel@redhat.com>
References: <7DD0691D-19A3-4CDB-B377-F67829A86AD7@rackspace.com> <4740f4822e7b571b40aa5dc549e3c59a2ee659c4.camel@redhat.com>
Message-ID: <59ACC4AB-95A8-49F4-91BD-8F94EF76494C@rackspace.com>

Hi Rodolfo,

The client we're using for Train does indeed have the patch. The Stein environment, running python-openstackclient 3.18.1, did not. I was able to patch it and speed up the DELETE operation. Real world, the user could probably just update the client and get the fix.

Thanks again!

On 3/3/20, 4:49 AM, "Rodolfo Alonso" wrote:

CAUTION: This message originated externally, please use caution when clicking on links or opening attachments!

Hello James:

Yes, this is a known issue in OSclient: most of the "objects" (networks, subnets, routers, etc.) to be retrieved can usually be retrieved by ID and by name. OSclient tries first to use the ID because it is unique and a DB key. Then, instead of asking the server for a unique register (filtered by the name), the client retrieves the whole list and filters the results.

But this problem was resolved in Train: https://review.opendev.org/#/c/637238/. Can you check, in openstacksdk, that you have this patch? At least in T. According to [1] and [2], "name" should be used as a filter in the OSsdk "find" call.

Regards.

[1] https://review.opendev.org/#/c/637238/20/openstack/resource.py
[2] https://github.com/openstack/openstacksdk/blob/master/openstack/network/v2/security_group.py#L29

On Mon, 2020-03-02 at 22:25 +0000, James Denton wrote:
> Rodolfo,
>
> Thanks for continuing to push this on the ML and in the bug report.
>
> Happy to report that the client and SDK patches you provided have
> drastically reduced the SG list time from ~90-120s to ~12-14s within
> Stein and Train lab environments.
>
> One last thing... when you perform an 'openstack security group delete ',
> the initial lookup by name fails.
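
To illustrate the two lookup paths under discussion, here is a rough openstacksdk sketch (assumptions: current SDK method names, a 'devstack-admin' cloud entry in clouds.yaml, and a placeholder group name):

import openstack

conn = openstack.connect(cloud='devstack-admin')

# With the fix, find() can push the name down as a server-side filter,
# i.e. a single GET /v2.0/security-groups?name=<name> request:
sg = conn.network.find_security_group('train-test-1755')

# Without it, the fallback is effectively a full listing filtered
# client-side, which is the slow path on large deployments:
sg = next((s for s in conn.network.security_groups()
           if s.name == 'train-test-1755'), None)
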
In Train, the client falls back to using the 'name' parameter (/security- > groups?name=). This lookup is quick and the security group is found and deleted. However, on > Rocky/Stein (e.g. client 3.18.1), instead of searching by parameter, the client appears to perform > a GET /security-groups without limiting the fields and takes a long time. > > 'openstack security group list' with patch: > REQ: curl -g -i -X GET " > http://10.0.236.150:9696/v2.0/security-groups?fields=set%28%5B%27description%27%2C+%27project_id%27%2C+%27id%27%2C+%27tags%27%2C+%27name%27%5D%29 > " -H "Accept: application/json" -H "User-Agent: openstacksdk/0.27.0 keystoneauth1/3.13.1 python- > requests/2.21.0 CPython/2.7.17" -H "X-Auth-Token: > {SHA256}3e747da939e8c4befe72d5ca7105971508bd56cdf36208ba6b960d1aee6d19b6" > > 'openstack security group delete ': > > Train (notice the name param): > REQ: curl -g -i -X GET http://10.20.0.11:9696/v2.0/security-groups/train-test-1755 -H "User-Agent: > openstacksdk/0.36.0 keystoneauth1/3.17.1 python-requests/2.22.0 CPython/3.6.7" -H "X-Auth-Token: > {SHA256}bf291d5f12903876fc69151db37d295da961ba684a575e77fb6f4829b55df1bf" > http://10.20.0.11:9696 "GET /v2.0/security-groups/train-test-1755 HTTP/1.1" 404 125 > REQ: curl -g -i -X GET "http://10.20.0.11:9696/v2.0/security-groups?name=train-test-1755" -H > "Accept: application/json" -H "User-Agent: openstacksdk/0.36.0 keystoneauth1/3.17.1 python- > requests/2.22.0 CPython/3.6.7" -H "X-Auth-Token: > {SHA256}bf291d5f12903876fc69151db37d295da961ba684a575e77fb6f4829b55df1bf" > http://10.20.0.11:9696 "GET /v2.0/security-groups?name=train-test-1755 HTTP/1.1" 200 1365 > > Stein & below (notice lack of fields): > REQ: curl -g -i -X GET http://10.0.236.150:9696/v2.0/security-groups/stein-test-5189 -H "User- > Agent: openstacksdk/0.27.0 keystoneauth1/3.13.1 python-requests/2.21.0 CPython/2.7.17" -H "X-Auth- > Token: {SHA256}e9f87afe851ff5380d8402ee81199c466be9c84fe67ed0302e8b178f33aa1fc2" > http://10.0.236.150:9696 "GET /v2.0/security-groups/stein-test-5189 HTTP/1.1" 404 125 > REQ: curl -g -i -X GET http://10.0.236.150:9696/v2.0/security-groups -H "Accept: application/json" > -H "User-Agent: openstacksdk/0.27.0 keystoneauth1/3.13.1 python-requests/2.21.0 CPython/2.7.17" -H > "X-Auth-Token: {SHA256}e9f87afe851ff5380d8402ee81199c466be9c84fe67ed0302e8b178f33aa1fc2" > > > Haven't quite figured out where fields can be used to speed up the delete process on the older > client, or if the newer client would be backwards-compatible (and how far back). > > Thanks, > James > > On 3/2/20, 9:31 AM, "James Denton" wrote: > > CAUTION: This message originated externally, please use caution when clicking on links or > opening attachments! > > > Thanks, Rodolfo. I'll take a look at each of these after coffee and clarify my position (if > needed). > > James > > On 3/2/20, 6:27 AM, "Rodolfo Alonso" wrote: > > CAUTION: This message originated externally, please use caution when clicking on links or > opening attachments! > > > Hello James: > > Just to make a quick summary of the status of the commented bugs/regressions: > > 1) https://bugs.launchpad.net/neutron/+bug/1810563: adding rules to security groups is > slow > That was addressed in https://review.opendev.org/#/c/633145/ and > https://review.opendev.org/#/c/637407/, removing the O^2 check and using lazy loading. > > > 2) https://bugzilla.redhat.com/show_bug.cgi?id=1788749: Neutron List networks API > regression > The last reply was marked as private. I've undone this and you can read now c#2. 
Testing > with a > similar scenario, I don't see any performance degradation between Queens and Train. > > > 3) https://bugzilla.redhat.com/show_bug.cgi?id=1721273: Neutron API List Ports Performance > regression > That problem was solved in https://review.opendev.org/#/c/667981/ and > https://review.opendev.org/#/c/667998/, by refactoring how the port QoS extension was > reading and > applying the QoS info in the port dict. > > > 4) https://bugs.launchpad.net/neutron/+bug/1865223: regression for security group list > between > Newton and Rocky+ > > This is similar to https://bugs.launchpad.net/neutron/+bug/1863201. In this case, the > regression was > detected from R to S. The performance dropped from 3 secs to 110 secs (36x). That issue > was > addressed by https://review.opendev.org/#/c/708695/. > > But while 1865223 is talking about *SG list*, 1863201 is related to *SG rule list*. I > would like to > make this differentiation, because both retrieval commands are not related. > > In this bug (1863201), the performance degradation multiplies by x3 (N->Q) the initial > time. This > could be caused by the OVO integration (O->P: https://review.opendev.org/#/c/284738/). > Instead of > using the DB object now we make this call using the OVO object containing the DB register > (something > like a DB view). That's something I still need to check. > > Just to make a concretion: the patch 708695 improves the *SG rule* retrieval, not the SG > list > command. Another punctualization is that this patch will help in the case of having a > balance > between SG rules and SG. This patch will help to retrieve from the DB only those SG rules > belonging > to the project. If, as you state in > https://bugs.launchpad.net/neutron/+bug/1865223/comments/4, most > of those SG rules belong to the same project, there is little improvement there. > > As commented, I'm still looking at improving the SG OVO performance. > > Regards > > > On Mon, 2020-03-02 at 03:03 +0000, Erik Olof Gunnar Andersson wrote: > > When we went from Mitaka to Rocky in August last year and we saw an exponential increase > in api > > times for listing security group rules. > > > > I think I last commented on this bug https://bugs.launchpad.net/neutron/+bug/1810563, > but I have > > brought it up on a few other occasions as well. > > Bug #1810563 “adding rules to security groups is slow” : Bugs : neutron Sometime > between liberty > > and pike, adding rules to SG's got slow, and slower with every rule added. Gerrit review > with > > fixes is incoming. You can repro with a vanilla devstack install on master, and this > script: > > #!/bin/bash OPENSTACK_TOKEN=$(openstack token issue | grep '| id' | awk '{print $4}') > export > > OPENSTACK_TOKEN CCN1=10.210.162.2 CCN3=10.210.162.10 export ENDPOINT=localhost > make_rules() { > > iter=$1 prefix=$2 file="$3" echo "generating rules" cat >$file > <<EOF > > {... bugs.launchpad.net > > > > > > From: Slawek Kaplonski > > Sent: Saturday, February 29, 2020 12:44 AM > > To: James Denton > > Cc: openstack-discuss > > Subject: Re: [neutron] security group list regression > > > > Hi, > > > > I just replied in Your bug report. Can You try to apply patch > > > https://urldefense.com/v3/__https://review.opendev.org/*/c/708695/__;Iw!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8Ul-OlUqQ$ > > to see if that will help with this problem? 
> > > > > On 29 Feb 2020, at 02:41, James Denton wrote: > > > > > > Hello all, > > > > > > We recently upgraded an environment from Newton -> Rocky, and have noticed a pretty > severe > > regression in the time it takes the API to return the list of security groups. This > environment > > has roughly 8,000+ security groups, and it takes nearly 75 seconds for the ‘openstack > security > > group list’ command to complete. I don’t have actual data from the same environment > running > > Newton, but was able to replicate this behavior with the following lab environments > running a mix > > of virtual and baremetal machines: > > > > > > Newton (VM) > > > Rocky (BM) > > > Stein (VM) > > > Train (BM) > > > > > > Number of sec grps vs time in seconds: > > > > > > # Newton Rocky Stein Train > > > 200 4.1 3.7 5.4 5.2 > > > 500 5.3 7 11 9.4 > > > 1000 7.2 12.4 19.2 16 > > > 2000 9.2 24.2 35.3 30.7 > > > 3000 12.1 36.5 52 44 > > > 4000 16.1 47.2 73 58.9 > > > 5000 18.4 55 90 69 > > > > > > As you can see (hopefully), the response time increased significantly between Newton > and Rocky, > > and has grown slightly ever since. We don't know, yet, if this behavior can be seen with > other > > 'list' commands or is limited to secgroups. We're currently verifying on some > intermediate > > releases to see where things went wonky. > > > > > > There are some similar recent reports out in the wild with little feedback: > > > > > > > > > https://urldefense.com/v3/__https://bugzilla.redhat.com/show_bug.cgi?id=1788749__;!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8Vx5jGlrA$ > > > > > > > > https://urldefense.com/v3/__https://bugzilla.redhat.com/show_bug.cgi?id=1721273__;!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8U9NbN_LA$ > > > > > > > > I opened a bug here, too: > > > > > > > > > https://urldefense.com/v3/__https://bugs.launchpad.net/neutron/*bug/1865223__;Kw!!Ci6f514n9QsL8ck!2GsBjp6V_V3EzrzAbWgNfsURfCm2tZmlUaw2J6OxFwJZUCV71lSP1b9jg8UtMQ2-Dw$ > > > > > > > > Bottom line: Has anyone else experienced similar regressions in recent releases? If > so, were you > > able to address them with any sort of tuning? > > > > > > Thanks in advance, > > > James > > > > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > > > > > > From m2elsakha at gmail.com Wed Mar 4 19:26:54 2020 From: m2elsakha at gmail.com (Mohamed Elsakhawy) Date: Wed, 4 Mar 2020 14:26:54 -0500 Subject: Upcoming UC meeting: March 5th 2020 In-Reply-To: <20200304190643.uqztyrgwkotkqyrg@yuggoth.org> References: <5E5FF35B.2040806@openstack.org> <20200304190643.uqztyrgwkotkqyrg@yuggoth.org> Message-ID: Of course, that's a good point. On Wed, Mar 4, 2020 at 2:07 PM Jeremy Stanley wrote: > On 2020-03-04 13:38:14 -0500 (-0500), Mohamed Elsakhawy wrote: > > Yes, it will be recorded and available for later viewing by the > > community > [...] > > Since you're hosting the discussion on a meeting platform which is > not reachable from China, at least make sure to publish the > recording somewhere which our Chinese community members have some > hope of accessing. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosmaita.fossdev at gmail.com Wed Mar 4 19:53:00 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 4 Mar 2020 14:53:00 -0500 Subject: [qa][cinder][devstack] proposed governance changes for some devstack plugins Message-ID: Hello QA team and devstack-plugin-ceph-core people, The Cinder team has some proposals we'd like to float. 1. The Cinder team is interested in becoming more active in the maintenance of openstack/devstack-plugin-ceph [0]. Currently, the devstack-plugin-ceph-core is https://review.opendev.org/#/admin/groups/1196,members The cinder-core is already represented by Eric and Sean; we'd like to replace them by including the cinder-core group. 2. The Cinder team is interested in becoming more active in the maintenance of x/devstack-plugin-nfs [1]. Currently, the devstack-plugin-nfs-core is https://review.opendev.org/#/admin/groups/1330,members It's already 75% cinder-core members; we'd like to replace the individual members with the cinder-core group. We also propose that devstack-core be added as an included group. 3. The Cinder team is interested in implementing a new devstack plugin: openstack/devstack-plugin-open-cas This will enable thorough testing of a new feature [2] being introduced as experimental in Ussuri and expected to be finalized in Victoria. Our plan would be to make both cinder-core and devstack-core included groups for the gerrit group governing the new plugin. 4. This is a minor point, but can the devstack-plugin-nfs repo be moved back into the 'openstack' namespace? Let us know which of these proposals you find acceptable. [0] https://opendev.org/openstack/devstack-plugin-ceph [1] https://opendev.org/x/devstack-plugin-nfs [2] https://blueprints.launchpad.net/cinder/+spec/support-volume-local-cache From jungleboyj at gmail.com Wed Mar 4 19:56:26 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 4 Mar 2020 13:56:26 -0600 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: Message-ID: <5c461b2e-a730-1fd9-aee4-ae0ea0e2eff9@gmail.com> On 3/4/2020 12:57 PM, Zane Bitter wrote: > On 2/03/20 4:45 pm, Mohammed Naser wrote: >> Hi everyone: >> >> We're now in a spot where we have an increasing amount of projects >> that don't end up with a volunteer as PTL, even if the project has >> contributors .. no one wants to hold that responsibility alone for >> many reasons.  With time, the PTL role has become far more overloaded >> with many extra responsibilities than what we define in our charter: >> >> https://governance.openstack.org/tc/reference/charter.html#project-team-leads >> >> >> I think it's time to re-evaluate the project leadership model that we >> have.  I am thinking that perhaps it would make a lot of sense to move >> from a single PTL model to multiple maintainers.  This would leave it >> up to the maintainers to decide how they want to sort the different >> requirements/liaisons/contact persons between them. > > Just for fun I had a read through the thread from when I last proposed > getting rid of PTLs, 5.5 years ago: > > http://lists.openstack.org/pipermail/openstack-dev/2014-August/043826.html > > > I wrote that when I was a PTL. Now that I have been on all sides of it > (Core team member, PTL, ex-PTL, TC member), let's see how well this > has aged :D > >> First off, the PTL is not responsible for everything in a project. >> *Everyone* is responsible for everything in a project. >> >> The PTL is *accountable* for everything in a project. 
PTLs are the >> mechanism the TC uses to ensure that programs remain accountable to >> the wider community. > > I still think this is true. But it's also true that if everyone is > responsible then nobody is really responsible. Somebody has to be > responsible for knowing all of the things that somebody needs to be > responsible for and making sure that somebody is responsible for each. > > That can be done without a PTL as such, but the PTL system does > provide a way of externally bootstrapping it in every project. > >> We have a heavyweight election process for PTLs once every cycle >> because that used to be the process for electing the TC. Now that it >> no longer serves this dual purpose, PTL elections have outlived their >> usefulness. > > I had completely forgotten about this. > > From a TC perspective, we don't have a lot of visibility on internal > ructions that may be going on in any particular project. The election > does at least assure us that there is an outlet valve for any issues, > and the fact that it is completely normalised across all of OpenStack > makes it more likely that someone will actually challenge the PTL if > there is a problem. > >> there's no need to impose that process on every project. If they want >> to rotate the tech lead every week instead of every 6 months, why not >> let them? We'll soon see from experimentation which models work. > > One cannot help wondering if we might get more Nova cores willing to > sign up for a 1-week commitment to be the "PTL" than we're getting for > a 6-months-and-maybe-indefinitely commitment. > >>> We also >>> still need someone to have the final say in case of deadlocked issues. >> >> -1 we really don't. > > I still think I am mostly right about this (and I know Thierry still > thinks he is right and I am wrong ;) > > IMHO it's never the job of the PTL to have a casting vote. It *is* the > job of the PTL - and all leaders in the project - to ensure that > consensus is eventually reached somehow; that discussion is not just > allowed to continue forever without a resolution when people disagree. > > While all leaders should be doing this, I can see some benefit in > having one person who sees it as specifically their responsibility, > and as noted above the PTL election process ensures that this happens > in every project. > > > In summary, I still think that in a healthy project the requirement to > have a PTL is probably mildly unhelpful. One thing that didn't come up > in that thread but that I have mentioned elsewhere, was that when I > became a PTL I very quickly learned to be very careful about what I > expressed an opinion on and how, lest I accidentally close down a > conversation that I was intending to open up. Because *overnight* > people developed this sudden tendency to be like "HE HATH SPOKEN" > whenever I weighed in. (This is very unnerving BTW, and one reason I > feel like I can be more helpful by *not* running for PTL.) So having a > PTL means giving up a core team member in some senses. > > Ultimately, from the TC perspective it's a tool for reducing the > variance in outcomes compared to letting every team decide their own > leadership structure. As with all interventions that act by reducing > variance (rather than increasing the average), this will tend to be a > burden on higher-performing teams while raising the floor for > lower-performing ones. So that's the trade-off we have to make. > > cheers, > Zane. > I have been wanting to weigh in on this thread and was waiting for the right moment. 
Zane's input sums up how I feel as well.  I think that having consistent leadership structure across projects is important and helps keep us aware of the health of projects. Perhaps we can help return interest in the PTL role by providing examples of teams that share the work and have the PTL to help make final decisions.  I know that the Cinder team has been doing this for quite some time successfully. Jay From soumplis at admin.grnet.gr Wed Mar 4 19:59:52 2020 From: soumplis at admin.grnet.gr (Alexandros Soumplis) Date: Wed, 4 Mar 2020 21:59:52 +0200 Subject: [Watcher] Queue watcher_notifications Message-ID: Hi all, We are using Watcher on Train and we have noticed a problem with the watcher_notifications queue which fills up with messages from Cinder and watcher-decision-engine seems not to consume them. On the contrary it perfectly consumes nova notifications from the versioned_notifications queue. Any suggestions ? a. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3620 bytes Desc: S/MIME Cryptographic Signature URL: From openstack at nemebean.com Wed Mar 4 20:08:54 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 4 Mar 2020 14:08:54 -0600 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: Message-ID: On 3/4/20 12:57 PM, Zane Bitter wrote: > One cannot help wondering if we might get more Nova cores willing to > sign up for a 1-week commitment to be the "PTL" than we're getting for a > 6-months-and-maybe-indefinitely commitment. That's a really interesting idea. I'm not sure I'd want to go as short as one week for PTL, but shortening the term might make it easier for people to commit. It might be a small issue come project update and cycle highlights time since no one person may have the big picture of what happened in the project, but ideally those are collaborative things that the entire team has input on anyway. From skaplons at redhat.com Wed Mar 4 21:08:39 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 4 Mar 2020 22:08:39 +0100 Subject: [osc][sdk] Merging OpenStack SDK and OpenStack Client teams In-Reply-To: <150614DB-C9BD-413C-9790-C419635A2AFC@inaugust.com> References: <9be364bd-49dd-ad97-3a21-ee0cc87a9298@ham.ie> <150614DB-C9BD-413C-9790-C419635A2AFC@inaugust.com> Message-ID: <1824F3CF-16D2-425C-8EE8-9A282A09DE4F@redhat.com> +1 from me for that idea. > On 4 Mar 2020, at 18:38, Monty Taylor wrote: > > > >> On Mar 4, 2020, at 10:49 AM, Graham Hayes wrote: >> >> On 04/03/2020 16:19, Monty Taylor wrote: >>> Hey everybody, >>> I’d like to propose merging the SDK and OSC teams. We already share an IRC channel, and already share a purpose in life. In OSC we have a current goal of swapping out client implementation for SDK, and we’re >>> Already ensuring that SDK does what it needs to do to facilitate that goal. We also already share PTG space, and have requested a shared set of time at the upcoming Denver PTG. So really the separation is historical not practical, and these days having additional layers of governance is not super useful. >> >> This makes sense. >> >>> I propose that we do a simple merge of the teams. This means the current SDK cores will become cores on OSC, and as most of the OSC cores are already SDK cores, it means the SDK team gains amotoki - which is always a positive. 
>> >> Yeah - projects were supposed to be mainly about common groups of people >> working on stuff, so if the overlap is so close already, it seems like >> a no brainer. >> >>> Dean hasn’t had time to spend on OSC quite a bit, sadly, and while we remain hopeful that this will change, we’re slowly coming to terms with the possibility that it might not. With that in mind, I’ll serve as the PTL for the new combined team until the next election. >> >> If this is good with the two teams, this is good with me :) >> Hopefully this can help with projects teams issues with OSC/SDK response >> times. >>> Monty > > I think it can. I’ve had some chats with some folks on the team and I think we all think this will help streamline and enable us to respond more quickly. > — Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Wed Mar 4 21:09:59 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 4 Mar 2020 22:09:59 +0100 Subject: [sdk] Additions and subtractions from core team In-Reply-To: References: Message-ID: <7DF33892-5254-472F-BC0E-07211A05253E@redhat.com> Hi, > On 4 Mar 2020, at 17:56, Monty Taylor wrote: > > Heya, > > With the previous email about merging OSC and SDK teams, I’d also like to propose the following changes to the SDK core team (keeping in mind that likely means the core team of both OSC and SDK real soon now) > > Adds: > > Akihiro Motoki - The only OSC core not in sdk-core. amotoki should really be a core in all projects anyway > Sean McGinnis - Sean has been reviewing things as a stable branch maint in both SDK and OSC, and as such has shown a good tendency to help things along when needed and to not approve things when he doesn’t know what’s up. Big +1 from me here. > > Subtractions: > > All of these people are awesome, but they’re all long gone: > > Brian Curtin > Clint Byrum > Everett Toews > Jamie Lennox > Jesse Noller > Ricardo Carillo Cruz > Richard Theis > Rosario Di Somma > Sam Yaple > Terry Howe That’s sad but if that is necessary than ok. > > Monty > — Slawek Kaplonski Senior software engineer Red Hat From sean.mcginnis at gmx.com Wed Mar 4 21:10:55 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 4 Mar 2020 15:10:55 -0600 Subject: [CINDER] Snapshots export In-Reply-To: <20200304155850.b4ydu4vfxthih7we@localhost> References: <20200304155850.b4ydu4vfxthih7we@localhost> Message-ID: On 3/4/20 9:58 AM, Gorka Eguileor wrote: > On 03/03, Alfredo De Luca wrote: >> Hi all. >> We have our env with Openstack (Train) and cinder with CEPH (nautilus) >> backend. >> We are creating automatic volumes snapshots and now we'd like to export >> them as a backup/restore plan. After exporting the snapshots we will use >> Acronis as backup tool. >> >> I couldn't find the right steps/commands to exports the snapshots. >> Any info? >> Cheers >> >> -- >> *Alfredo* > Hi Alfredo, > > What kind of backup/restore plan do you have planned? > > Because snapshots are not meant to be used in a Disaster Recovery > backup/restore plan, so the only thing available are the manage/unmanage > commands. > > These commands are meant to add an existing volume/snapshots into Cinder > together, not to unmanage/manage them independently. > > For example, you wouldn't be able to manage a snapshot if the volume is > not already managed. Also unmanaging the snapshot would block the > deletion of the RBD volume itself. > > Cheers, > Gorka. 
If the intent is to use the snapshots as a source to back up the volume data, leaving the actual volume attached and IO running but still getting a "static" view of the data, then you would need to create a volume from the chosen snapshot, mount that volume somewhere that is accessible to your backup software, perform the copy of the data, then delete the volume when complete.

I haven't used Acronis myself, but the issue for some backup software could be that the volume it is backing up from is going to be different every time. Though you could make sure it is mounted at the same place, so the backup software at least *thinks* it's backing up the same thing.

Then restoring the data will likely require some manual intervention, but that's pretty much always the case when something goes wrong. I would just recommend you test the full disaster recovery scenario to make sure you have that figured out and working right before you actually need it.

Sean
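
For reference, a rough sketch of that flow using openstacksdk (the cloud and snapshot names are placeholders, and the attach/mount steps in the middle depend entirely on your environment):

import openstack

conn = openstack.connect(cloud='mycloud')

# Create a throwaway volume from the chosen snapshot.
snap = conn.block_storage.find_snapshot('daily-snap')
vol = conn.block_storage.create_volume(
    name='backup-source', size=snap.size, snapshot_id=snap.id)
conn.block_storage.wait_for_status(vol, status='available')

# ... attach and mount the volume where the backup tool can read it,
# run the backup, then unmount/detach it and clean up:
conn.block_storage.delete_volume(vol)
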
From colleen at gazlene.net Wed Mar 4 21:26:45 2020
From: colleen at gazlene.net (Colleen Murphy)
Date: Wed, 04 Mar 2020 13:26:45 -0800
Subject: [policy][keystone][nova][cyborg][barbican][neutron][manila][cinder] Policy Popup team progress report
Message-ID: <3f3a161a-ba33-4737-b0d1-556810aa9315 at www.fastmail.com>

This is an update on the progress made within the Policy Popup team[1] so far this cycle.

[1] https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team

Why This Is Important
=====================

Separating system, domain, and project-scope APIs and providing meaningful default roles is critical to facilitating secure cloud deployments and to fulfilling OpenStack's vision as a fully self-service infrastructure provider[2]. Until all projects have completed this policy migration, the "reader" role that exists in keystone is dangerously misleading, and the `[oslo_policy]/enforce_scope` option has limited usefulness as long as projects lack uniformity in how an administrator can use scoped APIs.

[2] https://governance.openstack.org/tc/reference/technical-vision.html#self-service

Project Progress
================

Nova
----
- Ussuri spec has merged[3]
- 28 changes implementing the spec have been merged[4]
- 39 additional changes have been proposed and are awaiting review[5]

[3] https://review.opendev.org/686058
[4] https://review.opendev.org/#/q/topic:bp/policy-defaults-refresh+status:merged
[5] https://review.opendev.org/#/q/topic:bp/policy-defaults-refresh+status:open

Cyborg
------
- Ussuri spec has merged[6] and a tracking story has been created[7]
- 2 changes to implement the spec have been proposed and are awaiting review[8]

[6] https://review.opendev.org/699099
[7] https://storyboard.openstack.org/#!/story/2007024
[8] https://review.opendev.org/#/q/project:openstack/cyborg+topic:policy-popup+status:open

Barbican
--------
- A table has been created outlining the required policy changes[9]
- No patches merged or proposed yet

[9] https://wiki.openstack.org/wiki/Barbican/Policy

Neutron
-------
- No planning document
- No patches merged or proposed yet

Manila
------
- No planning document
- No patches merged or proposed yet

Cinder
------
- No planning document
- No patches merged or proposed yet

How You Can Help
================

If you are a contributor for these teams, please update the popup team wiki page[10] as your project starts to plan and implement policy changes. If you are a cloud operator, please help review the proposed policy rule changes to sanity-check the new scope and role defaults and to help influence these decisions.

[10] https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team

Reminders
=========

- Reach out at any time to the keystone team if you have questions on this popup team's goals.
- Colleen still seeking to be replaced as co-chair, please let me know if you're interested.
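
To make the migration concrete, this is roughly what a refreshed default looks like when registered with oslo.policy's DocumentedRuleDefault; the rule name, check string, and path below are purely illustrative, not any project's actual defaults:

from oslo_policy import policy

rule = policy.DocumentedRuleDefault(
    name='example_api:widgets:list',
    check_str='role:reader and project_id:%(project_id)s',
    description='List widgets in a project.',
    operations=[{'path': '/widgets', 'method': 'GET'}],
    scope_types=['project'],
)

Once projects declare scope_types consistently, setting `[oslo_policy]/enforce_scope = True` rejects tokens used outside their declared scope, which is why uniform adoption across projects matters.
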
From openstack at nemebean.com Wed Mar 4 22:33:37 2020
From: openstack at nemebean.com (Ben Nemec)
Date: Wed, 4 Mar 2020 16:33:37 -0600
Subject: [oslo][infra] OpenDev git repo for oslo.policy missing commit
In-Reply-To: References: <1129d4b2-0a8d-d034-5ded-7e49e6e49a77 at nemebean.com>
Message-ID: <69ec9111-0617-3259-98f7-64e5125eba54 at nemebean.com>

On 3/3/20 5:42 PM, Clark Boylan wrote:
> On Tue, Mar 3, 2020, at 3:05 PM, Ben Nemec wrote:
>> Found a weird thing today. The OpenDev oslo.policy repo[0] is missing
>> [1]. Even stranger, I see it on the Github mirror[2]. Any idea what
>> happened here?
>
> Some other readers may notice that the commit actually does show up for
> them. The reason for this is the commit is only missing from one of eight
> backend gitea servers. You can observe this by visiting
> https://gitea0X.opendev.org:3000/openstack/oslo.policy/commits/branch/master
> and replacing the X with 1 through 8. Number 5 is the lucky server.
>
> My hunch is that this commit merging and subsequently being replicated
> coincided with a restart of gitea (or related service) on gitea05. And the
> replication event was missed. We've tried to ensure we replicate to catch
> up after explicit upgrades, which implies to me that maybe the db container
> updated. Note that https://review.opendev.org/#/c/705804/ merged on the
> same day but after the missing commit.
>
> In any case I've triggered a full rereplication to gitea05 to make sure we
> are caught up and will work through the others as well to ensure none are
> missed. You should be able to confirm that the commit is present in about
> 20 minutes.

Great, thanks! I see it now.

> Longer term the plan here is to run a single Gitea cluster which will allow
> us to do rolling restarts of services without impacting replication.
> Unfortunately, this requires updates to Gitea to support that.
>
>> -Ben
>>
>> 0: https://opendev.org/openstack/oslo.policy/commits/branch/master
>> 1: https://review.opendev.org/#/c/708212/
>> 2: https://github.com/openstack/oslo.policy/commits/master
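
A quick way to spot a lagging backend yourself, following the URL pattern from Clark's reply (a sketch only: the commit SHA is a placeholder, since it is not given in the thread, and the /commit/ route is assumed from stock Gitea):

import requests

COMMIT = '<sha-to-check>'  # placeholder; fill in locally
for x in range(1, 9):
    url = (f'https://gitea{x:02d}.opendev.org:3000'
           f'/openstack/oslo.policy/commit/{COMMIT}')
    status = requests.get(url).status_code
    print(f'gitea{x:02d}: {status}')  # a 404 marks an out-of-date backend
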
I've no idea where the magic number would fall, and it's probably different for every team. I'm reasonably confident it's somewhere between 1 week and 6 months though. From dms at danplanet.com Thu Mar 5 00:34:11 2020 From: dms at danplanet.com (Dan Smith) Date: Wed, 04 Mar 2020 16:34:11 -0800 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: (Zane Bitter's message of "Wed, 4 Mar 2020 17:43:27 -0500") References: Message-ID: > The key would be to make it short enough that you can be 100% > confident the next person will take over and not leave you holding the > bag forever. (Hi Rico!) > > I've no idea where the magic number would fall, and it's probably > different for every team. I'm reasonably confident it's somewhere > between 1 week and 6 months though. I think one potential benefit from this comes in the form of killing one of the other thngs that Zane mentioned (which I totally agree with): The "(s)he hath spoken" part. I can imagine getting to the point where not everyone on the team remembers exactly "who is PTL this month" and certainly the long tail of minor contributors would lose visibility into this. I think that would dis-empower (in a good way) the PTL-this-month person both from the perspective of being the constant (even if unnecessary) decider, and the sounding board for everyone trying to push their pet efforts through the works. Although I'm calling "not it" for the month containing the PTG right here and now ;P --Dan From gmann at ghanshyammann.com Thu Mar 5 00:53:55 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 04 Mar 2020 18:53:55 -0600 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: Message-ID: <170a82f47d1.bb51993a438690.220344225005930378@ghanshyammann.com> ---- On Wed, 04 Mar 2020 16:43:27 -0600 Zane Bitter wrote ---- > On 4/03/20 3:08 pm, Ben Nemec wrote: > > > > > > On 3/4/20 12:57 PM, Zane Bitter wrote: > >> One cannot help wondering if we might get more Nova cores willing to > >> sign up for a 1-week commitment to be the "PTL" than we're getting for > >> a 6-months-and-maybe-indefinitely commitment. > > > > That's a really interesting idea. I'm not sure I'd want to go as short > > as one week for PTL, but shortening the term might make it easier for > > people to commit. > > The key would be to make it short enough that you can be 100% confident > the next person will take over and not leave you holding the bag > forever. (Hi Rico!) > > I've no idea where the magic number would fall, and it's probably > different for every team. I'm reasonably confident it's somewhere > between 1 week and 6 months though. This seems a good way to distribute the PTL overload but I am thinking what if more than one would like to server as PTL for the cycle or whatever period we decide. I am not sure we will have this case in the current situation where almost all projects are without-election but still we should have some mechanism ready. Another idea I think about co-PTLship. I remember in previous or this cycle few projects want to have the co-PTL concept. Means officially have more than PTL. To solve the single point of contact issue we can have single PTL contact and other co-PTL distribute the responsibility for that cycle. 
-gmann > > > From rico.lin.guanyu at gmail.com Thu Mar 5 02:08:08 2020 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 5 Mar 2020 10:08:08 +0800 Subject: [stable][heat] Nominating Rabi Mishra for heat-stable-maint In-Reply-To: References: Message-ID: Big +1 on this On Thu, Mar 5, 2020 at 6:39 AM Zane Bitter wrote: > Rabi has been a core reviewer on Heat for more than 4 years, and is a > former PTL. In that time he's done hundreds of backports: > > > https://review.opendev.org/#/q/owner:ramishra%2540redhat.com+branch:%22%255Estable/.*%22+project:openstack/heat > > (Only two of which we ended up deciding not to merge, the last of which > was in 2016.) > > As well as a good number of reviews: > > > https://review.opendev.org/#/q/reviewedby:ramishra%2540redhat.com+branch:%22%255Estable/.*%22+project:openstack/heat+NOT+owner:%22Rabi+Mishra+%253Cramishra%2540redhat.com%253E%22 > > Rabi also maintains a downstream distribution of Heat, so he is fully > aware of the pain that backporting inappropriate changes can cause. He > is well aware of the stable branch guidelines and is, if anything, more > conservative than I am in applying them. I'm 100% confident he will not > be approving any feature backports. > > I'll be spending some extended time away from a keyboard in the near > future, so it's important that we increase the bus factor (from 1) of > heat-stable-maint team members. > > thanks, > Zane. > > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From rui.zang at yandex.com Thu Mar 5 02:24:55 2020 From: rui.zang at yandex.com (rui zang) Date: Thu, 05 Mar 2020 10:24:55 +0800 Subject: CPU Topology confusion In-Reply-To: References: Message-ID: <23998181583375095@myt3-4825096bdc88.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Thu Mar 5 02:29:17 2020 From: feilong at catalyst.net.nz (Feilong Wang) Date: Thu, 5 Mar 2020 15:29:17 +1300 Subject: [stable][heat] Nominating Rabi Mishra for heat-stable-maint In-Reply-To: References: Message-ID: <47d2af35-120a-ea46-61f6-bab5d0a3548b@catalyst.net.nz> big +1 On 5/03/20 11:33 AM, Zane Bitter wrote: > Rabi has been a core reviewer on Heat for more than 4 years, and is a > former PTL. In that time he's done hundreds of backports: > > https://review.opendev.org/#/q/owner:ramishra%2540redhat.com+branch:%22%255Estable/.*%22+project:openstack/heat > > > (Only two of which we ended up deciding not to merge, the last of > which was in 2016.) > > As well as a good number of reviews: > > https://review.opendev.org/#/q/reviewedby:ramishra%2540redhat.com+branch:%22%255Estable/.*%22+project:openstack/heat+NOT+owner:%22Rabi+Mishra+%253Cramishra%2540redhat.com%253E%22 > > > Rabi also maintains a downstream distribution of Heat, so he is fully > aware of the pain that backporting inappropriate changes can cause. He > is well aware of the stable branch guidelines and is, if anything, > more conservative than I am in applying them. I'm 100% confident he > will not be approving any feature backports. > > I'll be spending some extended time away from a keyboard in the near > future, so it's important that we increase the bus factor (from 1) of > heat-stable-maint team members. > > thanks, > Zane. 
>
>

--
Cheers & Best regards,
Feilong Wang (王飞龙)
Head of R&D
Catalyst Cloud - Cloud Native New Zealand
--------------------------------------------------------------------------
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Level 6, Catalyst House, 150 Willis Street, Wellington
--------------------------------------------------------------------------

From rico.lin.guanyu at gmail.com Thu Mar 5 02:29:42 2020
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Thu, 5 Mar 2020 10:29:42 +0800
Subject: [all][tc] Moving PTL role to "Maintainers"
In-Reply-To: <170a82f47d1.bb51993a438690.220344225005930378 at ghanshyammann.com>
References: <170a82f47d1.bb51993a438690.220344225005930378 at ghanshyammann.com>
Message-ID: 

On Thu, Mar 5, 2020 at 8:59 AM Ghanshyam Mann wrote:
>
> Another idea is co-PTLship. I remember that in this or a previous cycle a
> few projects wanted the co-PTL concept, meaning officially having more
> than one PTL. To solve the single-point-of-contact issue we can have a
> single PTL contact while the other co-PTLs distribute the responsibility
> for that cycle.
>

IMO that's a good idea for sure, and that's how the Telemetry team is doing it right now (two co-PTLs). From my side, though, the PTL role is not so much about leadership as about a lot of maintenance work. This makes me tend to agree that `Maintainers` is the right word (along with other nice choices). In the long term, merge the `core team` into the `Maintainers`? :)

And to define it specifically: I don't think this change means the project is in maintenance mode. Features should still be welcome, to keep innovation going in each project we have. We should mention that explicitly to avoid any confusion people might have here.

--
May The Force of OpenStack Be With You,
Rico Lin
irc: ricolin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kpuusild at gmail.com Thu Mar 5 06:45:36 2020
From: kpuusild at gmail.com (Kevin Puusild)
Date: Thu, 5 Mar 2020 08:45:36 +0200
Subject: NovaVMware - vCenter / DevStack - ERROR
Message-ID: 

Hello, I have written here before but didn't get any help, so I am trying again.
Current Setup: Ubuntu 18.04 Server with 1 vCenter + 1 ESXi.

I am trying to set up DevStack with vCenter as the hypervisor (not ESXi), following this wiki article: https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide to prepare my DevStack, and also following https://docs.openstack.org/devstack/latest/ (combining the two articles).

My *localrc* file contains the following:

ENABLED_SERVICES=g-api,g-reg,key,n-api,n-crt,n-cpu,n-net,n-cond,n-sch,n-novnc,n-cauth,rabbit,mysql,horizon
VIRT_DRIVER=vsphere
VMWAREAPI_IP=#MY_VCENTER_IP
VMWAREAPI_USER=#MY_VCENTER_ADMINISTRATOR_USER
VMWAREAPI_PASSWORD=#MY_VCENTER_ADMINISTRATOR_PASSWORD
VMWAREAPI_CLUSTER=#MY_VCENTER_CLUSTER_NAME
DATABASE_PASSWORD=nova
RABBIT_PASSWORD=nova
SERVICE_TOKEN=nova
SERVICE_PASSWORD=nova
ADMIN_PASSWORD=nova
HOST_IP=#MY_DEVSTACK_MACHINE_IP

After running the *stack.sh* script I get the following error:

++functions:wait_for_compute:448 hostname
+functions:wait_for_compute:448 compute_hostname=devstack
+functions:wait_for_compute:450 timeout 60 bash -x
++functions:wait_for_compute:450 hostname
+:: ID=
+:: [[ '' == '' ]]
+:: sleep 1
+:: [[ vsphere = \f\a\k\e ]]
++:: openstack --os-cloud devstack-admin --os-region RegionOne compute service list --host devstack --service nova-compute -c ID -f value

[the same probe repeats once per second until the 60-second timeout expires]

+functions:wait_for_compute:450 rval=124
+functions:wait_for_compute:462 time_stop wait_for_service
+functions-common:time_stop:2330 local name
+functions-common:time_stop:2331 local end_time
+functions-common:time_stop:2332 local elapsed_time
+functions-common:time_stop:2333 local total
+functions-common:time_stop:2334 local start_time
+functions-common:time_stop:2336 name=wait_for_service
+functions-common:time_stop:2337 start_time=1582790249013
+functions-common:time_stop:2339 [[ -z 1582790249013 ]]
++functions-common:time_stop:2342 date +%s%3N
+functions-common:time_stop:2342 end_time=1582790309198
+functions-common:time_stop:2343 elapsed_time=60185
+functions-common:time_stop:2344 total=7679
+functions-common:time_stop:2346 _TIME_START[$name]=
+functions-common:time_stop:2347 _TIME_TOTAL[$name]=67864
+functions:wait_for_compute:464 [[ 124 != 0 ]]
+functions:wait_for_compute:465 echo 'Didn'\''t find service registered by hostname after 60 seconds'
+functions:wait_for_compute:465 echo 'Didn'\''t find service registered by hostname after 60 seconds' Didn't find service registered by hostname after 60 seconds +functions:wait_for_compute:466 openstack --os-cloud devstack-admin --os-region RegionOne compute service list +functions:wait_for_compute:468 return 124 +lib/nova:is_nova_ready:1 exit_trap +./stack.sh:exit_trap:533 local r=124 ++./stack.sh:exit_trap:534 jobs -p +./stack.sh:exit_trap:534 jobs= +./stack.sh:exit_trap:537 [[ -n '' ]] +./stack.sh:exit_trap:543 '[' -f /tmp/tmp.8iR1neh89b ']' +./stack.sh:exit_trap:544 rm /tmp/tmp.8iR1neh89b +./stack.sh:exit_trap:548 kill_spinner +./stack.sh:kill_spinner:443 '[' '!' -z '' ']' +./stack.sh:exit_trap:550 [[ 124 -ne 0 ]] +./stack.sh:exit_trap:551 echo 'Error on exit' Error on exit +./stack.sh:exit_trap:553 type -p generate-subunit +./stack.sh:exit_trap:554 generate-subunit 1582789174 1136 fail +./stack.sh:exit_trap:556 [[ -z /opt/stack/logs ]] +./stack.sh:exit_trap:559 /usr/bin/python3.6 /devstack/devstack/tools/worlddump.py -d /opt/stack/logs World dumping... see /opt/stack/logs/worlddump-2020-02-27-075831.txt for details +./stack.sh:exit_trap:568 exit 124 -- Kevin Puusild -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Thu Mar 5 10:55:45 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 05 Mar 2020 10:55:45 +0000 Subject: [osc][sdk] Merging OpenStack SDK and OpenStack Client teams In-Reply-To: References: Message-ID: <2c8ccd0c2d2d7a8ae6073de8b9fc80656fa49ce0.camel@redhat.com> On Wed, 2020-03-04 at 10:19 -0600, Monty Taylor wrote: > Hey everybody, > > I'd like to propose merging the SDK and OSC teams. We already share > an IRC channel, and already share a purpose in life. In OSC we have a > current goal of swapping out client implementation for SDK, and we're > Already ensuring that SDK does what it needs to do to facilitate that > goal. We also already share PTG space, and have requested a shared > set of time at the upcoming Denver PTG. So really the separation is > historical not practical, and these days having additional layers of > governance is not super useful. > > I propose that we do a simple merge of the teams. This means the > current SDK cores will become cores on OSC, and as most of the OSC > cores are already SDK cores, it means the SDK team gains amotoki - > which is always a positive. Big +1 > Dean hasn't had time to spend on OSC quite a bit, sadly, and while we > remain hopeful that this will change, we’re slowly coming to terms > with the possibility that it might not. With that in mind, I'll serve > as the PTL for the new combined team until the next election. > > Monty > From gr at ham.ie Thu Mar 5 11:06:33 2020 From: gr at ham.ie (Graham Hayes) Date: Thu, 5 Mar 2020 11:06:33 +0000 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: Message-ID: <1da63dab-c35e-6377-d5d8-e075a5c37408@ham.ie> On 04/03/2020 22:43, Zane Bitter wrote: > On 4/03/20 3:08 pm, Ben Nemec wrote: >> >> >> On 3/4/20 12:57 PM, Zane Bitter wrote: >>> One cannot help wondering if we might get more Nova cores willing to >>> sign up for a 1-week commitment to be the "PTL" than we're getting >>> for a 6-months-and-maybe-indefinitely commitment. >> >> That's a really interesting idea. I'm not sure I'd want to go as short >> as one week for PTL, but shortening the term might make it easier for >> people to commit. 
> The key would be to make it short enough that you can be 100% confident
> the next person will take over and not leave you holding the bag
> forever. (Hi Rico!)

And also that the person you hand it off to won't have to hand it back.
(Hi Tim!)

> I've no idea where the magic number would fall, and it's probably
> different for every team. I'm reasonably confident it's somewhere
> between 1 week and 6 months though.

Yeah - I am not sure the TC should mandate a number - some teams
might be OK with the 6 months, while others will need to do 1 or 2 weeks

From pete.vandergiessen at canonical.com  Thu Mar  5 11:19:53 2020
From: pete.vandergiessen at canonical.com (Pete Vander Giessen)
Date: Thu, 5 Mar 2020 12:19:53 +0100
Subject: [MicroStack] Beta updates and strict (devmode) confinement on edge
Message-ID: 

Hi all,

It has been a little while since I've posted any MicroStack news. MicroStack is a base set of OpenStack services, bundled into a snap. It's intended for OpenStack workload development (as opposed to development of OpenStack itself), and also contains some experimental features geared toward edge clouds.

We've just released a new MicroStack beta, with preview support for clustering (e.g., running a control plane node along with several compute nodes). You can install it with:

sudo snap install --classic --beta microstack

There are some known issues with refreshes from the current beta version of the snap to the new version. You may need to reinstall MicroStack if you're running the current beta.

To address some issues running MicroStack on non-LTS Ubuntu distros, as well as lay the groundwork for a smoother refresh/upgrade process going forward, we're working on making MicroStack into a strictly confined snap! If you're feeling brave, you can try the (almost) strictly confined snap in developer mode with the following invocation:

sudo snap install --devmode --edge microstack

Work to put together a completely confined version of the snap, with no need for the devmode flag, is ongoing.

You can find more information about MicroStack at https://microstack.run, and talk to the devs either on this mailing list, or on IRC: freenode #openstack-snaps .
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From skaplons at redhat.com  Thu Mar  5 11:52:12 2020
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 5 Mar 2020 12:52:12 +0100
Subject: [neutron][drivers team] Meeting 06.03.2020 cancelled
Message-ID: <30F8DBAA-B735-4E72-A290-F23B1FC9AC58@redhat.com>

Hi Neutrinos,

Due to lack of agenda let's cancel tomorrow's drivers meeting. Let's meet as usual next week on Friday 13.03.2020.

But, as You have now one hour of "free" time tomorrow (:D) I would like to ask You to take a look and help triage one new RFE: https://bugs.launchpad.net/neutron/+bug/1865889
I'm not the best expert in routed networks, so I would really like it if Miguel, Brian and others could take a look at this :)

Thx in advance.

— Slawek Kaplonski
Senior software engineer
Red Hat

From satish.txt at gmail.com  Thu Mar  5 12:18:15 2020
From: satish.txt at gmail.com (Satish Patel)
Date: Thu, 5 Mar 2020 07:18:15 -0500
Subject: CPU Topology confusion
In-Reply-To: <23998181583375095@myt3-4825096bdc88.qloud-c.yandex.net>
References: <23998181583375095@myt3-4825096bdc88.qloud-c.yandex.net>
Message-ID: <3F1F97A3-4DEE-4CA9-9147-892D6E7355E7@gmail.com>

cpu-passthrough

Sent from my iPhone

> On Mar 4, 2020, at 9:24 PM, rui zang wrote:
>
> Hi,
>
> What is the value for the "cpu_mode" configuration option?
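
For reference, this is roughly the relevant bit of nova.conf on my computes (a minimal sketch; note the exact value the libvirt driver accepts is "host-passthrough"):

[libvirt]
virt_type = kvm
cpu_mode = host-passthrough
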
> https://docs.openstack.org/mitaka/config-reference/compute/hypervisor-kvm.html > > Thanks, > Zang, Rui > > > 05.03.2020, 01:24, "Satish Patel" : > Folks, > > We are running openstack with KVM and i have noticed kvm presenting > wrong CPU Tolopoly to VM and because of that we are seeing bad > performance to our application. > > This is openstack compute: > > # lstopo-no-graphics --no-io > Machine (64GB total) > NUMANode L#0 (P#0 32GB) + Package L#0 + L3 L#0 (25MB) > L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 > PU L#0 (P#0) > PU L#1 (P#20) > L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 > PU L#2 (P#1) > PU L#3 (P#21) > > This is VM running on above compute > > # lstopo-no-graphics --no-io > Machine (59GB total) > NUMANode L#0 (P#0 29GB) + Package L#0 + L3 L#0 (16MB) > L2 L#0 (4096KB) + Core L#0 > L1d L#0 (32KB) + L1i L#0 (32KB) + PU L#0 (P#0) > L1d L#1 (32KB) + L1i L#1 (32KB) + PU L#1 (P#1) > L2 L#1 (4096KB) + Core L#1 > L1d L#2 (32KB) + L1i L#2 (32KB) + PU L#2 (P#2) > L1d L#3 (32KB) + L1i L#3 (32KB) + PU L#3 (P#3) > > if you noticed P#0 and P#1 has own (32KB) cache per thread that is > wrong presentation if you compare with physical CPU. > > This is a screenshot of AWS vs Openstack CPU Topology and looking at > openstack its presentation is little odd, is that normal? > > https://imgur.com/a/2sPwJVC > > I am running CentOS7.6 with kvm 2.12 version. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tolga at etom.cloud Thu Mar 5 12:40:44 2020 From: tolga at etom.cloud (tolga at etom.cloud) Date: Thu, 05 Mar 2020 15:40:44 +0300 Subject: [trove][charm] RabbitMQ Connection Error Message-ID: <28295851583412044@myt6-636ea6dfd460.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Mar 5 14:43:18 2020 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 5 Mar 2020 09:43:18 -0500 Subject: CPU Topology confusion In-Reply-To: <3F1F97A3-4DEE-4CA9-9147-892D6E7355E7@gmail.com> References: <3F1F97A3-4DEE-4CA9-9147-892D6E7355E7@gmail.com> Message-ID: cpu_mode = cpu-passthrough cpu_model = none Do you think cpu_model make difference ? Sent from my iPhone > On Mar 5, 2020, at 7:18 AM, Satish Patel wrote: > >  > > cpu-passthrough > > Sent from my iPhone > >>> On Mar 4, 2020, at 9:24 PM, rui zang wrote: >>> >>  >> Hi, >> >> What is the value for the "cpu_mode" configuration option? >> https://docs.openstack.org/mitaka/config-reference/compute/hypervisor-kvm.html >> >> Thanks, >> Zang, Rui >> >> >> 05.03.2020, 01:24, "Satish Patel" : >> Folks, >> >> We are running openstack with KVM and i have noticed kvm presenting >> wrong CPU Tolopoly to VM and because of that we are seeing bad >> performance to our application. 
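
(A side note on the cpu_model question: with cpu_mode = host-passthrough the cpu_model option is ignored, since it only takes effect when cpu_mode = custom. To confirm what libvirt actually handed to the guest, something like this on the compute node should work; the instance name here is a made-up example:)

# virsh dumpxml instance-00000001 | grep -A 3 '<cpu '
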
>> >> This is openstack compute: >> >> # lstopo-no-graphics --no-io >> Machine (64GB total) >> NUMANode L#0 (P#0 32GB) + Package L#0 + L3 L#0 (25MB) >> L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 >> PU L#0 (P#0) >> PU L#1 (P#20) >> L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 >> PU L#2 (P#1) >> PU L#3 (P#21) >> >> This is VM running on above compute >> >> # lstopo-no-graphics --no-io >> Machine (59GB total) >> NUMANode L#0 (P#0 29GB) + Package L#0 + L3 L#0 (16MB) >> L2 L#0 (4096KB) + Core L#0 >> L1d L#0 (32KB) + L1i L#0 (32KB) + PU L#0 (P#0) >> L1d L#1 (32KB) + L1i L#1 (32KB) + PU L#1 (P#1) >> L2 L#1 (4096KB) + Core L#1 >> L1d L#2 (32KB) + L1i L#2 (32KB) + PU L#2 (P#2) >> L1d L#3 (32KB) + L1i L#3 (32KB) + PU L#3 (P#3) >> >> if you noticed P#0 and P#1 has own (32KB) cache per thread that is >> wrong presentation if you compare with physical CPU. >> >> This is a screenshot of AWS vs Openstack CPU Topology and looking at >> openstack its presentation is little odd, is that normal? >> >> https://imgur.com/a/2sPwJVC >> >> I am running CentOS7.6 with kvm 2.12 version. >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekuvaja at redhat.com Thu Mar 5 15:03:29 2020 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Thu, 5 Mar 2020 15:03:29 +0000 Subject: OSC future (formerly [glance] Different checksum between CLI and curl) In-Reply-To: <170a31cf7da.c168b6b0389449.3073076279707922843@ghanshyammann.com> References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com> <36FB0C7D-C3E1-4C3A-B923-1F68764D44A8@cern.ch> <170a31cf7da.c168b6b0389449.3073076279707922843@ghanshyammann.com> Message-ID: On Wed, Mar 4, 2020 at 1:19 AM Ghanshyam Mann wrote: > ---- On Tue, 03 Mar 2020 13:00:35 -0600 Tim Bell wrote > ---- > > > > > > On 3 Mar 2020, at 19:55, Tim Bell wrote: > > > > > > On 3 Mar 2020, at 19:20, Albert Braden > wrote: > > Sean, thank you for clarifying that. > > > > Was my understanding that the community decided to focus on the unified > client incorrect? Is the unified/individual client debate still a matter of > controversy? Is it possible that the unified client will be deprecated in > favor of individual clients after more discussion? I haven’t looked at any > of the individual clients since 2018 (except for osc-placement which is > kind of a special case), because I thought they were all going away and > could be safely ignored until they did, and I haven’t included any > information about the individual clients in the documentation that I write > for our users, and if they ask I have been telling them to not use the > individual clients. Do I need to start looking at individual clients again, > and telling our users to use them in some cases? > > > > > > > > I remember a forum discussion where a community goal was proposed to > focus on OSC rather than individual project CLIs (I think Matt and I were > proposers). There were concerns on the effort to do this and that it would > potentially be multi-cycle. > > BTW, I found the etherpad from Berlin ( > https://etherpad.openstack.org/p/BER-t-series-goals) and the associated > mailing list discussion at > http://lists.openstack.org/pipermail/openstack-dev/2018-September/135107.html > > Yeah, we are in process of selecting the Victoria cycle community-wide > goal and this can be good candidate. I agree with the idea/requirement of a > multi-cycle goal. 
> Another option is to build a pop-up team for the Victoria cycle to start > burning down the keys issues/work. For both ways (either goal or pop-up > team), we need > some set of people to drive it. If anyone would like to volunteer for > this, we can start discussing the details. > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012866.html > > -gmann > > Yeah, lets propose this as community goal again as it worked so well last time. ಠ_ಠ I think your most help wanted list/pop-up team is much more realistic approach. Lets see if there is enough interest to actually make it happen. What comes to our previous experience with Glance and moving to endorse osc, I think I'm not alone stating that we can discuss this again after osc has kept feature parity (and I mean to current release, not feature parity 2 years ago kind of thing) and actively addressed raised issues at least for a couple of cycles. Obviously if you/your users wants to use it meanwhile, that your call. If we cannot get that level of commitment, how do we expect to support this long term? I'm not willing to put our users through that misery again as it happened last time as long as I'm core in this project. - jokke > > > > My experience in discussion with the CERN user community and other > OpenStack operators is that OSC is felt to be the right solution for the > end user facing parts of the cloud (admin commands could be another > discussion if necessary). Experienced admin operators can remember that > glance looks after images and nova looks after instances. Our average user > can get very confused, especially given that OSC supports additional > options for authentication (such as Kerberos and Certificates along with > clouds.yaml) so users need to re-authenticate with a different openrc to > work on their project. > > While I understand there are limited resources all round, I would > prefer that we focus on adding new project functions to OSC which will > eventually lead to feature parity. > > Attracting ‘drive-by’ contributions from operations staff for OSC work > (it's more likely to be achieved if it makes the operations work less e.g. > save on special end user documentation by contributing code). This is > demonstrated from the CERN team contribution to the OSC ‘coe' and ‘share' > functionality along with lots of random OSC updates as listed hat > https://www.stackalytics.com/?company=cern&metric=commits&module=python-openstackclient) > > > BTW, I also would vote for =auto as the default. > > Tim > > We are on Rocky now but I expect that we will upgrade as necessary to > stay on supported versions. > > > > From: Sean McGinnis > > Sent: Tuesday, March 3, 2020 9:50 AM > > To: openstack-discuss at lists.openstack.org > > Subject: Re: OSC future (formerly [glance] Different checksum between > CLI and curl) > > > > On 3/3/20 11:28 AM, Albert Braden wrote: > > Am I understanding correctly that the Openstack community decided to > focus on the unified client, and to deprecate the individual clients, and > that the Glance team did not agree with this decision, and that the Glance > team is now having a pissing match with the rest of the community, and is > unilaterally deciding to continue developing the Glance client and refusing > to work on the unified client, or is something different going on? I would > ask everyone involved to remember that we operators are down here, and the > yellow rain falling on our heads does not smell very good. > > I definitely would not characterize it that way. 
> > With trying not to put too much personal bias into it, here's what I > would say the situation is: > > - Some part of the community has said OSC should be the only CLI and > that individual CLIs should go away > > - Glance is a very small team with very, very limited resources > > - The OSC team is a very small team with very, very limited resources > > - CLI capabilities need to be exposed for Glance changes and the > easiest way to get them out for the is by updating the Glance CLI > > - No one from the OSC team has been able to proactively help to make > sure these changes make it into the OSC client (see bullet 3) > > - There exists a sizable functionality gap between per-project CLIs and > what OSC provides, and although a few people have done a lot of great work > to close that gap, there is still a lot to be done and does not appear the > gap will close at any point in the near future based on the current trends > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Thu Mar 5 15:27:21 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 5 Mar 2020 15:27:21 +0000 Subject: [ansible-sig][kolla][openstack-ansible][osc][sdk][tripleo] OpenStack modules broken in Ansible 2.8.9 Message-ID: Hi, The 2.8.9 release of Ansible has a regression [1] which breaks the OpenStack modules. I've proposed a simple fix, hopefully it will be included in a 2.8.10 release soon but in the meantime you may need to blacklist 2.8.9. [1] https://github.com/ansible/ansible/issues/68042 [2] https://github.com/ansible/ansible/pull/68043 Cheers, Mark From missile0407 at gmail.com Thu Mar 5 16:26:03 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Fri, 6 Mar 2020 00:26:03 +0800 Subject: CPU Topology confusion In-Reply-To: References: <3F1F97A3-4DEE-4CA9-9147-892D6E7355E7@gmail.com> Message-ID: Hi Satish, Since you already set "cpu_mode = host-passthrough", there's no need to set cpu_model. BTW, we're not known about the CPU topology a lot. But IME we always set "hw_cpu_sockets = 2" in specified image or flavor metadata if running Windows instance. In default, KVM always allocate all vcpus into sockets in CPU topology, and this will affect the Windows VM performance since Windows only support maximum 2 CPU sockets. Perhaps you can try limit socket numbers by setting hw_cpu_sockets in image metadata (or hw:cpu_sockets in flavor metadata.) Satish Patel 於 2020年3月5日 週四 下午10:46寫道: > > cpu_mode = cpu-passthrough > cpu_model = none > > Do you think cpu_model make difference ? > > > Sent from my iPhone > > On Mar 5, 2020, at 7:18 AM, Satish Patel wrote: > >  > > cpu-passthrough > > Sent from my iPhone > > On Mar 4, 2020, at 9:24 PM, rui zang wrote: > >  > Hi, > > What is the value for the "cpu_mode" configuration option? > > https://docs.openstack.org/mitaka/config-reference/compute/hypervisor-kvm.html > > Thanks, > Zang, Rui > > > 05.03.2020, 01:24, "Satish Patel" : > > Folks, > > We are running openstack with KVM and i have noticed kvm presenting > wrong CPU Tolopoly to VM and because of that we are seeing bad > performance to our application. 
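
To make the metadata suggestion above concrete, a sketch using placeholder flavor and image names (hw:cpu_sockets and hw_cpu_sockets are the standard nova property names):

$ openstack flavor set --property hw:cpu_sockets=2 m1.win.large
$ openstack image set --property hw_cpu_sockets=2 windows-2016
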
> > This is openstack compute: > > # lstopo-no-graphics --no-io > Machine (64GB total) > NUMANode L#0 (P#0 32GB) + Package L#0 + L3 L#0 (25MB) > L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 > PU L#0 (P#0) > PU L#1 (P#20) > L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 > PU L#2 (P#1) > PU L#3 (P#21) > > This is VM running on above compute > > # lstopo-no-graphics --no-io > Machine (59GB total) > NUMANode L#0 (P#0 29GB) + Package L#0 + L3 L#0 (16MB) > L2 L#0 (4096KB) + Core L#0 > L1d L#0 (32KB) + L1i L#0 (32KB) + PU L#0 (P#0) > L1d L#1 (32KB) + L1i L#1 (32KB) + PU L#1 (P#1) > L2 L#1 (4096KB) + Core L#1 > L1d L#2 (32KB) + L1i L#2 (32KB) + PU L#2 (P#2) > L1d L#3 (32KB) + L1i L#3 (32KB) + PU L#3 (P#3) > > if you noticed P#0 and P#1 has own (32KB) cache per thread that is > wrong presentation if you compare with physical CPU. > > This is a screenshot of AWS vs Openstack CPU Topology and looking at > openstack its presentation is little odd, is that normal? > > https://imgur.com/a/2sPwJVC > > I am running CentOS7.6 with kvm 2.12 version. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Thu Mar 5 16:40:38 2020 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 5 Mar 2020 10:40:38 -0600 Subject: [sdk] Additions and subtractions from core team In-Reply-To: <7DF33892-5254-472F-BC0E-07211A05253E@redhat.com> References: <7DF33892-5254-472F-BC0E-07211A05253E@redhat.com> Message-ID: <5A17C548-2F70-458D-B54E-91E160DF1AB0@inaugust.com> > On Mar 4, 2020, at 3:09 PM, Slawek Kaplonski wrote: > > Hi, > >> On 4 Mar 2020, at 17:56, Monty Taylor wrote: >> >> Heya, >> >> With the previous email about merging OSC and SDK teams, I’d also like to propose the following changes to the SDK core team (keeping in mind that likely means the core team of both OSC and SDK real soon now) >> >> Adds: >> >> Akihiro Motoki - The only OSC core not in sdk-core. amotoki should really be a core in all projects anyway >> Sean McGinnis - Sean has been reviewing things as a stable branch maint in both SDK and OSC, and as such has shown a good tendency to help things along when needed and to not approve things when he doesn’t know what’s up. > > Big +1 from me here. >> >> Subtractions: >> >> All of these people are awesome, but they’re all long gone: >> >> Brian Curtin >> Clint Byrum >> Everett Toews >> Jamie Lennox >> Jesse Noller >> Ricardo Carillo Cruz >> Richard Theis >> Rosario Di Somma >> Sam Yaple >> Terry Howe > > That’s sad but if that is necessary than ok. I agree, quite sad. :( >> >> Monty >> > > — > Slawek Kaplonski > Senior software engineer > Red Hat This has been done. From a.settle at outlook.com Thu Mar 5 16:45:55 2020 From: a.settle at outlook.com (Alexandra Settle) Date: Thu, 5 Mar 2020 16:45:55 +0000 Subject: [all][tc] Stepping down from TC Message-ID: Hi all, This should come as no shock as I have been relatively quite for some time now, but I will not standing for the Technical Committee for a second term. I have thoroughly enjoyed my tenure, learning so much about open source governance than I ever thought I needed 😉 My work takes me elsewhere, as it did several years ago, and I simply do not have the time to manage both. I encourage anyone who is interested in governance, or is passionate about OpenStack and wants to learn more, to stand for the TC elections. As was proven by my own nomination and subsequent successful election, you do not have to be "purely technical" to stand and be a part of something great. 
Diversity of skill is so important to our survival.

Thanks to all those that have supported me to get to this point, I appreciate you all and
will miss working intimately with the community.

Please do not hesitate to reach out and ask any questions if you are interested in the
positions available, happy to help encourage and answer any questions you may have.

All the best,

Alex

________________________________
Alexandra Settle
Senior Technical Writer
London, United Kingdom (GMT)
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mordred at inaugust.com  Thu Mar  5 16:50:36 2020
From: mordred at inaugust.com (Monty Taylor)
Date: Thu, 5 Mar 2020 10:50:36 -0600
Subject: Re: [ansible-sig][kolla][openstack-ansible][osc][sdk][tripleo] OpenStack modules broken in Ansible 2.8.9
In-Reply-To: 
References: 
Message-ID: <08B1D0C9-E613-45D3-8A07-E5B7C8270326@inaugust.com>

> On Mar 5, 2020, at 9:27 AM, Mark Goddard wrote:
>
> Hi,
>
> The 2.8.9 release of Ansible has a regression [1] which breaks the
> OpenStack modules. I've proposed a simple fix, hopefully it will be
> included in a 2.8.10 release soon but in the meantime you may need to
> blacklist 2.8.9.
>
> [1] https://github.com/ansible/ansible/issues/68042
> [2] https://github.com/ansible/ansible/pull/68043

We have jobs in OpenDev Zuul that are supposed to help catch this sort of thing … and they weren't configured to run on stable-2.8. :( That has been rectified, as well as stable-2.9.

For post-2.9 the modules live in OpenDev Gerrit in the new ansible-collections-openstack collection - so we should be in a better position to keep stuff covered.

From satish.txt at gmail.com  Thu Mar  5 17:11:37 2020
From: satish.txt at gmail.com (Satish Patel)
Date: Thu, 5 Mar 2020 12:11:37 -0500
Subject: CPU Topology confusion
In-Reply-To: 
References: <3F1F97A3-4DEE-4CA9-9147-892D6E7355E7@gmail.com>
Message-ID: 

Eddie,

I have tried everything to match or fix the CPU topology layout, but it never comes out correct, as I mentioned in the screenshot. I checked on Alicloud, and they are also running KVM; their virtual machines' lstopo output really does match the physical machine, like the L1i / L1d cache layout etc. If you look at the following output it is strange: I am using the "-cpu host" option, but there are still lots of missing flags in my virtual machine's cpuinfo. Is that normal?
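
For anyone who wants to diff the two flag lists below, a quick one-liner (assuming each machine's "flags" line has been saved to host-flags.txt and vm-flags.txt):

$ diff <(tr ' ' '\n' < host-flags.txt | sort) <(tr ' ' '\n' < vm-flags.txt | sort)
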
This is my VM output (virtual machine):

# grep flags /proc/cpuinfo | uniq
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm arat fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt

This is the compute machine (physical host):

# grep flags /proc/cpuinfo | uniq
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm epb invpcid_single intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear spec_ctrl intel_stibp flush_l1d

On Thu, Mar 5, 2020 at 11:26 AM Eddie Yen wrote:
> Hi Satish,
>
> Since you already set "cpu_mode = host-passthrough", there's no need
> to set cpu_model.
>
> BTW, we're not known about the CPU topology a lot. But IME we always
> set "hw_cpu_sockets = 2" in specified image or flavor metadata if running
> Windows instance. In default, KVM always allocate all vcpus into sockets
> in CPU topology, and this will affect the Windows VM performance since
> Windows only support maximum 2 CPU sockets.
>
> Perhaps you can try limit socket numbers by setting hw_cpu_sockets in
> image metadata (or hw:cpu_sockets in flavor metadata.)
>
> Satish Patel 於 2020年3月5日 週四 下午10:46寫道:
>>
>> cpu_mode = cpu-passthrough
>> cpu_model = none
>>
>> Do you think cpu_model make difference ?
>>
>> Sent from my iPhone
>>
>> On Mar 5, 2020, at 7:18 AM, Satish Patel wrote:
>>
>> cpu-passthrough
>>
>> Sent from my iPhone
>>
>> On Mar 4, 2020, at 9:24 PM, rui zang wrote:
>>
>> Hi,
>>
>> What is the value for the "cpu_mode" configuration option?
>> https://docs.openstack.org/mitaka/config-reference/compute/hypervisor-kvm.html
>>
>> Thanks,
>> Zang, Rui
>>
>> 05.03.2020, 01:24, "Satish Patel" :
>>
>> Folks,
>>
>> We are running openstack with KVM and i have noticed kvm presenting
>> wrong CPU Tolopoly to VM and because of that we are seeing bad
>> performance to our application.
>> >> This is a screenshot of AWS vs Openstack CPU Topology and looking at >> openstack its presentation is little odd, is that normal? >> >> https://imgur.com/a/2sPwJVC >> >> I am running CentOS7.6 with kvm 2.12 version. >> From whayutin at redhat.com Thu Mar 5 17:14:31 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 5 Mar 2020 10:14:31 -0700 Subject: [ansible-sig][kolla][openstack-ansible][osc][sdk][tripleo] OpenStack modules broken in Ansible 2.8.9 In-Reply-To: <08B1D0C9-E613-45D3-8A07-E5B7C8270326@inaugust.com> References: <08B1D0C9-E613-45D3-8A07-E5B7C8270326@inaugust.com> Message-ID: On Thu, Mar 5, 2020 at 9:51 AM Monty Taylor wrote: > > > > On Mar 5, 2020, at 9:27 AM, Mark Goddard wrote: > > > > Hi, > > > > The 2.8.9 release of Ansible has a regression [1] which breaks the > > OpenStack modules. I've proposed a simple fix, hopefully it will be > > included in a 2.8.10 release soon but in the meantime you may need to > > blacklist 2.8.9. > > > > [1] https://github.com/ansible/ansible/issues/68042 > > [2] https://github.com/ansible/ansible/pull/68043 > > We have jobs in OpenDev Zuul that are supposed to help catch this sort of > thing … and they weren’t configured to run on stable-2.8. :( That has been > rectified, as well as stable-2.9. > > For post-2.9 the modules live in OpenDev Gerrit in the new > ansible-collections-openstack collection - so we should be in a better > position to keep stuff covered. > > > Thanks for the heads up and for fixing the tests you guys!! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Thu Mar 5 17:54:49 2020 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 5 Mar 2020 11:54:49 -0600 Subject: [sdk][congress][octavia][designate][magnum][senlin][swift] Adding project-specific cores to SDK Message-ID: Heya, I’d like to try something. I’d like to try adding some project-specific people to the core team so that they can more directly help maintain the support for their service in SDK. In some of these cases the person I’m suggestion has next to no review experience in SDK. I think let’s be fine with that for now - we’re still a 2x +2 in general thing - but I know currently when reviewing neutron or ironic changes I always want to see a +2 from slaweq or dtantsur … so in the spirit of trying new things and trying to move the project forward in a healthy and welcoming way - how about we give this a try? The idea here is that we’re trusting people to use their good judgement and to only use their new +2 powers for good in their project. Over time, if they feel like they’ve gotten a handle on things more widely, there’s nothing stopping them from reviewing other patches - but I think that most of us aren’t looking for additional review work anyway. Specifically this would be: Shogo Saito - congress Adam Harwell - octavia Graham Hayes - designate Bharat Kumar - magnum Erik Olof Gunnar Andersson - senlin Tim Burke - swift I think we should also add a file in the repo that lists “subject matter experts” for each service we’ve got support for, where we have them. My list of current cores who I’d ping for specific service suitability are: Sean McGinnis - cinder Slawek Kaplonski - neutron Dmitry Tantsur - ironic Eric Fried - nova (at least until tomorrow my friend) How does that sound to folks? 
Monty From kennelson11 at gmail.com Thu Mar 5 18:01:53 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 5 Mar 2020 10:01:53 -0800 Subject: [sdk][congress][octavia][designate][magnum][senlin][swift] Adding project-specific cores to SDK In-Reply-To: References: Message-ID: Seems like a good way to kind of unblock/get things moving a little faster for some more projects. Also, we did something similar to this in releases where a few new cores were added, but we were instructed to not +W things. Basically we could give the first +2 to move things along, but weren't making the merge decision and I would say its gone rather well. -Kendall (diablo_rojo) On Thu, Mar 5, 2020 at 9:56 AM Monty Taylor wrote: > Heya, > > I’d like to try something. > > I’d like to try adding some project-specific people to the core team so > that they can more directly help maintain the support for their service in > SDK. In some of these cases the person I’m suggestion has next to no review > experience in SDK. I think let’s be fine with that for now - we’re still a > 2x +2 in general thing - but I know currently when reviewing neutron or > ironic changes I always want to see a +2 from slaweq or dtantsur … so in > the spirit of trying new things and trying to move the project forward in a > healthy and welcoming way - how about we give this a try? > > The idea here is that we’re trusting people to use their good judgement > and to only use their new +2 powers for good in their project. Over time, > if they feel like they’ve gotten a handle on things more widely, there’s > nothing stopping them from reviewing other patches - but I think that most > of us aren’t looking for additional review work anyway. > > Specifically this would be: > > Shogo Saito - congress > Adam Harwell - octavia > Graham Hayes - designate > Bharat Kumar - magnum > Erik Olof Gunnar Andersson - senlin > Tim Burke - swift > > I think we should also add a file in the repo that lists “subject matter > experts” for each service we’ve got support for, where we have them. My > list of current cores who I’d ping for specific service suitability are: > > Sean McGinnis - cinder > Slawek Kaplonski - neutron > Dmitry Tantsur - ironic > Eric Fried - nova (at least until tomorrow my friend) > > How does that sound to folks? > > Monty > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Mar 5 18:29:57 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 5 Mar 2020 10:29:57 -0800 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: <1da63dab-c35e-6377-d5d8-e075a5c37408@ham.ie> References: <1da63dab-c35e-6377-d5d8-e075a5c37408@ham.ie> Message-ID: On Thu, Mar 5, 2020 at 3:07 AM Graham Hayes wrote: > On 04/03/2020 22:43, Zane Bitter wrote: > > On 4/03/20 3:08 pm, Ben Nemec wrote: > >> > >> > >> On 3/4/20 12:57 PM, Zane Bitter wrote: > >>> One cannot help wondering if we might get more Nova cores willing to > >>> sign up for a 1-week commitment to be the "PTL" than we're getting > >>> for a 6-months-and-maybe-indefinitely commitment. > >> > >> That's a really interesting idea. I'm not sure I'd want to go as short > >> as one week for PTL, but shortening the term might make it easier for > >> people to commit. > > > > The key would be to make it short enough that you can be 100% confident > > the next person will take over and not leave you holding the bag > > forever. (Hi Rico!) > > And also that the person you hand it off too won't have to hand it back. > (Hi Tim!) 
> > > I've no idea where the magic number would fall, and it's probably > > different for every team. I'm reasonably confident it's somewhere > > between 1 week and 6 months though. > > Yeah - I am not sure the TC should mandate a number - some teams > might be OK with the 6 months, while others will need to do 1 or 2 weeks > > I would like to think elections would NOT get held every 1-2 weeks or whatever the chosen PTL term is for a project? Its just a like...signup sheet sort of thing? What if more than one person wants to sign up for the same week( I can't think of why this would happen, just thinking about all the details)? -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gr at ham.ie Thu Mar 5 18:53:01 2020 From: gr at ham.ie (Graham Hayes) Date: Thu, 5 Mar 2020 18:53:01 +0000 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: <1da63dab-c35e-6377-d5d8-e075a5c37408@ham.ie> Message-ID: On 05/03/2020 18:29, Kendall Nelson wrote: > > > On Thu, Mar 5, 2020 at 3:07 AM Graham Hayes > wrote: > > On 04/03/2020 22:43, Zane Bitter wrote: > > On 4/03/20 3:08 pm, Ben Nemec wrote: > >> > >> > >> On 3/4/20 12:57 PM, Zane Bitter wrote: > >>> One cannot help wondering if we might get more Nova cores > willing to > >>> sign up for a 1-week commitment to be the "PTL" than we're getting > >>> for a 6-months-and-maybe-indefinitely commitment. > >> > >> That's a really interesting idea. I'm not sure I'd want to go as > short > >> as one week for PTL, but shortening the term might make it > easier for > >> people to commit. > > > > The key would be to make it short enough that you can be 100% > confident > > the next person will take over and not leave you holding the bag > > forever. (Hi Rico!) > > And also that the person you hand it off too won't have to hand it back. > (Hi Tim!) > > > I've no idea where the magic number would fall, and it's probably > > different for every team. I'm reasonably confident it's somewhere > > between 1 week and 6 months though. > > Yeah - I am not sure the TC should mandate a number - some teams > might be OK with the 6 months, while others will need to do 1 or 2 weeks > > > I would like to think elections would NOT get held every 1-2 weeks or > whatever the chosen PTL term is for a project? Its just a like...signup > sheet sort of thing? What if more than one person wants to sign up for > the same week( I can't think of why this would happen, just thinking > about all the details)? I will admit to not thinking the whole way through to elections ... Yes - ideally, it would not be an election every 2 weeks - that would be an insane overhead. How to resolve conflicts in this is more problematic alright .... > -Kendall (diablo_rojo) From duc.openstack at gmail.com Thu Mar 5 19:14:53 2020 From: duc.openstack at gmail.com (Duc Truong) Date: Thu, 5 Mar 2020 11:14:53 -0800 Subject: [sdk][congress][octavia][designate][magnum][senlin][swift] Adding project-specific cores to SDK In-Reply-To: References: Message-ID: +1 from me for Erik on Senlin. On Thu, Mar 5, 2020 at 10:02 AM Kendall Nelson wrote: > > Seems like a good way to kind of unblock/get things moving a little faster for some more projects. Also, we did something similar to this in releases where a few new cores were added, but we were instructed to not +W things. Basically we could give the first +2 to move things along, but weren't making the merge decision and I would say its gone rather well. 
> > -Kendall (diablo_rojo) > > On Thu, Mar 5, 2020 at 9:56 AM Monty Taylor wrote: >> >> Heya, >> >> I’d like to try something. >> >> I’d like to try adding some project-specific people to the core team so that they can more directly help maintain the support for their service in SDK. In some of these cases the person I’m suggestion has next to no review experience in SDK. I think let’s be fine with that for now - we’re still a 2x +2 in general thing - but I know currently when reviewing neutron or ironic changes I always want to see a +2 from slaweq or dtantsur … so in the spirit of trying new things and trying to move the project forward in a healthy and welcoming way - how about we give this a try? >> >> The idea here is that we’re trusting people to use their good judgement and to only use their new +2 powers for good in their project. Over time, if they feel like they’ve gotten a handle on things more widely, there’s nothing stopping them from reviewing other patches - but I think that most of us aren’t looking for additional review work anyway. >> >> Specifically this would be: >> >> Shogo Saito - congress >> Adam Harwell - octavia >> Graham Hayes - designate >> Bharat Kumar - magnum >> Erik Olof Gunnar Andersson - senlin >> Tim Burke - swift >> >> I think we should also add a file in the repo that lists “subject matter experts” for each service we’ve got support for, where we have them. My list of current cores who I’d ping for specific service suitability are: >> >> Sean McGinnis - cinder >> Slawek Kaplonski - neutron >> Dmitry Tantsur - ironic >> Eric Fried - nova (at least until tomorrow my friend) >> >> How does that sound to folks? >> >> Monty From mike243512 at gmail.com Thu Mar 5 13:24:13 2020 From: mike243512 at gmail.com (Mike Manning) Date: Thu, 5 Mar 2020 08:24:13 -0500 Subject: [training-labs] Current training lab for Rocky not working on Windows Message-ID: Hello, I've attempted to install the training-labs for the Rocky release onto my Windows 10 laptop on VirtualBox 6.0, but it is not working properly. The windows batch files are not functioning properly, and at least one reason is that the shell scripts that the batch files are copying are using "ifup" and "ifdown" which are not supported in Ubuntu 18.04. This one issue seems to imply that the scripts and batch files may need some review. Also, after install the ifupdown package on the system, the batch files and shell scripts still get hung up often on the existence of a file named "done" that never gets created. Can someone help me with this installation of the training-labs software in preparation for the COA exam? Mike Manning mike243512 at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Thu Mar 5 19:49:11 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 5 Mar 2020 14:49:11 -0500 Subject: [qa][cinder][devstack] proposed governance changes for some devstack plugins In-Reply-To: <170a7b5430a.1155e6495437733.1575830632912803163@ghanshyammann.com> References: <170a7b5430a.1155e6495437733.1575830632912803163@ghanshyammann.com> Message-ID: <69fcb574-1ae1-08cb-e8e2-8bd08bef80f4@gmail.com> On 3/4/20 5:40 PM, Ghanshyam Mann wrote: > ---- On Wed, 04 Mar 2020 13:53:00 -0600 Brian Rosmaita wrote ---- > > Hello QA team and devstack-plugin-ceph-core people, > > > > The Cinder team has some proposals we'd like to float. > > > > 1. 
The Cinder team is interested in becoming more active in the > > maintenance of openstack/devstack-plugin-ceph [0]. Currently, the > > devstack-plugin-ceph-core is > > https://review.opendev.org/#/admin/groups/1196,members > > The cinder-core is already represented by Eric and Sean; we'd like to > > replace them by including the cinder-core group. > > +1. This is good diea and make sense, I will do the change. Great, thanks! > > > > 2. The Cinder team is interested in becoming more active in the > > maintenance of x/devstack-plugin-nfs [1]. Currently, the > > devstack-plugin-nfs-core is > > https://review.opendev.org/#/admin/groups/1330,members > > It's already 75% cinder-core members; we'd like to replace the > > individual members with the cinder-core group. We also propose that > > devstack-core be added as an included group. > > > > 3. The Cinder team is interested in implementing a new devstack plugin: > > openstack/devstack-plugin-open-cas > > This will enable thorough testing of a new feature [2] being introduced > > as experimental in Ussuri and expected to be finalized in Victoria. Our > > plan would be to make both cinder-core and devstack-core included groups > > for the gerrit group governing the new plugin. > > +1. You want this under Cinder governance or under QA ? I think it makes sense for these to be under QA governance -- QA would own the repo with both QA and Cinder having permission to make changes. > > > > 4. This is a minor point, but can the devstack-plugin-nfs repo be moved > > back into the 'openstack' namespace? > > If this is usable plugin for nfs testing (I am not aware if we have any other) then > it make sense to bring it to openstack governance. > Same question here, do you want to put this under Cinder governance or QA. Same here, I think QA should "own" the repo, but Cinder will have permission to make changes there. > > Those plugins under QA governance also ok for me with your proposal of calloborative maintainance by > devstack-core and cinder-core. > > -gmann Thanks for the quick response! > > > > Let us know which of these proposals you find acceptable. > > > > > > [0] https://opendev.org/openstack/devstack-plugin-ceph > > [1] https://opendev.org/x/devstack-plugin-nfs > > [2] https://blueprints.launchpad.net/cinder/+spec/support-volume-local-cache > > > > > From johnsomor at gmail.com Thu Mar 5 19:54:20 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 5 Mar 2020 11:54:20 -0800 Subject: [osc][sdk] Merging OpenStack SDK and OpenStack Client teams In-Reply-To: <2c8ccd0c2d2d7a8ae6073de8b9fc80656fa49ce0.camel@redhat.com> References: <2c8ccd0c2d2d7a8ae6073de8b9fc80656fa49ce0.camel@redhat.com> Message-ID: We have been drifting this way for a while, so "yes please". Michael On Thu, Mar 5, 2020 at 2:59 AM Stephen Finucane wrote: > > On Wed, 2020-03-04 at 10:19 -0600, Monty Taylor wrote: > > Hey everybody, > > > > I'd like to propose merging the SDK and OSC teams. We already share > > an IRC channel, and already share a purpose in life. In OSC we have a > > current goal of swapping out client implementation for SDK, and we're > > Already ensuring that SDK does what it needs to do to facilitate that > > goal. We also already share PTG space, and have requested a shared > > set of time at the upcoming Denver PTG. So really the separation is > > historical not practical, and these days having additional layers of > > governance is not super useful. > > > > I propose that we do a simple merge of the teams. 
This means the > > current SDK cores will become cores on OSC, and as most of the OSC > > cores are already SDK cores, it means the SDK team gains amotoki - > > which is always a positive. > > Big +1 > > > Dean hasn't had time to spend on OSC quite a bit, sadly, and while we > > remain hopeful that this will change, we’re slowly coming to terms > > with the possibility that it might not. With that in mind, I'll serve > > as the PTL for the new combined team until the next election. > > > > Monty > > > > From nate.johnston at redhat.com Thu Mar 5 19:54:32 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Thu, 5 Mar 2020 14:54:32 -0500 Subject: [all][tc] Stepping down from TC In-Reply-To: References: Message-ID: <20200305195432.qdsmkyr7jebx3y5c@firewall> Thank you for everything you have done over the years Alex. When I went to my first summit (Tokyo), you and Lana were two of the first people I met and your warm welcome helped put me on a course to become a part of the community. Thank you for being a person always looking out for the user and the newbie and making all of OpenStack better for it. Nate On Thu, Mar 05, 2020 at 04:45:55PM +0000, Alexandra Settle wrote: > Hi all, > > This should come as no shock as I have been relatively quite for some time > now, but I will not standing for the Technical Committee for a second term. > > I have thoroughly enjoyed my tenure, learning so much about open source > governance than I ever thought I needed 😉 > > My work takes me elsewhere, as it did several years ago, and I simply do not have > the time to manage both. > > I encourage anyone who is interested in governance, or is passionate about OpenStack > and wants to learn more, to stand for the TC elections. As was proven by my own > nomination and subsequent successful election, you do not have to be "purely technical" > to stand and be a part of something great. Diversity of skill is so important to our > survival. > > Thanks to all those that have supported me to get to the point, I appreciate you all and > will miss working intimately with the community. > > Please do not hesitate to reach out and ask any questions if you are interested in the > positions available, happy to help encourage and answer any questions you may have. > > All the best, > > Alex > > ________________________________ > Alexandra Settle > Senior Technical Writer > London, United Kingdom (GMT) > From flux.adam at gmail.com Thu Mar 5 20:11:05 2020 From: flux.adam at gmail.com (Adam Harwell) Date: Fri, 6 Mar 2020 05:11:05 +0900 Subject: OSC future (formerly [glance] Different checksum between CLI and curl) In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com> <36FB0C7D-C3E1-4C3A-B923-1F68764D44A8@cern.ch> <170a31cf7da.c168b6b0389449.3073076279707922843@ghanshyammann.com> Message-ID: Well, part of maintaining feature parity is that the features should be added to the OSC by the project team at the time they're made -- you're already doing the work to add them to your own client, so instead, do the same amount of work but add them in OSC instead! It doesn't seem incredibly onerous to me. If the OSC plugin for your project IS the official client, then there's no duplication of effort. I think saying "someone else had better implement our features in a timely fashion" is a bit irresponsible. Though, this is coming from working on a project where we aren't used to being included as a "core piece" and having any work done for us, ever... 
Also, things are also definitely moving in a better direction now with the probable addition of project team liasons as cores in SDK/OSC, which should alleviate a lot of the issues around "response time" on reviews, when you do put in the effort to add features yourself. --Adam On Fri, Mar 6, 2020, 00:15 Erno Kuvaja wrote: > On Wed, Mar 4, 2020 at 1:19 AM Ghanshyam Mann > wrote: > >> ---- On Tue, 03 Mar 2020 13:00:35 -0600 Tim Bell >> wrote ---- >> > >> > >> > On 3 Mar 2020, at 19:55, Tim Bell wrote: >> > >> > >> > On 3 Mar 2020, at 19:20, Albert Braden >> wrote: >> > Sean, thank you for clarifying that. >> > >> > Was my understanding that the community decided to focus on the >> unified client incorrect? Is the unified/individual client debate still a >> matter of controversy? Is it possible that the unified client will be >> deprecated in favor of individual clients after more discussion? I haven’t >> looked at any of the individual clients since 2018 (except for >> osc-placement which is kind of a special case), because I thought they were >> all going away and could be safely ignored until they did, and I haven’t >> included any information about the individual clients in the documentation >> that I write for our users, and if they ask I have been telling them to not >> use the individual clients. Do I need to start looking at individual >> clients again, and telling our users to use them in some cases? >> > >> > >> > >> > I remember a forum discussion where a community goal was proposed to >> focus on OSC rather than individual project CLIs (I think Matt and I were >> proposers). There were concerns on the effort to do this and that it would >> potentially be multi-cycle. >> > BTW, I found the etherpad from Berlin ( >> https://etherpad.openstack.org/p/BER-t-series-goals) and the associated >> mailing list discussion at >> http://lists.openstack.org/pipermail/openstack-dev/2018-September/135107.html >> >> Yeah, we are in process of selecting the Victoria cycle community-wide >> goal and this can be good candidate. I agree with the idea/requirement of a >> multi-cycle goal. >> Another option is to build a pop-up team for the Victoria cycle to start >> burning down the keys issues/work. For both ways (either goal or pop-up >> team), we need >> some set of people to drive it. If anyone would like to volunteer for >> this, we can start discussing the details. >> >> [1] >> http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012866.html >> >> -gmann >> >> Yeah, lets propose this as community goal again as it worked so well last > time. ಠ_ಠ > > I think your most help wanted list/pop-up team is much more realistic > approach. Lets see if there is enough interest to actually make it happen. > What comes to our previous experience with Glance and moving to endorse > osc, I think I'm not alone stating that we can discuss this again after osc > has kept feature parity (and I mean to current release, not feature parity > 2 years ago kind of thing) and actively addressed raised issues at least > for a couple of cycles. Obviously if you/your users wants to use it > meanwhile, that your call. If we cannot get that level of commitment, how > do we expect to support this long term? > > I'm not willing to put our users through that misery again as it happened > last time as long as I'm core in this project. 
> > - jokke > > >> > >> > My experience in discussion with the CERN user community and other >> OpenStack operators is that OSC is felt to be the right solution for the >> end user facing parts of the cloud (admin commands could be another >> discussion if necessary). Experienced admin operators can remember that >> glance looks after images and nova looks after instances. Our average user >> can get very confused, especially given that OSC supports additional >> options for authentication (such as Kerberos and Certificates along with >> clouds.yaml) so users need to re-authenticate with a different openrc to >> work on their project. >> > While I understand there are limited resources all round, I would >> prefer that we focus on adding new project functions to OSC which will >> eventually lead to feature parity. >> > Attracting ‘drive-by’ contributions from operations staff for OSC work >> (it's more likely to be achieved if it makes the operations work less e.g. >> save on special end user documentation by contributing code). This is >> demonstrated from the CERN team contribution to the OSC ‘coe' and ‘share' >> functionality along with lots of random OSC updates as listed hat >> https://www.stackalytics.com/?company=cern&metric=commits&module=python-openstackclient) >> >> > BTW, I also would vote for =auto as the default. >> > Tim >> > We are on Rocky now but I expect that we will upgrade as necessary to >> stay on supported versions. >> > >> > From: Sean McGinnis >> > Sent: Tuesday, March 3, 2020 9:50 AM >> > To: openstack-discuss at lists.openstack.org >> > Subject: Re: OSC future (formerly [glance] Different checksum between >> CLI and curl) >> > >> > On 3/3/20 11:28 AM, Albert Braden wrote: >> > Am I understanding correctly that the Openstack community decided to >> focus on the unified client, and to deprecate the individual clients, and >> that the Glance team did not agree with this decision, and that the Glance >> team is now having a pissing match with the rest of the community, and is >> unilaterally deciding to continue developing the Glance client and refusing >> to work on the unified client, or is something different going on? I would >> ask everyone involved to remember that we operators are down here, and the >> yellow rain falling on our heads does not smell very good. >> > I definitely would not characterize it that way. >> > With trying not to put too much personal bias into it, here's what I >> would say the situation is: >> > - Some part of the community has said OSC should be the only CLI and >> that individual CLIs should go away >> > - Glance is a very small team with very, very limited resources >> > - The OSC team is a very small team with very, very limited resources >> > - CLI capabilities need to be exposed for Glance changes and the >> easiest way to get them out for the is by updating the Glance CLI >> > - No one from the OSC team has been able to proactively help to make >> sure these changes make it into the OSC client (see bullet 3) >> > - There exists a sizable functionality gap between per-project CLIs >> and what OSC provides, and although a few people have done a lot of great >> work to close that gap, there is still a lot to be done and does not appear >> the gap will close at any point in the near future based on the current >> trends >> > >> > >> > >> > >> > >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at nemebean.com Thu Mar 5 20:24:54 2020 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 5 Mar 2020 14:24:54 -0600 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: <1da63dab-c35e-6377-d5d8-e075a5c37408@ham.ie> Message-ID: On 3/5/20 12:29 PM, Kendall Nelson wrote: > > > On Thu, Mar 5, 2020 at 3:07 AM Graham Hayes > wrote: > > On 04/03/2020 22:43, Zane Bitter wrote: > > On 4/03/20 3:08 pm, Ben Nemec wrote: > >> > >> > >> On 3/4/20 12:57 PM, Zane Bitter wrote: > >>> One cannot help wondering if we might get more Nova cores > willing to > >>> sign up for a 1-week commitment to be the "PTL" than we're getting > >>> for a 6-months-and-maybe-indefinitely commitment. > >> > >> That's a really interesting idea. I'm not sure I'd want to go as > short > >> as one week for PTL, but shortening the term might make it > easier for > >> people to commit. > > > > The key would be to make it short enough that you can be 100% > confident > > the next person will take over and not leave you holding the bag > > forever. (Hi Rico!) > > And also that the person you hand it off too won't have to hand it back. > (Hi Tim!) > > > I've no idea where the magic number would fall, and it's probably > > different for every team. I'm reasonably confident it's somewhere > > between 1 week and 6 months though. > > Yeah - I am not sure the TC should mandate a number - some teams > might be OK with the 6 months, while others will need to do 1 or 2 weeks > > > I would like to think elections would NOT get held every 1-2 weeks or > whatever the chosen PTL term is for a project? Its just a like...signup > sheet sort of thing? What if more than one person wants to sign up for > the same week( I can't think of why this would happen, just thinking > about all the details)? Yeah, this logistical problem is one of the reasons I didn't want it to be a one week rotation. I guess maybe you could solve that by holding elections each cycle, but selecting a pool of PTLs who could then trade off as desired, but at that point I feel like you're back to not having a PTL and instead having a maintainer team. It also seems like a potentially complicated election system. Plus it kind of introduces a problem with the PTL being the point of contact for the project. If it's changing weekly then you lose most of the benefits of having a single person other people know they can go to. I'm also wondering if this actually solves the "Hi Rico!" case anyway. :-) If a PTL leaves the position because their focus has changed, they aren't likely to take it back even if the new person only technically signs up for a week. When your PTL candidate pool is exactly 1 then it doesn't matter if they're getting elected weekly or six monthly. I guess the idea is for this to increase the pool size, and whether that will work probably depends on the circumstances for each project. Anyway, I still think it's an interesting idea, even if I have come up with a few new concerns after further consideration. Maybe it's something a few teams could experiment with initially and see if it works and what timeframes are appropriate? 
> > -Kendall (diablo_rojo)

From anlin.kong at gmail.com Thu Mar 5 20:54:53 2020
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Fri, 6 Mar 2020 09:54:53 +1300
Subject: [trove][charm] RabbitMQ Connection Error
In-Reply-To: <28295851583412044@myt6-636ea6dfd460.qloud-c.yandex.net> References: <28295851583412044@myt6-636ea6dfd460.qloud-c.yandex.net> Message-ID: 

Hi Tolga,

I am not familiar with how the Trove charm works, but I've never seen such an issue in my devstack for Trove testing. I would recommend installing Trove via devstack and comparing the config options between the two installations.

Sorry, maybe I didn't provide much help, but please don't hesitate to let me know if you have any other questions.

- Best regards, Lingxian Kong Catalyst Cloud

On Fri, Mar 6, 2020 at 1:45 AM wrote:
> Hi everyone,
>
> I updated the retired Trove charm. The deployment runs on the bionic-train release.
>
> I am able to run trove-api successfully; however, the trove-conductor and trove-taskmanager services are returning the following error:
>
> ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 30.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
>
> I validated that the trove user has been registered to RabbitMQ successfully and that it is reachable from the trove node.
>
> I have also checked active connections via the netstat -nputw command, but there is no sign of a connection attempt to the RabbitMQ server.
>
> Here is the latest trove.conf for reference:
>
> http://paste.openstack.org/show/790336/
>
> I also noticed that both services are using trove.conf instead of other conf files like trove-taskmanager.conf.
>
> Thanks,
>
> PS: I would like to start to maintain the charm-trove project. Please also inform me about the process.
>
> --
> Tolga KAPROL
> ETOM Teknoloji ARGE
> Etom.io

From gmann at ghanshyammann.com Thu Mar 5 22:26:16 2020
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 05 Mar 2020 16:26:16 -0600
Subject: [all][tc] Stepping down from TC
In-Reply-To: References: Message-ID: <170acce76ec.c8b9b5e4488373.3611812206878074759@ghanshyammann.com>

Thanks, Alex for all the contributions to TC. It was great working with you.

-gmann

---- On Thu, 05 Mar 2020 10:45:55 -0600 Alexandra Settle wrote ----
> Hi all,
> This should come as no shock as I have been relatively quiet for some time now, but I will not be standing for the Technical Committee for a second term.
> I have thoroughly enjoyed my tenure, learning so much more about open source governance than I ever thought I needed 😉
> My work takes me elsewhere, as it did several years ago, and I simply do not have the time to manage both.
> I encourage anyone who is interested in governance, or is passionate about OpenStack and wants to learn more, to stand for the TC elections. As was proven by my own nomination and subsequent successful election, you do not have to be "purely technical" to stand and be a part of something great. Diversity of skill is so important to our survival.
> Thanks to all those that have supported me to get to this point, I appreciate you all and will miss working intimately with the community.
> Please do not hesitate to reach out and ask any questions if you are interested in the positions available, happy to help encourage and answer any questions you may have.
> All the best, > Alex > > Alexandra Settle > Senior Technical Writer > London, United Kingdom (GMT) > From melwittt at gmail.com Thu Mar 5 22:57:29 2020 From: melwittt at gmail.com (melanie witt) Date: Thu, 5 Mar 2020 14:57:29 -0800 Subject: OSC future (formerly [glance] Different checksum between CLI and curl) In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com> <36FB0C7D-C3E1-4C3A-B923-1F68764D44A8@cern.ch> <170a31cf7da.c168b6b0389449.3073076279707922843@ghanshyammann.com> Message-ID: On 3/5/20 12:11, Adam Harwell wrote: > Well, part of maintaining feature parity is that the features should be > added to the OSC by the project team at the time they're made -- you're > already doing the work to add them to your own client, so instead, do > the same amount of work but add them in OSC instead! It doesn't seem > incredibly onerous to me. If the OSC plugin for your project IS the > official client, then there's no duplication of effort. I think saying > "someone else had better implement our features in a timely fashion" is > a bit irresponsible. Though, this is coming from working on a project > where we aren't used to being included as a "core piece" and having any > work done for us, ever... I think this is the key point regarding the lack of feature parity in OSC for some projects. If you are a new-enough project to have begun your CLI as an OSC plugin (examples: ironic client, placement client, and more) then adding a feature to the client is one shot. You add support in the plugin and you're done. If you are an older project (examples: nova client, glance client) then you have a two-step process for adding a feature to OSC. For older projects, OSC imports the legacy clients and calls their python bindings to make the API calls. So for nova, if you want to add a feature to the client, you have to add it to the legacy nova client. This is required. Then, to add it to OSC you have to add it to OSC and have OSC call the newly added legacy binding for the feature. This is [technically] optional. This is why parity is missing. It pains me a bit to write it ^ because you may be thinking, "what's so difficult about going the extra step to add a feature to OSC after adding it to nova client?" I don't know. Maybe people are too stressed and busy. If it's not "required", it gets deferred. Maybe people don't feel familiar enough with OSC to add the feature there as well. There could be a lot of different reasons. So, not trying to make excuses here but just sharing my opinion on why adding features to OSC is not so simple for some projects. Cheers, -melanie > Also, things are also definitely moving in a better direction now with > the probable addition of project team liasons as cores in SDK/OSC, which > should alleviate a lot of the issues around "response time" on reviews, > when you do put in the effort to add features yourself. > >     --Adam > > On Fri, Mar 6, 2020, 00:15 Erno Kuvaja > wrote: > > On Wed, Mar 4, 2020 at 1:19 AM Ghanshyam Mann > > wrote: > > ---- On Tue, 03 Mar 2020 13:00:35 -0600 Tim Bell > > wrote ---- >  > >  > >  > On 3 Mar 2020, at 19:55, Tim Bell > wrote: >  > >  > >  > On 3 Mar 2020, at 19:20, Albert Braden > > > wrote: >  > Sean, thank you for clarifying that. >  > >  > Was my understanding that the community decided to focus on > the unified client incorrect? Is the unified/individual client > debate still a matter of controversy? 
Is it possible that the > unified client will be deprecated in favor of individual clients > after more discussion? I haven’t looked at any of the individual > clients since 2018 (except for osc-placement which is kind of a > special case), because I thought they were all going away and > could be safely ignored until they did, and I haven’t included > any information about the individual clients in the > documentation that I write for our users, and if they ask I have > been telling them to not use the individual clients. Do I need > to start looking at individual clients again, and telling our > users to use them in some cases? >  > >  > >  > >  > I remember a forum discussion where a community goal was > proposed to focus on OSC rather than individual project CLIs (I > think Matt and I were proposers).  There were concerns on the > effort to do this and that it would potentially be multi-cycle. >  > BTW, I found the etherpad from Berlin > (https://etherpad.openstack.org/p/BER-t-series-goals) and the > associated mailing list discussion at > http://lists.openstack.org/pipermail/openstack-dev/2018-September/135107.html > > Yeah, we are in process of selecting the Victoria cycle > community-wide goal and this can be good candidate. I agree with > the idea/requirement of a multi-cycle goal. > Another option is to build a pop-up team for the Victoria cycle > to start burning down the keys issues/work. For both ways > (either goal or pop-up team), we need > some set of people to drive it. If anyone would like to > volunteer for this, we can start discussing the details. > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012866.html > > -gmann > > Yeah, lets propose this as community goal again as it worked so well > last time. |ಠ_ಠ| > > I think your most help wanted list/pop-up team is much more > realistic approach. Lets see if there is enough interest to actually > make it happen. What comes to our previous experience with Glance > and moving to endorse osc, I think I'm not alone stating that we can > discuss this again after osc has kept feature parity (and I mean to > current release, not feature parity 2 years ago kind of thing) and > actively addressed raised issues at least for a couple of cycles. > Obviously if you/your users wants to use it meanwhile, that your > call. If we cannot get that level of commitment, how do we expect to > support this long term? > > I'm not willing to put our users through that misery again as it > happened last time as long as I'm core in this project. > > - jokke > >  > >  > My experience in discussion with the CERN user community and > other OpenStack operators is that OSC is felt to be the right > solution for the end user facing parts of the cloud (admin > commands could be another discussion if necessary). Experienced > admin operators can remember that glance looks after images and > nova looks after instances. Our average user can get very > confused, especially given that OSC supports additional options > for authentication (such as Kerberos and Certificates along with > clouds.yaml) so users need to re-authenticate with a different > openrc to work on their project. >  > While I understand there are limited resources all round, I > would prefer that we focus on adding new project functions to > OSC which will eventually lead to feature parity. >  > Attracting ‘drive-by’ contributions from operations staff > for OSC work (it's more likely to be achieved if it makes the > operations work less e.g. 
save on special end user documentation > by contributing code).  This is demonstrated from the CERN team > contribution to the OSC  ‘coe' and ‘share' functionality along > with lots of random OSC updates as listed hat > https://www.stackalytics.com/?company=cern&metric=commits&module=python-openstackclient) > >  > BTW, I also would vote for =auto as the default. >  > Tim >  > We are on Rocky now but I expect that we will upgrade as > necessary to stay on supported versions. >  > >  > From: Sean McGinnis > >  > Sent: Tuesday, March 3, 2020 9:50 AM >  > To: openstack-discuss at lists.openstack.org > >  > Subject: Re: OSC future (formerly [glance] Different > checksum between CLI and curl) >  > >  > On 3/3/20 11:28 AM, Albert Braden wrote: >  > Am I understanding correctly that the Openstack community > decided to focus on the unified client, and to deprecate the > individual clients, and that the Glance team did not agree with > this decision, and that the Glance team is now having a pissing > match with the rest of the community, and is unilaterally > deciding to continue developing the Glance client and refusing > to work on the unified client, or is something different going > on? I would ask everyone involved to remember that we operators > are down here, and the yellow rain falling on our heads does not > smell very good. >  > I definitely would not characterize it that way. >  > With trying not to put too much personal bias into it, > here's what I would say the situation is: >  > - Some part of the community has said OSC should be the only > CLI and that individual CLIs should go away >  > - Glance is a very small team with very, very limited resources >  > - The OSC team is a very small team with very, very limited > resources >  > - CLI capabilities need to be exposed for Glance changes and > the easiest way to get them out for the is by updating the > Glance CLI >  > - No one from the OSC team has been able to proactively help > to make sure these changes make it into the OSC client (see > bullet 3) >  > - There exists a sizable functionality gap between > per-project CLIs and what OSC provides, and although a few > people have done a lot of great work to close that gap, there is > still a lot to be done and does not appear the gap will close at > any point in the near future based on the current trends >  > >  > >  > >  > >  > > From dtroyer at gmail.com Thu Mar 5 23:15:12 2020 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 5 Mar 2020 17:15:12 -0600 Subject: [osc][sdk] Merging OpenStack SDK and OpenStack Client teams In-Reply-To: References: Message-ID: On Wed, Mar 4, 2020 at 10:24 AM Monty Taylor wrote: > I’d like to propose merging the SDK and OSC teams. We already share an IRC channel, and already share a purpose in life. In OSC we have a current goal of swapping out client implementation for SDK, and we’re ++ > Dean hasn’t had time to spend on OSC quite a bit, sadly, and while we remain hopeful that this will change, we’re slowly coming to terms with the possibility that it might not. With that in mind, I’ll serve as the PTL for the new combined team until the next election. ++ This is the right move whether I am able to continue to work on OpenStack or not. Thank you for picking up results from a cab-ride-to-SFO conversation with Dolph. It is in good hands with the combined team. 
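For anyone who wants to pick up the parity work melanie describes earlier in this thread, the plugin path is genuinely small. Here is a minimal sketch of an OSC plugin command built on the SDK connection OSC already holds — the module, entry-point, and "widget" service names are illustrative, not an existing client, and it assumes the client_manager's sdk_connection attribute that recent OSC releases provide:

# setup.cfg of the plugin package registers the commands, roughly:
# [entry_points]
# openstack.cli.extension =
#     widget = widgetclient.osc.plugin
# openstack.widget.v1 =
#     widget_list = widgetclient.osc.v1.widget:ListWidget

from osc_lib.command import command


class ListWidget(command.Lister):
    """List widgets (hypothetical resource) via the shared SDK connection."""

    def take_action(self, parsed_args):
        # client_manager.sdk_connection is the authenticated openstacksdk
        # Connection that OSC sets up once per invocation
        conn = self.app.client_manager.sdk_connection
        widgets = conn.widget.widgets()  # hypothetical proxy call
        return (
            ('ID', 'Name'),
            ((w.id, w.name) for w in widgets),
        )

Written this way, a project adds one small module per command and never duplicates auth, session, or option handling.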
dt -- Dean Troyer dtroyer at gmail.com From johnsomor at gmail.com Fri Mar 6 00:53:05 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 5 Mar 2020 16:53:05 -0800 Subject: [sdk][congress][octavia][designate][magnum][senlin][swift] Adding project-specific cores to SDK In-Reply-To: References: Message-ID: Naming names is a highly effective demotivator for those whose contributions are not recognized. https://www.stackalytics.com/?module=openstacksdk&metric=loc Maybe the project teams should select their own liaison? Michael On Thu, Mar 5, 2020 at 9:58 AM Monty Taylor wrote: > > Heya, > > I’d like to try something. > > I’d like to try adding some project-specific people to the core team so that they can more directly help maintain the support for their service in SDK. In some of these cases the person I’m suggestion has next to no review experience in SDK. I think let’s be fine with that for now - we’re still a 2x +2 in general thing - but I know currently when reviewing neutron or ironic changes I always want to see a +2 from slaweq or dtantsur … so in the spirit of trying new things and trying to move the project forward in a healthy and welcoming way - how about we give this a try? > > The idea here is that we’re trusting people to use their good judgement and to only use their new +2 powers for good in their project. Over time, if they feel like they’ve gotten a handle on things more widely, there’s nothing stopping them from reviewing other patches - but I think that most of us aren’t looking for additional review work anyway. > > Specifically this would be: > > Shogo Saito - congress > Adam Harwell - octavia > Graham Hayes - designate > Bharat Kumar - magnum > Erik Olof Gunnar Andersson - senlin > Tim Burke - swift > > I think we should also add a file in the repo that lists “subject matter experts” for each service we’ve got support for, where we have them. My list of current cores who I’d ping for specific service suitability are: > > Sean McGinnis - cinder > Slawek Kaplonski - neutron > Dmitry Tantsur - ironic > Eric Fried - nova (at least until tomorrow my friend) > > How does that sound to folks? > > Monty From missile0407 at gmail.com Fri Mar 6 01:15:43 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Fri, 6 Mar 2020 09:15:43 +0800 Subject: CPU Topology confusion In-Reply-To: References: <3F1F97A3-4DEE-4CA9-9147-892D6E7355E7@gmail.com> Message-ID: Hi Satish, Using host-passthrough on KVM is not only passthrough the physical host CPU model, but will also "try" passthrough the same CPU flags. That means the vcpu will contain the flags same as host CPU, but it still depends on how KVM can support. In other words, KVM will only set the flags what the host CPU have and what KVM itself can support. Since QEMU has released to 4.2.0, perhaps you can try the latest version and doing the pure KVM running to see if it bring up the performance and consider upgrade KVM on compute nodes. Satish Patel 於 2020年3月6日 週五 上午1:11寫道: > Eddie, > > I have tried everything to match or fix CPU Topology layout but its > never come down to correct as i mentioned in screenshot, I have check > on Alicloud and they are also running KVM and their virtual machine > lstopo output is really match with physical machine, like L1i / L1d > cache layout etc. > > if you look at following output its strange i am using "-cpu host" > option but still there are lots of missing flags on my virtual machine > cpuinfo, is that normal? 
> > This is my VM output (virtual machine) > > # grep flags /proc/cpuinfo | uniq > flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov > pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm > constant_tsc arch_perfmon rep_good nopl xtopology eagerfpu pni > pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt > tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm > arat fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt > > This is compute machine (physical host) > > # grep flags /proc/cpuinfo | uniq > flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov > pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx > pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl > xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor > ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 > sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c > rdrand lahf_lm abm epb invpcid_single intel_ppin ssbd ibrs ibpb stibp > tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 > smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida > arat pln pts md_clear spec_ctrl intel_stibp flush_l1d > > On Thu, Mar 5, 2020 at 11:26 AM Eddie Yen wrote: > > > > Hi Satish, > > > > Since you already set "cpu_mode = host-passthrough", there's no need > > to set cpu_model. > > > > BTW, we're not known about the CPU topology a lot. But IME we always > > set "hw_cpu_sockets = 2" in specified image or flavor metadata if running > > Windows instance. In default, KVM always allocate all vcpus into sockets > > in CPU topology, and this will affect the Windows VM performance since > > Windows only support maximum 2 CPU sockets. > > > > Perhaps you can try limit socket numbers by setting hw_cpu_sockets in > > image metadata (or hw:cpu_sockets in flavor metadata.) > > > > Satish Patel 於 2020年3月5日 週四 下午10:46寫道: > >> > >> > >> cpu_mode = cpu-passthrough > >> cpu_model = none > >> > >> Do you think cpu_model make difference ? > >> > >> > >> Sent from my iPhone > >> > >> On Mar 5, 2020, at 7:18 AM, Satish Patel wrote: > >> > >>  > >> > >> cpu-passthrough > >> > >> Sent from my iPhone > >> > >> On Mar 4, 2020, at 9:24 PM, rui zang wrote: > >> > >>  > >> Hi, > >> > >> What is the value for the "cpu_mode" configuration option? > >> > https://docs.openstack.org/mitaka/config-reference/compute/hypervisor-kvm.html > >> > >> Thanks, > >> Zang, Rui > >> > >> > >> 05.03.2020, 01:24, "Satish Patel" : > >> > >> Folks, > >> > >> We are running openstack with KVM and i have noticed kvm presenting > >> wrong CPU Tolopoly to VM and because of that we are seeing bad > >> performance to our application. 
> >> > >> This is openstack compute: > >> > >> # lstopo-no-graphics --no-io > >> Machine (64GB total) > >> NUMANode L#0 (P#0 32GB) + Package L#0 + L3 L#0 (25MB) > >> L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 > >> PU L#0 (P#0) > >> PU L#1 (P#20) > >> L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 > >> PU L#2 (P#1) > >> PU L#3 (P#21) > >> > >> This is VM running on above compute > >> > >> # lstopo-no-graphics --no-io > >> Machine (59GB total) > >> NUMANode L#0 (P#0 29GB) + Package L#0 + L3 L#0 (16MB) > >> L2 L#0 (4096KB) + Core L#0 > >> L1d L#0 (32KB) + L1i L#0 (32KB) + PU L#0 (P#0) > >> L1d L#1 (32KB) + L1i L#1 (32KB) + PU L#1 (P#1) > >> L2 L#1 (4096KB) + Core L#1 > >> L1d L#2 (32KB) + L1i L#2 (32KB) + PU L#2 (P#2) > >> L1d L#3 (32KB) + L1i L#3 (32KB) + PU L#3 (P#3) > >> > >> if you noticed P#0 and P#1 has own (32KB) cache per thread that is > >> wrong presentation if you compare with physical CPU. > >> > >> This is a screenshot of AWS vs Openstack CPU Topology and looking at > >> openstack its presentation is little odd, is that normal? > >> > >> https://imgur.com/a/2sPwJVC > >> > >> I am running CentOS7.6 with kvm 2.12 version. > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Fri Mar 6 03:01:00 2020 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 5 Mar 2020 22:01:00 -0500 Subject: CPU Topology confusion In-Reply-To: References: <3F1F97A3-4DEE-4CA9-9147-892D6E7355E7@gmail.com> Message-ID: I am running CentOS 7.6 and it doesn't have RPM available for 4.x release i have to compile it or try to install fedora one. Let me tell you why i am doing all these exercise. We are planning to run Erlang (mongooseIM) application on openstack instance, before i move to production i started doing some load-testing on openstack vm and this is what i did. - I have HP Gen9 server which has 40 core CPU (with HT) so i add this machine in openstack and reserve 8 cpu for host using vcpu_pin_set. - I have created 32 vcpu core virtual machine on this compute node with --property hw:numa_nodes=2 option and also added dedicated and hugepage properties for best performance - In lstopo command i can see two numa with 2 socket / 8 core / 2 thread per numa topology on my guest VM - I have installed erlang (mongooseIM) application and start load-testing and found very very poor result (i would say worst) - For experiment i told erlang to bind all process to numa0 (0-16 vcpu) and found benchmark result was good much better. - Problem is if i run erlang on single numa then i am wasting my CPU core for second numa (i want to utilize all CPU to get best performance) - For experiment i have disable hyperthreading on openstack compute node and build new VM with hw:numa_node=2 option and found best result in benchmark. Now question is why erlang doesn't like dual numa openstack vm with HT enable, it seems erlang looking at CPU Topology and something is missing or broken in TOPOLOGY of kvm when trying to utilize both numa and result poor performance. last few days i am trying to solve this mystery and even i contact erlang developer but didn't get any help. On Thu, Mar 5, 2020 at 8:15 PM Eddie Yen wrote: > > Hi Satish, > > Using host-passthrough on KVM is not only passthrough the physical > host CPU model, but will also "try" passthrough the same CPU flags. > That means the vcpu will contain the flags same as host CPU, but it > still depends on how KVM can support. 
In other words, KVM will only > set the flags what the host CPU have and what KVM itself can support. > > Since QEMU has released to 4.2.0, perhaps you can try the latest > version and doing the pure KVM running to see if it bring up the > performance and consider upgrade KVM on compute nodes. > > Satish Patel 於 2020年3月6日 週五 上午1:11寫道: >> >> Eddie, >> >> I have tried everything to match or fix CPU Topology layout but its >> never come down to correct as i mentioned in screenshot, I have check >> on Alicloud and they are also running KVM and their virtual machine >> lstopo output is really match with physical machine, like L1i / L1d >> cache layout etc. >> >> if you look at following output its strange i am using "-cpu host" >> option but still there are lots of missing flags on my virtual machine >> cpuinfo, is that normal? >> >> This is my VM output (virtual machine) >> >> # grep flags /proc/cpuinfo | uniq >> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov >> pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm >> constant_tsc arch_perfmon rep_good nopl xtopology eagerfpu pni >> pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt >> tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm >> arat fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt >> >> This is compute machine (physical host) >> >> # grep flags /proc/cpuinfo | uniq >> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov >> pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx >> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl >> xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor >> ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 >> sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c >> rdrand lahf_lm abm epb invpcid_single intel_ppin ssbd ibrs ibpb stibp >> tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 >> smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida >> arat pln pts md_clear spec_ctrl intel_stibp flush_l1d >> >> On Thu, Mar 5, 2020 at 11:26 AM Eddie Yen wrote: >> > >> > Hi Satish, >> > >> > Since you already set "cpu_mode = host-passthrough", there's no need >> > to set cpu_model. >> > >> > BTW, we're not known about the CPU topology a lot. But IME we always >> > set "hw_cpu_sockets = 2" in specified image or flavor metadata if running >> > Windows instance. In default, KVM always allocate all vcpus into sockets >> > in CPU topology, and this will affect the Windows VM performance since >> > Windows only support maximum 2 CPU sockets. >> > >> > Perhaps you can try limit socket numbers by setting hw_cpu_sockets in >> > image metadata (or hw:cpu_sockets in flavor metadata.) >> > >> > Satish Patel 於 2020年3月5日 週四 下午10:46寫道: >> >> >> >> >> >> cpu_mode = cpu-passthrough >> >> cpu_model = none >> >> >> >> Do you think cpu_model make difference ? >> >> >> >> >> >> Sent from my iPhone >> >> >> >> On Mar 5, 2020, at 7:18 AM, Satish Patel wrote: >> >> >> >>  >> >> >> >> cpu-passthrough >> >> >> >> Sent from my iPhone >> >> >> >> On Mar 4, 2020, at 9:24 PM, rui zang wrote: >> >> >> >>  >> >> Hi, >> >> >> >> What is the value for the "cpu_mode" configuration option? 
>> >> https://docs.openstack.org/mitaka/config-reference/compute/hypervisor-kvm.html >> >> >> >> Thanks, >> >> Zang, Rui >> >> >> >> >> >> 05.03.2020, 01:24, "Satish Patel" : >> >> >> >> Folks, >> >> >> >> We are running openstack with KVM and i have noticed kvm presenting >> >> wrong CPU Tolopoly to VM and because of that we are seeing bad >> >> performance to our application. >> >> >> >> This is openstack compute: >> >> >> >> # lstopo-no-graphics --no-io >> >> Machine (64GB total) >> >> NUMANode L#0 (P#0 32GB) + Package L#0 + L3 L#0 (25MB) >> >> L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0 >> >> PU L#0 (P#0) >> >> PU L#1 (P#20) >> >> L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1 >> >> PU L#2 (P#1) >> >> PU L#3 (P#21) >> >> >> >> This is VM running on above compute >> >> >> >> # lstopo-no-graphics --no-io >> >> Machine (59GB total) >> >> NUMANode L#0 (P#0 29GB) + Package L#0 + L3 L#0 (16MB) >> >> L2 L#0 (4096KB) + Core L#0 >> >> L1d L#0 (32KB) + L1i L#0 (32KB) + PU L#0 (P#0) >> >> L1d L#1 (32KB) + L1i L#1 (32KB) + PU L#1 (P#1) >> >> L2 L#1 (4096KB) + Core L#1 >> >> L1d L#2 (32KB) + L1i L#2 (32KB) + PU L#2 (P#2) >> >> L1d L#3 (32KB) + L1i L#3 (32KB) + PU L#3 (P#3) >> >> >> >> if you noticed P#0 and P#1 has own (32KB) cache per thread that is >> >> wrong presentation if you compare with physical CPU. >> >> >> >> This is a screenshot of AWS vs Openstack CPU Topology and looking at >> >> openstack its presentation is little odd, is that normal? >> >> >> >> https://imgur.com/a/2sPwJVC >> >> >> >> I am running CentOS7.6 with kvm 2.12 version. >> >> From openstack at fried.cc Fri Mar 6 03:40:54 2020 From: openstack at fried.cc (Eric Fried) Date: Thu, 5 Mar 2020 21:40:54 -0600 Subject: [sdk][congress][octavia][designate][magnum][senlin][swift] Adding project-specific cores to SDK Message-ID: <7BBB4E00-5AA1-441B-9872-F6AF5415FCDB@fried.cc> Any chance of reusing the “PTL ack” bot that has recently appeared in the releases repo? But as a “SME ack” that would recognize anyone from $project’s core team? (How does the releases bot know which project the patch is for? Might have to be a bit fuzzy on that logic for SDK/OSC.) Then the team could adopt a policy of single core approval if the patch has this SME +1, and no real danger of “abuse”. Eric Fried From agarwalvishakha18 at gmail.com Wed Mar 4 11:42:15 2020 From: agarwalvishakha18 at gmail.com (Vishakha Agarwal) Date: Wed, 4 Mar 2020 17:12:15 +0530 Subject: [keystone] Keystone Team Update - Week of 2 February 2020 Message-ID: # Keystone Team Update - Week of 2 March 2020 ## News ### User Support and Bug Duty Every week the duty is being rotated between the members. The person-in-charge for bug duty for current and upcoming week can be seen on the etherpad [1] [1] https://etherpad.openstack.org/p/keystone-l1-duty ## Open Specs Ussuri specs: https://bit.ly/2XDdpkU Ongoing specs: https://bit.ly/2OyDLTh ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 4 changes this week. ## Changes that need Attention Search query: https://bit.ly/2tymTje There are 24 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. 
### Priority Reviews * Ussuri Roadmap Stories - Groups in keystone SAML assertion https://tree.taiga.io/project/keystone-ussuri-roadmap/us/33 https://review.opendev.org/#/c/588211/ Add openstack_groups to assertion - Add support for modifying resource options to CLI tool https://tree.taiga.io/project/keystone-ussuri-roadmap/us/53 https://review.opendev.org/#/c/697444/ Adding options to user cli * Special Requests https://review.opendev.org/#/c/710734/ Correcting api-ref for users ## Bugs This week we opened 1 new bug and closed 1. Bugs opened (1) Bug #1865121 (keystone:Undecided): 'openstack token issue' command doesn't issue token for MFA enabled user - Opened by Abhishek Sharma M https://bugs.launchpad.net/keystone/+bug/1865121 Bugs closed (1) Bug #1865121 (keystone:Undecided) https://bugs.launchpad.net/keystone/+bug/1865121 ## Milestone Outlook https://releases.openstack.org/ussuri/schedule.html Feature proposal freeze is NEXT WEEK (March 9- March 16). Spec implementations that are not submitted or still in a WIP state by the end of the week will need to be postponed until next cycle unless we agree on an exception. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From artem.goncharov at gmail.com Fri Mar 6 06:19:19 2020 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Fri, 6 Mar 2020 07:19:19 +0100 Subject: OSC future (formerly [glance] Different checksum between CLI and curl) In-Reply-To: References: <2beb58bd79afea58ec342fe3c565f3b4e4bc3005.camel@redhat.com> <714d6f56-5e6b-2784-483e-e767f76442cd@gmx.com> <36FB0C7D-C3E1-4C3A-B923-1F68764D44A8@cern.ch> <170a31cf7da.c168b6b0389449.3073076279707922843@ghanshyammann.com> Message-ID: On Fri, 6 Mar 2020, 00:03 melanie witt, wrote: > On 3/5/20 12:11, Adam Harwell wrote: > > Well, part of maintaining feature parity is that the features should be > > added to the OSC by the project team at the time they're made -- you're > > already doing the work to add them to your own client, so instead, do > > the same amount of work but add them in OSC instead! It doesn't seem > > incredibly onerous to me. If the OSC plugin for your project IS the > > official client, then there's no duplication of effort. I think saying > > "someone else had better implement our features in a timely fashion" is > > a bit irresponsible. Though, this is coming from working on a project > > where we aren't used to being included as a "core piece" and having any > > work done for us, ever... > > I think this is the key point regarding the lack of feature parity in > OSC for some projects. > > If you are a new-enough project to have begun your CLI as an OSC plugin > (examples: ironic client, placement client, and more) then adding a > feature to the client is one shot. You add support in the plugin and > you're done. > > If you are an older project (examples: nova client, glance client) then > you have a two-step process for adding a feature to OSC. For older > projects, OSC imports the legacy clients and calls their python bindings > to make the API calls. So for nova, if you want to add a feature to the > client, you have to add it to the legacy nova client. This is required. > Then, to add it to OSC you have to add it to OSC and have OSC call the > newly added legacy binding for the feature. 
This is [technically] > optional. This is why parity is missing. > Hopefully in some days for glance this will not be necessary anymore, since there is a patch waiting to be merged, which switches OSC to SDK for Glance and removes glanceclient from dependencies. For neutron it is not required since long time. But still, what you say was real way to implement things, but it doesn't mean it is a correct or the easiest way. Instead if team once invests time to bring OSC Plugin, which bases on SDK in parity ... - the old client might be deprecated and you don't have this double efforts. The SDK/OSC team can not and should not try to catch all projects with their features each release - it is impossible (there are just 3 active cores in SDK now but hundreds of projects we might want to support). This team enables projects in a "unified technology stack" so that they become responsible for implementing all new features there. > It pains me a bit to write it ^ because you may be thinking, "what's so > difficult about going the extra step to add a feature to OSC after > adding it to nova client?" I don't know. Maybe people are too stressed > and busy. If it's not "required", it gets deferred. Maybe people don't > feel familiar enough with OSC to add the feature there as well. There > could be a lot of different reasons. > > So, not trying to make excuses here but just sharing my opinion on why > adding features to OSC is not so simple for some projects. > It's not if you prioritize things not correctly and chooses complex direction. I'm laughing so loud, that this "storm" started just out of one question "OSC works correct, but curl not, so what am I doing wrong". It just one more time convinces me in some form of radicalisation of some projects against bringing things in order. Regards, Artem > Cheers, > -melanie > > > Also, things are also definitely moving in a better direction now with > > the probable addition of project team liasons as cores in SDK/OSC, which > > should alleviate a lot of the issues around "response time" on reviews, > > when you do put in the effort to add features yourself. > > > > --Adam > > > > On Fri, Mar 6, 2020, 00:15 Erno Kuvaja > > wrote: > > > > On Wed, Mar 4, 2020 at 1:19 AM Ghanshyam Mann > > > wrote: > > > > ---- On Tue, 03 Mar 2020 13:00:35 -0600 Tim Bell > > > wrote ---- > > > > > > > > > On 3 Mar 2020, at 19:55, Tim Bell > > wrote: > > > > > > > > > On 3 Mar 2020, at 19:20, Albert Braden > > > > > wrote: > > > Sean, thank you for clarifying that. > > > > > > Was my understanding that the community decided to focus on > > the unified client incorrect? Is the unified/individual client > > debate still a matter of controversy? Is it possible that the > > unified client will be deprecated in favor of individual clients > > after more discussion? I haven’t looked at any of the individual > > clients since 2018 (except for osc-placement which is kind of a > > special case), because I thought they were all going away and > > could be safely ignored until they did, and I haven’t included > > any information about the individual clients in the > > documentation that I write for our users, and if they ask I have > > been telling them to not use the individual clients. Do I need > > to start looking at individual clients again, and telling our > > users to use them in some cases? > > > > > > > > > > > > I remember a forum discussion where a community goal was > > proposed to focus on OSC rather than individual project CLIs (I > > think Matt and I were proposers). 
There were concerns on the > > effort to do this and that it would potentially be multi-cycle. > > > BTW, I found the etherpad from Berlin > > (https://etherpad.openstack.org/p/BER-t-series-goals) and the > > associated mailing list discussion at > > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/135107.html > > > > Yeah, we are in process of selecting the Victoria cycle > > community-wide goal and this can be good candidate. I agree with > > the idea/requirement of a multi-cycle goal. > > Another option is to build a pop-up team for the Victoria cycle > > to start burning down the keys issues/work. For both ways > > (either goal or pop-up team), we need > > some set of people to drive it. If anyone would like to > > volunteer for this, we can start discussing the details. > > > > [1] > > > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012866.html > > > > -gmann > > > > Yeah, lets propose this as community goal again as it worked so well > > last time. |ಠ_ಠ| > > > > I think your most help wanted list/pop-up team is much more > > realistic approach. Lets see if there is enough interest to actually > > make it happen. What comes to our previous experience with Glance > > and moving to endorse osc, I think I'm not alone stating that we can > > discuss this again after osc has kept feature parity (and I mean to > > current release, not feature parity 2 years ago kind of thing) and > > actively addressed raised issues at least for a couple of cycles. > > Obviously if you/your users wants to use it meanwhile, that your > > call. If we cannot get that level of commitment, how do we expect to > > support this long term? > > > > I'm not willing to put our users through that misery again as it > > happened last time as long as I'm core in this project. > > > > - jokke > > > > > > > > My experience in discussion with the CERN user community and > > other OpenStack operators is that OSC is felt to be the right > > solution for the end user facing parts of the cloud (admin > > commands could be another discussion if necessary). Experienced > > admin operators can remember that glance looks after images and > > nova looks after instances. Our average user can get very > > confused, especially given that OSC supports additional options > > for authentication (such as Kerberos and Certificates along with > > clouds.yaml) so users need to re-authenticate with a different > > openrc to work on their project. > > > While I understand there are limited resources all round, I > > would prefer that we focus on adding new project functions to > > OSC which will eventually lead to feature parity. > > > Attracting ‘drive-by’ contributions from operations staff > > for OSC work (it's more likely to be achieved if it makes the > > operations work less e.g. save on special end user documentation > > by contributing code). This is demonstrated from the CERN team > > contribution to the OSC ‘coe' and ‘share' functionality along > > with lots of random OSC updates as listed hat > > > https://www.stackalytics.com/?company=cern&metric=commits&module=python-openstackclient > ) > > > > > BTW, I also would vote for =auto as the default. > > > Tim > > > We are on Rocky now but I expect that we will upgrade as > > necessary to stay on supported versions. 
> > > > > > From: Sean McGinnis > > > > > Sent: Tuesday, March 3, 2020 9:50 AM > > > To: openstack-discuss at lists.openstack.org > > > > > Subject: Re: OSC future (formerly [glance] Different > > checksum between CLI and curl) > > > > > > On 3/3/20 11:28 AM, Albert Braden wrote: > > > Am I understanding correctly that the Openstack community > > decided to focus on the unified client, and to deprecate the > > individual clients, and that the Glance team did not agree with > > this decision, and that the Glance team is now having a pissing > > match with the rest of the community, and is unilaterally > > deciding to continue developing the Glance client and refusing > > to work on the unified client, or is something different going > > on? I would ask everyone involved to remember that we operators > > are down here, and the yellow rain falling on our heads does not > > smell very good. > > > I definitely would not characterize it that way. > > > With trying not to put too much personal bias into it, > > here's what I would say the situation is: > > > - Some part of the community has said OSC should be the only > > CLI and that individual CLIs should go away > > > - Glance is a very small team with very, very limited > resources > > > - The OSC team is a very small team with very, very limited > > resources > > > - CLI capabilities need to be exposed for Glance changes and > > the easiest way to get them out for the is by updating the > > Glance CLI > > > - No one from the OSC team has been able to proactively help > > to make sure these changes make it into the OSC client (see > > bullet 3) > > > - There exists a sizable functionality gap between > > per-project CLIs and what OSC provides, and although a few > > people have done a lot of great work to close that gap, there is > > still a lot to be done and does not appear the gap will close at > > any point in the near future based on the current trends > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Mar 6 07:31:07 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 6 Mar 2020 08:31:07 +0100 Subject: [sdk][congress][octavia][designate][magnum][senlin][swift] Adding project-specific cores to SDK In-Reply-To: <7BBB4E00-5AA1-441B-9872-F6AF5415FCDB@fried.cc> References: <7BBB4E00-5AA1-441B-9872-F6AF5415FCDB@fried.cc> Message-ID: <019B33AF-7ADC-4C08-A131-C9CF0B919D07@redhat.com> Hi, I’m fine with adding those new people to the core team. As Monty said, I’m not doing too many reviews in sdk project but I’m trying to always check neutron related changes and I think that having such expert for other projects would be good too. > On 6 Mar 2020, at 04:40, Eric Fried wrote: > > Any chance of reusing the “PTL ack” bot that has recently appeared in the releases repo? But as a “SME ack” that would recognize anyone from $project’s core team? (How does the releases bot know which project the patch is for? Might have to be a bit fuzzy on that logic for SDK/OSC.) That also seems like potential solution but as changes in this repo are a bit differently then in e.g. releases repo how bot will exactly know which PTL should approve the patch? It may be much harder to do here than in releases repo, no? > > Then the team could adopt a policy of single core approval if the patch has this SME +1, and no real danger of “abuse”. 
> > Eric Fried > — Slawek Kaplonski Senior software engineer Red Hat From rui.zang at yandex.com Fri Mar 6 07:52:40 2020 From: rui.zang at yandex.com (rui zang) Date: Fri, 06 Mar 2020 15:52:40 +0800 Subject: CPU Topology confusion In-Reply-To: References: <3F1F97A3-4DEE-4CA9-9147-892D6E7355E7@gmail.com> Message-ID: <1166211583479894@vla4-4046ec513d04.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From qiujunting at inspur.com Fri Mar 6 07:57:24 2020 From: qiujunting at inspur.com (=?gb2312?B?SnVudGluZ3FpdSBRaXVqdW50aW5nICjH8b785sMp?=) Date: Fri, 6 Mar 2020 07:57:24 +0000 Subject: [nova][qa] When creating a instance using a flavor with "hw:cpu_policy=dedicated" "hw:cpu_realtime=yes" and "hw:cpu_realtime_mask=^0-1" failed. Message-ID: When I create a instance using a flavor with "hw:cpu_policy=dedicated" "hw:cpu_realtime=yes" and "hw:cpu_realtime_mask=^0-1". The error is "libvirtError:Cannot set scheduler parameters for pid 3666:operation not permitted". https://bugs.launchpad.net/starlingx/+bug/1866311 The instance xml as following: 4 The error log as following: ERROR nova.compute.manager [instance: 74dcc0a1-9ed4-4d13-bef7-ac9623bca2d0] File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1099, in createWithFlags ERROR nova.compute.manager [instance: 74dcc0a1-9ed4-4d13-bef7-ac9623bca2d0] if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self) ERROR nova.compute.manager [instance: 74dcc0a1-9ed4-4d13-bef7-ac9623bca2d0] libvirtError: Cannot set scheduler parameters for pid 6504: Operation not permitted ERROR nova.compute.manager [instance: 74dcc0a1-9ed4-4d13-bef7-ac9623bca2d0] -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Fri Mar 6 08:11:19 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Fri, 6 Mar 2020 08:11:19 +0000 Subject: [nova][ptl] Temporary Nova PTL until election Message-ID: <1583482276.12170.14@est.tech> Hi, Since Eric announced that he has to leave us [1] I have been working internally with my employee to be able to take over the Nova PTL position. Now I've got the necessary approvals. The official PTL election is close [2] and I'm ready to fill the PTL gap until the proper PTL election in April. Is this a workable solution for the community? Cheers, gibi [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012663.html [2] https://governance.openstack.org/election/ From zhangbailin at inspur.com Fri Mar 6 08:49:06 2020 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Fri, 6 Mar 2020 08:49:06 +0000 Subject: =?gb2312?B?tPC4tDogW25vdmFdW3B0bF0gVGVtcG9yYXJ5IE5vdmEgUFRMIHVudGlsIGVs?= =?gb2312?Q?ection?= In-Reply-To: <1583482276.12170.14@est.tech> References: <1583482276.12170.14@est.tech> Message-ID: > 主题: [lists.openstack.org代发][nova][ptl] Temporary Nova PTL until election > > Hi, > Since Eric announced that he has to leave us [1] I have been working internally with my employee to be able to take > over the Nova PTL position. Now I've got the necessary approvals. The official PTL election is close [2] and I'm ready to > fill the PTL gap until the proper PTL election in April. > > Is this a workable solution for the community? +1, Yes, at 2020-03-05 (Thursday) nova meeting talked this topic [#1]. After long-term cooperation with gibi in community work, I agree that he will take over the PTL vacancy brought by Eric, and I think we need to do so. 
[#1]http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2020-03-05.log.html#t2020-03-05T14:19:38-2-3 > Cheers, > gibi > [1]http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012663.html > [2] https://governance.openstack.org/election/ From alfredo.deluca at gmail.com Fri Mar 6 08:52:02 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Fri, 6 Mar 2020 09:52:02 +0100 Subject: [CINDER] Distributed storage alternatives In-Reply-To: References: Message-ID: Hi Donny. thanks for your inputs. Appreciate it On Sun, Feb 23, 2020 at 7:16 AM Donny Davis wrote: > I use local NVME storage for FortNebula. Its very fast, and for "cloudy" > things I prefer availability of data to be above the infrastructure layer. > I used to use Ceph for all things, but in my experience... if performance > is a requirement, local storage is pretty hard to beat. > I am in the process of moving the object store to ceph, and all seems to > be well in terms of performance using ceph for that use case. > > > > On Tue, Feb 18, 2020 at 6:41 AM Alfredo De Luca > wrote: > >> Thanks Burak and Ignazio. >> Appreciate it >> >> >> >> On Thu, Feb 13, 2020 at 10:19 PM Burak Hoban >> wrote: >> >>> Hi guys, >>> >>> We use Dell EMC VxFlex OS, which in its current version allows for free >>> use and commercial (in version 3.5 a licence is needed, but its perpetual). >>> It's similar to Ceph but more geared towards scale and performance etc (it >>> use to be called ScaleIO). >>> >>> Other than that, I know of a couple sites using SAN storage, but a lot >>> of people just seem to use Ceph. >>> >>> Cheers, >>> >>> Burak >>> >>> ------------------------------ >>> >>> Message: 2 >>> Date: Thu, 13 Feb 2020 18:20:29 +0100 >>> From: Ignazio Cassano >>> To: Alfredo De Luca >>> Cc: openstack-discuss >>> Subject: Re: [CINDER] Distributed storage alternatives >>> Message-ID: >>> < >>> CAB7j8cXLQWh5fx-E9AveUEa6OncDwCL6BOGc-Pm2TX4FKwnUKg at mail.gmail.com> >>> Content-Type: text/plain; charset="utf-8" >>> >>> Hello Alfredo, I think best opensource solution is ceph. >>> As far as commercial solutions are concerned we are working with network >>> appliance (netapp) and emc unity. >>> Regards >>> Ignazio >>> >>> Il Gio 13 Feb 2020, 13:48 Alfredo De Luca ha >>> scritto: >>> >>> > Hi all. >>> > we 'd like to explore storage back end alternatives to CEPH for >>> > Openstack >>> > >>> > I am aware of GlusterFS but what would you recommend for distributed >>> > storage like Ceph and specifically for block device provisioning? >>> > Of course must be: >>> > >>> > 1. *Reliable* >>> > 2. *Fast* >>> > 3. *Capable of good performance over WAN given a good network back >>> > end* >>> > >>> > Both open source and commercial technologies and ideas are welcome. >>> > >>> > Cheers >>> > >>> > -- >>> > *Alfredo* >>> > >>> > >>> >>> _____________________________________________________________________ >>> >>> The information transmitted in this message and its attachments (if any) >>> is intended >>> only for the person or entity to which it is addressed. >>> The message may contain confidential and/or privileged material. Any >>> review, >>> retransmission, dissemination or other use of, or taking of any action >>> in reliance >>> upon this information, by persons or entities other than the intended >>> recipient is >>> prohibited. >>> >>> If you have received this in error, please contact the sender and delete >>> this e-mail >>> and associated material from any computer. 
>>> >>> The intended recipient of this e-mail may only use, reproduce, disclose >>> or distribute >>> the information contained in this e-mail and any attached files, with >>> the permission >>> of the sender. >>> >>> This message has been scanned for viruses. >>> _____________________________________________________________________ >>> >> >> >> -- >> *Alfredo* >> >> > > -- > ~/DonnyD > C: 805 814 6800 > "No mission too difficult. No sacrifice too great. Duty First" > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lyarwood at redhat.com Fri Mar 6 09:11:11 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Fri, 6 Mar 2020 09:11:11 +0000 Subject: [nova][ptl] Temporary Nova PTL until election In-Reply-To: <1583482276.12170.14@est.tech> References: <1583482276.12170.14@est.tech> Message-ID: <20200306091111.kxkd6mtk4mwqo5vs@lyarwood.usersys.redhat.com> On 06-03-20 08:11:19, Balázs Gibizer wrote: > Hi, > > Since Eric announced that he has to leave us [1] I have been working > internally with my employee to be able to take over the Nova PTL > position. Now I've got the necessary approvals. The official PTL > election is close [2] and I'm ready to fill the PTL gap until the > proper PTL election in April. > > Is this a workable solution for the community? Yes definitely for me, thanks for stepping up gibi! -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From hberaud at redhat.com Fri Mar 6 09:35:29 2020 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 6 Mar 2020 10:35:29 +0100 Subject: oslo.cache 2.1.0 breaks oslo_cache.memcache_pool In-Reply-To: References: Message-ID: Oslo.cache version 2.1.0 is now blacklisted [1] to avoid similar issues for now. A new `memcache_pool` backend job have been introduced (not yet merged) through a patch [2] to oslo.cache to help us to catch similar errors during CI. Now we have 2 ways to definitely fix the situation on oslo.cache: - fix the broken code [3] and release a new patched version (2.1.1), still WIP; - revert the initial changes [4] and release a new version free from this bug (2.2.0). After some discussions with other oslo cores we want to give priority to the fix [3] first. Do not hesitate to correct me if something is wrong here. Cheers, PS: improvements and fix described in my previous email have been merged together [3]. [1] https://review.opendev.org/#/c/711427/ [2] https://review.opendev.org/#/c/711422/ [3] https://review.opendev.org/#/c/711220/ [4] https://review.opendev.org/#/c/711439/ Le mer. 4 mars 2020 à 19:23, Herve Beraud a écrit : > I proposed the following two patches to address the issue and improve this > module beyond the current issue: > - https://review.opendev.org/711220 (the fix) > - https://review.opendev.org/711247 (the improvements) > > After these patches will be merged and the issue fixed we will blacklist > the version 2.1.0 of oslo.cache and propose a new release with the previous > fixes embedded. > > Do not hesitate to review them and leave comments. > > Thanks for your reading. > > Le mer. 4 mars 2020 à 14:16, Herve Beraud a écrit : > >> Fix proposed https://review.opendev.org/#/c/711220/ >> >> Le mer. 
4 mars 2020 à 13:42, Moises Guimaraes de Medeiros < >> moguimar at redhat.com> a écrit : >> >>> `dead_timeout`++ >>> >>> On Wed, Mar 4, 2020 at 1:36 PM Herve Beraud wrote: >>> >>>> `dead_timeout` [1] looks more appropriate in this case. >>>> >>>> [1] >>>> https://github.com/pinterest/pymemcache/blob/master/pymemcache/client/hash.py#L58 >>>> >>>> Le mer. 4 mars 2020 à 13:28, Herve Beraud a >>>> écrit : >>>> >>>>> What do you think about adding a mapping between `retry_timeout` [1] >>>>> and `dead_retry` [2]? >>>>> >>>>> [1] >>>>> https://github.com/pinterest/pymemcache/blob/master/pymemcache/client/hash.py#L56 >>>>> [2] >>>>> https://github.com/linsomniac/python-memcached/blob/bad41222379102e3f18f6f2f7be3ee608de6fbff/memcache.py#L183 >>>>> >>>>> Le mer. 4 mars 2020 à 13:20, Herve Beraud a >>>>> écrit : >>>>> >>>>>> I think our issue is due to the fact that python-memcached accept a >>>>>> param named `dead_retry` [1] which is not defined in pymemcache. >>>>>> >>>>>> We just need to define it in our oslo.cache mapping. During testing >>>>>> we faced the same kind of issue with connection timeout. >>>>>> >>>>>> [1] >>>>>> https://github.com/linsomniac/python-memcached/blob/bad41222379102e3f18f6f2f7be3ee608de6fbff/memcache.py#L183 >>>>>> [2] >>>>>> https://github.com/openstack/oslo.cache/blob/8a8248d764bbb1db6c0089a58745803c03e38fdb/oslo_cache/_memcache_pool.py#L193,L201 >>>>>> >>>>>> Le mer. 4 mars 2020 à 12:21, Radosław Piliszek < >>>>>> radoslaw.piliszek at gmail.com> a écrit : >>>>>> >>>>>>> Please be informed that oslo.cache 2.1.0 breaks >>>>>>> oslo_cache.memcache_pool >>>>>>> >>>>>>> Kolla-Ansible gate is already RED and a quick codesearch revealed >>>>>>> other deployment methods might be in trouble soon as well. >>>>>>> >>>>>>> This does not affect devstack/tempest as they use >>>>>>> dogpile.cache.memcached instead. 
>>>>>>>
>>>>>>> The error is TypeError: __init__() got an unexpected keyword argument
>>>>>>> 'dead_retry'
>>>>>>>
>>>>>>> For details see: https://bugs.launchpad.net/oslo.cache/+bug/1866008
>>>>>>>
>>>>>>> -yoctozepto
>>>>>>
>>>>>> --
>>>>>> Hervé Beraud
>>>>>> Senior Software Engineer
>>>>>> Red Hat - Openstack Oslo
>>>>>> irc: hberaud
>>>>>
>>>>> --
>>>>> Hervé Beraud
>>>>> Senior Software Engineer
>>>>> Red Hat - Openstack Oslo
>>>>> irc: hberaud
>>>>
>>>> --
>>>> Hervé Beraud
>>>> Senior Software Engineer
>>>> Red Hat - Openstack Oslo
>>>> irc: hberaud
>>>
>>> --
>>> Moisés Guimarães
>>> Software Engineer
>>> Red Hat
>>
>> --
>> Hervé Beraud
>> Senior Software Engineer
>> Red Hat - Openstack Oslo
>> irc: hberaud
>
> --
> Hervé Beraud
> Senior Software Engineer
> Red Hat - Openstack Oslo
> irc: hberaud

--
Hervé Beraud
Senior Software Engineer
Red Hat - Openstack Oslo
irc: hberaud
-----BEGIN PGP SIGNATURE-----

wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+
Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+
RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP
F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G
5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g
glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw
m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ
hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0
qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y
F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3
B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O
v6rDpkeNksZ9fFSyoY2o
=ECSj
-----END PGP SIGNATURE-----
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kchamart at redhat.com Fri Mar 6 11:26:53 2020
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Fri, 6 Mar 2020 12:26:53 +0100
Subject: [nova][ptl] Temporary Nova PTL until election
In-Reply-To: <1583482276.12170.14@est.tech>
References: <1583482276.12170.14@est.tech>
Message-ID: <20200306112653.GA30104@paraplu>

On Fri, Mar 06, 2020 at 08:11:19AM +0000, Balázs Gibizer wrote:
> Hi,
>
> Since Eric announced that he has to leave us [1] I have been working
> internally with my employee to be able to take over the Nova PTL
> position. Now I've got the necessary approvals. The official PTL
> election is close [2] and I'm ready to fill the PTL gap until the
> proper PTL election in April.
>
> Is this a workable solution for the community?

Absolutely! Thanks for raising your hand to do the unthankful work.
-- /kashyap From openstack at fried.cc Fri Mar 6 13:01:15 2020 From: openstack at fried.cc (Eric Fried) Date: Fri, 6 Mar 2020 07:01:15 -0600 Subject: [nova][ptl] Temporary Nova PTL until election In-Reply-To: <1583482276.12170.14@est.tech> References: <1583482276.12170.14@est.tech> Message-ID: <4692F106-6B00-41FC-9BA9-1DF62A24EDAB@fried.cc> Big +1 from me. Many thanks, gibi. Not that you‘ll need it, but please don’t hesitate to reach out to me if you have questions. efried_gone > On Mar 6, 2020, at 02:16, Balázs Gibizer wrote: > > Hi, > > Since Eric announced that he has to leave us [1] I have been working > internally with my employee to be able to take over the Nova PTL > position. Now I've got the necessary approvals. The official PTL > election is close [2] and I'm ready to fill the PTL gap until the > proper PTL election in April. > > Is this a workable solution for the community? > > Cheers, > gibi > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012663.html > [2] https://governance.openstack.org/election/ > > > From gmann at ghanshyammann.com Fri Mar 6 13:51:25 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 06 Mar 2020 07:51:25 -0600 Subject: [nova][ptl] Temporary Nova PTL until election In-Reply-To: <4692F106-6B00-41FC-9BA9-1DF62A24EDAB@fried.cc> References: <1583482276.12170.14@est.tech> <4692F106-6B00-41FC-9BA9-1DF62A24EDAB@fried.cc> Message-ID: <170b01d7595.10341333e516143.4131462912712933865@ghanshyammann.com> ---- On Fri, 06 Mar 2020 07:01:15 -0600 Eric Fried wrote ---- > Big +1 from me. Many thanks, gibi. Not that you‘ll need it, but please don’t hesitate to reach out to me if you have questions. Indeed. Thanks gibi for helping out here. -gmann > > efried_gone > > > On Mar 6, 2020, at 02:16, Balázs Gibizer wrote: > > > > Hi, > > > > Since Eric announced that he has to leave us [1] I have been working > > internally with my employee to be able to take over the Nova PTL > > position. Now I've got the necessary approvals. The official PTL > > election is close [2] and I'm ready to fill the PTL gap until the > > proper PTL election in April. > > > > Is this a workable solution for the community? > > > > Cheers, > > gibi > > > > [1] > > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012663.html > > [2] https://governance.openstack.org/election/ > > > > > > > > > From mordred at inaugust.com Fri Mar 6 15:01:22 2020 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 6 Mar 2020 09:01:22 -0600 Subject: [sdk][congress][octavia][designate][magnum][senlin][swift] Adding project-specific cores to SDK In-Reply-To: References: Message-ID: > On Mar 5, 2020, at 6:53 PM, Michael Johnson wrote: > > Naming names is a highly effective demotivator for those whose > contributions are not recognized. > > https://www.stackalytics.com/?module=openstacksdk&metric=loc > > Maybe the project teams should select their own liaison? > Great point - and totally! I was mostly keying off of recent interactions but you’re absolutely right about that. Also - apologies for my brain not immediately calling you up there… it’s been a week. I think adding you is a great idea. There’s some other thoughts in the responses here that are interesting that we should try as well, but I think those will take a bit longer to sort out. > On Thu, Mar 5, 2020 at 9:58 AM Monty Taylor wrote: >> >> Heya, >> >> I’d like to try something. 
>> >> I’d like to try adding some project-specific people to the core team so that they can more directly help maintain the support for their service in SDK. In some of these cases the person I’m suggestion has next to no review experience in SDK. I think let’s be fine with that for now - we’re still a 2x +2 in general thing - but I know currently when reviewing neutron or ironic changes I always want to see a +2 from slaweq or dtantsur … so in the spirit of trying new things and trying to move the project forward in a healthy and welcoming way - how about we give this a try? >> >> The idea here is that we’re trusting people to use their good judgement and to only use their new +2 powers for good in their project. Over time, if they feel like they’ve gotten a handle on things more widely, there’s nothing stopping them from reviewing other patches - but I think that most of us aren’t looking for additional review work anyway. >> >> Specifically this would be: >> >> Shogo Saito - congress >> Adam Harwell - octavia >> Graham Hayes - designate >> Bharat Kumar - magnum >> Erik Olof Gunnar Andersson - senlin >> Tim Burke - swift >> >> I think we should also add a file in the repo that lists “subject matter experts” for each service we’ve got support for, where we have them. My list of current cores who I’d ping for specific service suitability are: >> >> Sean McGinnis - cinder >> Slawek Kaplonski - neutron >> Dmitry Tantsur - ironic >> Eric Fried - nova (at least until tomorrow my friend) >> >> How does that sound to folks? >> >> Monty > From openstack at fried.cc Fri Mar 6 15:06:03 2020 From: openstack at fried.cc (Eric Fried) Date: Fri, 6 Mar 2020 09:06:03 -0600 Subject: [sdk][congress][octavia][designate][magnum][senlin][swift] Adding project-specific cores to SDK In-Reply-To: <019B33AF-7ADC-4C08-A131-C9CF0B919D07@redhat.com> References: <7BBB4E00-5AA1-441B-9872-F6AF5415FCDB@fried.cc> <019B33AF-7ADC-4C08-A131-C9CF0B919D07@redhat.com> Message-ID: <55bfaec1-1880-e779-2a5d-288528c4c26e@fried.cc> > That also seems like potential solution but as changes in this repo are a bit differently then in e.g. releases repo how bot will exactly know which PTL should approve the patch? It may be much harder to do here than in releases repo, no? Yeah, I agree, which is why I asked how the releases repo does it. I can envision the bot knowing which packages/files pertain to a given project and doing the mapping that way. And like if multiple projects' files are touched, the bot doesn't register the SME+1 until there's a +1 from each project. But yeah, this is getting a bit heavy weight. Then again, it allays diablo_rojo's concern about SMEs' ability to +W, and johnsom's concern about impacting morale by singling out individuals. Anyway, just a thought. efried_gone . From mordred at inaugust.com Fri Mar 6 15:12:18 2020 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 6 Mar 2020 09:12:18 -0600 Subject: [sdk][congress][octavia][designate][magnum][senlin][swift] Adding project-specific cores to SDK In-Reply-To: <019B33AF-7ADC-4C08-A131-C9CF0B919D07@redhat.com> References: <7BBB4E00-5AA1-441B-9872-F6AF5415FCDB@fried.cc> <019B33AF-7ADC-4C08-A131-C9CF0B919D07@redhat.com> Message-ID: > On Mar 6, 2020, at 1:31 AM, Slawek Kaplonski wrote: > > Hi, > > I’m fine with adding those new people to the core team. As Monty said, I’m not doing too many reviews in sdk project but I’m trying to always check neutron related changes and I think that having such expert for other projects would be good too. 
> >> On 6 Mar 2020, at 04:40, Eric Fried wrote: >> >> Any chance of reusing the “PTL ack” bot that has recently appeared in the releases repo? But as a “SME ack” that would recognize anyone from $project’s core team? (How does the releases bot know which project the patch is for? Might have to be a bit fuzzy on that logic for SDK/OSC.) > > That also seems like potential solution but as changes in this repo are a bit differently then in e.g. releases repo how bot will exactly know which PTL should approve the patch? It may be much harder to do here than in releases repo, no I agree with Slawek - I like this idea but I think it *is* a harder one to accomplish. Also, sometimes patches are straightforward enough that we don’t really need a service-specific ack from … especially when the service has good and clear API documentation. I think I’m leaning towards liking Michael’s suggestion a bit better as a next step - having the teams suggest a person, liaison-style - mostly because it’s easy. But for now, even with that - and even with my flub of completely blanking on Michael, let’s see how playing it by ear goes before we add more process. >> >> Then the team could adopt a policy of single core approval if the patch has this SME +1, and no real danger of “abuse”. >> >> Eric Fried >> > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > From fungi at yuggoth.org Fri Mar 6 15:25:33 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 6 Mar 2020 15:25:33 +0000 Subject: [sdk][congress][octavia][designate][magnum][senlin][swift] Adding project-specific cores to SDK In-Reply-To: <7BBB4E00-5AA1-441B-9872-F6AF5415FCDB@fried.cc> References: <7BBB4E00-5AA1-441B-9872-F6AF5415FCDB@fried.cc> Message-ID: <20200306152533.jbkcwtitvsgds44k@yuggoth.org> On 2020-03-05 21:40:54 -0600 (-0600), Eric Fried wrote: > Any chance of reusing the “PTL ack” bot that has recently appeared > in the releases repo? But as a “SME ack” that would recognize > anyone from $project’s core team? (How does the releases bot know > which project the patch is for? Might have to be a bit fuzzy on > that logic for SDK/OSC.) [...] The openstack/releases repository has its critical data organized by OpenStack governance project team deliverable, and there's a Zuul job which gets triggered on every review comment which looks to see if the Gerrit account leaving the comment maps through https://opendev.org/openstack/releases/src/branch/master/data/release_liaisons.yaml and https://opendev.org/openstack/governance/src/branch/master/reference/projects.yaml to match the reviewer to the deliverable(s) covered by the proposed change. To do something similar for the OSC/SDK repos, you'd need 1. a means of programmatically identifying the relevant project team for any proposed change, and 2. some list of unique Gerrit account identifiers (most likely E-mail addresses) of the people whose reviews are relevant for them. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From skaplons at redhat.com Fri Mar 6 15:28:59 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 6 Mar 2020 16:28:59 +0100 Subject: [neutron] Propose Lajos Katona for Neutron core team Message-ID: <478F5503-6507-419E-88A6-24B0BFBE0BE1@redhat.com> Hi neutrinos, I would like to propose Lajos Katona (irc: lajoskatona) as a member of the Neutron core team. 
Lajos has been a Neutron contributor since around the Queens cycle, and he is now one of the most active reviewers in the Neutron group projects.
He was one of the key contributors, in cooperation with the Nova and Placement teams, to deliver the guaranteed minimum bandwidth feature in OpenStack.
He is very active and helpful with triaging and fixing Neutron bugs and issues in our CI.

During the last few cycles he has proven that he has wide knowledge of the Neutron code base. He is currently also a maintainer of some Neutron stadium projects, which shows that he knows the code base of not only Neutron itself but also the Neutron stadium.

The quality and number of his reviews are comparable to other members of the Neutron core team: https://www.stackalytics.com/?release=ussuri&module=neutron-group and are higher every cycle :)
I think he will be a great addition to our core team.

I will keep this nomination open for a week or until all current cores respond.

—
Slawek Kaplonski
Senior software engineer
Red Hat

From nate.johnston at redhat.com Fri Mar 6 15:42:09 2020
From: nate.johnston at redhat.com (Nate Johnston)
Date: Fri, 6 Mar 2020 10:42:09 -0500
Subject: [neutron] Propose Lajos Katona for Neutron core team
In-Reply-To: <478F5503-6507-419E-88A6-24B0BFBE0BE1@redhat.com>
References: <478F5503-6507-419E-88A6-24B0BFBE0BE1@redhat.com>
Message-ID: <20200306154209.763y4s3xfcymtqtt@firewall>

Lajos is a great contributor in all ways. I have been watching his reviews and he is an insightful, thorough reviewer. Big +1 from me.

Nate

On Fri, Mar 06, 2020 at 04:28:59PM +0100, Slawek Kaplonski wrote:
> Hi neutrinos,
>
> I would like to propose Lajos Katona (irc: lajoskatona) as a member of the Neutron core team.
> Lajos is Neutron contributor Neutron since around Queens cycle and now he is one of the most active reviewers in the Neutron group projects.
> He was one of the key contributors in cooperation with Nova and Placement > teams to deliver guaranteed minimum bandwidth feature in OpenStack. > He is very active and helpful with triaging and fixing Neutron bugs and > issues in our CI. > > During last few cycles he proved that he has wide knowledge about Neutron > code base. He is currently also a maintainer of some neutron stadium > projects which shows that he has knowledge about code base not only about > neutron but also Neutron stadium. > > The quality and number of his reviews are comparable to other members of > the Neutron core team: > https://www.stackalytics.com/?release=ussuri&module=neutron-group and are > higher every cycle :) > I think he will be great addition to our core team. > > I will keep this nomination open for a week or until all current cores > will respond. > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Mar 6 16:34:59 2020 From: ralonsoh at redhat.com (Rodolfo Alonso) Date: Fri, 06 Mar 2020 16:34:59 +0000 Subject: [neutron] Propose Lajos Katona for Neutron core team In-Reply-To: <20200306154209.763y4s3xfcymtqtt@firewall> References: <478F5503-6507-419E-88A6-24B0BFBE0BE1@redhat.com> <20200306154209.763y4s3xfcymtqtt@firewall> Message-ID: <484abed16b6277582a765578b08bbc7fc14742e9.camel@redhat.com> Indeed Lajos is a detail oriented reviewer and has contributed for a long time to this project. Welcome aboard! +1 On Fri, 2020-03-06 at 10:42 -0500, Nate Johnston wrote: > Lajos is a great contributor in all ways. I have been watching his reviews and > he is an insightful, thorough reviewer. Big +1 from me. > > Nate > > On Fri, Mar 06, 2020 at 04:28:59PM +0100, Slawek Kaplonski wrote: > > Hi neutrinos, > > > > I would like to propose Lajos Katona (irc: lajoskatona) as a member of the Neutron core team. > > Lajos is Neutron contributor Neutron since around Queens cycle and now he is one of the most > > active reviewers in the Neutron group projects. > > He was one of the key contributors in cooperation with Nova and Placement teams to deliver > > guaranteed minimum bandwidth feature in OpenStack. > > He is very active and helpful with triaging and fixing Neutron bugs and issues in our CI. > > > > During last few cycles he proved that he has wide knowledge about Neutron code base. He is > > currently also a maintainer of some neutron stadium projects which shows that he has knowledge > > about code base not only about neutron but also Neutron stadium. > > > > The quality and number of his reviews are comparable to other members of the Neutron core team: > > https://www.stackalytics.com/?release=ussuri&module=neutron-group and are higher every cycle :) > > I think he will be great addition to our core team. > > > > I will keep this nomination open for a week or until all current cores will respond. > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > > > From mark at stackhpc.com Fri Mar 6 16:39:51 2020 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 6 Mar 2020 16:39:51 +0000 Subject: [kolla][uc] Kolla SIG Message-ID: Hi, I'd like to propose the creation of a Special Interest Group (SIG) [0] for Kolla. The main aim of the group would be to improve communication between operators and developers. The SIG would host regular virtual project onboarding, project update, and feedback sessions, ideally via video calls. 
This should remove the necessity of being physically present at Open Infra Summits for participation in the project. I like to think of this as the fifth open [1] (name TBD).

I propose that, in addition to the above sessions, the SIG should host more informal discussions, probably every 2-4 weeks, with the aim of meeting other community members, discussing successes and failures, sharing knowledge, and generally getting to know each other a bit better. These could be via video call, IRC, or a mix.

Finally - I propose that we build and maintain a list of Kolla users, including details of their environments and areas of interest and expertise. Of course this would be opt-in. This would help us to connect with subject matter experts and interested parties to help answer queries in IRC, or when making changes to a specific area.

This is all up for discussion, and subject to sufficient interest. If you are interested, please add your name and email address to the Etherpad [2], along with any comments, thoughts or suggestions.

[0] https://governance.openstack.org/sigs/
[1] https://www.openstack.org/four-opens/
[2] https://etherpad.openstack.org/p/kolla-sig

Cheers,
Mark

From kchamart at redhat.com Fri Mar 6 16:45:37 2020
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Fri, 6 Mar 2020 17:45:37 +0100
Subject: [all][tc] Moving PTL role to "Maintainers"
In-Reply-To: References: Message-ID: <20200306164537.GB30104@paraplu>

On Mon, Mar 02, 2020 at 04:45:47PM -0500, Mohammed Naser wrote:

[...]

> https://governance.openstack.org/tc/reference/charter.html#project-team-leads
>
> I think it's time to re-evaluate the project leadership model that we
> have. I am thinking that perhaps it would make a lot of sense to move
> from a single PTL model to multiple maintainers. This would leave it
> up to the maintainers to decide how they want to sort the different
> requirements/liaisons/contact persons between them.

Personally, I've operated with the mindset of the "multiple maintainers" / SMEs approach. That's also the model I've seen successfully used in other upstream communities (10+ years old) that I participate in. It makes a lot of sense in the long term, sustainability-wise.

IOW, a "Vehement ACK" for the multiple maintainers model.

[...]

--
/kashyap

From smooney at redhat.com Fri Mar 6 18:35:11 2020
From: smooney at redhat.com (Sean Mooney)
Date: Fri, 06 Mar 2020 18:35:11 +0000
Subject: [nova][qa] When creating an instance using a flavor with "hw:cpu_policy=dedicated" "hw:cpu_realtime=yes" and "hw:cpu_realtime_mask=^0-1" failed.
In-Reply-To: References: Message-ID: <318b8178190191764df5e4de63e145b888f9203b.camel@redhat.com>

On Fri, 2020-03-06 at 07:57 +0000, Juntingqiu Qiujunting (邱军婷) wrote:
> When I create a instance using a flavor with "hw:cpu_policy=dedicated" "hw:cpu_realtime=yes" and
> "hw:cpu_realtime_mask=^0-1".
>
> The error is "libvirtError:Cannot set scheduler parameters for pid 3666:operation not permitted".

I would guess that you are hitting an SELinux or AppArmor issue. In either case this is not a nova/openstack bug but rather a libvirt/host OS configuration issue. I would check the output of dmesg and see if there are any messages that relate to this.
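A quick way to run that check, as a sketch (assuming a CentOS/RHEL host with SELinux; ausearch comes from the audit package, and the exact log strings depend on the policy in use):

    # Scan the kernel ring buffer for SELinux/AppArmor denial messages
    dmesg | grep -iE 'denied|avc|apparmor'

    # On SELinux hosts, query the audit log for recent AVC denials
    ausearch -m avc -ts recent | grep -i virt

    # Show the current SELinux mode; "Enforcing" means a denial is fatal
    getenforce

If a denial referencing the qemu/libvirt process shows up there, that points at host policy rather than at nova itself, as suggested above.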
> > > > https://bugs.launchpad.net/starlingx/+bug/1866311 > > > > The instance xml as following: > > 4 > > > > > > > > > > > The error log as following: > > ERROR nova.compute.manager [instance: 74dcc0a1-9ed4-4d13-bef7-ac9623bca2d0] File "/usr/lib64/python2.7/site- > packages/libvirt.py", line 1099, in createWithFlags > ERROR nova.compute.manager [instance: 74dcc0a1-9ed4-4d13-bef7-ac9623bca2d0] if ret == -1: raise libvirtError > ('virDomainCreateWithFlags() failed', dom=self) > ERROR nova.compute.manager [instance: 74dcc0a1-9ed4-4d13-bef7-ac9623bca2d0] libvirtError: Cannot set scheduler > parameters for pid 6504: Operation not permitted > ERROR nova.compute.manager [instance: 74dcc0a1-9ed4-4d13-bef7-ac9623bca2d0] > > > > > From haleyb.dev at gmail.com Fri Mar 6 19:45:04 2020 From: haleyb.dev at gmail.com (Brian Haley) Date: Fri, 6 Mar 2020 14:45:04 -0500 Subject: [neutron] Propose Lajos Katona for Neutron core team In-Reply-To: <478F5503-6507-419E-88A6-24B0BFBE0BE1@redhat.com> References: <478F5503-6507-419E-88A6-24B0BFBE0BE1@redhat.com> Message-ID: +1 from me, Lajos would be a great addition to the team. On 3/6/20 10:28 AM, Slawek Kaplonski wrote: > Hi neutrinos, > > I would like to propose Lajos Katona (irc: lajoskatona) as a member of the Neutron core team. > Lajos is Neutron contributor Neutron since around Queens cycle and now he is one of the most active reviewers in the Neutron group projects. > He was one of the key contributors in cooperation with Nova and Placement teams to deliver guaranteed minimum bandwidth feature in OpenStack. > He is very active and helpful with triaging and fixing Neutron bugs and issues in our CI. > > During last few cycles he proved that he has wide knowledge about Neutron code base. He is currently also a maintainer of some neutron stadium projects which shows that he has knowledge about code base not only about neutron but also Neutron stadium. > > The quality and number of his reviews are comparable to other members of the Neutron core team: https://www.stackalytics.com/?release=ussuri&module=neutron-group and are higher every cycle :) > I think he will be great addition to our core team. > > I will keep this nomination open for a week or until all current cores will respond. > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > From zbitter at redhat.com Fri Mar 6 19:52:21 2020 From: zbitter at redhat.com (Zane Bitter) Date: Fri, 6 Mar 2020 14:52:21 -0500 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: <1da63dab-c35e-6377-d5d8-e075a5c37408@ham.ie> Message-ID: <271de548-3f54-4b9e-99bb-ee819378ae77@redhat.com> On 5/03/20 1:29 pm, Kendall Nelson wrote: > I would like to think elections would NOT get held every 1-2 weeks or > whatever the chosen PTL term is for a project? Its just a like...signup > sheet sort of thing? I'd imagine in this model the most likely implementation would be a roster where core team members sign up for a slot. But I also imagine it being left up to the project to decide the mechanics. > What if more than one person wants to sign up for > the same week( I can't think of why this would happen, just thinking > about all the details)? I think just let people sort it out amongst themselves. Nobody is going to get too exercised over a problem that will literally resolve _itself_ in 1-2 weeks. 
From mark at openstack.org Fri Mar 6 20:56:41 2020 From: mark at openstack.org (Mark Collier) Date: Fri, 6 Mar 2020 14:56:41 -0600 (CST) Subject: FW: 2020 OSF Events & coronavirus Message-ID: <1583528201.853712216@emailsrvr.com> I wanted to make sure everyone saw this thread on the foundation mailing list, since I know not everyone is subscribed to both lists: Archive: http://lists.openstack.org/pipermail/foundation/2020-March/002852.html Please join that ML thread to share feedback on this topic, or you can reach out directly to myself or jonathan at openstack.org I saw first hand how we all pulled together during the Snowpenstack in Dublin, so I know we'll once again pull together as a community to get through this! Mark On Friday, March 6, 2020 12:55pm, "Mark Collier" said: > Stackers, >   > Before I get into the current plans for the OSF events in Vancouver and Berlin, I > wanted to say a few words in general about the virus impacting so many people > right now. >   > First, I wanted to acknowledge the very difficult situation many are facing > because of COVID-19 (Coronavirus), across open source communities and local > communities in general (tech or otherwise). I also want to say to everyone who is > on the front lines managing events, from the full time staffers to the volunteers, > to the contractors and production partners, that we have some idea of what you're > going through and we know this is a tough time. If there's anything we can do to > help, please reach out. In the best of times, event organization can be grueling > and thankless, and so I just want to say THANK YOU to everyone who does the > organizing work in the communities we all care so much about. >   > OSF 2020 EVENTS >   > When it comes to the 2020 events OSF is managing, namely the OpenDev + PTG in > Vancouver June 8-11 and the Open Infrastructure Summit in Berlin October 19-23, > please read and bookmark this status page which we will continue to update:  > https://www.openstack.org/events/covid-19-coronavirus-disease-updates >   > When it comes to our community, the health of every individual is of paramount > concern. We have always aimed to produce events "of the community, by the > community" and the upcoming event in Vancouver is no exception. The OpenDev tracks > each morning will be programmed by volunteers from the community, and the project > teams will be organizing their own conversations as well each afternoon M-W, and > all day Thursday.  >   > But the larger question is here: should the show go on?  > > The short answer is that as of now, the Vancouver and Berlin events are still > scheduled to happen in June (8-11) and October (19-23), respectively.  >   > However, we are willing to cancel or approach the events in a different way (i.e. > virtual) if the facts indicate that is the best path, and we know the facts are > changing rapidly. One of the most critical inputs we need is to hear from each of > you. We know that many of you rely on the twice-annual events to get together and > make rapid progress on the software, which is one reason we are not making any > decisions in haste. We also know that many of you may be unable or unwilling to > travel in June, and that is critical information to hear as we get closer to the > event so that we can make the most informed decision.  >   > I also wanted to answer a FAQ by letting everyone know that if either event is > cancelled, event tickets and sponsorships will be fully refunded. Please note that > if you're making travel arrangements (e.g. 
flights, hotels) those are outside of
> our control.
>
> So as we continue to monitor the news and listen to health experts to make an
> informed decision on any changes to our event plans, we'd like to hear from
> everyone in the community who has a stake in these events. Our most pressing topic
> is of course Vancouver, but if you have questions or concerns about the Berlin
> plan feel free to share those as well.
>
> If you'd like to connect directly, you can always contact Executive Director
> Jonathan Bryce (jonathan at openstack.org) or myself (mark at openstack.org).
>
> Key Links:
> - STATUS PAGE: https://www.openstack.org/events/covid-19-coronavirus-disease-updates
> - Vancouver OpenDev + PTG: https://www.openstack.org/events/opendev-ptg-2020/
> - Berlin Open Infrastructure Summit: https://www.openstack.org/summit/berlin-2020/
>
> Key Dates for OpenDev + PTG in Vancouver:
> - Schedule will be published in early April
> - Early bird deadline is April 4
> - Final day to sponsor will be May 4
> - Final registration price increase will be in early May
>
> Mark Collier
> COO, OpenStack Foundation
> @sparkycollier

From Albert.Braden at synopsys.com Fri Mar 6 23:05:19 2020
From: Albert.Braden at synopsys.com (Albert Braden)
Date: Fri, 6 Mar 2020 23:05:19 +0000
Subject: Goodbye for now
Message-ID:

My contract at Synopsys ends today, so I will have to continue my efforts to sign up as an OpenStack developer in my next role. Thanks to everyone for all of the help and advice. Best wishes!

Albert
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gouthampravi at gmail.com Fri Mar 6 23:12:37 2020
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Fri, 6 Mar 2020 15:12:37 -0800
Subject: [qa][cinder][devstack] proposed governance changes for some devstack plugins
In-Reply-To: <69fcb574-1ae1-08cb-e8e2-8bd08bef80f4@gmail.com>
References: <170a7b5430a.1155e6495437733.1575830632912803163@ghanshyammann.com> <69fcb574-1ae1-08cb-e8e2-8bd08bef80f4@gmail.com>
Message-ID:

On Thu, Mar 5, 2020 at 11:53 AM Brian Rosmaita wrote:
> On 3/4/20 5:40 PM, Ghanshyam Mann wrote:
> > ---- On Wed, 04 Mar 2020 13:53:00 -0600 Brian Rosmaita < rosmaita.fossdev at gmail.com> wrote ----
> > > Hello QA team and devstack-plugin-ceph-core people,
> > >
> > > The Cinder team has some proposals we'd like to float.
> > >
> > > 1. The Cinder team is interested in becoming more active in the
> > > maintenance of openstack/devstack-plugin-ceph [0]. Currently, the
> > > devstack-plugin-ceph-core is
> > > https://review.opendev.org/#/admin/groups/1196,members
> > > The cinder-core is already represented by Eric and Sean; we'd like to
> > > replace them by including the cinder-core group.
> >
> > +1. This is good diea and make sense, I will do the change.
>
> Great, thanks!
>

I agree this is a great idea to have more members of Cinder joining the devstack-plugin-ceph team. I would like to have at least a subteam of Manila core reviewers added to this project if it makes sense. The Manila CephFS drivers (cephfs-native and cephfs-nfs) are currently being tested with the help of the devstack integration in devstack-plugin-ceph. We have Tom Barron (tbarron) on the team; I'd like to propose myself (gouthamr) and Victoria Martinez de la Cruz (vkmc). Please let me know what you think of the idea.

> > >
> > > 2. The Cinder team is interested in becoming more active in the
Currently, the > > > devstack-plugin-nfs-core is > > > https://review.opendev.org/#/admin/groups/1330,members > > > It's already 75% cinder-core members; we'd like to replace the > > > individual members with the cinder-core group. We also propose that > > > devstack-core be added as an included group. > > > > > > 3. The Cinder team is interested in implementing a new devstack > plugin: > > > openstack/devstack-plugin-open-cas > > > This will enable thorough testing of a new feature [2] being > introduced > > > as experimental in Ussuri and expected to be finalized in Victoria. > Our > > > plan would be to make both cinder-core and devstack-core included > groups > > > for the gerrit group governing the new plugin. > > > > +1. You want this under Cinder governance or under QA ? > > I think it makes sense for these to be under QA governance -- QA would > own the repo with both QA and Cinder having permission to make changes. > > > > > > > 4. This is a minor point, but can the devstack-plugin-nfs repo be > moved > > > back into the 'openstack' namespace? > > > > If this is usable plugin for nfs testing (I am not aware if we have any > other) then > > it make sense to bring it to openstack governance. > > Same question here, do you want to put this under Cinder governance or > QA. > > Same here, I think QA should "own" the repo, but Cinder will have > permission to make changes there. > > > > > Those plugins under QA governance also ok for me with your proposal of > calloborative maintainance by > > devstack-core and cinder-core. > > > > -gmann > > Thanks for the quick response! > > > > > > > Let us know which of these proposals you find acceptable. > > > > > > > > > [0] https://opendev.org/openstack/devstack-plugin-ceph > > > [1] https://opendev.org/x/devstack-plugin-nfs > > > [2] > https://blueprints.launchpad.net/cinder/+spec/support-volume-local-cache > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Sat Mar 7 03:07:42 2020 From: hongbin034 at gmail.com (Hongbin Lu) Date: Fri, 6 Mar 2020 22:07:42 -0500 Subject: [neutron] Propose Lajos Katona for Neutron core team In-Reply-To: <478F5503-6507-419E-88A6-24B0BFBE0BE1@redhat.com> References: <478F5503-6507-419E-88A6-24B0BFBE0BE1@redhat.com> Message-ID: +1 from me. Best regards, Hongbin On Fri, Mar 6, 2020 at 10:34 AM Slawek Kaplonski wrote: > Hi neutrinos, > > I would like to propose Lajos Katona (irc: lajoskatona) as a member of the > Neutron core team. > Lajos is Neutron contributor Neutron since around Queens cycle and now he > is one of the most active reviewers in the Neutron group projects. > He was one of the key contributors in cooperation with Nova and Placement > teams to deliver guaranteed minimum bandwidth feature in OpenStack. > He is very active and helpful with triaging and fixing Neutron bugs and > issues in our CI. > > During last few cycles he proved that he has wide knowledge about Neutron > code base. He is currently also a maintainer of some neutron stadium > projects which shows that he has knowledge about code base not only about > neutron but also Neutron stadium. > > The quality and number of his reviews are comparable to other members of > the Neutron core team: > https://www.stackalytics.com/?release=ussuri&module=neutron-group and are > higher every cycle :) > I think he will be great addition to our core team. > > I will keep this nomination open for a week or until all current cores > will respond. 
> > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsharma1818 at outlook.com Sat Mar 7 13:29:33 2020 From: rsharma1818 at outlook.com (Rahul Sharma) Date: Sat, 7 Mar 2020 13:29:33 +0000 Subject: [Horizon] Unable to access the dashboard page Message-ID: Hi, I am trying to get OpenStack (Train) up and running on CentOS 7 by following the "OpenStack Installation Guide" provided on OpenStack's website and have completed installation of below components - Keystone - Glance - Placement - Nova - Networking After installing each component, I have also verified its operation and it seems to be working successfully. However, there is a problem I am facing after installing "Horizon" for dashboard services. In order to verify its operation, one is supposed to browse to the URL "http://controller/horizon/" where "controller" could be the hostname or IP address of the Node which is running the controller Browsing to the above URL throws an error "The requested URL /horizon was not found on this server." In the apache access logs, I see below error: " 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /horizon/ HTTP/1.1" 404 206 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36" 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /favicon.ico HTTP/1.1" 404 209 "http://3.21.90.63/horizon/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36" " If I browse to the URL "http://controller/", the default "Testing123" page of apache gets loaded. Please assist. Thanks, Rahul -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Sat Mar 7 15:15:55 2020 From: donny at fortnebula.com (Donny Davis) Date: Sat, 7 Mar 2020 10:15:55 -0500 Subject: [Horizon] Unable to access the dashboard page In-Reply-To: References: Message-ID: Try /dashboard Donny Davis c: 805 814 6800 On Sat, Mar 7, 2020, 8:33 AM Rahul Sharma wrote: > Hi, > > I am trying to get OpenStack (Train) up and running on CentOS 7 by > following the "OpenStack Installation Guide" provided on OpenStack's > website and have completed installation of below components > > > - Keystone > - Glance > - Placement > - Nova > - Networking > > > After installing each component, I have also verified its operation and it > seems to be working successfully. > > However, there is a problem I am facing after installing "Horizon" for > dashboard services. In order to verify its operation, one is supposed to > browse to the URL "http://controller/horizon/" where "controller" could > be the hostname or IP address of the Node which is running the controller > > Browsing to the above URL throws an error "The requested URL /horizon was > not found on this server." > > In the apache access logs, I see below error: > > " > 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /horizon/ HTTP/1.1" 404 > 206 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 > (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36" > 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /favicon.ico HTTP/1.1" > 404 209 "http://3.21.90.63/horizon/" "Mozilla/5.0 (Windows NT 6.1; Win64; > x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 > Safari/537.36" > " > > If I browse to the URL "http://controller/", the default "Testing123" > page of apache gets loaded. > > > Please assist. 
>
> Thanks,
> Rahul
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rsharma1818 at outlook.com Sat Mar 7 17:41:31 2020
From: rsharma1818 at outlook.com (Rahul Sharma)
Date: Sat, 7 Mar 2020 17:41:31 +0000
Subject: [Horizon] Unable to access the dashboard page
In-Reply-To: References: , Message-ID:

Hi,

Tried with /dashboard but it ain't working.

Getting the error "The requested URL /auth/login/ was not found on this server." in the browser (Error 404).

________________________________
From: Donny Davis
Sent: Saturday, March 7, 2020 8:45 PM
To: Rahul Sharma
Cc: OpenStack Discuss
Subject: Re: [Horizon] Unable to access the dashboard page

Try /dashboard

Donny Davis
c: 805 814 6800

On Sat, Mar 7, 2020, 8:33 AM Rahul Sharma > wrote:

Hi,

I am trying to get OpenStack (Train) up and running on CentOS 7 by following the "OpenStack Installation Guide" provided on OpenStack's website and have completed installation of below components

- Keystone
- Glance
- Placement
- Nova
- Networking

After installing each component, I have also verified its operation and it seems to be working successfully.

However, there is a problem I am facing after installing "Horizon" for dashboard services. In order to verify its operation, one is supposed to browse to the URL "http://controller/horizon/" where "controller" could be the hostname or IP address of the Node which is running the controller

Browsing to the above URL throws an error "The requested URL /horizon was not found on this server."

In the apache access logs, I see below error:

"
103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /horizon/ HTTP/1.1" 404 206 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36"
103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /favicon.ico HTTP/1.1" 404 209 "http://3.21.90.63/horizon/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36"
"

If I browse to the URL "http://controller/", the default "Testing123" page of apache gets loaded.

Please assist.

Thanks,
Rahul
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com Sat Mar 7 17:45:17 2020
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Sat, 7 Mar 2020 18:45:17 +0100
Subject: [queens] [neutron]security_groups_log]
Message-ID:

Hello, I have a Queens installation based on CentOS 7.

Before implementing security group logs, I had the following configuration in /etc/neutron/plugins/ml2/openvswitch_agent.ini:

firewall_driver = iptables_hybrid

To enable security group logs I had to change it to:

firewall_driver = openvswitch

It seems to work and security logs are logged.
After restarting the KVM nodes and controllers, virtual machines do not live migrate.
Could the firewall driver change be the cause of my problem?
Is firewall_driver = openvswitch mandatory for security group logs?

Please, any help?

I cannot reproduce the problem by rebooting all my nodes. I rebooted them because I had to move them from one rack to another.

Ignazio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ignaziocassano at gmail.com Sat Mar 7 17:48:44 2020
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Sat, 7 Mar 2020 18:48:44 +0100
Subject: [queens] [neutron]security_groups_log] issues
Message-ID:

Hello, I have a Queens installation based on CentOS 7.
Before implementing security group logs, I had the following configuration in /etc/neutron/plugins/ml2/openvswitch_agent.ini:

firewall_driver = iptables_hybrid

To enable security group logs I had to change it to:

firewall_driver = openvswitch

It seems to work and security logs are logged.
After restarting the KVM nodes and controllers, virtual machines do not live migrate.
I rolled back my configuration, without the security group logs feature, and now all works fine.
Could the firewall driver change be the cause of my problem?
Is firewall_driver = openvswitch mandatory for security group logs?

Please, any help?

I cannot reproduce the problem by rebooting all my nodes. I rebooted them because I had to move them from one rack to another.

Ignazio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From skaplons at redhat.com Sat Mar 7 18:02:25 2020
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Sat, 7 Mar 2020 19:02:25 +0100
Subject: [queens] [neutron]security_groups_log]
In-Reply-To: References: Message-ID: <2B63B1E5-F914-41C5-9A37-C2E0E96665A5@redhat.com>

Hi,

> On 7 Mar 2020, at 18:45, Ignazio Cassano wrote:
>
> Hello, I have queens installation based on centos7.
>
> Before implementing security groups logs, I had the following configuration in
> /etc/neutron/plugins/ml2/openvswitch_agent.ini:
>
> firewall_driver = iptables_hybrid
>
> Enabling security groups log I had to change it in :
>
> firewall_driver = openvswitch
>
> It seems to work end security logs are logged .
> After restarting kvm nodes and controllers, virtual machines do not live migrate.
> The firewall driver change could be the cause of my problem ?

Yes, in Queens there wasn't yet a migration path between the various firewall drivers, so that can be an issue. It should work fine since the Rocky release with the “multiple port bindings” feature.
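(For context, the Queens security group logging setup being discussed here is roughly the following sketch; option names follow the neutron logging guide, and the log file path is illustrative:

    # /etc/neutron/neutron.conf (server side)
    [DEFAULT]
    # append "log" to whatever service plugins are already configured
    service_plugins = router,log

    # /etc/neutron/plugins/ml2/openvswitch_agent.ini (agent side)
    [agent]
    extensions = log

    [securitygroup]
    # logging requires the native OVS firewall, not iptables_hybrid
    firewall_driver = openvswitch

    [network_log]
    # illustrative path for accepted/dropped packet events
    local_output_log_base = /var/log/neutron/security-groups.log

The log objects themselves are then created per security group with "openstack network log create --resource-type security_group ...".)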
> > > firewall_driver = openvswitch is mandatory for security groups log ? > > Yes, logging isn’t supported by iptables_hybrid driver. > > > > > Please, any help ? > > > > > > I cannot reproduce the problem rebooting all my nodes. > > I rebooted them because I hat to transfer from a rack to another. > > > > Ignazio > > > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Mar 7 20:45:52 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 7 Mar 2020 21:45:52 +0100 Subject: [queens] [neutron]security_groups_log] In-Reply-To: <2B63B1E5-F914-41C5-9A37-C2E0E96665A5@redhat.com> References: <2B63B1E5-F914-41C5-9A37-C2E0E96665A5@redhat.com> Message-ID: Slawek, forgive me if I take advantage of your patience. Before rebooting nodes, I modified nodes and controllers with security groups logs, modifying neutron.conf, ml2 and openvswitch agents, moving from iptables_hybrid to openvswitch firewall etc etc..... I only restarted neutron components and before rebooting nodes and controllers, I saw security groups logs and I was able to migrate instances. Why after rebooting not ? And, please, what about “multiple port bindings” ? Thanks Ignazio Il giorno sab 7 mar 2020 alle ore 19:02 Slawek Kaplonski < skaplons at redhat.com> ha scritto: > Hi, > > > On 7 Mar 2020, at 18:45, Ignazio Cassano > wrote: > > > > Hello, I have queens installation based on centos7. > > > > Before implementing security groups logs, I had the following > configuration in > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: > > > > firewall_driver = iptables_hybrid > > > > > > Enabling security groups log I had to change it in : > > > > firewall_driver = openvswitch > > > > > > It seems to work end security logs are logged . > > After restarting kvm nodes and controllers, virtual machines do not live > migrate. > > The firewall driver change could be the cause of my problem ? > > Yes, In queens there wasn’t yet migration between various firewall drivers > so that can be an issue. It should works fine since Rocky release with > “multiple port bindings” feature. > > > firewall_driver = openvswitch is mandatory for security groups log ? > > Yes, logging isn’t supported by iptables_hybrid driver. > > > > > Please, any help ? > > > > > > I cannot reproduce the problem rebooting all my nodes. > > I rebooted them because I hat to transfer from a rack to another. > > > > Ignazio > > > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sun Mar 8 03:00:40 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 07 Mar 2020 21:00:40 -0600 Subject: [qa][cinder][devstack] proposed governance changes for some devstack plugins In-Reply-To: <69fcb574-1ae1-08cb-e8e2-8bd08bef80f4@gmail.com> References: <170a7b5430a.1155e6495437733.1575830632912803163@ghanshyammann.com> <69fcb574-1ae1-08cb-e8e2-8bd08bef80f4@gmail.com> Message-ID: <170b81663ff.c070e14d541882.3735232197665233208@ghanshyammann.com> ---- On Thu, 05 Mar 2020 13:49:11 -0600 Brian Rosmaita wrote ---- > On 3/4/20 5:40 PM, Ghanshyam Mann wrote: > > ---- On Wed, 04 Mar 2020 13:53:00 -0600 Brian Rosmaita wrote ---- > > > Hello QA team and devstack-plugin-ceph-core people, > > > > > > The Cinder team has some proposals we'd like to float. > > > > > > 1. 
The Cinder team is interested in becoming more active in the > > > maintenance of openstack/devstack-plugin-ceph [0]. Currently, the > > > devstack-plugin-ceph-core is > > > https://review.opendev.org/#/admin/groups/1196,members > > > The cinder-core is already represented by Eric and Sean; we'd like to > > > replace them by including the cinder-core group. > > > > +1. This is good diea and make sense, I will do the change. > > Great, thanks! Done. > > > > > > > 2. The Cinder team is interested in becoming more active in the > > > maintenance of x/devstack-plugin-nfs [1]. Currently, the > > > devstack-plugin-nfs-core is > > > https://review.opendev.org/#/admin/groups/1330,members > > > It's already 75% cinder-core members; we'd like to replace the > > > individual members with the cinder-core group. We also propose that > > > devstack-core be added as an included group. > > > > > > 3. The Cinder team is interested in implementing a new devstack plugin: > > > openstack/devstack-plugin-open-cas > > > This will enable thorough testing of a new feature [2] being introduced > > > as experimental in Ussuri and expected to be finalized in Victoria. Our > > > plan would be to make both cinder-core and devstack-core included groups > > > for the gerrit group governing the new plugin. > > > > +1. You want this under Cinder governance or under QA ? > > I think it makes sense for these to be under QA governance -- QA would > own the repo with both QA and Cinder having permission to make changes. Sure. Please let me know once it is ready or propose it under QA and I will review that. > > > > > > > 4. This is a minor point, but can the devstack-plugin-nfs repo be moved > > > back into the 'openstack' namespace? > > > > If this is usable plugin for nfs testing (I am not aware if we have any other) then > > it make sense to bring it to openstack governance. > > Same question here, do you want to put this under Cinder governance or QA. > > Same here, I think QA should "own" the repo, but Cinder will have > permission to make changes there. Sounds good. I proposed the patches: https://review.opendev.org/#/q/topic:devstack-plugin-nfs+(status:open+OR+status:merged) -gmann > > > > > Those plugins under QA governance also ok for me with your proposal of calloborative maintainance by > > devstack-core and cinder-core. > > > > -gmann > > Thanks for the quick response! > > > > > > > Let us know which of these proposals you find acceptable. > > > > > > > > > [0] https://opendev.org/openstack/devstack-plugin-ceph > > > [1] https://opendev.org/x/devstack-plugin-nfs > > > [2] https://blueprints.launchpad.net/cinder/+spec/support-volume-local-cache > > > > > > > > > > > From donny at fortnebula.com Sun Mar 8 12:42:53 2020 From: donny at fortnebula.com (Donny Davis) Date: Sun, 8 Mar 2020 08:42:53 -0400 Subject: [Horizon] Unable to access the dashboard page In-Reply-To: References: Message-ID: You may also want to check this setting https://docs.openstack.org/horizon/train/configuration/settings.html#allowed-hosts Donny Davis c: 805 814 6800 On Sat, Mar 7, 2020, 12:41 PM Rahul Sharma wrote: > Hi, > > Tried with /dashboard but it ain't working > > Getting error "The requested URL /auth/login/ was not found on this > server." 
in the browser (Error 404)
>
> ------------------------------
> *From:* Donny Davis
> *Sent:* Saturday, March 7, 2020 8:45 PM
> *To:* Rahul Sharma
> *Cc:* OpenStack Discuss
> *Subject:* Re: [Horizon] Unable to access the dashboard page
>
> Try /dashboard
>
> Donny Davis
> c: 805 814 6800
>
> On Sat, Mar 7, 2020, 8:33 AM Rahul Sharma wrote:
>
> Hi,
>
> I am trying to get OpenStack (Train) up and running on CentOS 7 by
> following the "OpenStack Installation Guide" provided on OpenStack's
> website and have completed installation of below components
>
> - Keystone
> - Glance
> - Placement
> - Nova
> - Networking
>
> After installing each component, I have also verified its operation and it
> seems to be working successfully.
>
> However, there is a problem I am facing after installing "Horizon" for
> dashboard services. In order to verify its operation, one is supposed to
> browse to the URL "http://controller/horizon/" where "controller" could
> be the hostname or IP address of the Node which is running the controller
>
> Browsing to the above URL throws an error "The requested URL /horizon was
> not found on this server."
>
> In the apache access logs, I see below error:
>
> 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /horizon/ HTTP/1.1" 404
> 206 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36
> (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36"
> 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /favicon.ico HTTP/1.1"
> 404 209 "http://3.21.90.63/horizon/" "Mozilla/5.0 (Windows NT 6.1; Win64;
> x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132
> Safari/537.36"
>
> If I browse to the URL "http://controller/", the default "Testing123"
> page of apache gets loaded.
>
> Please assist.
>
> Thanks,
> Rahul

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From skaplons at redhat.com Sun Mar 8 14:07:47 2020
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Sun, 8 Mar 2020 15:07:47 +0100
Subject: [queens] [neutron]security_groups_log]
In-Reply-To: References: <2B63B1E5-F914-41C5-9A37-C2E0E96665A5@redhat.com> Message-ID:

Hi,

> On 7 Mar 2020, at 21:45, Ignazio Cassano wrote:
>
> Slawek, forgive me if I take advantage of your patience.
>
> Before rebooting nodes, I modified nodes and controllers with security groups logs, modifying neutron.conf, ml2 and openvswitch agents, moving from iptables_hybrid to openvswitch firewall etc etc.....
> I only restarted neutron components and before rebooting nodes and controllers, I saw security groups logs and I was able to migrate instances.
> Why after rebooting not ?

To be honest, I don't know why it is like that. You will probably need to give more info here: what errors do you see exactly during the migration?

> And, please, what about “multiple port bindings” ?

The spec for this feature is at https://specs.openstack.org/openstack/neutron-specs/specs/ocata/portbinding_information_for_nova.html - you should find more details about it there.

>
> Thanks
> Ignazio
>
> On Sat, 7 Mar 2020 at 19:02, Slawek Kaplonski wrote:
> Hi,
>
> > > On 7 Mar 2020, at 18:45, Ignazio Cassano wrote:
> > >
> > > Hello, I have queens installation based on centos7.
> > > > Before implementing security groups logs, I had the following configuration in > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: > > > > firewall_driver = iptables_hybrid > > > > > > Enabling security groups log I had to change it in : > > > > firewall_driver = openvswitch > > > > > > It seems to work end security logs are logged . > > After restarting kvm nodes and controllers, virtual machines do not live migrate. > > The firewall driver change could be the cause of my problem ? > > Yes, In queens there wasn’t yet migration between various firewall drivers so that can be an issue. It should works fine since Rocky release with “multiple port bindings” feature. > > > firewall_driver = openvswitch is mandatory for security groups log ? > > Yes, logging isn’t supported by iptables_hybrid driver. > > > > > Please, any help ? > > > > > > I cannot reproduce the problem rebooting all my nodes. > > I rebooted them because I hat to transfer from a rack to another. > > > > Ignazio > > > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > — Slawek Kaplonski Senior software engineer Red Hat From amy at demarco.com Sun Mar 8 14:32:57 2020 From: amy at demarco.com (Amy) Date: Sun, 8 Mar 2020 09:32:57 -0500 Subject: [Horizon] Unable to access the dashboard page In-Reply-To: References: Message-ID: Another thing to check is the indentation in your config file. Amy (spotz) > On Mar 8, 2020, at 7:46 AM, Donny Davis wrote: > >  > You may also want to check this setting > https://docs.openstack.org/horizon/train/configuration/settings.html#allowed-hosts > > Donny Davis > c: 805 814 6800 > >> On Sat, Mar 7, 2020, 12:41 PM Rahul Sharma wrote: >> Hi, >> >> Tried with /dashboard but it ain't working >> >> Getting error "The requested URL /auth/login/ was not found on this server." in the browser (Error 404) >> >> >> From: Donny Davis >> Sent: Saturday, March 7, 2020 8:45 PM >> To: Rahul Sharma >> Cc: OpenStack Discuss >> Subject: Re: [Horizon] Unable to access the dashboard page >> >> Try /dashboard >> >> Donny Davis >> c: 805 814 6800 >> >> On Sat, Mar 7, 2020, 8:33 AM Rahul Sharma wrote: >> Hi, >> >> I am trying to get OpenStack (Train) up and running on CentOS 7 by following the "OpenStack Installation Guide" provided on OpenStack's website and have completed installation of below components >> >> >> - Keystone >> - Glance >> - Placement >> - Nova >> - Networking >> >> >> After installing each component, I have also verified its operation and it seems to be working successfully. >> >> However, there is a problem I am facing after installing "Horizon" for dashboard services. In order to verify its operation, one is supposed to browse to the URL "http://controller/horizon/" where "controller" could be the hostname or IP address of the Node which is running the controller >> >> Browsing to the above URL throws an error "The requested URL /horizon was not found on this server." >> >> In the apache access logs, I see below error: >> >> " >> 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /horizon/ HTTP/1.1" 404 206 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36" >> 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /favicon.ico HTTP/1.1" 404 209 "http://3.21.90.63/horizon/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36" >> " >> >> If I browse to the URL "http://controller/", the default "Testing123" page of apache gets loaded. 
>> >> >> Please assist.
>> >>
>> >> Thanks,
>> >> Rahul
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ignaziocassano at gmail.com Sun Mar 8 16:17:53 2020
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Sun, 8 Mar 2020 17:17:53 +0100
Subject: [queens] [neutron]security_groups_log]
In-Reply-To: References: <2B63B1E5-F914-41C5-9A37-C2E0E96665A5@redhat.com> Message-ID:

Hi, I think the problem is the migration from the iptables_hybrid to the openvswitch firewall:
https://docs.openstack.org/neutron/rocky/contributor/internals/openvswitch_firewall.html

Thanks
Ignazio

On Sun, 8 Mar 2020, 15:07 Slawek Kaplonski wrote:
> Hi,
>
> > On 7 Mar 2020, at 21:45, Ignazio Cassano wrote:
> >
> > Slawek, forgive me if I take advantage of your patience.
> >
> > Before rebooting nodes, I modified nodes and controllers with security
> > groups logs, modifying neutron.conf, ml2 and openvswitch agents, moving
> > from iptables_hybrid to openvswitch firewall etc etc.....
> > I only restarted neutron components and before rebooting nodes and
> > controllers, I saw security groups logs and I was able to migrate instances.
> > Why after rebooting not ?
>
> To be honest I don’t know why it’s like that. You probably will need to
> give more info there, what errors You have exactly during the migration.
>
> > And, please, what about “multiple port bindings” ?
>
> Spec for this feature is at
> https://specs.openstack.org/openstack/neutron-specs/specs/ocata/portbinding_information_for_nova.html
> - You should find more details about it there.
>
> > Thanks
> > Ignazio
> >
> > On Sat, 7 Mar 2020 at 19:02, Slawek Kaplonski <skaplons at redhat.com> wrote:
> > Hi,
> >
> > > On 7 Mar 2020, at 18:45, Ignazio Cassano wrote:
> > >
> > > Hello, I have queens installation based on centos7.
> > >
> > > Before implementing security groups logs, I had the following
> > > configuration in /etc/neutron/plugins/ml2/openvswitch_agent.ini:
> > >
> > > firewall_driver = iptables_hybrid
> > >
> > > Enabling security groups log I had to change it in :
> > >
> > > firewall_driver = openvswitch
> > >
> > > It seems to work end security logs are logged .
> > > After restarting kvm nodes and controllers, virtual machines do not
> > > live migrate.
> > > The firewall driver change could be the cause of my problem ?
> >
> > Yes, In queens there wasn’t yet migration between various firewall
> > drivers so that can be an issue. It should works fine since Rocky release
> > with “multiple port bindings” feature.
> >
> > > firewall_driver = openvswitch is mandatory for security groups log ?
> >
> > Yes, logging isn’t supported by iptables_hybrid driver.
> >
> > > Please, any help ?
> > >
> > > I cannot reproduce the problem rebooting all my nodes.
> > > I rebooted them because I hat to transfer from a rack to another.
> > >
> > > Ignazio
> >
> > —
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From alfredo.deluca at gmail.com Sun Mar 8 21:52:10 2020
From: alfredo.deluca at gmail.com (Alfredo De Luca)
Date: Sun, 8 Mar 2020 22:52:10 +0100
Subject: [CINDER] Snapshots export
In-Reply-To: References: <20200304155850.b4ydu4vfxthih7we@localhost> Message-ID:

Hi Sean. Sorry for the late reply.
What we want to do is back up snapshots in case of a complete compute loss, as part of a disaster recovery plan.
So after recreating the environment we can restore snapshots and start the VMs again. Cheers On Wed, Mar 4, 2020 at 10:14 PM Sean McGinnis wrote: > On 3/4/20 9:58 AM, Gorka Eguileor wrote: > > On 03/03, Alfredo De Luca wrote: > >> Hi all. > >> We have our env with Openstack (Train) and cinder with CEPH (nautilus) > >> backend. > >> We are creating automatic volumes snapshots and now we'd like to export > >> them as a backup/restore plan. After exporting the snapshots we will use > >> Acronis as backup tool. > >> > >> I couldn't find the right steps/commands to exports the snapshots. > >> Any info? > >> Cheers > >> > >> -- > >> *Alfredo* > > Hi Alfredo, > > > > What kind of backup/restore plan do you have planned? > > > > Because snapshots are not meant to be used in a Disaster Recovery > > backup/restore plan, so the only thing available are the manage/unmanage > > commands. > > > > These commands are meant to add an existing volume/snapshots into Cinder > > together, not to unmanage/manage them independently. > > > > For example, you wouldn't be able to manage a snapshot if the volume is > > not already managed. Also unmanaging the snapshot would block the > > deletion of the RBD volume itself. > > > > Cheers, > > Gorka. > > If the intent is to use the snapshots as a source to backup the volume > data, leaving the actual volume attached and IO running but still > getting a "static" view of the code, then you would need to create a > volume from the chosen snapshot, mount that volume somewhere that is > accessible to your backup software, perform the copy of the data, then > delete the volume when complete. > > I haven't used Acronis myself, but the issue for some backup software > could be that the volume it is backing up from is going to be different > every time. Though you could make sure it is mounted at the same place > so the backup software at least *thinks* it's backing up the same thing. > > Then restoring the data will likely require some manual intervention, > but that's pretty much always the case when something goes wrong. I > would just recommend you test the full disaster recovery scenario to > make sure you have that figured out and working right before you > actually need it. > > Sean > > > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Sun Mar 8 22:20:54 2020 From: donny at fortnebula.com (Donny Davis) Date: Sun, 8 Mar 2020 18:20:54 -0400 Subject: [CINDER] Snapshots export In-Reply-To: References: <20200304155850.b4ydu4vfxthih7we@localhost> Message-ID: On Sun, Mar 8, 2020, 5:55 PM Alfredo De Luca wrote: > Hi Sean. Sorry for the late reply. > What we want to do is backing up snapshots in case of a complete compute > lost of as a plan for disaster recovery. > So after recreating the environment we can restore snapshots and start the > VMs again. > > Cheers > > > On Wed, Mar 4, 2020 at 10:14 PM Sean McGinnis > wrote: > >> On 3/4/20 9:58 AM, Gorka Eguileor wrote: >> > On 03/03, Alfredo De Luca wrote: >> >> Hi all. >> >> We have our env with Openstack (Train) and cinder with CEPH (nautilus) >> >> backend. >> >> We are creating automatic volumes snapshots and now we'd like to export >> >> them as a backup/restore plan. After exporting the snapshots we will >> use >> >> Acronis as backup tool. >> >> >> >> I couldn't find the right steps/commands to exports the snapshots. >> >> Any info? 
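[For readers looking for the concrete commands behind the workflow Sean describes above, a rough shell sketch; the volume, snapshot, and server names are hypothetical placeholders, and flags may vary slightly per release:

# clone the chosen snapshot into a temporary volume
openstack volume create --snapshot nightly-snap backup-staging

# attach it to a utility server that the backup software can reach
openstack server add volume backup-host backup-staging

# ... mount the new device on backup-host and run the Acronis (or other) job ...

# detach and delete the staging volume once the copy is done
openstack server remove volume backup-host backup-staging
openstack volume delete backup-staging

Mounting the staging volume at the same path each time helps backup tools that expect a stable source, as Sean notes.]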
>> >> Cheers >> >> >> >> -- >> >> *Alfredo* >> > Hi Alfredo, >> > >> > What kind of backup/restore plan do you have planned? >> > >> > Because snapshots are not meant to be used in a Disaster Recovery >> > backup/restore plan, so the only thing available are the manage/unmanage >> > commands. >> > >> > These commands are meant to add an existing volume/snapshots into Cinder >> > together, not to unmanage/manage them independently. >> > >> > For example, you wouldn't be able to manage a snapshot if the volume is >> > not already managed. Also unmanaging the snapshot would block the >> > deletion of the RBD volume itself. >> > >> > Cheers, >> > Gorka. >> >> If the intent is to use the snapshots as a source to backup the volume >> data, leaving the actual volume attached and IO running but still >> getting a "static" view of the code, then you would need to create a >> volume from the chosen snapshot, mount that volume somewhere that is >> accessible to your backup software, perform the copy of the data, then >> delete the volume when complete. >> >> I haven't used Acronis myself, but the issue for some backup software >> could be that the volume it is backing up from is going to be different >> every time. Though you could make sure it is mounted at the same place >> so the backup software at least *thinks* it's backing up the same thing. >> >> Then restoring the data will likely require some manual intervention, >> but that's pretty much always the case when something goes wrong. I >> would just recommend you test the full disaster recovery scenario to >> make sure you have that figured out and working right before you >> actually need it. >> >> Sean >> >> >> > > -- > *Alfredo* > > Is there a reason not to use the cinder backup feature? This function works for me backing up ceph volumes to swift. Once the backup is in swift it's very easy to pull it down to replicate it somewhere else. There are also other backup targets using the built in provider. It's worth checking out. Donny Davis > c: 805 814 6800 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Mon Mar 9 03:00:32 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 9 Mar 2020 12:00:32 +0900 Subject: [Horizon] Unable to access the dashboard page In-Reply-To: References: Message-ID: I think you are hitting https://bugs.launchpad.net/horizon/+bug/1853651. The bug says WEBROOT needs to be configured. It was reported against the horizon installation guide on RHEL/CentOS, but I believe it is a bug on CentOS packaging as a package should work with the default config provided by the package. On Sun, Mar 8, 2020 at 2:45 AM Rahul Sharma wrote: > > Hi, > > Tried with /dashboard but it ain't working > > Getting error "The requested URL /auth/login/ was not found on this server." 
in the browser (Error 404)
>
> ________________________________
> From: Donny Davis
> Sent: Saturday, March 7, 2020 8:45 PM
> To: Rahul Sharma
> Cc: OpenStack Discuss
> Subject: Re: [Horizon] Unable to access the dashboard page
>
> Try /dashboard
>
> Donny Davis
> c: 805 814 6800
>
> On Sat, Mar 7, 2020, 8:33 AM Rahul Sharma wrote:
>
> Hi,
>
> I am trying to get OpenStack (Train) up and running on CentOS 7 by following the "OpenStack Installation Guide" provided on OpenStack's website and have completed installation of below components
>
> - Keystone
> - Glance
> - Placement
> - Nova
> - Networking
>
> After installing each component, I have also verified its operation and it seems to be working successfully.
>
> However, there is a problem I am facing after installing "Horizon" for dashboard services. In order to verify its operation, one is supposed to browse to the URL "http://controller/horizon/" where "controller" could be the hostname or IP address of the Node which is running the controller
>
> Browsing to the above URL throws an error "The requested URL /horizon was not found on this server."
>
> In the apache access logs, I see below error:
>
> 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /horizon/ HTTP/1.1" 404 206 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36"
> 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /favicon.ico HTTP/1.1" 404 209 "http://3.21.90.63/horizon/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36"
>
> If I browse to the URL "http://controller/", the default "Testing123" page of apache gets loaded.
>
> Please assist.
>
> Thanks,
> Rahul

From jean-philippe at evrard.me Mon Mar 9 07:45:24 2020
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Mon, 09 Mar 2020 08:45:24 +0100
Subject: [tc] March meeting
Message-ID: <26f010de7fedd1125073c0d7b8221f9cb86988c5.camel@evrard.me>

Hello,

Here are the minutes for the March meeting [1]. There are a few action points for you, please have a look!

Regards,
Jean-Philippe Evrard (evrardjp)

[1]: http://eavesdrop.openstack.org/meetings/tc/2020/tc.2020-03-05-14.00.html

From jean-philippe at evrard.me Mon Mar 9 07:54:01 2020
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Mon, 09 Mar 2020 08:54:01 +0100
Subject: [tc] April meeting
Message-ID: <8ac6572483e17fb25a67f2858e69f4117fa8d624.camel@evrard.me>

Hello everyone,

It would be nice if you could update the agenda on the wiki [1] for the April meeting, happening on April 2nd.

Thank you in advance,
Jean-Philippe Evrard (evrardjp).

[1]: https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

From jean-philippe at evrard.me Mon Mar 9 08:58:13 2020
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Mon, 09 Mar 2020 09:58:13 +0100
Subject: [all][tc] What happened in OpenStack Governance recently
Message-ID:

Hello,

It has been a while since the last update on "what happened in governance recently?" (the previous one was in February). So here is a summary for you!

We have had two meetings since the last community update, in February [1] and March [2].

We have established our first (business-focused) upstream investment opportunities for 2020, which currently consist of:
- Goal champions
- Consistent and secure policy defaults
- QA developers
You can see the latest version in [3].

The timing for the next PTL and TC elections is now determined [4].
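[Pulling together the suggestions from the Horizon 404 thread above (Donny's ALLOWED_HOSTS pointer, Amy's indentation note, and the WEBROOT packaging bug Akihiro references), a sketch of the relevant /etc/openstack-dashboard/local_settings entries on CentOS; the exact values are assumptions to adapt, not taken from the bug report:

# CentOS packages serve Horizon under /dashboard, so WEBROOT must match
WEBROOT = '/dashboard/'
LOGIN_URL = WEBROOT + 'auth/login/'
LOGOUT_URL = WEBROOT + 'auth/logout/'

# list the hostnames/IPs used to reach the dashboard ('*' only for a lab)
ALLOWED_HOSTS = ['controller', '3.21.90.63']

After editing, restart the web stack, e.g.: systemctl restart httpd memcached]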
We've introduced the ideas framework, allowing you to propose, follow, and track large-scale changes to OpenStack, with their conversation history [5].

We are starting to split OpenDev out of OpenStack-infra.

Our next release names are Victoria and Wallaby.

In terms of community goals [6], we've changed the U goal for documentation, so please have a look if you haven't already [7][8], and/or talk to the goal champion Kendall (diablo_rojo). The 'Switch legacy Zuul jobs to native' goal has been selected for Victoria. Please talk to its champion, Luigi Toscano (tosky)!

On the projects' side, we've clarified the guidelines to drop official project teams [9]. We want to retire teams more actively if necessary, to allow us to increase our focus. Some projects have changed:
- networking-ovn won't get releases anymore, as it's merged back into neutron in U [10]
- neutron-lbaas was retired in Ussuri [11]

Have a good day to you all!

Regards,
Jean-Philippe Evrard (evrardjp)

[1]: http://eavesdrop.openstack.org/meetings/tc/2020/tc.2020-02-06-14.00.html
[2]: http://eavesdrop.openstack.org/meetings/tc/2020/tc.2020-03-05-14.00.html
[3]: https://governance.openstack.org/tc/reference/upstream-investment-opportunities/index.html
[4]: https://review.opendev.org/#/c/708470/3/configuration.yaml
[5]: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012847.html
[6]: https://governance.openstack.org/tc/goals/selected/index.html
[7]: https://review.opendev.org/#/c/708672/
[8]: https://review.opendev.org/#/c/709617/
[9]: https://review.opendev.org/#/c/707421/
[10]: https://review.opendev.org/#/c/705781/
[11]: https://review.opendev.org/#/c/705780/

From balazs.gibizer at est.tech Mon Mar 9 08:59:00 2020
From: balazs.gibizer at est.tech (Balázs Gibizer)
Date: Mon, 09 Mar 2020 09:59:00 +0100
Subject: [nova] US meeting slot
Message-ID: <1583744340.12170.17@est.tech>

Hi,

Nova has alternate meeting slots on Thursdays to try to cover contributors from different time zones.
* 14:00 UTC
* 21:00 UTC

As I'm taking over the PTL role from Eric, I need to figure out how to run the nova meetings. I cannot really run the 21:00 UTC slot as it is pretty late for me. (I will run the 14:00 UTC slot). I see different options:

a) Somebody from the US side of the globe volunteers to run the 21:00 UTC slot. Please speak up if you would like to run it. I can help you with agenda refresh and technicalities if needed.

b) Have only one meeting time, and move that to 16:00 UTC. In this case I will be able to run it most of the weeks.

c) Do not have a dedicated meeting slot but switch to office hours. Here we also need to find a time slot. I think 16:00 UTC could work there as well.

Please share your view! Any other proposal is very welcome.

Cheers,
gibi

From Cyrille.CARTIER at antemeta.fr Mon Mar 9 09:00:18 2020
From: Cyrille.CARTIER at antemeta.fr (Cyrille CARTIER)
Date: Mon, 9 Mar 2020 09:00:18 +0000
Subject: [CINDER] Snapshots export
In-Reply-To: References: <20200304155850.b4ydu4vfxthih7we@localhost> Message-ID:

Hi Alfredo,

In addition to the cinder backup feature, you may try the Freezer project. With Freezer, you'll be able to back up to swift or to a remote storage.

Cheers,
Cyrille

From: Donny Davis [mailto:donny at fortnebula.com]
Sent: Sunday, 8 March 2020 23:21
To: Alfredo De Luca
Cc: Sean McGinnis ; openstack-discuss
Subject: Re: [CINDER] Snapshots export

On Sun, Mar 8, 2020, 5:55 PM Alfredo De Luca > wrote:
Hi Sean. Sorry for the late reply.
What we want to do is backing up snapshots in case of a complete compute lost of as a plan for disaster recovery.
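[For the built-in cinder-backup path Donny suggested and Cyrille seconds above, a minimal sketch; the names are placeholders, and this assumes the cinder-backup service is deployed with a backup driver (swift or another target) configured in cinder.conf:

# back up a volume; use --snapshot to back up from an existing snapshot,
# or --force for an in-use volume
openstack volume backup create --name vm1-root-backup vm1-root

# later, e.g. after rebuilding the environment, restore it
openstack volume backup list
openstack volume backup restore <BACKUP_ID> <VOLUME_ID>]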
What we want to do is backing up snapshots in case of a complete compute lost of as a plan for disaster recovery. So after recreating the environment we can restore snapshots and start the VMs again. Cheers On Wed, Mar 4, 2020 at 10:14 PM Sean McGinnis > wrote: On 3/4/20 9:58 AM, Gorka Eguileor wrote: > On 03/03, Alfredo De Luca wrote: >> Hi all. >> We have our env with Openstack (Train) and cinder with CEPH (nautilus) >> backend. >> We are creating automatic volumes snapshots and now we'd like to export >> them as a backup/restore plan. After exporting the snapshots we will use >> Acronis as backup tool. >> >> I couldn't find the right steps/commands to exports the snapshots. >> Any info? >> Cheers >> >> -- >> *Alfredo* > Hi Alfredo, > > What kind of backup/restore plan do you have planned? > > Because snapshots are not meant to be used in a Disaster Recovery > backup/restore plan, so the only thing available are the manage/unmanage > commands. > > These commands are meant to add an existing volume/snapshots into Cinder > together, not to unmanage/manage them independently. > > For example, you wouldn't be able to manage a snapshot if the volume is > not already managed. Also unmanaging the snapshot would block the > deletion of the RBD volume itself. > > Cheers, > Gorka. If the intent is to use the snapshots as a source to backup the volume data, leaving the actual volume attached and IO running but still getting a "static" view of the code, then you would need to create a volume from the chosen snapshot, mount that volume somewhere that is accessible to your backup software, perform the copy of the data, then delete the volume when complete. I haven't used Acronis myself, but the issue for some backup software could be that the volume it is backing up from is going to be different every time. Though you could make sure it is mounted at the same place so the backup software at least *thinks* it's backing up the same thing. Then restoring the data will likely require some manual intervention, but that's pretty much always the case when something goes wrong. I would just recommend you test the full disaster recovery scenario to make sure you have that figured out and working right before you actually need it. Sean -- Alfredo Is there a reason not to use the cinder backup feature? This function works for me backing up ceph volumes to swift. Once the backup is in swift it's very easy to pull it down to replicate it somewhere else. There are also other backup targets using the built in provider. It's worth checking out. Donny Davis c: 805 814 6800 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Mon Mar 9 09:04:21 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 9 Mar 2020 10:04:21 +0100 Subject: FW: 2020 OSF Events & coronavirus In-Reply-To: <1583528201.853712216@emailsrvr.com> References: <1583528201.853712216@emailsrvr.com> Message-ID: Hi, It seems that the critical bit we don't know for sure is whether and to what extent the virus is affected by warmer weather. If it behaves similarly to flu, June should be safe, otherwise we will likely have to cancel it. Nothing can be reliably said about Berlin at this point... Personally, I've been a long-term proponent of virtual events for many reasons, health situation included. As much as I would miss hanging out with everyone... 
Dmitry On Fri, Mar 6, 2020 at 9:58 PM Mark Collier wrote: > I wanted to make sure everyone saw this thread on the foundation mailing > list, since I know not everyone is subscribed to both lists: > > Archive: > http://lists.openstack.org/pipermail/foundation/2020-March/002852.html > > Please join that ML thread to share feedback on this topic, or you can > reach out directly to myself or jonathan at openstack.org > > I saw first hand how we all pulled together during the Snowpenstack in > Dublin, so I know we'll once again pull together as a community to get > through this! > > Mark > > > > On Friday, March 6, 2020 12:55pm, "Mark Collier" > said: > > > Stackers, > > > > Before I get into the current plans for the OSF events in Vancouver and > Berlin, I > > wanted to say a few words in general about the virus impacting so many > people > > right now. > > > > First, I wanted to acknowledge the very difficult situation many are > facing > > because of COVID-19 (Coronavirus), across open source communities and > local > > communities in general (tech or otherwise). I also want to say to > everyone who is > > on the front lines managing events, from the full time staffers to the > volunteers, > > to the contractors and production partners, that we have some idea of > what you're > > going through and we know this is a tough time. If there's anything we > can do to > > help, please reach out. In the best of times, event organization can be > grueling > > and thankless, and so I just want to say THANK YOU to everyone who does > the > > organizing work in the communities we all care so much about. > > > > OSF 2020 EVENTS > > > > When it comes to the 2020 events OSF is managing, namely the OpenDev + > PTG in > > Vancouver June 8-11 and the Open Infrastructure Summit in Berlin October > 19-23, > > please read and bookmark this status page which we will continue to > update: > > https://www.openstack.org/events/covid-19-coronavirus-disease-updates > > > > When it comes to our community, the health of every individual is of > paramount > > concern. We have always aimed to produce events "of the community, by the > > community" and the upcoming event in Vancouver is no exception. The > OpenDev tracks > > each morning will be programmed by volunteers from the community, and > the project > > teams will be organizing their own conversations as well each afternoon > M-W, and > > all day Thursday. > > > > But the larger question is here: should the show go on? > > > > The short answer is that as of now, the Vancouver and Berlin events are > still > > scheduled to happen in June (8-11) and October (19-23), respectively. > > > > However, we are willing to cancel or approach the events in a different > way (i.e. > > virtual) if the facts indicate that is the best path, and we know the > facts are > > changing rapidly. One of the most critical inputs we need is to hear > from each of > > you. We know that many of you rely on the twice-annual events to get > together and > > make rapid progress on the software, which is one reason we are not > making any > > decisions in haste. We also know that many of you may be unable or > unwilling to > > travel in June, and that is critical information to hear as we get > closer to the > > event so that we can make the most informed decision. > > > > I also wanted to answer a FAQ by letting everyone know that if either > event is > > cancelled, event tickets and sponsorships will be fully refunded. Please > note that > > if you're making travel arrangements (e.g. 
flights, hotels) those are > outside of > > our control. > > > > So as we continue to monitor the news and listen to health experts to > make an > > informed decision on any changes to our event plans, we'd like to hear > from > > everyone in the community who has a stake in these events. Our most > pressing topic > > is of course Vanvouver, but if you have questions or concerns about the > Berlin > > plan feel free to share those as well. > > > > If you'd like to connect directly, you can always contact Executive > Director > > Jonathan Bryce (jonathan at openstack.org) or myself (mark at openstack.org). > > > > Key Links: > > - STATUS PAGE: > > https://www.openstack.org/events/covid-19-coronavirus-disease-updates > > - Vancouver OpenDev + PTG > https://www.openstack.org/events/opendev-ptg-2020/ > > - Berlin Open Infrastructure Summit: > https://www.openstack.org/summit/berlin-2020/ > > > > Key Dates for OpenDev + PTG in Vancouver: > > - Schedule will be published in early April > > - Early bird deadline is April 4 > > - Final day to sponsor will be May 4 > > - Final registration price increase will be in early May > > > > Mark Collier > > COO, OpenStack Foundation > > @sparkycollier > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Mon Mar 9 09:15:38 2020 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Mon, 9 Mar 2020 10:15:38 +0100 Subject: FW: 2020 OSF Events & coronavirus In-Reply-To: References: <1583528201.853712216@emailsrvr.com> Message-ID: You are very right Dmitriy. One thing to add: it is currently absolutely unpredictable how situation will look like in June, but we need to get travel approvals, visas, reservations already now. And at least in central Europe all companies (especially big ones) are now in the alarmed state with incredibly strengthen travel regulations, so that at least for me even to start discussion about travel is very hard. Artem ---- typed from mobile, auto-correct typos assumed ---- On Mon, 9 Mar 2020, 10:10 Dmitry Tantsur, wrote: > Hi, > > It seems that the critical bit we don't know for sure is whether and to > what extent the virus is affected by warmer weather. If it behaves > similarly to flu, June should be safe, otherwise we will likely have to > cancel it. Nothing can be reliably said about Berlin at this point... > > Personally, I've been a long-term proponent of virtual events for many > reasons, health situation included. As much as I would miss hanging out > with everyone... > > Dmitry > > On Fri, Mar 6, 2020 at 9:58 PM Mark Collier wrote: > >> I wanted to make sure everyone saw this thread on the foundation mailing >> list, since I know not everyone is subscribed to both lists: >> >> Archive: >> http://lists.openstack.org/pipermail/foundation/2020-March/002852.html >> >> Please join that ML thread to share feedback on this topic, or you can >> reach out directly to myself or jonathan at openstack.org >> >> I saw first hand how we all pulled together during the Snowpenstack in >> Dublin, so I know we'll once again pull together as a community to get >> through this! >> >> Mark >> >> >> >> On Friday, March 6, 2020 12:55pm, "Mark Collier" >> said: >> >> > Stackers, >> > >> > Before I get into the current plans for the OSF events in Vancouver and >> Berlin, I >> > wanted to say a few words in general about the virus impacting so many >> people >> > right now. 
>> > >> > First, I wanted to acknowledge the very difficult situation many are >> facing >> > because of COVID-19 (Coronavirus), across open source communities and >> local >> > communities in general (tech or otherwise). I also want to say to >> everyone who is >> > on the front lines managing events, from the full time staffers to the >> volunteers, >> > to the contractors and production partners, that we have some idea of >> what you're >> > going through and we know this is a tough time. If there's anything we >> can do to >> > help, please reach out. In the best of times, event organization can be >> grueling >> > and thankless, and so I just want to say THANK YOU to everyone who does >> the >> > organizing work in the communities we all care so much about. >> > >> > OSF 2020 EVENTS >> > >> > When it comes to the 2020 events OSF is managing, namely the OpenDev + >> PTG in >> > Vancouver June 8-11 and the Open Infrastructure Summit in Berlin >> October 19-23, >> > please read and bookmark this status page which we will continue to >> update: >> > https://www.openstack.org/events/covid-19-coronavirus-disease-updates >> > >> > When it comes to our community, the health of every individual is of >> paramount >> > concern. We have always aimed to produce events "of the community, by >> the >> > community" and the upcoming event in Vancouver is no exception. The >> OpenDev tracks >> > each morning will be programmed by volunteers from the community, and >> the project >> > teams will be organizing their own conversations as well each afternoon >> M-W, and >> > all day Thursday. >> > >> > But the larger question is here: should the show go on? >> > >> > The short answer is that as of now, the Vancouver and Berlin events are >> still >> > scheduled to happen in June (8-11) and October (19-23), respectively. >> > >> > However, we are willing to cancel or approach the events in a different >> way (i.e. >> > virtual) if the facts indicate that is the best path, and we know the >> facts are >> > changing rapidly. One of the most critical inputs we need is to hear >> from each of >> > you. We know that many of you rely on the twice-annual events to get >> together and >> > make rapid progress on the software, which is one reason we are not >> making any >> > decisions in haste. We also know that many of you may be unable or >> unwilling to >> > travel in June, and that is critical information to hear as we get >> closer to the >> > event so that we can make the most informed decision. >> > >> > I also wanted to answer a FAQ by letting everyone know that if either >> event is >> > cancelled, event tickets and sponsorships will be fully refunded. >> Please note that >> > if you're making travel arrangements (e.g. flights, hotels) those are >> outside of >> > our control. >> > >> > So as we continue to monitor the news and listen to health experts to >> make an >> > informed decision on any changes to our event plans, we'd like to hear >> from >> > everyone in the community who has a stake in these events. Our most >> pressing topic >> > is of course Vanvouver, but if you have questions or concerns about the >> Berlin >> > plan feel free to share those as well. >> > >> > If you'd like to connect directly, you can always contact Executive >> Director >> > Jonathan Bryce (jonathan at openstack.org) or myself (mark at openstack.org). 
>> > >> > Key Links: >> > - STATUS PAGE: >> > https://www.openstack.org/events/covid-19-coronavirus-disease-updates >> > - Vancouver OpenDev + PTG >> https://www.openstack.org/events/opendev-ptg-2020/ >> > - Berlin Open Infrastructure Summit: >> https://www.openstack.org/summit/berlin-2020/ >> > >> > Key Dates for OpenDev + PTG in Vancouver: >> > - Schedule will be published in early April >> > - Early bird deadline is April 4 >> > - Final day to sponsor will be May 4 >> > - Final registration price increase will be in early May >> > >> > Mark Collier >> > COO, OpenStack Foundation >> > @sparkycollier >> > >> > >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcafarel at redhat.com Mon Mar 9 09:22:44 2020 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 9 Mar 2020 10:22:44 +0100 Subject: [neutron] Propose Lajos Katona for Neutron core team In-Reply-To: <478F5503-6507-419E-88A6-24B0BFBE0BE1@redhat.com> References: <478F5503-6507-419E-88A6-24B0BFBE0BE1@redhat.com> Message-ID: Well that's only a "stable core +1", but big personal +1! On Fri, 6 Mar 2020 at 16:31, Slawek Kaplonski wrote: > Hi neutrinos, > > I would like to propose Lajos Katona (irc: lajoskatona) as a member of the > Neutron core team. > Lajos is Neutron contributor Neutron since around Queens cycle and now he > is one of the most active reviewers in the Neutron group projects. > He was one of the key contributors in cooperation with Nova and Placement > teams to deliver guaranteed minimum bandwidth feature in OpenStack. > He is very active and helpful with triaging and fixing Neutron bugs and > issues in our CI. > > During last few cycles he proved that he has wide knowledge about Neutron > code base. He is currently also a maintainer of some neutron stadium > projects which shows that he has knowledge about code base not only about > neutron but also Neutron stadium. > > The quality and number of his reviews are comparable to other members of > the Neutron core team: > https://www.stackalytics.com/?release=ussuri&module=neutron-group and are > higher every cycle :) > I think he will be great addition to our core team. > > I will keep this nomination open for a week or until all current cores > will respond. > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > > -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Mar 9 09:22:57 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 9 Mar 2020 09:22:57 +0000 Subject: FW: 2020 OSF Events & coronavirus In-Reply-To: References: <1583528201.853712216@emailsrvr.com> Message-ID: On Mon, 9 Mar 2020 at 09:05, Dmitry Tantsur wrote: > > Hi, > > It seems that the critical bit we don't know for sure is whether and to what extent the virus is affected by warmer weather. If it behaves similarly to flu, June should be safe, otherwise we will likely have to cancel it. Nothing can be reliably said about Berlin at this point... > > Personally, I've been a long-term proponent of virtual events for many reasons, health situation included. As much as I would miss hanging out with everyone... We've been doing virtual-only design sessions in Kolla for a while now. My recent proposal for a Kolla SIG is partly an effort to close the gaps with other Summit sessions (onboarding, updates, ops feedback), and bring the wider community into the virtual net. Follow along to see how it goes... 
[1] http://lists.openstack.org/pipermail/openstack-discuss/2020-March/013122.html > > Dmitry > > On Fri, Mar 6, 2020 at 9:58 PM Mark Collier wrote: >> >> I wanted to make sure everyone saw this thread on the foundation mailing list, since I know not everyone is subscribed to both lists: >> >> Archive: http://lists.openstack.org/pipermail/foundation/2020-March/002852.html >> >> Please join that ML thread to share feedback on this topic, or you can reach out directly to myself or jonathan at openstack.org >> >> I saw first hand how we all pulled together during the Snowpenstack in Dublin, so I know we'll once again pull together as a community to get through this! >> >> Mark >> >> >> >> On Friday, March 6, 2020 12:55pm, "Mark Collier" said: >> >> > Stackers, >> > >> > Before I get into the current plans for the OSF events in Vancouver and Berlin, I >> > wanted to say a few words in general about the virus impacting so many people >> > right now. >> > >> > First, I wanted to acknowledge the very difficult situation many are facing >> > because of COVID-19 (Coronavirus), across open source communities and local >> > communities in general (tech or otherwise). I also want to say to everyone who is >> > on the front lines managing events, from the full time staffers to the volunteers, >> > to the contractors and production partners, that we have some idea of what you're >> > going through and we know this is a tough time. If there's anything we can do to >> > help, please reach out. In the best of times, event organization can be grueling >> > and thankless, and so I just want to say THANK YOU to everyone who does the >> > organizing work in the communities we all care so much about. >> > >> > OSF 2020 EVENTS >> > >> > When it comes to the 2020 events OSF is managing, namely the OpenDev + PTG in >> > Vancouver June 8-11 and the Open Infrastructure Summit in Berlin October 19-23, >> > please read and bookmark this status page which we will continue to update: >> > https://www.openstack.org/events/covid-19-coronavirus-disease-updates >> > >> > When it comes to our community, the health of every individual is of paramount >> > concern. We have always aimed to produce events "of the community, by the >> > community" and the upcoming event in Vancouver is no exception. The OpenDev tracks >> > each morning will be programmed by volunteers from the community, and the project >> > teams will be organizing their own conversations as well each afternoon M-W, and >> > all day Thursday. >> > >> > But the larger question is here: should the show go on? >> > >> > The short answer is that as of now, the Vancouver and Berlin events are still >> > scheduled to happen in June (8-11) and October (19-23), respectively. >> > >> > However, we are willing to cancel or approach the events in a different way (i.e. >> > virtual) if the facts indicate that is the best path, and we know the facts are >> > changing rapidly. One of the most critical inputs we need is to hear from each of >> > you. We know that many of you rely on the twice-annual events to get together and >> > make rapid progress on the software, which is one reason we are not making any >> > decisions in haste. We also know that many of you may be unable or unwilling to >> > travel in June, and that is critical information to hear as we get closer to the >> > event so that we can make the most informed decision. 
>> > >> > I also wanted to answer a FAQ by letting everyone know that if either event is >> > cancelled, event tickets and sponsorships will be fully refunded. Please note that >> > if you're making travel arrangements (e.g. flights, hotels) those are outside of >> > our control. >> > >> > So as we continue to monitor the news and listen to health experts to make an >> > informed decision on any changes to our event plans, we'd like to hear from >> > everyone in the community who has a stake in these events. Our most pressing topic >> > is of course Vanvouver, but if you have questions or concerns about the Berlin >> > plan feel free to share those as well. >> > >> > If you'd like to connect directly, you can always contact Executive Director >> > Jonathan Bryce (jonathan at openstack.org) or myself (mark at openstack.org). >> > >> > Key Links: >> > - STATUS PAGE: >> > https://www.openstack.org/events/covid-19-coronavirus-disease-updates >> > - Vancouver OpenDev + PTG https://www.openstack.org/events/opendev-ptg-2020/ >> > - Berlin Open Infrastructure Summit: https://www.openstack.org/summit/berlin-2020/ >> > >> > Key Dates for OpenDev + PTG in Vancouver: >> > - Schedule will be published in early April >> > - Early bird deadline is April 4 >> > - Final day to sponsor will be May 4 >> > - Final registration price increase will be in early May >> > >> > Mark Collier >> > COO, OpenStack Foundation >> > @sparkycollier >> > >> > >> >> >> From balazs.gibizer at est.tech Mon Mar 9 09:23:27 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 09 Mar 2020 10:23:27 +0100 Subject: FW: 2020 OSF Events & coronavirus In-Reply-To: References: <1583528201.853712216@emailsrvr.com> Message-ID: <1583745807.12170.18@est.tech> On Mon, Mar 9, 2020 at 10:15, Artem Goncharov wrote: > You are very right Dmitriy. > > One thing to add: it is currently absolutely unpredictable how > situation will look like in June, but we need to get travel > approvals, visas, reservations already now. And at least in central > Europe all companies (especially big ones) are now in the alarmed > state with incredibly strengthen travel regulations, so that at least > for me even to start discussion about travel is very hard. I can second that. In theory I have the travel approval to go to Vancouver but at the same time all non business critical travel is suspended until further notice by my employer. So right now I can only wait for the situation to unfold. gibi > > Artem > > ---- > typed from mobile, auto-correct typos assumed > ---- > > On Mon, 9 Mar 2020, 10:10 Dmitry Tantsur, wrote: >> Hi, >> >> It seems that the critical bit we don't know for sure is whether and >> to what extent the virus is affected by warmer weather. If it >> behaves similarly to flu, June should be safe, otherwise we will >> likely have to cancel it. Nothing can be reliably said about Berlin >> at this point... >> >> Personally, I've been a long-term proponent of virtual events for >> many reasons, health situation included. As much as I would miss >> hanging out with everyone... 
>> >> Dmitry >> >> On Fri, Mar 6, 2020 at 9:58 PM Mark Collier >> wrote: >>> I wanted to make sure everyone saw this thread on the foundation >>> mailing list, since I know not everyone is subscribed to both lists: >>> >>> Archive: >>> http://lists.openstack.org/pipermail/foundation/2020-March/002852.html >>> >>> Please join that ML thread to share feedback on this topic, or you >>> can reach out directly to myself or jonathan at openstack.org >>> >>> I saw first hand how we all pulled together during the >>> Snowpenstack in Dublin, so I know we'll once again pull together as >>> a community to get through this! >>> >>> Mark >>> >>> >>> >>> On Friday, March 6, 2020 12:55pm, "Mark Collier" >>> said: >>> >>> > Stackers, >>> > >>> > Before I get into the current plans for the OSF events in >>> Vancouver and Berlin, I >>> > wanted to say a few words in general about the virus impacting >>> so many people >>> > right now. >>> > >>> > First, I wanted to acknowledge the very difficult situation many >>> are facing >>> > because of COVID-19 (Coronavirus), across open source >>> communities and local >>> > communities in general (tech or otherwise). I also want to say >>> to everyone who is >>> > on the front lines managing events, from the full time staffers >>> to the volunteers, >>> > to the contractors and production partners, that we have some >>> idea of what you're >>> > going through and we know this is a tough time. If there's >>> anything we can do to >>> > help, please reach out. In the best of times, event organization >>> can be grueling >>> > and thankless, and so I just want to say THANK YOU to everyone >>> who does the >>> > organizing work in the communities we all care so much about. >>> > >>> > OSF 2020 EVENTS >>> > >>> > When it comes to the 2020 events OSF is managing, namely the >>> OpenDev + PTG in >>> > Vancouver June 8-11 and the Open Infrastructure Summit in Berlin >>> October 19-23, >>> > please read and bookmark this status page which we will continue >>> to update: >>> > >>> https://www.openstack.org/events/covid-19-coronavirus-disease-updates >>> > >>> > When it comes to our community, the health of every individual >>> is of paramount >>> > concern. We have always aimed to produce events "of the >>> community, by the >>> > community" and the upcoming event in Vancouver is no exception. >>> The OpenDev tracks >>> > each morning will be programmed by volunteers from the >>> community, and the project >>> > teams will be organizing their own conversations as well each >>> afternoon M-W, and >>> > all day Thursday. >>> > >>> > But the larger question is here: should the show go on? >>> > >>> > The short answer is that as of now, the Vancouver and Berlin >>> events are still >>> > scheduled to happen in June (8-11) and October (19-23), >>> respectively. >>> > >>> > However, we are willing to cancel or approach the events in a >>> different way (i.e. >>> > virtual) if the facts indicate that is the best path, and we >>> know the facts are >>> > changing rapidly. One of the most critical inputs we need is to >>> hear from each of >>> > you. We know that many of you rely on the twice-annual events to >>> get together and >>> > make rapid progress on the software, which is one reason we are >>> not making any >>> > decisions in haste. We also know that many of you may be unable >>> or unwilling to >>> > travel in June, and that is critical information to hear as we >>> get closer to the >>> > event so that we can make the most informed decision. 
>>> >
>>> > I also wanted to answer a FAQ by letting everyone know that if either event is cancelled, event tickets and sponsorships will be fully refunded. Please note that if you're making travel arrangements (e.g. flights, hotels) those are outside of our control.
>>> >
>>> > So as we continue to monitor the news and listen to health experts to make an informed decision on any changes to our event plans, we'd like to hear from everyone in the community who has a stake in these events. Our most pressing topic is of course Vancouver, but if you have questions or concerns about the Berlin plan feel free to share those as well.
>>> >
>>> > If you'd like to connect directly, you can always contact Executive Director Jonathan Bryce (jonathan at openstack.org) or myself (mark at openstack.org).
>>> >
>>> > Key Links:
>>> > - STATUS PAGE: https://www.openstack.org/events/covid-19-coronavirus-disease-updates
>>> > - Vancouver OpenDev + PTG https://www.openstack.org/events/opendev-ptg-2020/
>>> > - Berlin Open Infrastructure Summit: https://www.openstack.org/summit/berlin-2020/
>>> >
>>> > Key Dates for OpenDev + PTG in Vancouver:
>>> > - Schedule will be published in early April
>>> > - Early bird deadline is April 4
>>> > - Final day to sponsor will be May 4
>>> > - Final registration price increase will be in early May
>>> >
>>> > Mark Collier
>>> > COO, OpenStack Foundation
>>> > @sparkycollier

From balazs.gibizer at est.tech Mon Mar 9 10:01:27 2020
From: balazs.gibizer at est.tech (Balázs Gibizer)
Date: Mon, 09 Mar 2020 11:01:27 +0100
Subject: [nova] bug triage
Message-ID: <1583748087.12170.20@est.tech>

Hi,

We surpassed the somewhat magical line of having more than 100 untriaged nova bugs [1]. For me it seems that how we, as a team, are handling the incoming bugs is not sustainable. I'm guilty as well of not doing bug triage in the last two months. So I'm personally trying to change now and dedicate a weekly time slot to look at the bug list.

But I also want to open a discussion about bug triage in general. How can we handle the incoming bugs? I see that neutron does a weekly rotation of bug deputy and for them it works nicely. What do you think? Do we want to try that? Do we have enough volunteers to create a nice rotation period?

Cheers,
gibi

[1] https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New

From stephenfin at redhat.com Mon Mar 9 10:04:44 2020
From: stephenfin at redhat.com (Stephen Finucane)
Date: Mon, 09 Mar 2020 10:04:44 +0000
Subject: [nova] US meeting slot
In-Reply-To: <1583744340.12170.17@est.tech>
References: <1583744340.12170.17@est.tech>
Message-ID: <29fc2d4b8daa903715a3e620f6eb77a9be1d34e9.camel@redhat.com>

On Mon, 2020-03-09 at 09:59 +0100, Balázs Gibizer wrote:
> Hi,
>
> Nova has alternate meeting slots on Thursdays to try to cover
> contributors from different time zones.
> * 14:00 UTC
> * 21:00 UTC
>
> As I'm taking over the PTL role from Eric I need to figure out how to
> run the nova meetings. I cannot really run the 21:00 UTC slot as it is
> pretty late for me. (I will run the 14:00 UTC slot). I see different
> options:
>
> a) Somebody from the US side of the globe volunteers to run the 21:00
> UTC slot. Please speak up if you would like to run it. I can help you
> with agenda refresh and technicalities if needed.
>
> b) Have only one meeting time, and move that to 16:00 UTC.
> In this case I will be able to run it most of the weeks.

From a European perspective, this is preferable for me since I could never attend the 21:00 UTC slot. I don't know how it works for the folks on the US west coast or in China though.

> c) Do not have a dedicated meeting slot but switch to office hours.
> Here we also need to find a time slot. I think 16:00 UTC could work
> there as well.

Do you mean move all meetings to office hours or just the one in the US timezone? Personally, I'd like to have a regular meeting with an agenda at least every couple of weeks.

Stephen

> Please share your view! Any other proposal is very welcome.
>
> Cheers,
> gibi

From balazs.gibizer at est.tech Mon Mar 9 10:16:29 2020
From: balazs.gibizer at est.tech (Balázs Gibizer)
Date: Mon, 09 Mar 2020 11:16:29 +0100
Subject: [nova] US meeting slot
In-Reply-To: <29fc2d4b8daa903715a3e620f6eb77a9be1d34e9.camel@redhat.com>
References: <1583744340.12170.17@est.tech> <29fc2d4b8daa903715a3e620f6eb77a9be1d34e9.camel@redhat.com>
Message-ID: <1583748989.12170.21@est.tech>

On Mon, Mar 9, 2020 at 10:04, Stephen Finucane wrote:
> On Mon, 2020-03-09 at 09:59 +0100, Balázs Gibizer wrote:
>>
>> c) Do not have a dedicated meeting slot but switch to office hours.
>> Here we also need to find a time slot. I think 16:00 UTC could work
>> there as well.
>
> Do you mean move all meetings to office hours or just the one in the
> US timezone? Personally, I'd like to have a regular meeting with an
> agenda at least every couple of weeks.

To clarify, I meant to stop having meetings and have office hours instead. But a mixed setup also works for me if there will be folks around on the nova channel at 21:00 UTC.

gibi

From zhangbailin at inspur.com Mon Mar 9 10:21:38 2020
From: zhangbailin at inspur.com (Brin Zhang (Zhang Bailin))
Date: Mon, 9 Mar 2020 10:21:38 +0000
Subject: Reply: [sent via lists.openstack.org] Re: [nova] US meeting slot
In-Reply-To: <29fc2d4b8daa903715a3e620f6eb77a9be1d34e9.camel@redhat.com>
References: <25204fddabe1a05ee593ec9170dc512f@sslemail.net> <29fc2d4b8daa903715a3e620f6eb77a9be1d34e9.camel@redhat.com>
Message-ID: <70294733ed48434bb161033c294f562a@inspur.com>

> From: Stephen Finucane [mailto:stephenfin at redhat.com]
> Sent: 9 March 2020 18:05
> To: Balázs Gibizer ; OpenStack Discuss
> Subject: [sent via lists.openstack.org] Re: [nova] US meeting slot

> On Mon, 2020-03-09 at 09:59 +0100, Balázs Gibizer wrote:
>> Hi,
>>
>> Nova has alternate meeting slots on Thursdays to try to cover
>> contributors from different time zones.
>> * 14:00 UTC
>> * 21:00 UTC
>>
>> As I'm taking over the PTL role from Eric I need to figure out how to
>> run the nova meetings. I cannot really run the 21:00 UTC slot as it is
>> pretty late for me. (I will run the 14:00 UTC slot). I see different
>> options:
>>
>> a) Somebody from the US side of the globe volunteers to run the 21:00
>> UTC slot. Please speak up if you would like to run it. I can help you
>> with agenda refresh and technicalities if needed.
>>
>> b) Have only one meeting time, and move that to 16:00 UTC. In this
>> case I will be able to run it most of the weeks.

> From a European perspective, this is preferable for me since I could never attend the 21:00 UTC slot. I don't know how it works for the folks on the US west coast or in China though.

To be honest, 16:00 UTC is 24:00 at night in China; if we attend this meeting, the whole next day may be listless and distressing. How about 10:30 UTC or 11:00 UTC?
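(For quick reference when weighing these slots, GNU date can translate a proposed UTC time into any zone; a small sketch, assuming tzdata zone names are installed:

    $ TZ=Asia/Shanghai date -d '2020-03-12 16:00 UTC'
    Fri Mar 13 00:00:00 CST 2020
    $ TZ=America/Los_Angeles date -d '2020-03-12 16:00 UTC'
    Thu Mar 12 09:00:00 PDT 2020

So 16:00 UTC falls at midnight in China but mid-morning on the US west coast.)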
I know you are at work at this time, but maybe dansmith will still be absent, like at the regular meeting at 21:00 UTC.

>> c) Do not have a dedicated meeting slot but switch to office hours.
>> Here we also need to find a time slot. I think 16:00 UTC could work
>> there as well.

> Do you mean move all meetings to office hours or just the one in the US timezone?
> Personally, I'd like to have a regular meeting with an agenda at least every couple of weeks.

Agree.

> Stephen

> Please share your view! Any other proposal is very welcome.
>
> Cheers,
> gibi

From geguileo at redhat.com Mon Mar 9 10:47:21 2020
From: geguileo at redhat.com (Gorka Eguileor)
Date: Mon, 9 Mar 2020 11:47:21 +0100
Subject: [CINDER] Snapshots export
In-Reply-To:
References: <20200304155850.b4ydu4vfxthih7we@localhost>
Message-ID: <20200309104721.fyqjvi4miiefon24@localhost>

On 08/03, Alfredo De Luca wrote:
> Hi Sean. Sorry for the late reply.
> What we want to do is back up snapshots in case of a complete compute
> loss, as a plan for disaster recovery.
> So after recreating the environment we can restore the snapshots and
> start the VMs again.
>
> Cheers

Hi,

Snapshots are stored in the same medium as the original volume, and are therefore not valid for disaster recovery. In case of a disaster you would lose both the volume and the snapshot.

Depending on the type of scenario you want to guard against you will need different methods:

- Snapshots
- Backups
- Replication

Snapshots in general are only useful in case your volume gets corrupted, you accidentally delete data on the disk, etc. If you lose your compute host your volume is still safe, so you don't need to do anything fancy; you can just attach the volume again.

Cheers,
Gorka.

>
> On Wed, Mar 4, 2020 at 10:14 PM Sean McGinnis wrote:
>
> > On 3/4/20 9:58 AM, Gorka Eguileor wrote:
> > > On 03/03, Alfredo De Luca wrote:
> > >> Hi all.
> > >> We have our env with Openstack (Train) and cinder with CEPH (nautilus)
> > >> backend.
> > >> We are creating automatic volume snapshots and now we'd like to export
> > >> them as a backup/restore plan. After exporting the snapshots we will use
> > >> Acronis as the backup tool.
> > >>
> > >> I couldn't find the right steps/commands to export the snapshots.
> > >> Any info?
> > >> Cheers
> > >>
> > >> --
> > >> *Alfredo*
> > > Hi Alfredo,
> > >
> > > What kind of backup/restore plan do you have planned?
> > >
> > > Because snapshots are not meant to be used in a Disaster Recovery
> > > backup/restore plan, so the only thing available are the manage/unmanage
> > > commands.
> > >
> > > These commands are meant to add an existing volume/snapshot into Cinder
> > > together, not to unmanage/manage them independently.
> > >
> > > For example, you wouldn't be able to manage a snapshot if the volume is
> > > not already managed. Also unmanaging the snapshot would block the
> > > deletion of the RBD volume itself.
> > >
> > > Cheers,
> > > Gorka.
> >
> > If the intent is to use the snapshots as a source to back up the volume
> > data, leaving the actual volume attached and IO running but still
> > getting a "static" view of the data, then you would need to create a
> > volume from the chosen snapshot, mount that volume somewhere that is
> > accessible to your backup software, perform the copy of the data, then
> > delete the volume when complete.
> >
> > I haven't used Acronis myself, but the issue for some backup software
> > could be that the volume it is backing up from is going to be different
> > every time.
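(To make the workflow just described concrete, here is a rough sketch with the openstack CLI; the volume and server names are placeholders, and the exact flags are worth verifying against your release:

    # materialize the snapshot as a temporary volume
    openstack volume create --snapshot nightly-snap --size 10 backup-staging
    # option 1: attach it to a utility server so the external backup tool can read it
    openstack server add volume backup-host backup-staging
    # option 2: let cinder-backup copy it to the backup store directly
    openstack volume backup create --name vm1-backup backup-staging
    # clean up once the copy completes
    openstack server remove volume backup-host backup-staging   # if it was attached
    openstack volume delete backup-staging

Either way the staging volume is disposable; only the backup copy needs to survive.)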
> > Though you could make sure it is mounted at the same place so the backup
> > software at least *thinks* it's backing up the same thing.
> >
> > Then restoring the data will likely require some manual intervention,
> > but that's pretty much always the case when something goes wrong. I
> > would just recommend you test the full disaster recovery scenario to
> > make sure you have that figured out and working right before you
> > actually need it.
> >
> > Sean

> --
> *Alfredo*

From smooney at redhat.com Mon Mar 9 10:54:08 2020
From: smooney at redhat.com (Sean Mooney)
Date: Mon, 09 Mar 2020 10:54:08 +0000
Subject: [nova] US meeting slot
In-Reply-To: <1583748989.12170.21@est.tech>
References: <1583744340.12170.17@est.tech> <29fc2d4b8daa903715a3e620f6eb77a9be1d34e9.camel@redhat.com> <1583748989.12170.21@est.tech>
Message-ID: <2d73108df2cac8732dd439b290c46d833e4c3bce.camel@redhat.com>

On Mon, 2020-03-09 at 11:16 +0100, Balázs Gibizer wrote:
>
> On Mon, Mar 9, 2020 at 10:04, Stephen Finucane wrote:
> > On Mon, 2020-03-09 at 09:59 +0100, Balázs Gibizer wrote:
> > >
> > > c) Do not have a dedicated meeting slot but switch to office hours.
> > > Here we also need to find a time slot. I think 16:00 UTC could work
> > > there as well.
> >
> > Do you mean move all meetings to office hours or just the one in the
> > US timezone? Personally, I'd like to have a regular meeting with an
> > agenda at least every couple of weeks.
>
> To clarify, I meant to stop having meetings and have office hours
> instead. But a mixed setup also works for me if there will be folks
> around on the nova channel at 21:00 UTC.

I would prefer to have the meeting or a mix rather than change to just office hours. I don't always remember to join the meeting unless there is a ping on the nova channel before, but I generally am online for both slots. 16:00 UTC would be fine too, but I'm not sure that would work for non-EU/US folks.

> gibi

From rsharma1818 at outlook.com Mon Mar 9 11:27:31 2020
From: rsharma1818 at outlook.com (Rahul Sharma)
Date: Mon, 9 Mar 2020 11:27:31 +0000
Subject: [Horizon] Unable to access the dashboard page
In-Reply-To:
References: ,
Message-ID:

Thanks Akihiro. Adding the line "WEBROOT = '/dashboard'" to the local settings file worked.

________________________________
From: Akihiro Motoki
Sent: Monday, March 9, 2020 8:30 AM
To: Rahul Sharma
Cc: Donny Davis ; OpenStack Discuss
Subject: Re: [Horizon] Unable to access the dashboard page

I think you are hitting https://bugs.launchpad.net/horizon/+bug/1853651.
The bug says WEBROOT needs to be configured. It was reported against the horizon installation guide on RHEL/CentOS, but I believe it is a bug in the CentOS packaging, as a package should work with the default config provided by the package.

On Sun, Mar 8, 2020 at 2:45 AM Rahul Sharma wrote:
>
> Hi,
>
> Tried with /dashboard but it ain't working
>
> Getting error "The requested URL /auth/login/ was not found on this server."
in the browser (Error 404) > > > ________________________________ > From: Donny Davis > Sent: Saturday, March 7, 2020 8:45 PM > To: Rahul Sharma > Cc: OpenStack Discuss > Subject: Re: [Horizon] Unable to access the dashboard page > > Try /dashboard > > Donny Davis > c: 805 814 6800 > > On Sat, Mar 7, 2020, 8:33 AM Rahul Sharma wrote: > > Hi, > > I am trying to get OpenStack (Train) up and running on CentOS 7 by following the "OpenStack Installation Guide" provided on OpenStack's website and have completed installation of below components > > > - Keystone > - Glance > - Placement > - Nova > - Networking > > > After installing each component, I have also verified its operation and it seems to be working successfully. > > However, there is a problem I am facing after installing "Horizon" for dashboard services. In order to verify its operation, one is supposed to browse to the URL "http://controller/horizon/" where "controller" could be the hostname or IP address of the Node which is running the controller > > Browsing to the above URL throws an error "The requested URL /horizon was not found on this server." > > In the apache access logs, I see below error: > > " > 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /horizon/ HTTP/1.1" 404 206 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36" > 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /favicon.ico HTTP/1.1" 404 209 "http://3.21.90.63/horizon/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36" > " > > If I browse to the URL "http://controller/", the default "Testing123" page of apache gets loaded. > > > Please assist. > > Thanks, > Rahul -------------- next part -------------- An HTML attachment was scrubbed... URL: From rsharma1818 at outlook.com Mon Mar 9 11:27:50 2020 From: rsharma1818 at outlook.com (Rahul Sharma) Date: Mon, 9 Mar 2020 11:27:50 +0000 Subject: [Horizon] Unable to access the dashboard page In-Reply-To: References: , Message-ID: I checked the indentation and there's nothing wrong with it Rahul ________________________________ From: Amy Sent: Sunday, March 8, 2020 8:02 PM To: Rahul Sharma Cc: OpenStack Discuss Subject: Re: [Horizon] Unable to access the dashboard page Another thing to check is the indentation in your config file. Amy (spotz) On Mar 8, 2020, at 7:46 AM, Donny Davis wrote:  You may also want to check this setting https://docs.openstack.org/horizon/train/configuration/settings.html#allowed-hosts Donny Davis c: 805 814 6800 On Sat, Mar 7, 2020, 12:41 PM Rahul Sharma > wrote: Hi, Tried with /dashboard but it ain't working Getting error "The requested URL /auth/login/ was not found on this server." in the browser (Error 404) ________________________________ From: Donny Davis > Sent: Saturday, March 7, 2020 8:45 PM To: Rahul Sharma > Cc: OpenStack Discuss > Subject: Re: [Horizon] Unable to access the dashboard page Try /dashboard Donny Davis c: 805 814 6800 On Sat, Mar 7, 2020, 8:33 AM Rahul Sharma > wrote: Hi, I am trying to get OpenStack (Train) up and running on CentOS 7 by following the "OpenStack Installation Guide" provided on OpenStack's website and have completed installation of below components - Keystone - Glance - Placement - Nova - Networking After installing each component, I have also verified its operation and it seems to be working successfully. However, there is a problem I am facing after installing "Horizon" for dashboard services. 
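(As noted at the top of this thread, the fix that eventually worked boils down to one settings line plus an apache restart; a minimal sketch, assuming the CentOS/RDO package layout, so the config path is worth verifying on your install:

    echo "WEBROOT = '/dashboard'" | sudo tee -a /etc/openstack-dashboard/local_settings
    sudo systemctl restart httpd

After that the dashboard should answer at http://controller/dashboard/ instead of /horizon/.)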
In order to verify its operation, one is supposed to browse to the URL "http://controller/horizon/" where "controller" could be the hostname or IP address of the Node which is running the controller Browsing to the above URL throws an error "The requested URL /horizon was not found on this server." In the apache access logs, I see below error: " 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /horizon/ HTTP/1.1" 404 206 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36" 103.44.50.92 - - [07/Mar/2020:13:14:52 +0000] "GET /favicon.ico HTTP/1.1" 404 209 "http://3.21.90.63/horizon/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36" " If I browse to the URL "http://controller/", the default "Testing123" page of apache gets loaded. Please assist. Thanks, Rahul -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Mar 9 11:31:50 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 9 Mar 2020 12:31:50 +0100 Subject: Goodbye for now In-Reply-To: References: Message-ID: Albert Braden wrote: > My contract at Synopsys ends today, so I will have to continue my > efforts to sign up as an Openstack developer at my next role. Thanks to > everyone for all of the help and advice. Best wishes! Hoping to see you active again soon! -- Thierry Carrez (ttx) From thierry at openstack.org Mon Mar 9 13:50:02 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 9 Mar 2020 14:50:02 +0100 Subject: [largescale-sig] Next meeting: Mar 11, 9utc Message-ID: <69398fb2-d2b0-d12e-e04f-7a9f7531fa7b@openstack.org> Hi everyone, The Large Scale SIG will have a meeting this week on Wednesday, Mar 11 at 9 UTC[1] in #openstack-meeting on IRC. As we evolve in DST hell (US having moved to summer time while Europe hasn't), please doublecheck when that falls for you: [1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200311T09 As always, the agenda for our meeting is available at: https://etherpad.openstack.org/p/large-scale-sig-meeting Feel free to add topics to it! Talk to you all on Wednesday, -- Thierry Carrez From dms at danplanet.com Mon Mar 9 14:10:17 2020 From: dms at danplanet.com (Dan Smith) Date: Mon, 09 Mar 2020 07:10:17 -0700 Subject: [nova] US meeting slot In-Reply-To: <1583744340.12170.17@est.tech> (=?utf-8?Q?=22Bal=C3=A1zs?= Gibizer"'s message of "Mon, 09 Mar 2020 09:59:00 +0100") References: <1583744340.12170.17@est.tech> Message-ID: > a) Somebody from the US side of the globe volunteers to run the 21:00 > UTC slot. Please speak up if you would like to run it. I can help you > with agenda refresh and technicalities if needed. > > b) Have only one meeting time, and move that to 16:00 UTC. In this > case I will be able to run it most of the weeks. > > c) Do not have a dedicated meeting slot but switch to office > hours. Here we also need to find a time slot. I think 16:00 UTC could > work there as well. I'd prefer 1600 to the 2100 actually, so that's fine with me. During DST I can make 1400, but no earlier. The 2100 meeting isn't very convenient for me, and very few people show up to it anymore anyway. I'd say it's not worth keeping that spot regardless. For a while now, I've felt that the meeting is unnecessary and overly repetitive. For that reason, I'd definitely be in favor of moving to office hours entirely (or as much as possible), although it sounds like most people aren't. 
--Dan From thierry at openstack.org Mon Mar 9 14:11:12 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 9 Mar 2020 15:11:12 +0100 Subject: [Release-job-failures] Release of openstack/monasca-agent for ref refs/tags/2.8.1 failed In-Reply-To: References: Message-ID: zuul at openstack.org wrote: > Build failed. > > - release-openstack-python https://zuul.opendev.org/t/openstack/build/25e1809c970044708c503cda05ac84f9 : SUCCESS in 7m 24s > - announce-release https://zuul.opendev.org/t/openstack/build/646a0bbaa0054504a863b3900b476006 : FAILURE in 7m 55s > - propose-update-constraints https://zuul.opendev.org/t/openstack/build/5d85de0f42c740299fb5c6c7f24f3aa1 : SUCCESS in 4m 42s Analysis: Release announcement for monasca-agent failed due to the following transient email error: 451 Temporary local problem - please try later Release went out OK, only the announce was missed. -- Thierry Carrez (ttx) From gmann at ghanshyammann.com Mon Mar 9 14:20:34 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 09 Mar 2020 09:20:34 -0500 Subject: [nova] US meeting slot In-Reply-To: <2d73108df2cac8732dd439b290c46d833e4c3bce.camel@redhat.com> References: <1583744340.12170.17@est.tech> <29fc2d4b8daa903715a3e620f6eb77a9be1d34e9.camel@redhat.com> <1583748989.12170.21@est.tech> <2d73108df2cac8732dd439b290c46d833e4c3bce.camel@redhat.com> Message-ID: <170bfab37b8.116cafb576317.5873131666573593661@ghanshyammann.com> ---- On Mon, 09 Mar 2020 05:54:08 -0500 Sean Mooney wrote ---- > On Mon, 2020-03-09 at 11:16 +0100, Balázs Gibizer wrote: > > > > On Mon, Mar 9, 2020 at 10:04, Stephen Finucane > > wrote: > > > On Mon, 2020-03-09 at 09:59 +0100, Balázs Gibizer wrote: > > > > > > > > c) Do not have a dedicated meeting slot but switch to office hours. > > > > Here we also need to find a time slot. I think 16:00 UTC could work > > > > there as well. > > > > > > Do you mean move all meetings to office hours or just the one in the > > > US > > > timezone? Personally, I'd like to have a regular meeting with an > > > agenda > > > at least every couple of weeks. > > > > To clarify, I meant to stop having meetings and have office hours > > instead. But a mixed setup also works for me if there will be folks > > around on the nova channel at 21:00 UTC. > i would prefer to have the meeting or a mix rather then change to just office hours. > i dont always rememebr to join the meeting unless there is a ping on the nova channel > before but i generally am online for both slots. 16:00 UTC would be fine too but im not sure that would > work for non eu/us folks. 16:00 UTC works for me too. I agree to keep meeting as of now at least from the meeting content point of view which is more towards the status side. If we start discussing more technical things than status then it makes sense to move to office hours. -gmann > > > > gibi > > > > > > > > > From smooney at redhat.com Mon Mar 9 14:30:24 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 09 Mar 2020 14:30:24 +0000 Subject: [nova] US meeting slot In-Reply-To: References: <1583744340.12170.17@est.tech> Message-ID: On Mon, 2020-03-09 at 07:10 -0700, Dan Smith wrote: > > a) Somebody from the US side of the globe volunteers to run the 21:00 > > UTC slot. Please speak up if you would like to run it. I can help you > > with agenda refresh and technicalities if needed. > > > > b) Have only one meeting time, and move that to 16:00 UTC. In this > > case I will be able to run it most of the weeks. 
> >
> > c) Do not have a dedicated meeting slot but switch to office hours.
> > Here we also need to find a time slot. I think 16:00 UTC could work
> > there as well.
>
> I'd prefer 1600 to the 2100 actually, so that's fine with me. During DST
> I can make 1400, but no earlier. The 2100 meeting isn't very convenient
> for me, and very few people show up to it anymore anyway. I'd say it's
> not worth keeping that spot regardless.
>
> For a while now, I've felt that the meeting is unnecessary and overly
> repetitive. For that reason, I'd definitely be in favor of moving to
> office hours entirely (or as much as possible), although it sounds like
> most people aren't.

Maybe we could try alternating: just as we alternate the time now, we could alternate between office hours and a meeting every other week. That said, I'm not that pushed either way. I do think the status updates are useful, but that is mostly because if the updates were sent as emails I would just ignore them; since they are given in the meeting, when I attend I actually learn something.

> --Dan

From thierry at openstack.org Mon Mar 9 14:32:01 2020
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 9 Mar 2020 15:32:01 +0100
Subject: [kolla][uc] Kolla SIG
In-Reply-To:
References:
Message-ID:

Mark Goddard wrote:
> Hi,
>
> I'd like to propose the creation of a Special Interest Group (SIG) [0]
> for Kolla.
> [...]

I'm a bit skeptical.

We have a history of creating a lot of groups and structures.
This was > very helpful in the early years to cope with the crazy growth of > openstack and to capture all the energy sent toward the project. But > today we really have too many groups, meetings, channels compared to the > number of active people. We still have thousands of contributors, and > yet we feel spread thin. So I'm skeptical of creating new groups and/or > meetings (at least not without eliminating a number of other > groups/meetings as a result). > > Creation of a Kolla SIG would IMHO duplicate "Kolla" groups, and create > a bit of confusion and uncertainty as to what is handled by the Kolla > SIG vs. what is handled by the Kolla project team. > > I'd rather encourage the Kolla project team to directly engage with its > users by holding "Ops feedback" sessions and other activities. Basically > I'm not sure what the Kolla SIG would do that the Kolla project team > cannot currently do... > > -- > Thierry > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Mar 9 15:05:12 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 9 Mar 2020 15:05:12 +0000 Subject: [kolla][uc] Kolla SIG In-Reply-To: References: Message-ID: On Mon, 9 Mar 2020 at 14:45, Amy Marrich wrote: > > Mark, > > I agree with Thierry on this as I think this would cause confusion as well as split focus between two things vs including OPS more with the development. A SiG should bring together people from different projects or different interests together with a common interest. This is solely about Kolla which already has a specific project unlike say the Finance SiG which had users with a common interest of installing OpenStack for financial use. > > Have you utilized Forum or BoF sessions at events yet? Or maybe reach out to the OPS Meetup team about including Kolla more at their events? > > If I can be of any help let me know, > > Amy (spotz) > > On Mon, Mar 9, 2020 at 9:33 AM Thierry Carrez wrote: >> >> Mark Goddard wrote: >> > Hi, >> > >> > I'd like to propose the creation of a Special Interest Group (SIG) [0] >> > for Kolla. >> > [...] >> >> I'm a bit skeptical. >> >> We have a history of creating a lot of groups and structures. This was >> very helpful in the early years to cope with the crazy growth of >> openstack and to capture all the energy sent toward the project. But >> today we really have too many groups, meetings, channels compared to the >> number of active people. We still have thousands of contributors, and >> yet we feel spread thin. So I'm skeptical of creating new groups and/or >> meetings (at least not without eliminating a number of other >> groups/meetings as a result). >> >> Creation of a Kolla SIG would IMHO duplicate "Kolla" groups, and create >> a bit of confusion and uncertainty as to what is handled by the Kolla >> SIG vs. what is handled by the Kolla project team. >> >> I'd rather encourage the Kolla project team to directly engage with its >> users by holding "Ops feedback" sessions and other activities. Basically >> I'm not sure what the Kolla SIG would do that the Kolla project team >> cannot currently do... Thanks for the feedback. I don't particularly mind what we call it - if a SIG is not appropriate then that's fine. What I want is to be able to include those on the periphery of the community who for a variety of reasons are not particularly active upstream. I want to provide some way for these people to feel more of a part of the community, and if possible help them to become more active. 
They might not have time to sit in #openstack-kolla or attend summits, but an hour or two a month is probably something they could commit to. A big part of the drive behind this is simply gathering a list of members. So often we put things on openstack-discuss asking about who uses particular features, knowing it mostly goes to the void. Many operators simply don't have time to follow it. When you suggest Ops feedback sessions, do you mean at the summit? My view is that we are at the point where that is a fairly exclusive club. I'd much rather something virtual with a low barrier to entry. >> >> -- >> Thierry >> From balazs.gibizer at est.tech Mon Mar 9 15:27:25 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 09 Mar 2020 16:27:25 +0100 Subject: [nova][stable] Nova stable branch liaison Message-ID: <1583767645.12170.27@est.tech> Hi Team, According to the wiki [1] we still have Matt as a stable branch liaison for nova. Obviously we need to find somebody else as Matt is gone. I'm not part of the nova-stable-core team so I cannot assume default ownership of that. Also I see that we have a small but active stable team so I hope somebody from that team can step forward and can take this role. Cheers, gibi [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch From thierry at openstack.org Mon Mar 9 15:40:50 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 9 Mar 2020 16:40:50 +0100 Subject: [kolla][uc] Kolla SIG In-Reply-To: References: Message-ID: <6407c2a8-b778-4163-3fd1-031d9e073209@openstack.org> Mark Goddard wrote: > [...] > Thanks for the feedback. I don't particularly mind what we call it - > if a SIG is not appropriate then that's fine. What I want is to be > able to include those on the periphery of the community who for a > variety of reasons are not particularly active upstream. I want to > provide some way for these people to feel more of a part of the > community, and if possible help them to become more active. They might > not have time to sit in #openstack-kolla or attend summits, but an > hour or two a month is probably something they could commit to. > > A big part of the drive behind this is simply gathering a list of > members. So often we put things on openstack-discuss asking about who > uses particular features, knowing it mostly goes to the void. Many > operators simply don't have time to follow it. I think you can do that from the Kolla project team. You can call it the Kolla users club and create a number of events around it (virtual or face-to-face). My point is that you do not need a specific governance structure to do this. > When you suggest Ops feedback sessions, do you mean at the summit? My > view is that we are at the point where that is a fairly exclusive > club. I'd much rather something virtual with a low barrier to entry. No, I was just suggesting finding a catchy name for those outreach activities, one that encourages users to reach out, and make it less intimidating than joining the IRC meeting for a project team. -- Thierry From jimmy at openstack.org Mon Mar 9 15:42:37 2020 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 09 Mar 2020 10:42:37 -0500 Subject: [kolla][uc] Kolla SIG In-Reply-To: <6407c2a8-b778-4163-3fd1-031d9e073209@openstack.org> References: <6407c2a8-b778-4163-3fd1-031d9e073209@openstack.org> Message-ID: <5E6663ED.5060609@openstack.org> Kollaborators? 
Thierry Carrez wrote: > No, I was just suggesting finding a catchy name for those outreach > activities, one that encourages users to reach out, and make it less > intimidating than joining the IRC meeting for a project team. From bence.romsics at gmail.com Mon Mar 9 15:57:55 2020 From: bence.romsics at gmail.com (Bence Romsics) Date: Mon, 9 Mar 2020 16:57:55 +0100 Subject: [neutron] bug deputy report for week of 2020-03-02 Message-ID: Hi All, Here's the deputy report for the week of 2020-03-02. Please note we have a new rotation schedule at the usual place: https://wiki.openstack.org/wiki/Network/Meetings High * https://bugs.launchpad.net/neutron/+bug/1865453 neutron.tests.functional.plugins.ml2.drivers.ovn.mech_driver.test_mech_driver.TestVirtualPorts.test_virtual_port_created_before fails randomly Random error in the gate. Maciej is working on it. * https://bugs.launchpad.net/neutron/+bug/1866039 [OVN] QoS gives different bandwidth limit measures than ml2/ovs Clearing prerequisites of testing qos in tempest against ovn backend. Work in progress by Maciej: https://review.opendev.org/711048 * https://bugs.launchpad.net/neutron/+bug/1866336 Binding of floating ip agent gateway port and agent_id isn't removed Work in progress by Slawek: https://review.opendev.org/711623 * https://bugs.launchpad.net/neutron/+bug/1866560 FIP Port forwarding description API extension don't work Work in progress by Slawek: https://review.opendev.org/711888 Medium * https://bugs.launchpad.net/neutron/+bug/1865891 Race condition during removal of subnet from the router and removal of subnet Downstream bug reproduced on master too. Slawek is working on it. * https://bugs.launchpad.net/neutron/+bug/1866068 [OVN] neutron_pg_drop port group table creation race condition Work in progress by Jakub: https://review.opendev.org/711404 * https://bugs.launchpad.net/neutron/+bug/1866160 Update security group failed with the same stateful data Work in progress by Lina: https://review.opendev.org/711385 RFE * https://bugs.launchpad.net/neutron/+bug/1865889 Routed provider networks support in OVN To be scheduled and discussed on the drivers meeting. Could turn into an RFE * https://bugs.launchpad.net/neutron/+bug/1866077 [L3][IPv6] IPv6 traffic with DVR in compute host Incomplete * https://bugs.launchpad.net/neutron/+bug/1866139 GARP not sent on provider network after live migration * https://bugs.launchpad.net/neutron/+bug/1866445 br-int bridge in one compute can't learn MAC addresses of VMs in other compute nodes Invalid * https://bugs.launchpad.net/neutron/+bug/1866353 Neutron API returning HTTP 201 for SG rule create when not fully created yet octavia-ovn-provider * https://review.opendev.org/711244 [OVN Octavia Provider] Deleting of listener fails Work in progress by Maciej: https://review.opendev.org/711244 Cheers, Bence (rubasov) From gr at ham.ie Mon Mar 9 16:06:11 2020 From: gr at ham.ie (Graham Hayes) Date: Mon, 9 Mar 2020 16:06:11 +0000 Subject: [all][tc] Stepping down from TC In-Reply-To: References: Message-ID: <308304c3-7314-b01e-e1b4-6b15f926a8b3@ham.ie> On 05/03/2020 16:45, Alexandra Settle wrote: > Hi all, > > This should come as no shock as I have been relatively quite for some time > now, but I will not standing for the Technical Committee for a second term. > > I have thoroughly enjoyed my tenure, learning so much about open source > governance than I ever thought I needed 😉 > > My work takes me elsewhere, as it did several years ago, and I simply do > not have > the time to manage both. 
> > I encourage anyone who is interested in governance, or is passionate > about OpenStack > and wants to learn more, to stand for the TC elections. As was proven by > my own > nomination and subsequent successful election, you do not have to be > "purely technical" > to stand and be a part of something great. Diversity of skill is so > important to our > survival. > > Thanks to all those that have supported me to get to the point, I > appreciate you all and > will miss working intimately with the community. > > Please do not hesitate to reach out and ask any questions if you are > interested in the > positions available, happy to help encourage and answer any questions > you may have. > > All the best, > > Alex > > ------------------------------------------------------------------------ > Alexandra Settle > Senior Technical Writer > London, United Kingdom (GMT) > Sad to see you go! Thanks for all the work, and much needed perspective you brought to the TC and the community. From sbauza at redhat.com Mon Mar 9 16:08:14 2020 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 9 Mar 2020 17:08:14 +0100 Subject: [nova][ptl] Temporary Nova PTL until election In-Reply-To: <170b01d7595.10341333e516143.4131462912712933865@ghanshyammann.com> References: <1583482276.12170.14@est.tech> <4692F106-6B00-41FC-9BA9-1DF62A24EDAB@fried.cc> <170b01d7595.10341333e516143.4131462912712933865@ghanshyammann.com> Message-ID: Thanks indeed gibi for this. On Fri, Mar 6, 2020 at 2:58 PM Ghanshyam Mann wrote: > ---- On Fri, 06 Mar 2020 07:01:15 -0600 Eric Fried > wrote ---- > > Big +1 from me. Many thanks, gibi. Not that you‘ll need it, but please > don’t hesitate to reach out to me if you have questions. > > Indeed. Thanks gibi for helping out here. > > -gmann > > > > efried_gone > > > > > On Mar 6, 2020, at 02:16, Balázs Gibizer > wrote: > > > > > > Hi, > > > > > > Since Eric announced that he has to leave us [1] I have been working > > > internally with my employee to be able to take over the Nova PTL > > > position. Now I've got the necessary approvals. The official PTL > > > election is close [2] and I'm ready to fill the PTL gap until the > > > proper PTL election in April. > > > > > > Is this a workable solution for the community? > > > > > > Cheers, > > > gibi > > > > > > [1] > > > > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012663.html > > > [2] https://governance.openstack.org/election/ > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Mon Mar 9 16:23:37 2020 From: melwittt at gmail.com (melanie witt) Date: Mon, 9 Mar 2020 09:23:37 -0700 Subject: [nova] US meeting slot In-Reply-To: References: <1583744340.12170.17@est.tech> Message-ID: On 3/9/20 07:10, Dan Smith wrote: >> a) Somebody from the US side of the globe volunteers to run the 21:00 >> UTC slot. Please speak up if you would like to run it. I can help you >> with agenda refresh and technicalities if needed. >> >> b) Have only one meeting time, and move that to 16:00 UTC. In this >> case I will be able to run it most of the weeks. >> >> c) Do not have a dedicated meeting slot but switch to office >> hours. Here we also need to find a time slot. I think 16:00 UTC could >> work there as well. > > I'd prefer 1600 to the 2100 actually, so that's fine with me. During DST > I can make 1400, but no earlier. The 2100 meeting isn't very convenient > for me, and very few people show up to it anymore anyway. 
I'd say it's > not worth keeping that spot regardless. +1 to the opinion that it's not worth keeping the 2100 slot regardless. I think there are too few attendees during that time to make the meeting useful. 1400 is usually too early for me but I'm OK with that -- I catch up on the 1400 meetings by reading the meeting IRC logs. And if I need input on a blueprint or spec, I can use the ML if I can't make the meeting. Cheers, -melanie From rosmaita.fossdev at gmail.com Mon Mar 9 18:19:32 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 9 Mar 2020 14:19:32 -0400 Subject: [qa][cinder][devstack] proposed governance changes for some devstack plugins In-Reply-To: References: <170a7b5430a.1155e6495437733.1575830632912803163@ghanshyammann.com> <69fcb574-1ae1-08cb-e8e2-8bd08bef80f4@gmail.com> Message-ID: On 3/6/20 6:12 PM, Goutham Pacha Ravi wrote: > > On Thu, Mar 5, 2020 at 11:53 AM Brian Rosmaita > > wrote: > > On 3/4/20 5:40 PM, Ghanshyam Mann wrote: > >   ---- On Wed, 04 Mar 2020 13:53:00 -0600 Brian Rosmaita > > > wrote ---- > >   > Hello QA team and devstack-plugin-ceph-core people, > >   > > >   > The Cinder team has some proposals we'd like to float. > >   > > >   > 1. The Cinder team is interested in becoming more active in the > >   > maintenance of openstack/devstack-plugin-ceph [0]. > Currently, the > >   > devstack-plugin-ceph-core is > >   > https://review.opendev.org/#/admin/groups/1196,members > >   > The cinder-core is already represented by Eric and Sean; we'd > like to > >   > replace them by including the cinder-core group. > > > > +1. This is good diea and make sense, I will do the change. > > Great, thanks! > > > > I agree this is a great idea to have more members of Cinder joining the > devstack-plugin-ceph team. I would like to have atleast a sub team of > manila core reviewers added to this project if it makes sense. The > Manila CephFS drivers (cephfs-native and cephfs-nfs) are currently being > tested with the help of the devstack integration in devstack-plugin-ceph. > > We have Tom Barron (tbarron) in the team, i'd like to propose myself > (gouthamr) and Victoria Martinez de la Cruz (vkmc) > > Please let me know what you think of the idea. I've got no objection from the Cinder side. I would also not object to adding the manila-core group instead of individuals. It's certainly in your team's interest to keep this thing stable and working, just as it is for the Cinder team. > > >   > > >   > 2. The Cinder team is interested in becoming more active in the > >   > maintenance of x/devstack-plugin-nfs [1].  Currently, the > >   > devstack-plugin-nfs-core is > >   > https://review.opendev.org/#/admin/groups/1330,members > >   > It's already 75% cinder-core members; we'd like to replace the > >   > individual members with the cinder-core group.  We also > propose that > >   > devstack-core be added as an included group. > >   > > >   > 3. The Cinder team is interested in implementing a new > devstack plugin: > >   >      openstack/devstack-plugin-open-cas > >   > This will enable thorough testing of a new feature [2] being > introduced > >   > as experimental in Ussuri and expected to be finalized in > Victoria.  Our > >   > plan would be to make both cinder-core and devstack-core > included groups > >   > for the gerrit group governing the new plugin. > > > > +1. You want this under Cinder governance or under QA ? 
> > I think it makes sense for these to be under QA governance -- QA would > own the repo with both QA and Cinder having permission to make changes. > > >   > > >   > 4. This is a minor point, but can the devstack-plugin-nfs > repo be moved > >   > back into the 'openstack' namespace? > > > > If this is usable plugin for nfs testing (I am not aware if we > have any other) then > > it make sense to bring it to openstack governance. > > Same question here, do you want to put this under Cinder > governance or QA. > > Same here, I think QA should "own" the repo, but Cinder will have > permission to make changes there. > > > > > Those plugins under QA governance also ok for me with your > proposal of calloborative maintainance by > > devstack-core and cinder-core. > > > > -gmann > > Thanks for the quick response! > > >   > > >   > Let us know which of these proposals you find acceptable. > >   > > >   > > >   > [0] https://opendev.org/openstack/devstack-plugin-ceph > >   > [1] https://opendev.org/x/devstack-plugin-nfs > >   > [2] > https://blueprints.launchpad.net/cinder/+spec/support-volume-local-cache > >   > > >   > > > > > From lyarwood at redhat.com Mon Mar 9 19:01:44 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Mon, 9 Mar 2020 19:01:44 +0000 Subject: [nova][stable] Nova stable branch liaison In-Reply-To: <1583767645.12170.27@est.tech> References: <1583767645.12170.27@est.tech> Message-ID: <20200309190144.yad4y7wrcnpsfk4j@lyarwood.usersys.redhat.com> On 09-03-20 16:27:25, Balázs Gibizer wrote: > Hi Team, > > According to the wiki [1] we still have Matt as a stable branch liaison for > nova. Obviously we need to find somebody else as Matt is gone. I'm not part > of the nova-stable-core team so I cannot assume default ownership of that. > Also I see that we have a small but active stable team so I hope somebody > from that team can step forward and can take this role. Yup I would be happy to help with this. Should I update the page assuming we don't have any other volunteers? Cheers, Lee > [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From gmann at ghanshyammann.com Mon Mar 9 19:10:04 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 09 Mar 2020 14:10:04 -0500 Subject: [qa][cinder][devstack] proposed governance changes for some devstack plugins In-Reply-To: References: <170a7b5430a.1155e6495437733.1575830632912803163@ghanshyammann.com> <69fcb574-1ae1-08cb-e8e2-8bd08bef80f4@gmail.com> Message-ID: <170c0b444a5.e5378d4f17622.7872527250831207185@ghanshyammann.com> ---- On Mon, 09 Mar 2020 13:19:32 -0500 Brian Rosmaita wrote ---- > On 3/6/20 6:12 PM, Goutham Pacha Ravi wrote: > > > > On Thu, Mar 5, 2020 at 11:53 AM Brian Rosmaita > > > wrote: > > > > On 3/4/20 5:40 PM, Ghanshyam Mann wrote: > > > ---- On Wed, 04 Mar 2020 13:53:00 -0600 Brian Rosmaita > > > > > wrote ---- > > > > Hello QA team and devstack-plugin-ceph-core people, > > > > > > > > The Cinder team has some proposals we'd like to float. > > > > > > > > 1. The Cinder team is interested in becoming more active in the > > > > maintenance of openstack/devstack-plugin-ceph [0]. 
> > Currently, the > > > > devstack-plugin-ceph-core is > > > > https://review.opendev.org/#/admin/groups/1196,members > > > > The cinder-core is already represented by Eric and Sean; we'd > > like to > > > > replace them by including the cinder-core group. > > > > > > +1. This is good diea and make sense, I will do the change. > > > > Great, thanks! > > > > > > > > I agree this is a great idea to have more members of Cinder joining the > > devstack-plugin-ceph team. I would like to have atleast a sub team of > > manila core reviewers added to this project if it makes sense. The > > Manila CephFS drivers (cephfs-native and cephfs-nfs) are currently being > > tested with the help of the devstack integration in devstack-plugin-ceph. > > > > We have Tom Barron (tbarron) in the team, i'd like to propose myself > > (gouthamr) and Victoria Martinez de la Cruz (vkmc) > > > > Please let me know what you think of the idea. > > I've got no objection from the Cinder side. I would also not object to > adding the manila-core group instead of individuals. It's certainly in > your team's interest to keep this thing stable and working, just as it > is for the Cinder team. Agree, I think adding manila group will be helpful, let me know if ok for you and accordinfgly I will make changes. -gmann > > > > > > > > > > > 2. The Cinder team is interested in becoming more active in the > > > > maintenance of x/devstack-plugin-nfs [1]. Currently, the > > > > devstack-plugin-nfs-core is > > > > https://review.opendev.org/#/admin/groups/1330,members > > > > It's already 75% cinder-core members; we'd like to replace the > > > > individual members with the cinder-core group. We also > > propose that > > > > devstack-core be added as an included group. > > > > > > > > 3. The Cinder team is interested in implementing a new > > devstack plugin: > > > > openstack/devstack-plugin-open-cas > > > > This will enable thorough testing of a new feature [2] being > > introduced > > > > as experimental in Ussuri and expected to be finalized in > > Victoria. Our > > > > plan would be to make both cinder-core and devstack-core > > included groups > > > > for the gerrit group governing the new plugin. > > > > > > +1. You want this under Cinder governance or under QA ? > > > > I think it makes sense for these to be under QA governance -- QA would > > own the repo with both QA and Cinder having permission to make changes. > > > > > > > > > > 4. This is a minor point, but can the devstack-plugin-nfs > > repo be moved > > > > back into the 'openstack' namespace? > > > > > > If this is usable plugin for nfs testing (I am not aware if we > > have any other) then > > > it make sense to bring it to openstack governance. > > > Same question here, do you want to put this under Cinder > > governance or QA. > > > > Same here, I think QA should "own" the repo, but Cinder will have > > permission to make changes there. > > > > > > > > Those plugins under QA governance also ok for me with your > > proposal of calloborative maintainance by > > > devstack-core and cinder-core. > > > > > > -gmann > > > > Thanks for the quick response! > > > > > > > > > > Let us know which of these proposals you find acceptable. 
> > > > > > > > > > > > [0] https://opendev.org/openstack/devstack-plugin-ceph > > > > [1] https://opendev.org/x/devstack-plugin-nfs > > > > [2] > > https://blueprints.launchpad.net/cinder/+spec/support-volume-local-cache > > > > > > > > > > > > > > > > > > From gouthampravi at gmail.com Mon Mar 9 19:21:09 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Mon, 9 Mar 2020 12:21:09 -0700 Subject: [qa][cinder][devstack] proposed governance changes for some devstack plugins In-Reply-To: <170c0b444a5.e5378d4f17622.7872527250831207185@ghanshyammann.com> References: <170a7b5430a.1155e6495437733.1575830632912803163@ghanshyammann.com> <69fcb574-1ae1-08cb-e8e2-8bd08bef80f4@gmail.com> <170c0b444a5.e5378d4f17622.7872527250831207185@ghanshyammann.com> Message-ID: On Mon, Mar 9, 2020 at 12:10 PM Ghanshyam Mann wrote: > ---- On Mon, 09 Mar 2020 13:19:32 -0500 Brian Rosmaita < > rosmaita.fossdev at gmail.com> wrote ---- > > On 3/6/20 6:12 PM, Goutham Pacha Ravi wrote: > > > > > > On Thu, Mar 5, 2020 at 11:53 AM Brian Rosmaita > > > > > wrote: > > > > > > On 3/4/20 5:40 PM, Ghanshyam Mann wrote: > > > > ---- On Wed, 04 Mar 2020 13:53:00 -0600 Brian Rosmaita > > > > > > > wrote ---- > > > > > Hello QA team and devstack-plugin-ceph-core people, > > > > > > > > > > The Cinder team has some proposals we'd like to float. > > > > > > > > > > 1. The Cinder team is interested in becoming more active > in the > > > > > maintenance of openstack/devstack-plugin-ceph [0]. > > > Currently, the > > > > > devstack-plugin-ceph-core is > > > > > https://review.opendev.org/#/admin/groups/1196,members > > > > > The cinder-core is already represented by Eric and Sean; > we'd > > > like to > > > > > replace them by including the cinder-core group. > > > > > > > > +1. This is good diea and make sense, I will do the change. > > > > > > Great, thanks! > > > > > > > > > > > > I agree this is a great idea to have more members of Cinder joining > the > > > devstack-plugin-ceph team. I would like to have atleast a sub team of > > > manila core reviewers added to this project if it makes sense. The > > > Manila CephFS drivers (cephfs-native and cephfs-nfs) are currently > being > > > tested with the help of the devstack integration in > devstack-plugin-ceph. > > > > > > We have Tom Barron (tbarron) in the team, i'd like to propose myself > > > (gouthamr) and Victoria Martinez de la Cruz (vkmc) > > > > > > Please let me know what you think of the idea. > > > > I've got no objection from the Cinder side. I would also not object to > > adding the manila-core group instead of individuals. It's certainly in > > your team's interest to keep this thing stable and working, just as it > > is for the Cinder team. > > Agree, I think adding manila group will be helpful, let me know if ok for > you > and accordinfgly I will make changes. > Sure thing, works for me. Thanks Brian and Ghanshyam. > > -gmann > > > > > > > > > > > > > > > > 2. The Cinder team is interested in becoming more active > in the > > > > > maintenance of x/devstack-plugin-nfs [1]. Currently, the > > > > > devstack-plugin-nfs-core is > > > > > https://review.opendev.org/#/admin/groups/1330,members > > > > > It's already 75% cinder-core members; we'd like to replace > the > > > > > individual members with the cinder-core group. We also > > > propose that > > > > > devstack-core be added as an included group. > > > > > > > > > > 3. 
The Cinder team is interested in implementing a new > > > devstack plugin: > > > > > openstack/devstack-plugin-open-cas > > > > > This will enable thorough testing of a new feature [2] > being > > > introduced > > > > > as experimental in Ussuri and expected to be finalized in > > > Victoria. Our > > > > > plan would be to make both cinder-core and devstack-core > > > included groups > > > > > for the gerrit group governing the new plugin. > > > > > > > > +1. You want this under Cinder governance or under QA ? > > > > > > I think it makes sense for these to be under QA governance -- QA > would > > > own the repo with both QA and Cinder having permission to make > changes. > > > > > > > > > > > > > 4. This is a minor point, but can the devstack-plugin-nfs > > > repo be moved > > > > > back into the 'openstack' namespace? > > > > > > > > If this is usable plugin for nfs testing (I am not aware if we > > > have any other) then > > > > it make sense to bring it to openstack governance. > > > > Same question here, do you want to put this under Cinder > > > governance or QA. > > > > > > Same here, I think QA should "own" the repo, but Cinder will have > > > permission to make changes there. > > > > > > > > > > > Those plugins under QA governance also ok for me with your > > > proposal of calloborative maintainance by > > > > devstack-core and cinder-core. > > > > > > > > -gmann > > > > > > Thanks for the quick response! > > > > > > > > > > > > > Let us know which of these proposals you find acceptable. > > > > > > > > > > > > > > > [0] https://opendev.org/openstack/devstack-plugin-ceph > > > > > [1] https://opendev.org/x/devstack-plugin-nfs > > > > > [2] > > > > https://blueprints.launchpad.net/cinder/+spec/support-volume-local-cache > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Mon Mar 9 19:49:53 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 9 Mar 2020 15:49:53 -0400 Subject: [cinder] reminder: ussuri virtual mid-cycle 16 march at 12:00 UTC Message-ID: (Feedback from the last virtual meet-up was that it needed more promotion, so if you know anyone who might be (or should be) interested, please tell them, in addition to marking your own calendar.) Session Two of the Cinder Ussuri virtual mid-cycle will be held: DATE: Monday, 16 March 2020 TIME: 1200-1400 UTC LOCATION: https://bluejeans.com/3228528973 The meeting will be recorded. Please add topics to the planning etherpad: https://etherpad.openstack.org/p/cinder-ussuri-mid-cycle-planning cheers, brian From openstack at nemebean.com Mon Mar 9 20:51:12 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 9 Mar 2020 15:51:12 -0500 Subject: [oslo][infra] Oslo core security team on Launchpad Message-ID: <6427416f-ed83-e7c2-e40e-a5013202d5ce@nemebean.com> Hi, I just noticed that the Oslo core security team includes a number of people no longer active in Oslo and also only me for current cores. We should really clean that up so random people aren't getting notified of private security bugs and ideally add some current cores so we have more eyes on said security bugs. How do we go about doing that? I see it's owned by the OpenStack Administrators team, so do I put in a request with the changes or can they just make me an administrator for that group? Thanks. 
-Ben From fungi at yuggoth.org Mon Mar 9 21:18:03 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 9 Mar 2020 21:18:03 +0000 Subject: [oslo][infra] Oslo core security team on Launchpad In-Reply-To: <6427416f-ed83-e7c2-e40e-a5013202d5ce@nemebean.com> References: <6427416f-ed83-e7c2-e40e-a5013202d5ce@nemebean.com> Message-ID: <20200309211802.fejswnds7n6t55zt@yuggoth.org> On 2020-03-09 15:51:12 -0500 (-0500), Ben Nemec wrote: > I just noticed that the Oslo core security team includes a number > of people no longer active in Oslo and also only me for current > cores. We should really clean that up so random people aren't > getting notified of private security bugs and ideally add some > current cores so we have more eyes on said security bugs. It's been languishing on my to do list to remind all projects with the vulnerability:managed governance tag to review those group memberships in LP regularly and keep them groomed to fit the recommendations in requirement #2 here: https://governance.openstack.org/tc/reference/tags/vulnerability_managed.html#requirements 2. The deliverable must have a dedicated point of contact for security issues (which could be shared by multiple deliverables in a given project-team if needed), so that the VMT can engage them to triage reports of potential vulnerabilities. Deliverables with more than five core reviewers should (so as to limit the unnecessary exposure of private reports) settle on a subset of these to act as security core reviewers whose responsibility it is to be able to confirm whether a bug report is accurate/applicable or at least know other subject matter experts they can in turn subscribe to perform those activities in a timely manner. They should also be able to review and provide pre-approval of patches attached to private bugs, which is why at least a majority are expected to be core reviewers for the deliverable. These should be members of a group contact (for example a -coresec team) in the deliverable’s defect tracker so that the VMT can easily subscribe them to new bugs." We're also trying to keep the liaisons and links to corresponding security teams tracked here for faster VMT response: https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management > How do we go about doing that? A group member marked as an "administrator" for it should add and remove members as needed. Generally this group would include the current PTL or active liaison for vulnerability reports as an administrative member to take care of the duty of maintaining group membership, including proper hand-off during transitions of leadership. > I see it's owned by the OpenStack Administrators team, so do I put > in a request with the changes or can they just make me an > administrator for that group? Since I'm in the OpenStack Administrators group on LP I've gone ahead and flagged your membership in oslo-coresec as having administrative privileges. We require these groups to be owned by OpenStack Administrators so that it can act as a fallback in situations like this where expected group admin hand-off has been forgotten. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Mon Mar 9 21:42:20 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 9 Mar 2020 16:42:20 -0500 Subject: [oslo][infra] Oslo core security team on Launchpad In-Reply-To: <20200309211802.fejswnds7n6t55zt@yuggoth.org> References: <6427416f-ed83-e7c2-e40e-a5013202d5ce@nemebean.com> <20200309211802.fejswnds7n6t55zt@yuggoth.org> Message-ID: On 3/9/20 4:18 PM, Jeremy Stanley wrote: > On 2020-03-09 15:51:12 -0500 (-0500), Ben Nemec wrote: >> I just noticed that the Oslo core security team includes a number >> of people no longer active in Oslo and also only me for current >> cores. We should really clean that up so random people aren't >> getting notified of private security bugs and ideally add some >> current cores so we have more eyes on said security bugs. > > It's been languishing on my to do list to remind all projects with > the vulnerability:managed governance tag to review those group > memberships in LP regularly and keep them groomed to fit the > recommendations in requirement #2 here: > > https://governance.openstack.org/tc/reference/tags/vulnerability_managed.html#requirements > > > 2. The deliverable must have a dedicated point of contact for > security issues (which could be shared by multiple deliverables > in a given project-team if needed), so that the VMT can engage > them to triage reports of potential vulnerabilities. Deliverables > with more than five core reviewers should (so as to limit the > unnecessary exposure of private reports) settle on a subset of > these to act as security core reviewers whose responsibility it > is to be able to confirm whether a bug report is > accurate/applicable or at least know other subject matter experts > they can in turn subscribe to perform those activities in a > timely manner. They should also be able to review and provide > pre-approval of patches attached to private bugs, which is why at > least a majority are expected to be core reviewers for the > deliverable. These should be members of a group contact (for > example a -coresec team) in the deliverable’s defect > tracker so that the VMT can easily subscribe them to new bugs." > > We're also trying to keep the liaisons and links to corresponding > security teams tracked here for faster VMT response: > > https://wiki.openstack.org/wiki/CrossProjectLiaisons#Vulnerability_management > >> How do we go about doing that? > > A group member marked as an "administrator" for it should add and > remove members as needed. Generally this group would include the > current PTL or active liaison for vulnerability reports as an > administrative member to take care of the duty of maintaining group > membership, including proper hand-off during transitions of > leadership. > >> I see it's owned by the OpenStack Administrators team, so do I put >> in a request with the changes or can they just make me an >> administrator for that group? > > Since I'm in the OpenStack Administrators group on LP I've gone > ahead and flagged your membership in oslo-coresec as having > administrative privileges. We require these groups to be owned by > OpenStack Administrators so that it can act as a fallback in > situations like this where expected group admin hand-off has been > forgotten. > Great, thanks! I have something to add to my shiny new Oslo PTL guide. 
:-) From kennelson11 at gmail.com Tue Mar 10 00:25:59 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 9 Mar 2020 17:25:59 -0700 Subject: [all] Collecting Virtual Midcycle Best Practices Message-ID: Hello Everyone! I wanted to collect best practices and pitfalls to avoid wrt projects experiences with virtual midcycles. I know of a few projects that have done them in the past and with how travel is hard for a lot of people right now, I expect more projects to have midcycles. I think it would be helpful to have all of the data we can collect in one place for those not just new to virtual midcycles but the whole community. I threw some categories into this etherpad[1] and filled in some options. Please add to it :) -Kendall (diablo_rojo) [1] https://etherpad.openstack.org/p/virtual-midcycle-best-practices -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Mar 10 01:30:11 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 09 Mar 2020 20:30:11 -0500 Subject: [qa][cinder][devstack] proposed governance changes for some devstack plugins In-Reply-To: References: <170a7b5430a.1155e6495437733.1575830632912803163@ghanshyammann.com> <69fcb574-1ae1-08cb-e8e2-8bd08bef80f4@gmail.com> <170c0b444a5.e5378d4f17622.7872527250831207185@ghanshyammann.com> Message-ID: <170c21047ef.1139d333621430.2800684447022470438@ghanshyammann.com> ---- On Mon, 09 Mar 2020 14:21:09 -0500 Goutham Pacha Ravi wrote ---- > > > On Mon, Mar 9, 2020 at 12:10 PM Ghanshyam Mann wrote: > ---- On Mon, 09 Mar 2020 13:19:32 -0500 Brian Rosmaita wrote ---- > > On 3/6/20 6:12 PM, Goutham Pacha Ravi wrote: > > > > > > On Thu, Mar 5, 2020 at 11:53 AM Brian Rosmaita > > > > wrote: > > > > > > On 3/4/20 5:40 PM, Ghanshyam Mann wrote: > > > > ---- On Wed, 04 Mar 2020 13:53:00 -0600 Brian Rosmaita > > > > > > > wrote ---- > > > > > Hello QA team and devstack-plugin-ceph-core people, > > > > > > > > > > The Cinder team has some proposals we'd like to float. > > > > > > > > > > 1. The Cinder team is interested in becoming more active in the > > > > > maintenance of openstack/devstack-plugin-ceph [0]. > > > Currently, the > > > > > devstack-plugin-ceph-core is > > > > > https://review.opendev.org/#/admin/groups/1196,members > > > > > The cinder-core is already represented by Eric and Sean; we'd > > > like to > > > > > replace them by including the cinder-core group. > > > > > > > > +1. This is good diea and make sense, I will do the change. > > > > > > Great, thanks! > > > > > > > > > > > > I agree this is a great idea to have more members of Cinder joining the > > > devstack-plugin-ceph team. I would like to have atleast a sub team of > > > manila core reviewers added to this project if it makes sense. The > > > Manila CephFS drivers (cephfs-native and cephfs-nfs) are currently being > > > tested with the help of the devstack integration in devstack-plugin-ceph. > > > > > > We have Tom Barron (tbarron) in the team, i'd like to propose myself > > > (gouthamr) and Victoria Martinez de la Cruz (vkmc) > > > > > > Please let me know what you think of the idea. > > > > I've got no objection from the Cinder side. I would also not object to > > adding the manila-core group instead of individuals. It's certainly in > > your team's interest to keep this thing stable and working, just as it > > is for the Cinder team. > > Agree, I think adding manila group will be helpful, let me know if ok for you > and accordinfgly I will make changes. 
> > > Sure thing, works for me. Thanks Brian and Ghanshyam. Done. Replace the individual with manila group. -gmann > > -gmann > > > > > > > > > > > > > > > > 2. The Cinder team is interested in becoming more active in the > > > > > maintenance of x/devstack-plugin-nfs [1]. Currently, the > > > > > devstack-plugin-nfs-core is > > > > > https://review.opendev.org/#/admin/groups/1330,members > > > > > It's already 75% cinder-core members; we'd like to replace the > > > > > individual members with the cinder-core group. We also > > > propose that > > > > > devstack-core be added as an included group. > > > > > > > > > > 3. The Cinder team is interested in implementing a new > > > devstack plugin: > > > > > openstack/devstack-plugin-open-cas > > > > > This will enable thorough testing of a new feature [2] being > > > introduced > > > > > as experimental in Ussuri and expected to be finalized in > > > Victoria. Our > > > > > plan would be to make both cinder-core and devstack-core > > > included groups > > > > > for the gerrit group governing the new plugin. > > > > > > > > +1. You want this under Cinder governance or under QA ? > > > > > > I think it makes sense for these to be under QA governance -- QA would > > > own the repo with both QA and Cinder having permission to make changes. > > > > > > > > > > > > > 4. This is a minor point, but can the devstack-plugin-nfs > > > repo be moved > > > > > back into the 'openstack' namespace? > > > > > > > > If this is usable plugin for nfs testing (I am not aware if we > > > have any other) then > > > > it make sense to bring it to openstack governance. > > > > Same question here, do you want to put this under Cinder > > > governance or QA. > > > > > > Same here, I think QA should "own" the repo, but Cinder will have > > > permission to make changes there. > > > > > > > > > > > Those plugins under QA governance also ok for me with your > > > proposal of calloborative maintainance by > > > > devstack-core and cinder-core. > > > > > > > > -gmann > > > > > > Thanks for the quick response! > > > > > > > > > > > > > Let us know which of these proposals you find acceptable. > > > > > > > > > > > > > > > [0] https://opendev.org/openstack/devstack-plugin-ceph > > > > > [1] https://opendev.org/x/devstack-plugin-nfs > > > > > [2] > > > https://blueprints.launchpad.net/cinder/+spec/support-volume-local-cache > > > > > > > > > > > > > > > > > > > > > > > > > > > From Dong.Ding at dell.com Tue Mar 10 01:55:29 2020 From: Dong.Ding at dell.com (Dong.Ding at dell.com) Date: Tue, 10 Mar 2020 01:55:29 +0000 Subject: [manila] share group replication spike/questions In-Reply-To: References: <55d84e2e29cb4758aaff0b8c07aaa0bd@KULX13MDC124.APAC.DELL.COM> Message-ID: <3b103cb9a3894762a8664815fff5771c@KULX13MDC124.APAC.DELL.COM> Hi, Gotham, After checked the manila DB, I noticed there is table called ‘share_instances’ which was added for share replication and snapshot. Now, for group replication, do you think we also need a new table like ‘share_group_instances’ ? Thanks, Ding Dong From: Goutham Pacha Ravi Sent: Saturday, February 29, 2020 7:43 AM To: Ding, Dong Cc: OpenStack Discuss Subject: Re: [manila] share group replication spike/questions [EXTERNAL EMAIL] On Fri, Feb 28, 2020 at 12:21 AM > wrote: Thanks Gotham, We are talking about this feature after U release. Cannot get it done in recently. Just do some prepare first. Great, thanks for confirming. 
We'll hash out the design on the specification, and if necessary, we can work through it during the Open Infra Project Technical Gathering in June [8][9] [8] https://www.openstack.org/ptg/ [9] https://etherpad.openstack.org/p/vancouver-ptg-manila-planning BR, Ding Dong From: Goutham Pacha Ravi > Sent: Friday, February 28, 2020 7:10 AM To: Ding, Dong Cc: OpenStack Discuss Subject: Re: [manila] share group replication spike/questions [EXTERNAL EMAIL] On Tue, Feb 25, 2020 at 12:53 AM > wrote: Hi, guys, As we talked about the topic in a virtual PTG few months ago. https://etherpad.openstack.org/p/shanghai-ptg-manila-virtual (Support promoting several shares in group (DELL EMC: dingdong) I’m trying to write a manila-spec for it. Hi, thank you for working on this, and for submitting a specification [0]. We're targeting this for the Victoria release, correct? I like working on these major changes as soon as possible giving us enough air time for testing and hardening. It’s my first experience to implement such feature in framework. I need to double check with you something, and hope you can give me some guides like: 1. Where is the extra-spec defined for group/group type, it’s in Manila repo, right? (like manila.db.sqlalchemy.models….) Group type extra specs are added as storage capabilities first, you begin by modifying the driver interface to report this group type capability. When share drivers report their support for group replication, operators can use the corresponding string in their group type extra-specs to schedule appropriately. I suggest taking a look at an existing share group type capability called "consistent_snapshot_support". [1] and [2] are reviews that added it. 2. The command cli should be implemented for ‘python-manilaclinet’ repo, right? (I have never touched this repo before) Yes. python-manilaclient encompasses - a python SDK to version 2 of the manila API - two shell implementations: manila and openstack client (actively being developed) Group type extra-specs are passed transparently through the SDK and CLI, you may probably add some documentation or shell hint text (like [3] if needed). 3. Where is the rest-api should be implemented? The rest API is in the openstack/manila repository. [4][5] contain some documentation regarding how to change the manila API. 4. And more tips you have? like any other related project should be changed? For any new feature, we need these additional things besides working code: - A first party driver implementation where possible so we can test this feature in the upstream CI (if no first party driver can support this feature, you'll need to make the best approximation of this feature through the Dummy/Fake driver [6]) - The feature must be tested with adequate test cases in manila-tempest-plugin - Documentation must be added to the manila documentation [7] Just list what I know, and more details questions will be raised when implementing, I think. 
FYI Thanks, Ding Dong Happy to answer any more questions, here or on your specification [0] Thanks, Goutham [0] https://review.opendev.org/#/c/710166/ [1] https://review.opendev.org/#/c/446044/ [2] https://review.opendev.org/#/c/447474/ [3] https://opendev.org/openstack/python-manilaclient/src/commit/ac5ca461e8c8dd11fe737de7b90ab5c33366ab35/manilaclient/v2/shell.py#L4543 [4] https://docs.openstack.org/manila/latest/contributor/addmethod.openstackapi.html [5] https://docs.openstack.org/manila/latest/contributor/api_microversion_dev.html [6] https://opendev.org/openstack/manila/src/commit/68a18f49472ac7686ceab15e9788dcef05764822/manila/tests/share/drivers/dummy.py [7] https://docs.openstack.org/manila/latest/contributor/documenting_your_work.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Tue Mar 10 02:21:11 2020 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 9 Mar 2020 22:21:11 -0400 Subject: [all][ideas] Introducing Project Teapot: a baremetal cloud for the 2020s Message-ID: <1dce98d8-e92d-f648-282d-2491d32ae1bd@redhat.com> I'm pleased to announce the first entry on the new Ideas site (I wanted to call it Crazy Ideas, but JP overruled me and he did all the work to set it up), which also happens to be the reason the Ideas site exists: Project Teapot. (Before we go any further; we all wear a lot of hats in this community, so I'd like to make clear that I'm writing this email wearing my cap-and-bells in my capacity as self-appointed spokes-jester for Project Teapot, and no other.) What is Project Teapot? It's a new kind of cloud that has only one type of workload: Kubernetes on bare metal. Intrigued? Read on: https://governance.openstack.org/ideas/ideas/teapot/index.html What does this have to do with OpenStack? Well, it turns out that the OpenStack community has already built a lot (but not all) of the implementations of things that a cloud like this would need, in the form of projects like Ironic, Manila, Cinder, Keystone, Designate, Barbican. Plus we think it could probably be run as an OpenStack service alongside an existing OpenStack cloud when desired as well. Thanks are due to all of the folks who helped develop this idea, and the domain experts who reviewed it to hopefully eliminate the most egregious errors. Now it's over to you. If you need this, or want to help implement it, we'd like to hear from you on this thread. If you think this is a terrible idea and we should all be run out of town on a rail, we want to hear that too! (Pro tip: use the hashtag #ProjectCrackpot when you complain about it on Twitter.) If you have corrections or additional implementation ideas to add, feel free to submit a patch to the openstack/ideas repo. You can also add questions as inline comments on the original review (https://review.opendev.org/710173) if you want. It might pay to flag anything you post to Gerrit in this thread as well to make sure it's not missed. Finally, if you have ideas of your own, please submit them to the Ideas repo. Remember, they can be as crazy as you want. Let's not let the wisdom of the OpenStack community remain locked up in individual heads. cheers, Zane. From whayutin at redhat.com Tue Mar 10 02:57:37 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 9 Mar 2020 20:57:37 -0600 Subject: [tripleo] Message-ID: Greetings, Everyone you are going to find your jobs going red atm. 
My apologies, some containers were not pushed properly during the manual push, and I'm having to re-pull and re-push to docker.io at the moment. We should have this fixed in about 2 hours. Sorry again, and hopefully you'll be back up and running shortly. Thanks
-------------- next part -------------- An HTML attachment was scrubbed... URL: 
From whayutin at redhat.com Tue Mar 10 04:24:40 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 9 Mar 2020 22:24:40 -0600 Subject: [tripleo] In-Reply-To: References: Message-ID: 
On Mon, Mar 9, 2020 at 8:57 PM Wesley Hayutin wrote: > Greetings, > > Everyone you are going to find your jobs going red atm. My apologies some > containers were not pushed manually properly and I'm having to repull and > and repush to docker.io atm. We should have this fixed in about 2 hours. > > Sorry again, and hopefully you'll be back up and running shortly. > > Thanks >
Quick update: we're about half done with pushing the new containers for centos-8; the tag is af182654cc32d30ea7f4774eb06ed9fd
Hopefully all the mirrors will get seeded quickly. Sorry for the inconvenience.
-------------- next part -------------- An HTML attachment was scrubbed... URL: 
From soulxu at gmail.com Tue Mar 10 05:45:13 2020 From: soulxu at gmail.com (Alex Xu) Date: Tue, 10 Mar 2020 13:45:13 +0800 Subject: [nova][ptl] Temporary Nova PTL until election In-Reply-To: References: <1583482276.12170.14@est.tech> <4692F106-6B00-41FC-9BA9-1DF62A24EDAB@fried.cc> <170b01d7595.10341333e516143.4131462912712933865@ghanshyammann.com> Message-ID: 
gibi, thanks.
On Tue, Mar 10, 2020 at 12:09 AM, Sylvain Bauza wrote: > Thanks indeed gibi for this. > > On Fri, Mar 6, 2020 at 2:58 PM Ghanshyam Mann > wrote: > >> ---- On Fri, 06 Mar 2020 07:01:15 -0600 Eric Fried >> wrote ---- >> > Big +1 from me. Many thanks, gibi. Not that you‘ll need it, but >> please don’t hesitate to reach out to me if you have questions. >> >> Indeed. Thanks gibi for helping out here. >> >> -gmann >> > >> > efried_gone >> > >> > > On Mar 6, 2020, at 02:16, Balázs Gibizer >> wrote: >> > > >> > > Hi, >> > > >> > > Since Eric announced that he has to leave us [1] I have been working >> > > internally with my employee to be able to take over the Nova PTL >> > > position. Now I've got the necessary approvals. The official PTL >> > > election is close [2] and I'm ready to fill the PTL gap until the >> > > proper PTL election in April. >> > > >> > > Is this a workable solution for the community? >> > > >> > > Cheers, >> > > gibi >> > > >> > > [1] >> > > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012663.html >> > > [2] https://governance.openstack.org/election/ >> > > >> > > >> > > >> > >> > >> > >> >>
-------------- next part -------------- An HTML attachment was scrubbed... URL: 
From licanwei_cn at 163.com Tue Mar 10 06:30:57 2020 From: licanwei_cn at 163.com (licanwei) Date: Tue, 10 Mar 2020 14:30:57 +0800 (GMT+08:00) Subject: [Watcher]about meeting on March 11 Message-ID: <3555fb1f.2d8e.170c323a2a3.Coremail.licanwei_cn@163.com> 
Hi, We will have the team meeting tomorrow at 08:00 UTC on #openstack-meeting-alt. Please update the meeting agenda if you have something you want to discuss. If nothing needs to be discussed, maybe I'll cancel the meeting. thanks, licanwei | | licanwei_cn | | Email: licanwei_cn at 163.com | Signature customized by NetEase Mail Master
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From balazs.gibizer at est.tech Tue Mar 10 08:16:54 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 10 Mar 2020 09:16:54 +0100 Subject: [nova][stable] Nova stable branch liaison In-Reply-To: <20200309190144.yad4y7wrcnpsfk4j@lyarwood.usersys.redhat.com> References: <1583767645.12170.27@est.tech> <20200309190144.yad4y7wrcnpsfk4j@lyarwood.usersys.redhat.com> Message-ID: <1583828214.12170.28@est.tech> On Mon, Mar 9, 2020 at 19:01, Lee Yarwood wrote: > On 09-03-20 16:27:25, Balázs Gibizer wrote: >> Hi Team, >> >> According to the wiki [1] we still have Matt as a stable branch >> liaison for >> nova. Obviously we need to find somebody else as Matt is gone. I'm >> not part >> of the nova-stable-core team so I cannot assume default ownership >> of that. >> Also I see that we have a small but active stable team so I hope >> somebody >> from that team can step forward and can take this role. > > Yup I would be happy to help with this. Thank you for taking this work up. > > Should I update the page assuming we don't have any other volunteers? I trust you that you can do it and I handle this in a first come first serve basis. But anyone who want to help in stable maint can reach out to me (or I guess to you as well) publicly or privately and we can arrange further load sharing if needed. Also after the PTL election the next PTL (if any) can re-evaluate the liaison situation if needed. I updated the wiki accordingly. Cheers, gibi > > Cheers, > > Lee > >> [1] >> https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch > > -- > Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 > F672 2D76 From thierry at openstack.org Tue Mar 10 09:15:50 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 10 Mar 2020 10:15:50 +0100 Subject: [all] Collecting Virtual Midcycle Best Practices In-Reply-To: References: Message-ID: Kendall Nelson wrote: > I wanted to collect best practices and pitfalls to avoid wrt projects > experiences with virtual midcycles. I know of a few projects that have > done them in the past and with how travel is hard for a lot of people > right now, I expect more projects to have midcycles. I think it would be > helpful to have all of the data we can collect in one place for those > not just new to virtual midcycles but the whole community. > [...] Also interested in feedback from teams that had virtual PTGs in the past (keeping all possibilities on the table). I think Kolla, Telemetry and a few others did that. -- Thierry Carrez (ttx) From mark at stackhpc.com Tue Mar 10 09:45:22 2020 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 10 Mar 2020 09:45:22 +0000 Subject: [all] Collecting Virtual Midcycle Best Practices In-Reply-To: References: Message-ID: On Tue, 10 Mar 2020 at 09:17, Thierry Carrez wrote: > > Kendall Nelson wrote: > > I wanted to collect best practices and pitfalls to avoid wrt projects > > experiences with virtual midcycles. I know of a few projects that have > > done them in the past and with how travel is hard for a lot of people > > right now, I expect more projects to have midcycles. I think it would be > > helpful to have all of the data we can collect in one place for those > > not just new to virtual midcycles but the whole community. > > [...] > > Also interested in feedback from teams that had virtual PTGs in the past > (keeping all possibilities on the table). I think Kolla, Telemetry and a > few others did that. Kolla has now had two virtual PTGs. 
Overall I think they went fairly well, particularly the most recent one. We tried Zoom, then moved to Google Meet. I forget the problems with Zoom. There were inevitably a few teething problems with the video, but I think we worked it out after 15-20 minutes. Etherpad for Ussuri vPTG here: https://etherpad.openstack.org/p/kolla-ussuri-ptg.
Without seeing people's faces it can be hard to ensure everyone keeps focussed. It's quite rare for the whole room to be focussed at physical discussions though.
Going around the room giving short intros helps to get people talking, and it may be better to do these ~1 hour in as people may miss the start. Directing questions at non-cores can help overcome that pesky imposter syndrome. Keeping video on definitely helps with engagement, up to the point where it impacts audio quality.
There was also the Denver PTG, where the PTL and a number of cores were remote, and we struggled to make any progress. I think there were a few reasons for this. The fixed time of the PTG was not optimal for many remote attendees living in Europe or Asia. When there are a number of participants in one location, it can be easy to forget to direct speech at the microphone, allow time for remote callers to ask questions/respond etc. This makes it difficult and frustrating for them to join in, making it easier to get distracted and drop off.
Not too much hard data in there, but hopefully a feel for how it went for us. > > -- > Thierry Carrez (ttx) > 
From balazs.gibizer at est.tech Tue Mar 10 09:49:13 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 10 Mar 2020 10:49:13 +0100 Subject: [nova] US meeting slot In-Reply-To: <1583744340.12170.17@est.tech> References: <1583744340.12170.17@est.tech> Message-ID: <1583833753.12170.29@est.tech> 
On Mon, Mar 9, 2020 at 08:59, Balázs Gibizer wrote: > Hi, > > Nova has alternate meeting slots on Thursdays to try to cover > contributors from different time zones. > * 14:00 UTC > * 21:00 UTC > > As I'm taking over the PTL role form Eric I need to figure out how to > run the nova meetings. I cannot really run the UTC 21:00 as it is > pretty late for me. (I will run the 14:00 UTC slot). I see different > options: > > a) Somebody from the US side of the globe volunteers to run the 21:00 > UTC slot. Please speak up if you would like to run it. I can help you > with agenda refresh and technicalities if needed. > > b) Have only one meeting time, and move that to 16:00 UTC. In this > case I will be able to run it most of the weeks. > > c) Do not have a dedicated meeting slot but switch to office hours. > Here we also need to find a time slot. I think 16:00 UTC could work > there as well. > > > Please share your view! Any other proposal is very welcome.
Thank you all for the feedback. I see a consensus forming around moving the weekly meeting slot both from 21:00 UTC and from 14:00 UTC to a single 16:00 UTC slot. I'm aware that this is bad for our contributors from China. To help with this pain I offer to be available on Thursday 8:00 - 9:00 UTC on #openstack-nova to discuss any issues that need to be brought up on the 16:00 UTC meeting (almost like an office hour). Let's see how this works out and please give me feedback any time.
// gibi > > Cheers, > gibi > From thierry at openstack.org Tue Mar 10 10:12:13 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 10 Mar 2020 11:12:13 +0100 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: References: Message-ID: <2e142636-0070-704c-c5f7-1e035bc9d406@openstack.org> Mohammed Naser wrote: > [...] > I think it's time to re-evaluate the project leadership model that we > have. I am thinking that perhaps it would make a lot of sense to move > from a single PTL model to multiple maintainers. This would leave it > up to the maintainers to decide how they want to sort the different > requirements/liaisons/contact persons between them. > > The above is just a very basic idea, I don't intend to diving much > more in depth for now as I'd like to hear about what the rest of the > community thinks. I agree that in the current age we need to take steps to avoid overwhelming roles and long commitments. As others said, we also need to preserve some accountability, but I don't think those goals are incompatible. The original design goal of the "PTL" system was to have a clear "bucket stops here" for technical decisions at project-team level, as well as a safety valve for contributors at large (through elections) to reset the core reviewers team if it's gone wild. The "bucket stops here" power was very rarely exercised (probably due to its mere existence). I'd agree that today this is less needed, and we could have equal-power maintainers/corereviewers. We still have the TC above project teams as a safety valve, and we could agree that petitions from enough contributors can trigger a reset of the core reviewers structure. The real benefit of the "PTL" system today is to facilitate the work of people outside the project team. When you try to put out a coordinated release (or organize a PTG), having a clear person that can "speak for the team", without having to get into specifics for each of our 60+ teams, is invaluable. That said, there is really no reason why that clear person should be always the same person, for 6 months. We've always said that those subroles (release liaison, meeting chair, event liaison...) should be decomposed and delegated to multiple people. That the PTL should only be involved if the role was not delegated. Yet in most teams the PTL has trouble delegating and still fills all those roles. We need to change the perception. So one solution might be: - Define multiple roles (release liaison, event liaison, meeting chair...) and allow them to be filled by the team as they want, for the duration they want, replaced when they want (would just need +1 from previous and new holder of the role) - Use the TC as a governance safety valve to resolve any conflict (instead of PTL elections) -- Thierry Carrez (ttx) From balazs.gibizer at est.tech Tue Mar 10 11:39:51 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 10 Mar 2020 12:39:51 +0100 Subject: [nova] US meeting slot In-Reply-To: <1583833753.12170.29@est.tech> References: <1583744340.12170.17@est.tech> <1583833753.12170.29@est.tech> Message-ID: <1583840391.12170.30@est.tech> On Tue, Mar 10, 2020 at 10:49, Balázs Gibizer wrote: > > > On Mon, Mar 9, 2020 at 08:59, Balázs Gibizer > wrote: >> Hi, >> >> Nova has alternate meeting slots on Thursdays to try to cover >> contributors from different time zones. >> * 14:00 UTC >> * 21:00 UTC >> >> As I'm taking over the PTL role form Eric I need to figure out how >> to run the nova meetings. 
I cannot really run the UTC 21:00 as it >> is pretty late for me. (I will run the 14:00 UTC slot). I see >> different options: >> >> a) Somebody from the US side of the globe volunteers to run the >> 21:00 UTC slot. Please speak up if you would like to run it. I can >> help you with agenda refresh and technicalities if needed. >> >> b) Have only one meeting time, and move that to 16:00 UTC. In this >> case I will be able to run it most of the weeks. >> >> c) Do not have a dedicated meeting slot but switch to office hours. >> Here we also need to find a time slot. I think 16:00 UTC could work >> there as well. >> >> >> Please share your view! Any other proposal is very welcome. > > Thank you all for the feedback. I see a consensus forming around > moving the weekly meeting slot both from 21:00 UTC and from 14:00 UTC > to a single 16:00 UTC slot. I'm aware that this is bad for our > contributors from China. To help with this pain I offer to be > available on Thurday 8:00 - 9:00 UTC on #openstack-nova to discuss > any issues that needs to be brought up on the 16:00 UTC meeting > (almost like an office hour). Let's see how this works out and please > give me feedback any time. Patch is up to re-schedule meeting https://review.opendev.org/#/c/712052 gibi > > // > gibi > >> >> Cheers, >> gibi >> > > > From sean.mcginnis at gmx.com Tue Mar 10 13:56:33 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 10 Mar 2020 08:56:33 -0500 Subject: [all] Victoria schedule Message-ID: Hello everyone, The proposed schedule for the Victoria release has now been approved and the schedule published: https://releases.openstack.org/victoria/schedule.html Some of the key dates for Victoria: * Milestone 1 - June 18 * Milestone 2 - July 30 * Milestone 3 - Sept 10 * RC1 deadline - Sept 24 * Final RC deadline - Oct 8 * Victoria coordinated release - Oct 14 Thanks! Sean From whayutin at redhat.com Tue Mar 10 14:24:27 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 10 Mar 2020 08:24:27 -0600 Subject: [tripleo] In-Reply-To: References: Message-ID: Greetings, Follow up.. new containers have been pushed to docker.io. Thank you for your patience! On Mon, Mar 9, 2020 at 10:24 PM Wesley Hayutin wrote: > > > On Mon, Mar 9, 2020 at 8:57 PM Wesley Hayutin wrote: > >> Greetings, >> >> Everyone you are going to find your jobs going red atm. My >> apologies some containers were not pushed manually properly and I'm having >> to repull and and repush to docker.io atm. We should have this fixed in >> about 2 hours. >> >> Sorry again, and hopefully you'll be back up and running shortly. >> >> Thanks >> > > Quick update.. > We're about 1/2 done w/ pushing the new containers for centos-8, the tag > is af182654cc32d30ea7f4774eb06ed9fd > > > > Hopefully all the mirrors will get seeded quickly. > Sorry for the inconvenience. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Tue Mar 10 14:34:13 2020 From: mordred at inaugust.com (Monty Taylor) Date: Tue, 10 Mar 2020 09:34:13 -0500 Subject: [tripleo] In-Reply-To: References: Message-ID: <1B1A2143-A018-408D-9515-A367CEA952B5@inaugust.com> > On Mar 10, 2020, at 9:24 AM, Wesley Hayutin wrote: > > Greetings, > > Follow up.. new containers have been pushed to docker.io. > Thank you for your patience! Yay! When you have brainspace after firefighting (always fun) - maybe we should find a time to talk about whether our image building and publishing automation could help you out here. 
No rush - this is one of those “we’ve got some tools we might be able to leverage to help” - just ping me whenever. > On Mon, Mar 9, 2020 at 10:24 PM Wesley Hayutin wrote: > > > On Mon, Mar 9, 2020 at 8:57 PM Wesley Hayutin wrote: > Greetings, > > Everyone you are going to find your jobs going red atm. My apologies some containers were not pushed manually properly and I'm having to repull and and repush to docker.io atm. We should have this fixed in about 2 hours. > > Sorry again, and hopefully you'll be back up and running shortly. > > Thanks > > Quick update.. > We're about 1/2 done w/ pushing the new containers for centos-8, the tag is af182654cc32d30ea7f4774eb06ed9fd > > Hopefully all the mirrors will get seeded quickly. > Sorry for the inconvenience. > From whayutin at redhat.com Tue Mar 10 14:49:08 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 10 Mar 2020 08:49:08 -0600 Subject: [tripleo] In-Reply-To: <1B1A2143-A018-408D-9515-A367CEA952B5@inaugust.com> References: <1B1A2143-A018-408D-9515-A367CEA952B5@inaugust.com> Message-ID: On Tue, Mar 10, 2020 at 8:34 AM Monty Taylor wrote: > > > > On Mar 10, 2020, at 9:24 AM, Wesley Hayutin wrote: > > > > Greetings, > > > > Follow up.. new containers have been pushed to docker.io. > > Thank you for your patience! > > Yay! > > When you have brainspace after firefighting (always fun) - maybe we should > find a time to talk about whether our image building and publishing > automation could help you out here. No rush - this is one of those “we’ve > got some tools we might be able to leverage to help” - just ping me > whenever. > > Definitely interested!! Some additional context on what we're up to is helpful as well. Our current tooling is here [1] and we're under the gun w/ getting centos-8 ready for ussuri and DLRN's new component feature [2]. If there are upstream tooling and processes we can incorporate we'd be interested in picking our heads up and listening! Thanks as usual Monty! [1] https://github.com/rdo-infra/ci-config/tree/master/ci-scripts/dlrnapi_promoter [2] https://review.rdoproject.org/r/#/c/24818/ https://trunk.rdoproject.org/centos8-master/component/ https://dlrn.readthedocs.io/en/latest/api.html > > On Mon, Mar 9, 2020 at 10:24 PM Wesley Hayutin > wrote: > > > > > > On Mon, Mar 9, 2020 at 8:57 PM Wesley Hayutin > wrote: > > Greetings, > > > > Everyone you are going to find your jobs going red atm. My apologies > some containers were not pushed manually properly and I'm having to repull > and and repush to docker.io atm. We should have this fixed in about 2 > hours. > > > > Sorry again, and hopefully you'll be back up and running shortly. > > > > Thanks > > > > Quick update.. > > We're about 1/2 done w/ pushing the new containers for centos-8, the tag > is af182654cc32d30ea7f4774eb06ed9fd > > > > Hopefully all the mirrors will get seeded quickly. > > Sorry for the inconvenience. > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lyarwood at redhat.com Tue Mar 10 15:22:03 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 10 Mar 2020 15:22:03 +0000 Subject: [nova] Breaking up and migrating the nova-live-migration job to Zuulv3 Message-ID: <20200310152203.6fv57e5qyqhxdgep@lyarwood.usersys.redhat.com> Hello all, I've started PoC'ing some ideas around $subject in the topic below and wanted to ask the wider team for feedback on the approach I'm taking: https://review.opendev.org/#/q/topic:nova-live-migration-zuulv3 My initial idea is to break the job up into the following smaller multinode jobs that are hopefully easier to understand and maintain. * nova-multinode-live-migration-py3 A simple LM job using the qcow2 imagebackend and LVM/iSCSI c-vol. * nova-multinode-live-migration-ceph-py3 A ceph based LM job using rbd for both imagebackend and c-vol. * nova-multinode-evacuate-py3 A separate evacuation job using qcow2 imagebackend and LVM/iSCSI c-vol. The existing script *could* then be ported to an Ansible role as part of the migration to Zuulv3. Hopefully this is pretty straight forward but I'd appreciate any feedback on this all the same. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From cboylan at sapwetik.org Tue Mar 10 15:25:38 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 10 Mar 2020 08:25:38 -0700 Subject: =?UTF-8?Q?Re:_[nova]_Breaking_up_and_migrating_the_nova-live-migration_j?= =?UTF-8?Q?ob_to_Zuulv3?= In-Reply-To: <20200310152203.6fv57e5qyqhxdgep@lyarwood.usersys.redhat.com> References: <20200310152203.6fv57e5qyqhxdgep@lyarwood.usersys.redhat.com> Message-ID: <00363f3f-ed3d-488d-98d4-c3025b7e179f@www.fastmail.com> On Tue, Mar 10, 2020, at 8:22 AM, Lee Yarwood wrote: > Hello all, > > I've started PoC'ing some ideas around $subject in the topic below and > wanted to ask the wider team for feedback on the approach I'm taking: > > https://review.opendev.org/#/q/topic:nova-live-migration-zuulv3 > > My initial idea is to break the job up into the following smaller > multinode jobs that are hopefully easier to understand and maintain. > > * nova-multinode-live-migration-py3 > > A simple LM job using the qcow2 imagebackend and LVM/iSCSI c-vol. > > * nova-multinode-live-migration-ceph-py3 > > A ceph based LM job using rbd for both imagebackend and c-vol. > > * nova-multinode-evacuate-py3 > > A separate evacuation job using qcow2 imagebackend and LVM/iSCSI c-vol. > The existing script *could* then be ported to an Ansible role as part of > the migration to Zuulv3. > > Hopefully this is pretty straight forward but I'd appreciate any > feedback on this all the same. Just a note that you can probably drop the -py3 suffix as I imagine that is assumed at this point? > > Cheers, > > -- > Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 From gagehugo at gmail.com Tue Mar 10 15:36:09 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 10 Mar 2020 10:36:09 -0500 Subject: [openstack-helm] VIrtual Midcycle - March 2020 Message-ID: Hello everyone, just a reminder that we are looking to host a virtual midcycle later this month. If you wish to attend, please fill out the poll[0] and etherpad[1] if you have any topics to discuss. We are looking to host a roughly 4 hour session virtually either next week or the following week. 
[0] https://doodle.com/poll/g6uvdb4rbad9s8gb
[1] https://etherpad.openstack.org/p/osh-virtual-ptg-2020-03
-------------- next part -------------- An HTML attachment was scrubbed... URL: 
From marcin.juszkiewicz at linaro.org Tue Mar 10 16:43:12 2020 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Tue, 10 Mar 2020 17:43:12 +0100 Subject: [horizon][kolla] pyscss failure on newest setuptools Message-ID: <79855e85-adcf-bf3b-dfa4-c017ed2ae329@linaro.org> 
One of Horizon's requirements is the pyscss package, which had its last release over 4 years ago... Two days ago setuptools v46 got released. One of the changes was the removal of the Features feature. Today Kolla builds started to fail:
INFO:kolla.common.utils.horizon:Collecting pyScss===1.3.4
INFO:kolla.common.utils.horizon: Downloading http://mirror.ord.rax.opendev.org:8080/pypifiles/packages/1d/4a/221ae7561c8f51c4f28b2b172366ccd0820b14bb947350df82428dfce381/pyScss-1.3.4.tar.gz (120 kB)
INFO:kolla.common.utils.horizon: ERROR: Command errored out with exit status 1:
INFO:kolla.common.utils.horizon: command: /var/lib/kolla/venv/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-rr0db3qs/pyScss/setup.py'"'"'; __file__='"'"'/tmp/pip-install-rr0db3qs/pyScss/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-rr0db3qs/pyScss/pip-egg-info
INFO:kolla.common.utils.horizon: cwd: /tmp/pip-install-rr0db3qs/pyScss/
INFO:kolla.common.utils.horizon: Complete output (5 lines):
INFO:kolla.common.utils.horizon: Traceback (most recent call last):
INFO:kolla.common.utils.horizon: File "", line 1, in
INFO:kolla.common.utils.horizon: File "/tmp/pip-install-rr0db3qs/pyScss/setup.py", line 9, in
INFO:kolla.common.utils.horizon: from setuptools import setup, Extension, Feature
INFO:kolla.common.utils.horizon: ImportError: cannot import name 'Feature'
Are there any plans to fix it? The pyscss project has an issue open: https://github.com/Kronuz/pyScss/issues/385
In Kolla I made an ugly workaround: https://paste.centos.org/view/2e29d284
What are the plans of the Horizon team?
From Arkady.Kanevsky at dell.com Tue Mar 10 17:04:19 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Tue, 10 Mar 2020 17:04:19 +0000 Subject: [nova] Breaking up and migrating the nova-live-migration job to Zuulv3 In-Reply-To: <20200310152203.6fv57e5qyqhxdgep@lyarwood.usersys.redhat.com> References: <20200310152203.6fv57e5qyqhxdgep@lyarwood.usersys.redhat.com> Message-ID: 
Thanks Lee. Sound approach. A few questions/comments.
1. I assume we have an unwritten assumption that all nova nodes have access to volumes on the backend, so we rely on that except for ephemeral storage.
2. What needs to be done for volumes that use FC rather than iSCSI?
3. You have one job for Ceph. Does that mean we need an analogous job for other cinder back ends?
4. Do we need to do anything analogous for Manila?
5. How do we address multi-attach volumes and multipathing? We expect that if we have multipathing on the origin node, we also have multipathing at the destination at the end.
Thanks, Arkady
-----Original Message----- From: Lee Yarwood Sent: Tuesday, March 10, 2020 10:22 AM To: openstack-discuss at lists.openstack.org Subject: [nova] Breaking up and migrating the nova-live-migration job to Zuulv3 
Hello all, I've started PoC'ing some ideas around $subject in the topic below and wanted to ask the wider team for feedback on the approach I'm taking: https://review.opendev.org/#/q/topic:nova-live-migration-zuulv3 My initial idea is to break the job up into the following smaller multinode jobs that are hopefully easier to understand and maintain. * nova-multinode-live-migration-py3 A simple LM job using the qcow2 imagebackend and LVM/iSCSI c-vol. * nova-multinode-live-migration-ceph-py3 A ceph based LM job using rbd for both imagebackend and c-vol. * nova-multinode-evacuate-py3 A separate evacuation job using qcow2 imagebackend and LVM/iSCSI c-vol. The existing script *could* then be ported to an Ansible role as part of the migration to Zuulv3. Hopefully this is pretty straight forward but I'd appreciate any feedback on this all the same. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 
From grant at civo.com Tue Mar 10 19:18:58 2020 From: grant at civo.com (Grant Morley) Date: Tue, 10 Mar 2020 19:18:58 +0000 Subject: Neutron RabbitMQ issues Message-ID: <825e802d-5a6f-4e96-dcf5-9b10332ebf3e@civo.com> 
Hi all, We are currently experiencing some fairly major issues with our OpenStack cluster. It all appears to be with Neutron and RabbitMQ. We are seeing a lot of timeouts waiting for RPC replies, and because of this, instance creation and anything else to do with instances and networking is broken. We are running OpenStack Queens.
We have already tuned Rabbit for Neutron by doing the following on neutron:
heartbeat_timeout_threshold = 0
rpc_conn_pool_size = 300
rpc_thread_pool_size = 2048
rpc_response_timeout = 3600
rpc_poll_timeout = 60
## Rpc all
executor_thread_pool_size = 64
rpc_response_timeout = 3600
What we are seeing in the error logs for neutron for all services (l3-agent, dhcp, linux-bridge, etc.) are these timeouts: https://pastebin.com/Fjh23A5a
We have manually tried to get everything in sync by forcing a failover of the networking, which seems to get the routers back in sync.
We are also seeing that there are a lot of "unacknowledged" messages in RabbitMQ for 'q-plugin' in the neutron queues. Sometimes restarting the services on neutron gets these acknowledged again; however, the timeouts come back. The RabbitMQ servers themselves are not loaded at all. All memory, file descriptors and Erlang processes have plenty of resources available.
We are also seeing a lot of RPC issues:
Timeout in RPC method release_dhcp_port. Waiting for 1523 seconds before next attempt. If the server is not down, consider increasing the rpc_response_timeout option as Neutron server(s) may be overloaded and unable to respond quickly enough.: MessagingTimeout: Timed out waiting for a reply to message ID 965fa44ab4f6462fa378a1cf7259aad4
2020-03-10 19:02:33.548 16242 ERROR neutron.common.rpc [req-a858afbb-5083-4e21-a309-6ee53582c4d9 - - - - -] Timeout in RPC method release_dhcp_port. Waiting for 3347 seconds before next attempt. If the server is not down, consider increasing the rpc_response_timeout option as Neutron server(s) may be overloaded and unable to respond quickly enough.: MessagingTimeout: Timed out waiting for a reply to message ID 7937465f15634fbfa443fe1758a12a9c
Does anyone know if there is any more tuning to be done at all?
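For reference, a quick way to see where such a backlog sits is to list the Neutron queues with their ready and unacknowledged message counts straight from RabbitMQ. A minimal sketch, assuming Neutron uses the default "/" vhost; adjust the -p argument and the grep pattern for your deployment:

rabbitmqctl list_queues -p / name messages_ready messages_unacknowledged consumers | grep q-plugin

If messages_unacknowledged keeps growing while the consumer count stays constant, it usually means the consumers are stuck or overloaded rather than disconnected.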
Upgrading to a newer version isn't really an option for us at the moment, unfortunately. Because of our setup, we also have roughly 800 routers enabled and I know that will be putting a load on the system. However, these problems only started roughly a week ago and have steadily got worse.
If anyone has any use cases for this or any more recommendations, that would be great.
Many thanks,
From radoslaw.piliszek at gmail.com Tue Mar 10 20:47:29 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 10 Mar 2020 21:47:29 +0100 Subject: OSC version Message-ID: 
Hiya Folks, while doing something else entirely (as usual!), I noticed something related to OSC version. OSC metapackage README [1] states that: "The major version of openstackclient will always correspond to the major version of python-openstackclient that will be installed." But the OSC metapackage is 4.0.0 atm and installs the latest python-openstackclient, which is 5.0.0. I don't follow that "correspondence" unless it was meant to mean "always not less than". :-) Still confusing. [1] https://pypi.org/project/openstackclient/ -yoctozepto 
From jim at jimrollenhagen.com Tue Mar 10 20:59:48 2020 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 10 Mar 2020 16:59:48 -0400 Subject: Not running for TC next election Message-ID: 
Hi all, I won't be running for TC next election. As you probably noticed, I don't really have enough time these days to meaningfully contribute, so I'm leaving it open for someone new. It's been fun and a great learning experience, so I highly encourage others in the community to run! I'll still be around to heckle in the background, don't worry. :) // jim 
-------------- next part -------------- An HTML attachment was scrubbed... URL: 
From kennelson11 at gmail.com Tue Mar 10 21:19:00 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 10 Mar 2020 14:19:00 -0700 Subject: [all] Collecting Virtual Midcycle Best Practices In-Reply-To: References: Message-ID: 
Thanks for sharing Mark! I think there is a lot of good information in there. How many people were joining, approximately? How did you coordinate when you would do it?
> > Without seeing people's faces it can be hard to ensure everyone keeps > focussed. It's quite rare for the whole room to be focussed at > physical discussions though. > > Going around the room giving short intros helps to get people talking, > and it may be better to do these ~1 hour in as people may miss the > start. Directing questions at non-cores can help overcome that pesky > imposter syndrome. Keeping video on definitely helps with engagement, > up to the point where it impacts audio quality. > > There was also the Denver PTG where the PTL and a number of cores were > remote where we struggled to make any progress. I think there were a > few reasons for this. The fixed time of the PTG was not optimal for > many remote attendees living in Europe or Asia. When there are a > number of participants in one location, it can be easy to forget to > direct speech at the microphone, allow time for remote callers to ask > questions/respond etc. This makes it difficult and frustrating for > them to join in, making it easier to get distracted and drop off. > > Not too much hard data in there, but hopefully a feel for how it went for > us. > > > > > -- > > Thierry Carrez (ttx) > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Mar 10 23:11:25 2020 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 10 Mar 2020 19:11:25 -0400 Subject: [tripleo] Missing tag in cron container image - no recheck please Message-ID: Hi folks, We seem to have an issue with container images, where one (at least) has a missing tag: https://bugs.launchpad.net/tripleo/+bug/1866927 It is causing most of our jobs to go red and fail on: tripleo_common.image.exception.ImageNotFoundException: Not found image: docker:// docker.io/tripleomaster/centos-binary-cron:3621159be13b41f8ead2e873b357f4a5 Please refrain from approving or rechecking patches until we have sorted this out. Thanks and stay tuned. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Tue Mar 10 23:38:37 2020 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 10 Mar 2020 19:38:37 -0400 Subject: [tripleo] In-Reply-To: <1B1A2143-A018-408D-9515-A367CEA952B5@inaugust.com> References: <1B1A2143-A018-408D-9515-A367CEA952B5@inaugust.com> Message-ID: On Tue, Mar 10, 2020 at 10:41 AM Monty Taylor wrote: > Yay! > > When you have brainspace after firefighting (always fun) - maybe we should > find a time to talk about whether our image building and publishing > automation could help you out here. No rush - this is one of those “we’ve > got some tools we might be able to leverage to help” - just ping me > whenever. > Hey Monty, The CI team is presently busy with CentOS 8 fires but I would be happy to help and work together on convergence. Maybe I can start by explaining how our process works, then you can do the same and we see where we can collaborate. The TL;DR is that we have built TripleO CLI and Ansible roles to consume Kolla tooling and build our images. 1) How a TripleO user would build an image? By using the "openstack overcloud container image build" command ("overcloud" is here by legacy, please ignore it). The magic happens here: https://opendev.org/openstack/python-tripleoclient/src/branch/master/tripleoclient/v1/container_image.py#L104-L252 It's basically wrapping out the kolla-build CLI; with proper options for us. 
In fact, since podman/buildah, we only use kolla-build to render the Kolla Dockerfile templates and merge them with our TripleO overrides:
https://opendev.org/openstack/tripleo-common/src/branch/master/container-images/tripleo_kolla_template_overrides.j2

kolla-build generates a directory for each image, and inside you have its Dockerfile. We don't use kolla-build to build the containers because Kolla doesn't support Buildah, and none of us has taken the time to add that yet. To build the images from the Dockerfiles, we use this code:
https://opendev.org/openstack/tripleo-common/src/branch/master/tripleo_common/image/builder/buildah.py

It basically runs "buildah bud" with concurrency (to make it faster). This code could eventually be converted to an Ansible module, which could be consumed by more than us. Once the images are built, the code runs "buildah push" to push them to a remote (or local) registry. That's it, that's all.

To sum up, we use kolla-build to generate Dockerfiles for our containers (since TripleO images use the Kolla format), and then we have our own crap that uses Buildah to build & push the images. I guess the second part is something we could share.

2) How does TripleO CI build containers?

We have an Ansible role for that:
https://opendev.org/openstack/tripleo-ci/src/branch/master/roles/build-containers

It basically:
- Installs the repositories needed to deploy TripleO
- Deploys a local docker registry with ansible-role-container-registry (also used in production when Docker is deployed, so before Stein)
- Installs and configures Kolla
- Runs "openstack overcloud container image build" (described earlier) to build, tag and push images

I skipped a few details, but this is the big picture. I'm sure there is a lot we can share, and I would be more than happy to contribute to that effort; please let me know how it works on your side and we'll find ways to collaborate.

Thanks,
--
Emilien Macchi

From ekcs.openstack at gmail.com Wed Mar 11 02:17:36 2020
From: ekcs.openstack at gmail.com (Eric Kao)
Date: Tue, 10 Mar 2020 19:17:36 -0700
Subject: [congress][ptl] new PTL for V-cycle
Message-ID:

Hello all,

Due to a change in my role, I do not intend to seek another term as Congress project PTL. I will continue through the end of the U-cycle. After that, I look forward to new voices and new perspectives to help us take the next step in the Congress journey.

Cheers,
Eric

From mark.kirkwood at catalyst.net.nz Wed Mar 11 02:27:58 2020
From: mark.kirkwood at catalyst.net.nz (Mark Kirkwood)
Date: Wed, 11 Mar 2020 15:27:58 +1300
Subject: [swift] Rolling upgrade, any version relationships?
Message-ID: <5f55bc36-b51c-5c6d-ad0f-63a32fcba2d4@catalyst.net.nz>

Hi, we are looking at upgrading our 2.7.0 Swift cluster. In the past I've modeled this on a dev system by upgrading storage nodes one by one (using 2.17 as the target version). This seemed to work well - I deliberately left the cluster half upgraded for an extended period to test for any cross-version weirdness (didn't see any). However I'm wanting to check that I have not missed something important. So my questions are:

- If upgrading from 2.7.0, is it safe to just grab the latest version (e.g. 2.23)?
- If not, is there a preferred version to jump to first?
- Is it OK for the upgrade to take an extended time (e.g. weeks) and therefore be running with some new and some old storage nodes for that time?
regards

Mark

From sundar.nadathur at intel.com Wed Mar 11 06:02:08 2020
From: sundar.nadathur at intel.com (Nadathur, Sundar)
Date: Wed, 11 Mar 2020 06:02:08 +0000
Subject: [cyborg] Updating the list of core reviewers
Message-ID:

Hello all,

Brin Zhang has been actively contributing to Cyborg in various areas: adding new features, improving quality, reviewing patches, and generally helping others in the community. Despite the relatively short time, he has been one of the most prolific contributors, and brings an enthusiastic and active mindset. I would like to thank him and acknowledge his significant contributions by proposing him as a core reviewer for Cyborg.

Shogo Saito has been active in Cyborg since the Train release. He has been driving the Cyborg client improvements, including its revamp to use OpenStackSDK. Previously he was instrumental in the transition to Python 3, testing and fixing issues in the process. As he has access to real FPGA hardware, he brings a user's perspective and also tests Cyborg with real hardware. I would like to thank and acknowledge him for his steady, valuable contributions, and propose him as a core reviewer for Cyborg.

Some of the currently listed core reviewers have not been participating for a lengthy period of time. It is proposed that those who have had no contributions for the past 18 months - i.e. no participation in meetings, no code contributions and no reviews - be removed from the list of core reviewers.

If no objections are made known by March 20, I will make the changes proposed above.

Thanks.

Regards,
Sundar

From gouthampravi at gmail.com Wed Mar 11 06:09:37 2020
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Tue, 10 Mar 2020 23:09:37 -0700
Subject: [manila] share group replication spike/questions
In-Reply-To: <3b103cb9a3894762a8664815fff5771c@KULX13MDC124.APAC.DELL.COM>
References: <55d84e2e29cb4758aaff0b8c07aaa0bd@KULX13MDC124.APAC.DELL.COM> <3b103cb9a3894762a8664815fff5771c@KULX13MDC124.APAC.DELL.COM>
Message-ID:

On Mon, Mar 9, 2020 at 6:55 PM wrote:

Hi Goutham,

After checking the manila DB, I noticed there is a table called 'share_instances' which was added for share replication and snapshots.

Now, for group replication, do you think we also need a new table like 'share_group_instances'?

Agree, I think that's a sane approach to capture source and destination replicas adequately. Could you please discuss this through your specification?

Thanks,
Ding Dong

From: Goutham Pacha Ravi
Sent: Saturday, February 29, 2020 7:43 AM
To: Ding, Dong
Cc: OpenStack Discuss
Subject: Re: [manila] share group replication spike/questions

On Fri, Feb 28, 2020 at 12:21 AM wrote:

Thanks Goutham,

We are talking about this feature *after U release*. We cannot get it done recently; we're just doing some preparation first.

Great, thanks for confirming.
We'll hash out the design on the specification, and if necessary, we can work through it during the Open Infra Project Technical Gathering in June [8][9]

[8] https://www.openstack.org/ptg/
[9] https://etherpad.openstack.org/p/vancouver-ptg-manila-planning

> BR,
> Ding Dong
>
> From: Goutham Pacha Ravi
> Sent: Friday, February 28, 2020 7:10 AM
> To: Ding, Dong
> Cc: OpenStack Discuss
> Subject: Re: [manila] share group replication spike/questions
>
> On Tue, Feb 25, 2020 at 12:53 AM wrote:
>
> Hi, guys,
>
> As we talked about the topic in a virtual PTG a few months ago:
> https://etherpad.openstack.org/p/shanghai-ptg-manila-virtual (*Support promoting several shares in group (DELL EMC: dingdong)*)
>
> I'm trying to write a manila spec for it.

Hi, thank you for working on this, and for submitting a specification [0]. We're targeting this for the Victoria release, correct? I like working on these major changes as soon as possible, giving us enough air time for testing and hardening.

> It's my first experience implementing such a feature in the framework.
> I need to double-check some things with you, and hope you can give me some guidance:
> 1. Where is the extra-spec defined for the group/group type? It's in the Manila repo, right? (like manila.db.sqlalchemy.models....)

Group type extra-specs are added as storage capabilities first; you begin by modifying the driver interface to report this group type capability. When share drivers report their support for group replication, operators can use the corresponding string in their group type extra-specs to schedule appropriately. I suggest taking a look at an existing share group type capability called "consistent_snapshot_support". [1] and [2] are reviews that added it.

> 2. The command CLI should be implemented in the 'python-manilaclient' repo, right? (I have never touched this repo before)

Yes. python-manilaclient encompasses:
- a Python SDK to version 2 of the manila API
- two shell implementations: manila and openstack client (actively being developed)

Group type extra-specs are passed transparently through the SDK and CLI; you may need to add some documentation or shell hint text (like [3]) if needed.

> 3. Where should the REST API be implemented?

The REST API is in the openstack/manila repository. [4][5] contain some documentation regarding how to change the manila API.

> 4. Any more tips, like other related projects that should be changed?

For any new feature, we need these additional things besides working code:
- A first-party driver implementation where possible, so we can test this feature in the upstream CI (if no first-party driver can support this feature, you'll need to make the best approximation of it through the Dummy/Fake driver [6])
- The feature must be tested with adequate test cases in manila-tempest-plugin
- Documentation must be added to the manila documentation [7]

> I've just listed what I know; more detailed questions will be raised during implementation, I think.
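(A rough sketch of the capability-reporting pattern described above. This is not the actual manila driver interface; the real hook to study is the stats-reporting code in manila/share/driver.py, and 'group_replication_type' is a hypothetical key, not an agreed spec name.)

# Sketch: a driver advertises capabilities via its stats dict; group type
# extra-specs then match against these strings in the scheduler.
class MyShareDriver:
    def __init__(self):
        self._stats = {}

    def _update_share_stats(self):
        self._stats.update({
            'share_backend_name': 'my_backend',
            'consistent_snapshot_support': 'pool',  # existing group capability
            'group_replication_type': 'dr',         # hypothetical new capability
        })

driver = MyShareDriver()
driver._update_share_stats()
print(driver._stats)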
> FYI
>
> Thanks,
> Ding Dong

Happy to answer any more questions, here or on your specification [0]

Thanks,
Goutham

[0] https://review.opendev.org/#/c/710166/
[1] https://review.opendev.org/#/c/446044/
[2] https://review.opendev.org/#/c/447474/
[3] https://opendev.org/openstack/python-manilaclient/src/commit/ac5ca461e8c8dd11fe737de7b90ab5c33366ab35/manilaclient/v2/shell.py#L4543
[4] https://docs.openstack.org/manila/latest/contributor/addmethod.openstackapi.html
[5] https://docs.openstack.org/manila/latest/contributor/api_microversion_dev.html
[6] https://opendev.org/openstack/manila/src/commit/68a18f49472ac7686ceab15e9788dcef05764822/manila/tests/share/drivers/dummy.py
[7] https://docs.openstack.org/manila/latest/contributor/documenting_your_work.html

From sundar.nadathur at intel.com Wed Mar 11 06:22:23 2020
From: sundar.nadathur at intel.com (Nadathur, Sundar)
Date: Wed, 11 Mar 2020 06:22:23 +0000
Subject: [nova][ptl] Temporary Nova PTL until election
In-Reply-To: <1583482276.12170.14@est.tech>
References: <1583482276.12170.14@est.tech>
Message-ID:

> -----Original Message-----
> From: Balázs Gibizer
> Sent: Friday, March 6, 2020 12:11 AM
> To: OpenStack Discuss
> Subject: [nova][ptl] Temporary Nova PTL until election
>
> Hi,
>
> Since Eric announced that he has to leave us [1], I have been working internally with my employer to be able to take over the Nova PTL position. Now I've got the necessary approvals. The official PTL election is close [2] and I'm ready to fill the PTL gap until the proper PTL election in April.
>
> Is this a workable solution for the community?
>
> Cheers,
> gibi

Definitely a +1. Thanks a lot, gibi.

Regards,
Sundar

From mike.carden at gmail.com Wed Mar 11 07:06:53 2020
From: mike.carden at gmail.com (Mike Carden)
Date: Wed, 11 Mar 2020 18:06:53 +1100
Subject: [all] Guides for newbies to OpenStack
Message-ID:

Our small team at ${DAYJOB} has built a handful of OpenStack clusters based on Red Hat OpenStack 13 (aka Queens) over the last couple of years. We now find ourselves in the position of being 'gifted' human resources in the shape of mid-level 'IT people' who are sent to join our team for a short time to 'Learn OpenStack'.

These tend to be people for whom "Here's a Horizon URL and some creds - go log in and launch a VM"[1]... is a bit much.

I've done a wee bit of web searching (enough to find the dead links) trying to find some newbie-friendly tutorials on OpenStack basics. Before I attempt to re-invent the wheel, can anyone suggest some public resources I might point people to?

Deity help us if we have to explain TripleO's Undercloud, Overcloud, partially containered, partially pacemakered, fully flaky... underpinnings.

Thanks,
MC

[1] Even with a step-by-step guide

From licanwei_cn at 163.com Wed Mar 11 07:31:50 2020
From: licanwei_cn at 163.com (licanwei)
Date: Wed, 11 Mar 2020 15:31:50 +0800 (GMT+08:00)
Subject: [Watcher] no topics and cancelling the IRC meeting today
Message-ID: <66e7638d.951c.170c881bba6.Coremail.licanwei_cn@163.com>

licanwei_cn (licanwei_cn at 163.com)
From marcin.juszkiewicz at linaro.org Wed Mar 11 08:05:48 2020
From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz)
Date: Wed, 11 Mar 2020 09:05:48 +0100
Subject: [horizon][kolla] pyscss failure on newest setuptools
Message-ID: <6295fd26-d983-75c5-4ead-a36823034d2c@linaro.org>

One of Horizon's requirements is the pyScss package, which had its last release over 4 years ago...

Two days ago setuptools v46 was released. One of the changes was the removal of the long-deprecated Features feature (the setuptools.Feature class). Today Kolla builds started to fail:

INFO:kolla.common.utils.horizon:Collecting pyScss===1.3.4
INFO:kolla.common.utils.horizon: Downloading http://mirror.ord.rax.opendev.org:8080/pypifiles/packages/1d/4a/221ae7561c8f51c4f28b2b172366ccd0820b14bb947350df82428dfce381/pyScss-1.3.4.tar.gz (120 kB)
INFO:kolla.common.utils.horizon: ERROR: Command errored out with exit status 1:
INFO:kolla.common.utils.horizon: command: /var/lib/kolla/venv/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-rr0db3qs/pyScss/setup.py'"'"'; __file__='"'"'/tmp/pip-install-rr0db3qs/pyScss/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-rr0db3qs/pyScss/pip-egg-info
INFO:kolla.common.utils.horizon: cwd: /tmp/pip-install-rr0db3qs/pyScss/
INFO:kolla.common.utils.horizon: Complete output (5 lines):
INFO:kolla.common.utils.horizon: Traceback (most recent call last):
INFO:kolla.common.utils.horizon: File "<string>", line 1, in <module>
INFO:kolla.common.utils.horizon: File "/tmp/pip-install-rr0db3qs/pyScss/setup.py", line 9, in <module>
INFO:kolla.common.utils.horizon: from setuptools import setup, Extension, Feature
INFO:kolla.common.utils.horizon: ImportError: cannot import name 'Feature'

Are there any plans to fix it? The pyScss project has an issue filed:
https://github.com/Kronuz/pyScss/issues/385

In Kolla I made an ugly workaround: https://paste.centos.org/view/2e29d284

What are the plans of the Horizon team? (A minimal reproduction of the failing import is sketched below.)

From dtantsur at redhat.com Wed Mar 11 09:17:09 2020
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Wed, 11 Mar 2020 10:17:09 +0100
Subject: [all] Collecting Virtual Midcycle Best Practices
In-Reply-To:
References:
Message-ID:

Hi,

Ironic did, I think, 3-4 virtual midcycles, and I think they were quite successful. The most positive outcome was reaching out to folks who usually cannot travel. They really appreciated that.

We used SIP the first couple of times, but it caused problems for a big share of participants. Also, the lack of moderation was problematic. We switched to Bluejeans later, and that was, IMO, a big improvement:
1) A participant list with information on who is speaking.
2) An option for a moderator to mute a person or everyone.
3) Screen sharing.

Dmitry

On Tue, Mar 10, 2020 at 1:29 AM Kendall Nelson wrote:
> Hello Everyone!
>
> I wanted to collect best practices and pitfalls to avoid wrt projects' experiences with virtual midcycles. I know of a few projects that have done them in the past, and with how travel is hard for a lot of people right now, I expect more projects to have midcycles. I think it would be helpful to have all of the data we can collect in one place for those not just new to virtual midcycles but the whole community.
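(Back to the pyScss breakage above: the failure reduces to a single import that setuptools >= 46 no longer provides. A minimal reproduction sketch follows; pinning setuptools below 46 in the build environment is one stop-gap until pyScss cuts a fixed release.)

# Minimal reproduction: setuptools 46 removed the long-deprecated Feature
# class, which pyScss 1.3.4's setup.py imports unconditionally.
try:
    from setuptools import Feature  # noqa: F401
    print("setuptools < 46: Feature still importable")
except ImportError:
    print("setuptools >= 46: Feature gone; pyScss 1.3.4 setup.py fails here")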
> Please add to it :)
>
> -Kendall (diablo_rojo)
>
> [1] https://etherpad.openstack.org/p/virtual-midcycle-best-practices

From mark at stackhpc.com Wed Mar 11 09:42:46 2020
From: mark at stackhpc.com (Mark Goddard)
Date: Wed, 11 Mar 2020 09:42:46 +0000
Subject: [all] Collecting Virtual Midcycle Best Practices
In-Reply-To:
References:
Message-ID:

On Tue, 10 Mar 2020 at 21:19, Kendall Nelson wrote:
>
> Thanks for sharing Mark! I think there is a lot of good information in there.
>
> How many people were joining, approximately? How did you coordinate when you would do it?
>
> Would you mind adding some of that to the etherpad[1] I am collecting info into?

No problem, added some things to the etherpad.

>
> -Kendall (diablo_rojo)
>
> [1] https://etherpad.openstack.org/p/virtual-midcycle-best-practices
>
> On Tue, Mar 10, 2020 at 2:47 AM Mark Goddard wrote:
>> On Tue, 10 Mar 2020 at 09:17, Thierry Carrez wrote:
>> >
>> > Kendall Nelson wrote:
>> > > I wanted to collect best practices and pitfalls to avoid wrt projects' experiences with virtual midcycles. [...]
>> >
>> > Also interested in feedback from teams that had virtual PTGs in the past (keeping all possibilities on the table). I think Kolla, Telemetry and a few others did that.
>>
>> Kolla has now had two virtual PTGs. Overall I think they went fairly well, particularly the most recent one. We tried Zoom, then moved to Google Meet. I forget the problems with Zoom. There were inevitably a few teething problems with the video, but I think we worked it out after 15-20 minutes. Etherpad for Ussuri vPTG here: https://etherpad.openstack.org/p/kolla-ussuri-ptg.
>>
>> Without seeing people's faces it can be hard to ensure everyone keeps focused. It's quite rare for the whole room to be focused in physical discussions though.
>>
>> Going around the room giving short intros helps to get people talking, and it may be better to do these ~1 hour in as people may miss the start. Directing questions at non-cores can help overcome that pesky imposter syndrome. Keeping video on definitely helps with engagement, up to the point where it impacts audio quality.
>>
>> There was also the Denver PTG, where the PTL and a number of cores were remote, and we struggled to make any progress. I think there were a few reasons for this. The fixed time of the PTG was not optimal for many remote attendees living in Europe or Asia. When there are a number of participants in one location, it can be easy to forget to direct speech at the microphone, allow time for remote callers to ask questions/respond etc. This makes it difficult and frustrating for them to join in, making it easier to get distracted and drop off.
>>
>> Not too much hard data in there, but hopefully a feel for how it went for us.
>>
>> >
>> > --
>> > Thierry Carrez (ttx)
>> >

From thierry at openstack.org Wed Mar 11 11:37:35 2020
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 11 Mar 2020 12:37:35 +0100
Subject: [largescale-sig] Meeting summary and next actions
Message-ID: <84525104-d146-1741-a32c-f4580e585b33@openstack.org>

Hi everyone,

The Large Scale SIG held a meeting today. You can catch up with the summary and logs of the meeting at:
http://eavesdrop.openstack.org/meetings/large_scale_sig/2020/large_scale_sig.2020-03-11-09.00.html

No progress on "Documenting large scale operations" this week.

masahito just posted a new revision of the oslo.metrics spec:
https://review.opendev.org/#/c/704733/

belmoreira asks for input/comments on the proposed "Large Scale operations" OpenDev track content:
https://etherpad.openstack.org/p/LargeScaleOps_OpenDev

Standing TODOs:
- amorin to create a wiki page for large scale documentation
- amorin to propose a patch against the Nova docs
- all to check/comment on https://etherpad.openstack.org/p/LargeScaleOps_OpenDev
- all to review the new patchset of the oslo.metrics spec https://review.opendev.org/#/c/704733/
- oneswig to contribute a scaling story on bare metal cluster scaling

The next meeting will happen on March 25, at 9:00 UTC on #openstack-meeting.

Cheers,
--
Thierry Carrez (ttx)

From kklimonda at syntaxhighlighted.com Wed Mar 11 13:29:58 2020
From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda)
Date: Wed, 11 Mar 2020 14:29:58 +0100
Subject: [neutron][largescale-sig] Debugging and tracking missing flows with l2pop
Message-ID: <6A0F6E0F-9D6E-4ED2-B4AC-F862885220B4@syntaxhighlighted.com>

Hi,

(This is a Stein deployment with the 14.0.2 neutron release.)

I've just spent some time debugging a missing connection between two VMs running on OpenStack Stein with ovs+l2pop enabled; the direct cause was missing flows in table 20 and a very incomplete flood flow in table 22. Restarting neutron-openvswitch-agent on that host fixed the issue.

Last time we encountered missing flood flows (in another Pike-based deployment), we tracked it down to https://review.opendev.org/#/c/600151/ and since then it has been stable. My initial thought was that we were hitting the same bug: a couple of VMs are scheduled on the same compute, 3 ports are activated at the same time, and the flood entry is not broadcast to other computes. However, this issue affected only one of the computes, and it was the only one missing both MAC entries in table 20 and VXLAN tunnels in table 22.

The only other idea I have is that the compute with missing flows did not receive them from rabbitmq, but I see nothing in the logs suggesting that the agent was disconnected from rabbitmq.

So at this point I have three questions:
- what would be a good place to look next to track down those missing flows?
- for other operators, how stable do you find l2pop in general?
- if you have problems with missing flows in your environment, do you try to monitor your deployment for them?

-Chris

From sundar.nadathur at intel.com Wed Mar 11 14:08:49 2020
From: sundar.nadathur at intel.com (Nadathur, Sundar)
Date: Wed, 11 Mar 2020 14:08:49 +0000
Subject: [cyborg] Proposing core reviewers
Message-ID:

Hello all,

Brin Zhang has been actively contributing to Cyborg in various areas: adding new features, improving quality, reviewing patches, and generally helping others in the community. Despite the relatively short time, he has been one of the most prolific contributors, and brings an enthusiastic and active mindset.
I would like to thank him and acknowledge his significant contributions by proposing him as a core reviewer for Cyborg.

Shogo Saito has been active in Cyborg since the Train release. He has been driving the Cyborg client improvements, including its revamp to use OpenStackSDK. Previously he was instrumental in the transition to Python 3, testing and fixing issues in the process. As he has access to real FPGA hardware, he brings a user's perspective and also tests Cyborg with real hardware. I would like to thank and acknowledge him for his steady, valuable contributions, and propose him as a core reviewer for Cyborg.

Some of the currently listed core reviewers have not been participating for a lengthy period of time. It is proposed that those who have had no contributions for the past 18 months - i.e. no participation in meetings, no code contributions and no reviews - be removed from the list of core reviewers.

If no objections are made known by March 20, I will make the changes proposed above.

Thanks.

Regards,
Sundar

From Dong.Ding at dell.com Wed Mar 11 06:17:57 2020
From: Dong.Ding at dell.com (Dong.Ding at dell.com)
Date: Wed, 11 Mar 2020 06:17:57 +0000
Subject: [manila] share group replication spike/questions
In-Reply-To:
References: <55d84e2e29cb4758aaff0b8c07aaa0bd@KULX13MDC124.APAC.DELL.COM> <3b103cb9a3894762a8664815fff5771c@KULX13MDC124.APAC.DELL.COM>
Message-ID: <5ca39398c52f4c0ab7e77616eabc76fe@KULX13MDC124.APAC.DELL.COM>

Sure, I can list it in the manila spec.

I'm wondering if we also need the share-group-instances-xxx APIs at the same time, like share-instance-xxx:

[cid:image001.jpg at 01D5F7AF.A32AB440]

Thanks,
Ding Dong

From: Goutham Pacha Ravi
Sent: Wednesday, March 11, 2020 2:10 PM
To: Ding, Dong
Cc: OpenStack Discuss
Subject: Re: [manila] share group replication spike/questions

On Mon, Mar 9, 2020 at 6:55 PM wrote:

Hi Goutham,

After checking the manila DB, I noticed there is a table called 'share_instances' which was added for share replication and snapshots.

Now, for group replication, do you think we also need a new table like 'share_group_instances'?

Agree, I think that's a sane approach to capture source and destination replicas adequately. Could you please discuss this through your specification?

Thanks,
Ding Dong

From: Goutham Pacha Ravi
Sent: Saturday, February 29, 2020 7:43 AM
To: Ding, Dong
Cc: OpenStack Discuss
Subject: Re: [manila] share group replication spike/questions

On Fri, Feb 28, 2020 at 12:21 AM wrote:

Thanks Goutham,

We are talking about this feature after U release. We cannot get it done recently; we're just doing some preparation first.

Great, thanks for confirming.
I like working on these major changes as soon as possible giving us enough air time for testing and hardening. It’s my first experience to implement such feature in framework. I need to double check with you something, and hope you can give me some guides like: 1. Where is the extra-spec defined for group/group type, it’s in Manila repo, right? (like manila.db.sqlalchemy.models….) Group type extra specs are added as storage capabilities first, you begin by modifying the driver interface to report this group type capability. When share drivers report their support for group replication, operators can use the corresponding string in their group type extra-specs to schedule appropriately. I suggest taking a look at an existing share group type capability called "consistent_snapshot_support". [1] and [2] are reviews that added it. 2. The command cli should be implemented for ‘python-manilaclinet’ repo, right? (I have never touched this repo before) Yes. python-manilaclient encompasses - a python SDK to version 2 of the manila API - two shell implementations: manila and openstack client (actively being developed) Group type extra-specs are passed transparently through the SDK and CLI, you may probably add some documentation or shell hint text (like [3] if needed). 3. Where is the rest-api should be implemented? The rest API is in the openstack/manila repository. [4][5] contain some documentation regarding how to change the manila API. 4. And more tips you have? like any other related project should be changed? For any new feature, we need these additional things besides working code: - A first party driver implementation where possible so we can test this feature in the upstream CI (if no first party driver can support this feature, you'll need to make the best approximation of this feature through the Dummy/Fake driver [6]) - The feature must be tested with adequate test cases in manila-tempest-plugin - Documentation must be added to the manila documentation [7] Just list what I know, and more details questions will be raised when implementing, I think. FYI Thanks, Ding Dong Happy to answer any more questions, here or on your specification [0] Thanks, Goutham [0] https://review.opendev.org/#/c/710166/ [1] https://review.opendev.org/#/c/446044/ [2] https://review.opendev.org/#/c/447474/ [3] https://opendev.org/openstack/python-manilaclient/src/commit/ac5ca461e8c8dd11fe737de7b90ab5c33366ab35/manilaclient/v2/shell.py#L4543 [4] https://docs.openstack.org/manila/latest/contributor/addmethod.openstackapi.html [5] https://docs.openstack.org/manila/latest/contributor/api_microversion_dev.html [6] https://opendev.org/openstack/manila/src/commit/68a18f49472ac7686ceab15e9788dcef05764822/manila/tests/share/drivers/dummy.py [7] https://docs.openstack.org/manila/latest/contributor/documenting_your_work.html -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
From Dong.Ding at dell.com Wed Mar 11 08:30:42 2020
From: Dong.Ding at dell.com (Dong.Ding at dell.com)
Date: Wed, 11 Mar 2020 08:30:42 +0000
Subject: [manila] share group replication spike/questions
In-Reply-To: <5ca39398c52f4c0ab7e77616eabc76fe@KULX13MDC124.APAC.DELL.COM>
References: <55d84e2e29cb4758aaff0b8c07aaa0bd@KULX13MDC124.APAC.DELL.COM> <3b103cb9a3894762a8664815fff5771c@KULX13MDC124.APAC.DELL.COM> <5ca39398c52f4c0ab7e77616eabc76fe@KULX13MDC124.APAC.DELL.COM>
Message-ID:

Sorry, sending again to involve my colleague.

* Sure, I can list the DB change in the manila spec.
* Because I don't use the instance API much, I'm wondering if we need share-group-instances-xxx APIs if we add such a table, like share-instance-xxx:

[cid:image002.jpg at 01D5F7C1.2E887020]

Thanks,
Ding Dong

From: Goutham Pacha Ravi
Sent: Wednesday, March 11, 2020 3:40 PM
To: Ding, Dong
Cc: OpenStack Discuss
Subject: Re: [manila] share group replication spike/questions

On Mon, Mar 9, 2020 at 6:55 PM wrote:

Hi Goutham,

After checking the manila DB, I noticed there is a table called 'share_instances' which was added for share replication and snapshots.

Now, for group replication, do you think we also need a new table like 'share_group_instances'?

Agree, I think that's a sane approach to capture source and destination replicas adequately. Could you please discuss this through your specification?

Thanks,
Ding Dong

From: Goutham Pacha Ravi
Sent: Saturday, February 29, 2020 7:43 AM
To: Ding, Dong
Cc: OpenStack Discuss
Subject: Re: [manila] share group replication spike/questions

On Fri, Feb 28, 2020 at 12:21 AM wrote:

Thanks Goutham,

We are talking about this feature after U release. We cannot get it done recently; we're just doing some preparation first.

Great, thanks for confirming. We'll hash out the design on the specification, and if necessary, we can work through it during the Open Infra Project Technical Gathering in June [8][9]

[8] https://www.openstack.org/ptg/
[9] https://etherpad.openstack.org/p/vancouver-ptg-manila-planning

BR,
Ding Dong

From: Goutham Pacha Ravi
Sent: Friday, February 28, 2020 7:10 AM
To: Ding, Dong
Cc: OpenStack Discuss
Subject: Re: [manila] share group replication spike/questions

On Tue, Feb 25, 2020 at 12:53 AM wrote:

Hi, guys,

As we talked about the topic in a virtual PTG a few months ago:
https://etherpad.openstack.org/p/shanghai-ptg-manila-virtual (Support promoting several shares in group (DELL EMC: dingdong))

I'm trying to write a manila spec for it.

Hi, thank you for working on this, and for submitting a specification [0]. We're targeting this for the Victoria release, correct? I like working on these major changes as soon as possible, giving us enough air time for testing and hardening.

It's my first experience implementing such a feature in the framework. I need to double-check some things with you, and hope you can give me some guidance:

1. Where is the extra-spec defined for the group/group type? It's in the Manila repo, right? (like manila.db.sqlalchemy.models....)

Group type extra-specs are added as storage capabilities first; you begin by modifying the driver interface to report this group type capability. When share drivers report their support for group replication, operators can use the corresponding string in their group type extra-specs to schedule appropriately. I suggest taking a look at an existing share group type capability called "consistent_snapshot_support".
[1] and [2] are reviews that added it.

2. The command CLI should be implemented in the 'python-manilaclient' repo, right? (I have never touched this repo before)

Yes. python-manilaclient encompasses:
- a Python SDK to version 2 of the manila API
- two shell implementations: manila and openstack client (actively being developed)

Group type extra-specs are passed transparently through the SDK and CLI; you may need to add some documentation or shell hint text (like [3]) if needed.

3. Where should the REST API be implemented?

The REST API is in the openstack/manila repository. [4][5] contain some documentation regarding how to change the manila API.

4. Any more tips, like other related projects that should be changed?

For any new feature, we need these additional things besides working code:
- A first-party driver implementation where possible, so we can test this feature in the upstream CI (if no first-party driver can support this feature, you'll need to make the best approximation of it through the Dummy/Fake driver [6])
- The feature must be tested with adequate test cases in manila-tempest-plugin
- Documentation must be added to the manila documentation [7]

I've just listed what I know; more detailed questions will be raised during implementation, I think.

FYI

Thanks,
Ding Dong

Happy to answer any more questions, here or on your specification [0]

Thanks,
Goutham

[0] https://review.opendev.org/#/c/710166/
[1] https://review.opendev.org/#/c/446044/
[2] https://review.opendev.org/#/c/447474/
[3] https://opendev.org/openstack/python-manilaclient/src/commit/ac5ca461e8c8dd11fe737de7b90ab5c33366ab35/manilaclient/v2/shell.py#L4543
[4] https://docs.openstack.org/manila/latest/contributor/addmethod.openstackapi.html
[5] https://docs.openstack.org/manila/latest/contributor/api_microversion_dev.html
[6] https://opendev.org/openstack/manila/src/commit/68a18f49472ac7686ceab15e9788dcef05764822/manila/tests/share/drivers/dummy.py
[7] https://docs.openstack.org/manila/latest/contributor/documenting_your_work.html
Are you asking about ovs bridges learning MAC's of other compute nodes or why network performance is affected when you run more than one instance per node. I have not observed this behaviour in my experience. Could you tell us more about the configuration of your deployment? I understand you are currently using linux bridges that are connected to openvswitch bridges? Why not just use ovs? OVS can handle security groups. On Fri, Feb 21, 2020 at 9:48 AM Yi Yang (杨燚)-云服务集团 > wrote: Hi, All Anybody has noticed network performance between VMs is extremely bad, it is basically linearly related with numbers of VMs in same compute node. In my case, if I launch one VM per compute node and run iperf3 tcp and udp, performance is good, it is about 4Gbps and 1.7Gbps, for 16 bytes small UDP packets, it can reach 180000 pps (packets per second), but if I launch two VMs per compute node (note: they are in the same subnet) and only run pps test case, that will be decrease to about 90000 pps, if I launch 3 VMs per compute node, that will be about 50000 pps, I tried to find out the root cause, other VMs in this subnet (they are in the same compute node as iperf3 client) can receive all the packets iperf3 client VM sent out although destination MAC isn’t broadcast MAC or multicast MAC, actually it is MAC of iperf3 server VM in another compute node, by further check, I did find qemu instances of these VMs have higher CPU utilization and corresponding vhost kernel threads also also higher CPU utilization, to be importantly, I did find ovs was broadcasting these packets because all the ovs bridges didn’t learn this destination MAC. I tried this in Queens and Rocky, the same issue is there. By the way, we’re using linux bridge for security group, so VM tap interface is attached into linux bridge which is connected to br-int by veth pair. Here is output of “ovs-appctl dpif/dump-flows br-int” after I launched many VMs: recirc_id(0),in_port(12),eth(src=fa:16:3e:49:26:51,dst=fa:16:3e:a7:0a:3a),et h_type(0x0800),ipv4(tos=0/0x3,frag=no), packets:11012944, bytes:726983412, used:0.000s, flags:SP., actions:push_vlan(vid=1,pcp=0),2,set(tunnel(tun_id=0x49,src=10.3.2.17,dst=10 .3.2.16,ttl=64,tp_dst=4789,flags(df|key))),pop_vlan,9,8,11,13,14,15,16,17,18 ,19 $ sudo ovs-appctl fdb/show br-floating | grep fa:16:3e:49:26:51 $ sudo ovs-appctl fdb/show br-tun | grep fa:16:3e:49:26:51 $ sudo ovs-appctl fdb/show br-bond1 | grep fa:16:3e:49:26:51 $ sudo ovs-appctl fdb/show br-int | grep fa:16:3e:49:26:51 All the bridges can’t learn this MAC. My question is why ovs bridges can’t learn MACs of other compute nodes, is this common issue of all the Openstack versions? Is there any known existing way to fix it? Look forward to hearing your insights and solutions, thank you in advance and have a good day. -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3600 bytes Desc: not available URL: From smooney at redhat.com Wed Mar 11 14:37:35 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 11 Mar 2020 14:37:35 +0000 Subject: [nova] Breaking up and migrating the nova-live-migration job to Zuulv3 In-Reply-To: References: <20200310152203.6fv57e5qyqhxdgep@lyarwood.usersys.redhat.com> Message-ID: <21add71a504fa8f9a0d32b2181f2660076a6d356.camel@redhat.com> On Tue, 2020-03-10 at 17:04 +0000, Arkady.Kanevsky at dell.com wrote: > Thank Lee. Sound approach. > A few questions/comments. > 1. Assume that we have unwritten assumption that all nova nodes have access to volumes on the backend. > So we rely on it except for ephemeral storage. well the job deploys the storage backend so its not an asusmtion we deploy it that way intentionally. we also set up ssh keys so we can rsync the qcow files between host when we do block migration. > 2. What need to be done for volumes that use FC not iSCSI? we dont test FC in the migration job currently so i think that is out of scope of this refactor. the goal is to move it to zuulv3 while testing all the existing cases not to add more cases in this phase. > 3. You have one for Ceph. Does that mean that we need an analog for other cinder back ends? no. the ceph backend is tested seperatly as there are basicaly 3 storage backend to the libvirt driver. local file which is tested as part of block migration with qcow2 local block device which is tested via cinder/lvm with the block device mounted on the host vi isci (FC would be the same form a qemu point of view) and finally ceph is used to test the qemu nataive network block device support. so we are not trying to test different cinder backends but rahter the different image backends/qemu storage types supprot in nova > 4. Do we need to anything analogous for Manila? maybe but again that seams like its out of scope so intally i would say no > 5. How do we address multi-attach volumes and multipathing? Expect that if we have multipthaing on origin node we laso > have multipathing at destination at the end. multi attach is already tested in the job i belive so we would continue that. i think both cinder lvm and ceph support multi attach. i dont think we test multipath in the gate in the current jobs so i would not imediatly assume we woudl add it as part of this refactor. > > > Thanks, > Arkady > > > -----Original Message----- > From: Lee Yarwood > Sent: Tuesday, March 10, 2020 10:22 AM > To: openstack-discuss at lists.openstack.org > Subject: [nova] Breaking up and migrating the nova-live-migration job to Zuulv3 > > Hello all, > > I've started PoC'ing some ideas around $subject in the topic below and wanted to ask the wider team for feedback on > the approach I'm taking: > > https://review.opendev.org/#/q/topic:nova-live-migration-zuulv3 > > My initial idea is to break the job up into the following smaller multinode jobs that are hopefully easier to > understand and maintain. > > * nova-multinode-live-migration-py3 > > A simple LM job using the qcow2 imagebackend and LVM/iSCSI c-vol. > > * nova-multinode-live-migration-ceph-py3 this would be replaceing our existing devstack-plugin-ceph-tempest-py3 job runing all the same test but in a multinode config with live migration tests enabled in the tempest config. > > A ceph based LM job using rbd for both imagebackend and c-vol. > > * nova-multinode-evacuate-py3 so this would be the only new job although i am not sure it should be seperated out. 
we likely want to test evacuate with file,block and network storage so i think it makes sense to do this as a post playbook in the other two jobs. > > A separate evacuation job using qcow2 imagebackend and LVM/iSCSI c-vol. > The existing script *could* then be ported to an Ansible role as part of the migration to Zuulv3. > > Hopefully this is pretty straight forward but I'd appreciate any feedback on this all the same. > > Cheers, > > -- > Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 From gmann at ghanshyammann.com Wed Mar 11 14:38:57 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 11 Mar 2020 09:38:57 -0500 Subject: Not running for TC next election In-Reply-To: References: Message-ID: <170ca08c61f.f20f108159019.296868179124784200@ghanshyammann.com> Thanks, Jim for your contribution as TC and hope to see you back :) -gmann ---- On Tue, 10 Mar 2020 15:59:48 -0500 Jim Rollenhagen wrote ---- > Hi all, > I won't be running for TC next election. As you probably noticed, I don't really have enough time these days to meaningfully contribute, so leaving it open for someone new. It's been fun and a great learning experience, so I highly encourage others in the community to run! > I'll still be around to heckle in the background, don't worry. :) > > // jim From thierry at openstack.org Wed Mar 11 15:15:32 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 11 Mar 2020 16:15:32 +0100 Subject: [all] A call for consolidation and simplification Message-ID: Hi all, I'd like to issue a call for consolidation and simplification for OpenStack development. In the early years of the project, we faced a lot of challenges. We had to spread the development load across manageable-size groups, so we encouraged the creation of a lot of project teams. We wanted to capture all the energy that was sent towards the project, so we passed project structure reforms (like the big tent) that would aggressively include new community groups in the "official" OpenStack community. We needed to remove bottlenecks, so we encouraged decentralized decision making. And we had to answer unique challenges, so we created software to match them (Zuul). In summary, we had a lot of people, and not enough systems to organize them, so we created those. Fast-forward to 2020, and our challenges are different. The many systems that we created in the early days have created silos, with very small groups of people working in isolation, making cross-project work more difficult than it should be. The many systems that we created generate a lot of fragmentation. Like we have too many meetings (76, in case you were wondering), too much energy spent running them, too much frustration when nobody joins. Finally, the many systems that we created represent a lot of complexity for newcomers to handle. We have 180 IRC channels, most of them ghost towns where by the time someone answers, the person asking the question is long gone. So I think it's time to generally think about simplifying and consolidating things. It's not as easy as it sounds. Our successful decentralization efforts make it difficult to make the centralized decision to regroup. It's hard to justify time and energy spent to /remove/ things, especially those that we spent time creating in the first place. But we now have too many systems and not enough people, so we need to consolidate and simplify. Back around Havana, when we had around the same number of active contributors as today, we used to have 36 meetings and 20 teams. 
Do we really need 180 IRC channels, 76 meetings, 63 project teams (not even counting SIGs)? Yes, we all specialized over time, so it's hard to merge for example Oslo + Requirements, or QA + Infrastructure, or Stable + Release Management, or Monasca + Telemetry. We are all overextended so it's hard to learn new tricks or codebases. And yet, while I'm not really sure what the best approach is, I think it's necessary. Comments, thoughts? -- Thierry Carrez (ttx) From zhipengh512 at gmail.com Wed Mar 11 16:17:59 2020 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 12 Mar 2020 00:17:59 +0800 Subject: [cyborg] Proposing core reviewers In-Reply-To: References: Message-ID: Big +1 for Brin and shogo's nomination and well deserved :) I'm a little bit concerned over the 18 months period. The original rule we setup is volunteer step down, since this is a small team we want to acknowledge everyone that has made significant contributions. Some of the inactive core reviewers like Justin Kilpatrick have moved on a long time ago, and I don't see people like him could do any harm to the project. But if the core reviewer has a size limit in the system, that would be reasonable to replace the inactive ones with the new recruits :) Just my two cents On Wed, Mar 11, 2020 at 10:19 PM Nadathur, Sundar wrote: > Hello all, > Brin Zhang has been actively contributing to Cyborg in various areas, > adding new features, improving quality, reviewing patches, and generally > helping others in the community. Despite the relatively short time, he has > been one of the most prolific contributors, and brings an enthusiastic and > active mindset. I would like to thank him and acknowledge his significant > contributions by proposing him as a core reviewer for Cyborg. > > Shogo Saito has been active in Cyborg since Train release. He has been > driving the Cyborg client improvements, including its revamp to use > OpenStackSDK. Previously he was instrumental in the transition to Python 3, > testing and fixing issues in the process. As he has access to real FPGA > hardware, he brings a users’ perspective and also tests Cyborg with real > hardware. I would like to thank and acknowledge him for his steady valuable > contributions, and propose him as a core reviewer for Cyborg. > > Some of the currently listed core reviewers have not been participating > for a lengthy period of time. It is proposed that those who have had no > contributions for the past 18 months – i.e. no participation in meetings, > no code contributions and no reviews – be removed from the list of core > reviewers. > > If no objections are made known by March 20, I will make the changes > proposed above. > > Thanks. > > Regards, > Sundar > -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Mar 11 16:37:06 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 11 Mar 2020 16:37:06 +0000 Subject: [cyborg] Proposing core reviewers In-Reply-To: References: Message-ID: <8232aae9fd2fcd78bbcf039dc1cc680cba417ca0.camel@redhat.com> On Thu, 2020-03-12 at 00:17 +0800, Zhipeng Huang wrote: > Big +1 for Brin and shogo's nomination and well deserved :) > > I'm a little bit concerned over the 18 months period. 
The original rule we > setup is volunteer step down, since this is a small team we want to > acknowledge everyone that has made significant contributions. Some of the > inactive core reviewers like Justin Kilpatrick have moved on a long time > ago, and I don't see people like him could do any harm to the project. > > But if the core reviewer has a size limit in the system, that would be > reasonable to replace the inactive ones with the new recruits :) it is generally considerd best pratice to maintian the core team adding or removing people based on there activity. if a core is removed due to in activity and they come back they can always be restored but it give a bad perception if a project has like 20 core but only 2 are active. as a new contibutor you dont know which ones are active and it can be frustrating to reach out to them and get no responce. also just form a project healt point of view it make the project look like its more diverse or more active then it actully is which is also not generally a good thing. that said core can step down if they feel like they can contribute time anymore when ever they like so and if a core is steping a way for a few months but intends to come back they can also say that in advance and there is no harm in leaving them for a cycle or two but in general after a period of in activity (usally more then a full release/6months) i think its good to reduce back down the core team. > > Just my two cents > > On Wed, Mar 11, 2020 at 10:19 PM Nadathur, Sundar > wrote: > > > Hello all, > > Brin Zhang has been actively contributing to Cyborg in various areas, > > adding new features, improving quality, reviewing patches, and generally > > helping others in the community. Despite the relatively short time, he has > > been one of the most prolific contributors, and brings an enthusiastic and > > active mindset. I would like to thank him and acknowledge his significant > > contributions by proposing him as a core reviewer for Cyborg. > > > > Shogo Saito has been active in Cyborg since Train release. He has been > > driving the Cyborg client improvements, including its revamp to use > > OpenStackSDK. Previously he was instrumental in the transition to Python 3, > > testing and fixing issues in the process. As he has access to real FPGA > > hardware, he brings a users’ perspective and also tests Cyborg with real > > hardware. I would like to thank and acknowledge him for his steady valuable > > contributions, and propose him as a core reviewer for Cyborg. > > > > Some of the currently listed core reviewers have not been participating > > for a lengthy period of time. It is proposed that those who have had no > > contributions for the past 18 months – i.e. no participation in meetings, > > no code contributions and no reviews – be removed from the list of core > > reviewers. > > > > If no objections are made known by March 20, I will make the changes > > proposed above. > > > > Thanks. > > > > Regards, > > Sundar > > > > From balazs.gibizer at est.tech Wed Mar 11 17:17:13 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Wed, 11 Mar 2020 18:17:13 +0100 Subject: [nova][ptg] PTG participation Message-ID: <1583947033.12170.37@est.tech> Hi, I've just got the news from my employer that due to COVID19 I cannot travel to Vancouver in June. 
cheers, gibi From smooney at redhat.com Wed Mar 11 17:36:20 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 11 Mar 2020 17:36:20 +0000 Subject: [nova][ptg] PTG participation In-Reply-To: <1583947033.12170.37@est.tech> References: <1583947033.12170.37@est.tech> Message-ID: On Wed, 2020-03-11 at 18:17 +0100, Balázs Gibizer wrote: > Hi, > > I've just got the news from my employer that due to COVID19 I cannot > travel to Vancouver in June. >From a redhat perspective i dont think a decision has been made on if we should attend or not. last time i spoke to my manager we were still waiting to see how thing progress with the assumtion we would attend but we might want to plan for a virtual PTG (via video conference, etherpad and email) in the event many cant travel or that things escalate to a point where the PTG event could be canceled. i dont think the foundation has indicated that that is likely to happen but im sure they are monitoring things closely as our employers will be so having a plan b might now be a bad thing in either case. if there isnt a diverse quoram physically at the ptg it would limit our ability to make desisions as happend to some extent in china. it would be still good to get operator feedback but they may also be under similar travel restrictions. > > cheers, > gibi > > > From smooney at redhat.com Wed Mar 11 18:01:29 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 11 Mar 2020 18:01:29 +0000 Subject: [nova][ptg] PTG participation In-Reply-To: References: <1583947033.12170.37@est.tech> Message-ID: <064cda0d6d2511f3cbccd26fbf1bb5460797fbef.camel@redhat.com> On Wed, 2020-03-11 at 17:36 +0000, Sean Mooney wrote: > On Wed, 2020-03-11 at 18:17 +0100, Balázs Gibizer wrote: > > Hi, > > > > I've just got the news from my employer that due to COVID19 I cannot > > travel to Vancouver in June. > > From a redhat perspective i dont think a decision has been made on if we should attend or not. ill clarify that slightly in that we do have guidence that "Red Hatters may not travel to attend external events or conferences with 1000+ attendees, even within their home country." in the past when the ptg and summit were combinined and we had the devsumit have ment that travel to the openstack even would not be allowed. At its current size its kind of in a gray zone where its is not banned as a public event but if it was an internal event the number of redhat employee that would be attending woudl be over the limit we have and the physical event would be canceled and converted to a virtual only event. so its tbd if i will be attending too although i have not heard a definitive No at this point but i also cant really book tickets and flight yet either however the guidance we have been given is to try and default to virtual attendance were possible. > last time i spoke to my manager we were still waiting to see how thing progress with the assumtion we would attend > but we might want to plan for a virtual PTG (via video conference, etherpad and email) in the event many cant travel > or > that things escalate to a point where the PTG event could be canceled. > > i dont think the foundation has indicated that that is likely to happen but im sure they are monitoring > things closely as our employers will be so having a plan b might now be a bad thing in either case. > if there isnt a diverse quoram physically at the ptg it would limit our ability to make desisions as happend to > some extent in china. 
It would still be good to get operator feedback, but they may also be under similar travel restrictions. > > > > cheers, > > gibi From lyarwood at redhat.com Wed Mar 11 18:34:53 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 11 Mar 2020 18:34:53 +0000 Subject: [nova] Breaking up and migrating the nova-live-migration job to Zuulv3 In-Reply-To: <21add71a504fa8f9a0d32b2181f2660076a6d356.camel@redhat.com> References: <20200310152203.6fv57e5qyqhxdgep@lyarwood.usersys.redhat.com> <21add71a504fa8f9a0d32b2181f2660076a6d356.camel@redhat.com> Message-ID: <20200311183453.iabtv3gxkn5i43jj@lyarwood.usersys.redhat.com> On 11-03-20 14:37:35, Sean Mooney wrote: > On Tue, 2020-03-10 at 17:04 +0000, Arkady.Kanevsky at dell.com wrote: > > Thank Lee. Sound approach. > > A few questions/comments. > > 1. Assume that we have unwritten assumption that all nova nodes have > > access to volumes on the backend. > > So we rely on it except for ephemeral storage. > well the job deploys the storage backend so its not an asusmtion we > deploy it that way intentionally. we also set up ssh keys so we can > rsync the qcow files between host when we do block migration. Correct, the jobs are simple multinode deployments of one main controller/compute and a smaller subnode compute. > > 2. What need to be done for volumes that use FC not iSCSI? > we dont test FC in the migration job currently so i think that is out > of scope of this refactor. the goal is to move it to zuulv3 while > testing all the existing cases not to add more cases in this phase. Yes, apologies if that wasn't clear from my initial post. That said I'd argue that FC testing of any kind would be out of scope for our jobs in openstack/nova. Specific backends and interconnects are better tested by openstack/cinder and openstack/os-brick IMHO. > > 3. You have one for Ceph. Does that mean that we need an analog for > > other cinder back ends? > no. the ceph backend is tested seperatly as there are basicaly 3 > storage backend to the libvirt driver. local file which is tested as > part of block migration with qcow2 local block device which is tested > via cinder/lvm with the block device mounted on the host vi isci (FC > would be the same form a qemu point of view) and finally ceph is used > to test the qemu nataive network block device support. > > so we are not trying to test different cinder backends but rahter the > different image backends/qemu storage types supprot in nova Correct. > > 4. Do we need to anything analogous for Manila? > maybe but again that seams like its out of scope so intally i would > say no Correct, we don't have any coverage for this at the moment. > > 5. How do we address multi-attach volumes and multipathing? Expect > > that if we have multipthaing on origin node we laso have > > multipathing at destination at the end. > multi attach is already tested in the job i belive so we would > continue that. i think both cinder lvm and ceph support I'm actually not sure if we do have any multiattach LM coverage; something to potentially add with this refactor (see the sketch below). > multi attach. i dont think we test multipath in the gate in the > current jobs so i would not imediatly assume we woudl add it as part > of this refactor. As with FC I don't think this should live in our jobs tbh.
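For reference, one way such multiattach live-migration coverage could be exercised is by driving the compute API directly from a test or post playbook. The following is a minimal sketch only, not part of the proposed jobs: the endpoint, token handling and server ID are placeholders, and it assumes compute API microversion 2.60 (needed for multiattach volumes) and the 2.25+ "auto" block-migration behaviour.

    # Minimal sketch: request a live migration of a server that has a
    # multiattach volume attached, using nova's os-migrateLive action.
    # Endpoint, token and server ID are placeholders, not real values.
    import os
    import requests

    NOVA = "http://controller:8774/v2.1"    # assumed compute endpoint
    TOKEN = os.environ["OS_TOKEN"]          # keystone token (placeholder)
    SERVER_ID = "11111111-2222-3333-4444-555555555555"

    headers = {
        "X-Auth-Token": TOKEN,
        # 2.60+ is needed for multiattach volumes; 2.25+ lets
        # block_migration be negotiated automatically.
        "OpenStack-API-Version": "compute 2.60",
        "Content-Type": "application/json",
    }

    # Let the scheduler pick the destination host.
    body = {"os-migrateLive": {"host": None, "block_migration": "auto"}}
    resp = requests.post("%s/servers/%s/action" % (NOVA, SERVER_ID),
                         json=body, headers=headers)
    resp.raise_for_status()
    print("live migration requested:", resp.status_code)

A check like this would then poll the server status and assert the multiattach volume is still usable from both attachments after the migration completes.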
> > -----Original Message----- > > From: Lee Yarwood > > Sent: Tuesday, March 10, 2020 10:22 AM > > To: openstack-discuss at lists.openstack.org > > Subject: [nova] Breaking up and migrating the nova-live-migration job to Zuulv3 > > > > Hello all, > > > > I've started PoC'ing some ideas around $subject in the topic below > > and wanted to ask the wider team for feedback on the approach I'm > > taking: > > > > https://review.opendev.org/#/q/topic:nova-live-migration-zuulv3 > > > > My initial idea is to break the job up into the following smaller > > multinode jobs that are hopefully easier to understand and maintain. > > > > * nova-multinode-live-migration-py3 > > > > A simple LM job using the qcow2 imagebackend and LVM/iSCSI c-vol. > > > > * nova-multinode-live-migration-ceph-py3 > > this would be replaceing our existing devstack-plugin-ceph-tempest-py3 > job runing all the same test but in a multinode config with live > migration tests enabled in the tempest config. If we want to merge the evacuation tests back into this I was going to limit it to live migration tests only and continue running devstack-plugin-ceph-tempest-py3 for everything else. FWIW devstack-plugin-ceph-tempest-py3 is still NV even when we've been gating on the success of ceph live migration in the original nova-live-migration job. > > A ceph based LM job using rbd for both imagebackend and c-vol. > > > > * nova-multinode-evacuate-py3 > so this would be the only new job although i am not sure it should be > seperated out. we likely want to test evacuate with file,block and > network storage so i think it makes sense to do this as a post > playbook in the other two jobs. Yeah that's fair, I might start with this broken out just to work on that playbook/role before merging it back into the above jobs tbh. > > A separate evacuation job using qcow2 imagebackend and LVM/iSCSI > > c-vol. The existing script *could* then be ported to an Ansible > > role as part of the migration to Zuulv3. > > > > Hopefully this is pretty straight forward but I'd appreciate any > > feedback on this all the same. -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From lyarwood at redhat.com Wed Mar 11 18:36:48 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 11 Mar 2020 18:36:48 +0000 Subject: [nova] Breaking up and migrating the nova-live-migration job to Zuulv3 In-Reply-To: <00363f3f-ed3d-488d-98d4-c3025b7e179f@www.fastmail.com> References: <20200310152203.6fv57e5qyqhxdgep@lyarwood.usersys.redhat.com> <00363f3f-ed3d-488d-98d4-c3025b7e179f@www.fastmail.com> Message-ID: <20200311183648.c4dyz3r2upv5zyrd@lyarwood.usersys.redhat.com> On 10-03-20 08:25:38, Clark Boylan wrote: > On Tue, Mar 10, 2020, at 8:22 AM, Lee Yarwood wrote: > > Hello all, > > > > I've started PoC'ing some ideas around $subject in the topic below and > > wanted to ask the wider team for feedback on the approach I'm taking: > > > > https://review.opendev.org/#/q/topic:nova-live-migration-zuulv3 > > > > My initial idea is to break the job up into the following smaller > > multinode jobs that are hopefully easier to understand and maintain. > > > > * nova-multinode-live-migration-py3 > > > > A simple LM job using the qcow2 imagebackend and LVM/iSCSI c-vol. > > > > * nova-multinode-live-migration-ceph-py3 > > > > A ceph based LM job using rbd for both imagebackend and c-vol. 
> > > > * nova-multinode-evacuate-py3 > > > > A separate evacuation job using qcow2 imagebackend and LVM/iSCSI c-vol. > > The existing script *could* then be ported to an Ansible role as part of > > the migration to Zuulv3. > > > > Hopefully this is pretty straight forward but I'd appreciate any > > feedback on this all the same. > > Just a note that you can probably drop the -py3 suffix as I imagine that is assumed at this point? Gah, of course, thanks Clark. -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From juliaashleykreger at gmail.com Wed Mar 11 18:54:20 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 11 Mar 2020 11:54:20 -0700 Subject: [ironic] proposing Iury Gregory for bifrost-core, ironic-inspector-core, sushy-core Message-ID: Iury has been working hard across the ironic community and has been quite active in changing and improving our CI, as well as reviewing code contributions and helpfully pointing out issues or items that need to be fixed. I feel that he is on track to join ironic-core in the next few months, but first I propose we add him to bifrost-core, ironic-inspector-core, and sushy-core. Any objections? From opetrenko at mirantis.com Wed Mar 11 19:01:42 2020 From: opetrenko at mirantis.com (Oleksii Petrenko) Date: Wed, 11 Mar 2020 21:01:42 +0200 Subject: Add pytest, pytest-django and pytest-html to global requirements Message-ID: Adding pytest, pytest-django and pytest-html allows import of tests results in xml, html formats for openstack-horizon. What do you think about this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Wed Mar 11 19:06:23 2020 From: mthode at mthode.org (Matthew Thode) Date: Wed, 11 Mar 2020 14:06:23 -0500 Subject: Add pytest, pytest-django and pytest-html to global requirements In-Reply-To: References: Message-ID: <20200311190623.rrezlwz6kfoiuf4o@mthode.org> On 20-03-11 21:01:42, Oleksii Petrenko wrote: > Adding pytest, pytest-django and pytest-html allows import of tests results > in xml, html formats for openstack-horizon. What do you think about this? I think the question is in refrence to using something that is already included in global-requirements (like stestr or something else). Also, here's the review https://review.opendev.org/712315 Starting with stestr, could you explain why it was not good enough for your use case? -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From smooney at redhat.com Wed Mar 11 19:42:47 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 11 Mar 2020 19:42:47 +0000 Subject: Add pytest, pytest-django and pytest-html to global requirements In-Reply-To: <20200311190623.rrezlwz6kfoiuf4o@mthode.org> References: <20200311190623.rrezlwz6kfoiuf4o@mthode.org> Message-ID: <2f761df2a3db8c080f5cab75710b825242ed79f9.camel@redhat.com> On Wed, 2020-03-11 at 14:06 -0500, Matthew Thode wrote: > On 20-03-11 21:01:42, Oleksii Petrenko wrote: > > Adding pytest, pytest-django and pytest-html allows import of tests results > > in xml, html formats for openstack-horizon. What do you think about this? 
> > I think the question is in refrence to using something that is already > included in global-requirements (like stestr or something else). > > Also, here's the review https://review.opendev.org/712315 > > Starting with stestr, could you explain why it was not good enough for > your use case? More of a general question: if these are test-only deps that won't be used at runtime, which I think is the case for all of the above, do they even need to be in global-requirements? Ignoring the fact that devstack installs all test-requirements when it installs packages (which is a different topic), if this is only used for generating HTML reports for tests then it seems like we would not need to coordinate the software version. That said, I am also curious why the normal HTML report that gets generated from our tox runs in the CI is not sufficient. What does pytest-html add? Is it just the ability to produce a local HTML report when you run tox manually? Or do you plan to use pytest-django and pytest-html to do some other testing that you can't currently do? > From smooney at redhat.com Wed Mar 11 19:54:36 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 11 Mar 2020 19:54:36 +0000 Subject: Add pytest, pytest-django and pytest-html to global requirements In-Reply-To: <2f761df2a3db8c080f5cab75710b825242ed79f9.camel@redhat.com> References: <20200311190623.rrezlwz6kfoiuf4o@mthode.org> <2f761df2a3db8c080f5cab75710b825242ed79f9.camel@redhat.com> Message-ID: <4636a78af2b78b8d316ff5cd3d2b76a1a2173cd7.camel@redhat.com> On Wed, 2020-03-11 at 19:42 +0000, Sean Mooney wrote: > On Wed, 2020-03-11 at 14:06 -0500, Matthew Thode wrote: > > On 20-03-11 21:01:42, Oleksii Petrenko wrote: > > > Adding pytest, pytest-django and pytest-html allows import of tests results > > > in xml, html formats for openstack-horizon. What do you think about this? > > > > I think the question is in refrence to using something that is already > > included in global-requirements (like stestr or something else). > > > > Also, here's the review https://review.opendev.org/712315 > > > > Starting with stestr, could you explain why it was not good enough for > > your use case? > > More of a general question: if these are test-only deps that won't be used at > runtime, which I think is the case for all of the above, do they even need to > be in global-requirements? Ignoring the fact that devstack installs all > test-requirements when it installs packages (which is a different topic), if > this is only used for generating HTML reports for tests then it seems like we > would not need to coordinate the software version. > > That said, I am also curious why the normal HTML report that gets generated > from our tox runs in the CI is not sufficient. What does pytest-html add? Is > it just the ability to produce a local HTML report when you run tox manually? > Or do you plan to use pytest-django and pytest-html to do some other testing > that you can't currently do? Ah, I see that pytest-django will allow the removal of django test, which is presumably to address https://bugs.launchpad.net/horizon/+bug/1866666 based on https://review.opendev.org/#/c/711195/. If horizon is thinking of moving away from its current custom test runner https://github.com/openstack/horizon/blob/stable/pike/manage.py then stestr should at least be considered, unless there is a valid technical reason to go with pytest instead. That said, as someone who does not work on horizon, I'm surprised it's not already using stestr, given it still has a testr.conf; meaning at some point it moved away from testrepository to its current custom runner while the other projects moved to os-testr and then to stestr. That said, I kind of like pytest as a test runner, and it would solve some other issues with subunit in nova, so I'm not against adding it.
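For concreteness, here is a rough sketch of how the two plugins are typically invoked together. This is a sketch only: the flags are documented pytest, pytest-django and pytest-html options, but the settings module and output paths are assumptions for illustration, not horizon's actual configuration.

    # Illustrative: one pytest run producing both JUnit XML and HTML
    # reports for a Django application. Settings module and report
    # paths below are assumptions, not horizon's real configuration.
    import subprocess

    subprocess.run(
        [
            "pytest", "openstack_dashboard",
            "--ds=openstack_dashboard.test.settings",  # pytest-django settings flag
            "--junitxml=test_reports/results.xml",     # pytest's built-in XML output
            "--html=test_reports/results.html",        # provided by pytest-html
            "--self-contained-html",
        ],
        check=False,
    )

The --junitxml report is built into pytest itself; pytest-html adds the HTML report Sean asks about, and pytest-django supplies the Django settings/bootstrap integration.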
> > > > From opetrenko at mirantis.com Wed Mar 11 20:32:32 2020 From: opetrenko at mirantis.com (Oleksii Petrenko) Date: Wed, 11 Mar 2020 22:32:32 +0200 Subject: Add pytest, pytest-django and pytest-html to global requirements In-Reply-To: <4636a78af2b78b8d316ff5cd3d2b76a1a2173cd7.camel@redhat.com> References: <20200311190623.rrezlwz6kfoiuf4o@mthode.org> <2f761df2a3db8c080f5cab75710b825242ed79f9.camel@redhat.com> <4636a78af2b78b8d316ff5cd3d2b76a1a2173cd7.camel@redhat.com> Message-ID: > > > Starting with stestr, could you explain why it was not good enough for > > > your use case? Stestr will not provide us with fixtures for Django (for future use); also, with the help of pytest we would probably be able to unify HTML report creation across our projects. Also, XML export in different formats can help users aggregate test statistics. > > More of a general question: if these are test-only deps that won't be used at > > runtime, which I think is the case for all of the above, do they even need to > > be in global-requirements? Ignoring the fact that devstack installs all > > test-requirements when it installs packages (which is a different topic), if > > this is only used for generating HTML reports for tests then it seems like we > > would not need to coordinate the software version. pytest is needed to generate coverage reports. > > > > > > > > > From mtreinish at kortar.org Wed Mar 11 21:08:27 2020 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 11 Mar 2020 17:08:27 -0400 Subject: Add pytest, pytest-django and pytest-html to global requirements In-Reply-To: References: <20200311190623.rrezlwz6kfoiuf4o@mthode.org> <2f761df2a3db8c080f5cab75710b825242ed79f9.camel@redhat.com> <4636a78af2b78b8d316ff5cd3d2b76a1a2173cd7.camel@redhat.com> Message-ID: <20200311210827.GA90029@sinanju> On Wed, Mar 11, 2020 at 10:32:32PM +0200, Oleksii Petrenko wrote: > > > > > Starting with stestr, could you explain why it was not good enough for > > > > > your use case? > > > > Stestr will not provide us with fixtures for Django (for future use); > also, with the help of pytest we would probably be able to unify HTML > report creation across our projects. Also, XML export in different > formats can help users aggregate test statistics. The aggregated data view already exists: http://status.openstack.org/openstack-health/#/ We also have 2 different html views of a test run depending on the level of detail you want: https://7dd927a4891851ac968e-517bfbb0b76f5445108257ba8a306671.ssl.cf5.rackcdn.com/712315/2/check/tempest-full-py3/c20f9f1/testr_results.html and https://7dd927a4891851ac968e-517bfbb0b76f5445108257ba8a306671.ssl.cf5.rackcdn.com/712315/2/check/tempest-full-py3/c20f9f1/controller/logs/stackviz/index.html#/stdin/timeline As for "xml exporting" I assume you're talking about xunitxml. There are several limitations around it, especially for parallel test execution which is why stestr is built around and uses subunit.
But, if you want to generate xunitxml from subunit for any reason this is straightforward to do, it's built into subunit: https://github.com/testing-cabal/subunit/blob/master/filters/subunit2junitxml > > > > more of a general question if they are test only deps that wont be used at runtime which i think is the case in all of > > > the above do they enven need to be in Global-requirements? ignoring the fact that devstack installes all test- > > > requireemtes when it in stall packages which is a different topic if this is only used for generating html report for > > > tests then it seams liek we would not need to corrdiate the software version. > pytest is needed to generate coverage reports. I don't understand this either, we have coverage jobs already running on most projects. The reports get published as part of the job artifacts: https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_fd1/706509/2/check/openstack-tox-cover/fd1df05/cover/index.html Also as I pointed out on the review, this is not the first time we've discussed this. Since I started working on OpenStack why not runner or framework X (at one point it was nose, then it switched to pytest) has been brought up by someone. We tried to write it down in the project testing interface: https://governance.openstack.org/tc/reference/pti/python.html#python-test-running Basically, by using a unittest based runner anybody can use their preferred test runner locally. stestr is used for CI because of the parallel execution and subunit integration to leverage all the infra tooling built around it. That being said horizon has always been an exception because django has special requirements for testing (mainly they publish their testing framework as an extension for a test frameworks other than stdlib unittest). In the past it was needed a nose extension and now it looks like that has been updated to be a pytest exception. I don't see a problem to just morph the old exception that horizon uses nose to horizon uses pytest if it's really necessary to test django. If you do end up using pytest because there is no other choice for django testing, you can convert the xunitxml to subunit to integrate it into all those existing tools I mentioned before with either: https://github.com/mtreinish/health-helm/blob/master/junitxml2subunit.py or https://github.com/mtreinish/junitxml2subunit (do note stackviz and subunit2sql/openstack-health won't be really useful with xunitxml to subunit conversion because xunitxml doesn't track execution timestamps) -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From gouthampravi at gmail.com Wed Mar 11 21:17:53 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 11 Mar 2020 14:17:53 -0700 Subject: [OSSA-2020-002] Manila: Unprivileged users can retrieve, use and manipulate share networks (CVE-2020-9543) Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 ================================================================================= OSSA-2020-002: Unprivileged users can retrieve, use and manipulate share networks ================================================================================= :Date: March 10, 2020 :CVE: CVE-2020-9543 Affects ~~~~~~~ - - Manila: <7.4.1, >=8.0.0 <8.1.1, >=9.0.0 <9.1.1 Description ~~~~~~~~~~~ Tobias Rydberg from City Network Hosting AB reported a vulnerability with the manila's share network APIs. An attacker can retrieve and manipulate share networks that do not belong to them if they possess the share network ID. By exploiting this vulnerability, they can view and manipulate share network subnets and use the share network to create resources such as shares and share groups. Patches ~~~~~~~ - - https://review.opendev.org/712167 (Pike) - - https://review.opendev.org/712166 (Queens) - - https://review.opendev.org/712165 (Rocky) - - https://review.opendev.org/712164 (Stein) - - https://review.opendev.org/712163 (Train) - - https://review.opendev.org/712158 (Ussuri) Credits ~~~~~~~ - - Tobias Rydberg from City Network Hosting AB (CVE-2020-9543) References ~~~~~~~~~~ - - https://launchpad.net/bugs/1861485 - - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9543 Notes ~~~~~ - - The stable/queens and stable/pike branches are under extended maintenance and will receive no new point releases, but patches for them are provided as a courtesy. - -- Goutham Pacha Ravi PTL, OpenStack Manila -----BEGIN PGP SIGNATURE----- wsFcBAEBCAAGBQJeaVVWAAoJEDEySBmyuw9i8c0P/Rjkr4mxbDi7GzDCLdvC 4SK31LaF92uop/t2XXnm/p2Lui/4nG6ss46ajnmsplN2D//f+/NhBC+Oa/+R 3rwEl1YFFO8NoNcpjWS+6oE66HNPEPTxSMheyfWJTjl8bmH4wL0ZGnQ+cNWM q1XhO5Qjwv58epa0IK5vRA6lfWEmZQ69/+7nf6Tyha8vuLFOpStWXj7sV0SZ j/AxvTeCu/30EH9U4E10VQ/GpHz00WuueEYUCJgOZw4jGk32238yXmuF1fBU il4PR53ZPFqb20It56t/rrr0sGB8lLui7KiBhaHFmjRK8YqwD1pqz9XAaxNq CsgbkMnR8+WsheAgMr49NeYsQ1PD6SCLBXPQGVNus/pl5bzctIaqmswPN1ey p23tREpTEjOxg9mQJLkTCKICvi0alx3Nlk9EsrSapovJk/v8BJGrjkIj8iH0 a1pAMzjcHfGpCTGO2dHBOfJs7BXL9B6Jdba9bdRTt5BRI4NHKwvM9SP9yBb6 F7UNoo8cd+pQp0EV6i8CPUTF/qWU5rqOyIr9tGTAOPm0lg8+uIOot7oZzJcu QBaKyEZu9X4OV1o5mZ68KokiVP7RWYGMGz94NV4ZMNNfmgpsxP/h2+MZCUQJ +lmMPInx5abdwMtqiyhrSQxdgLCOKlWMYXgrs7w225sjv2+LpuVltIPXGPEJ tJq+ =tXeN -----END PGP SIGNATURE----- From emilien at redhat.com Wed Mar 11 22:46:13 2020 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 11 Mar 2020 18:46:13 -0400 Subject: FW: 2020 OSF Events & coronavirus In-Reply-To: <1583528201.853712216@emailsrvr.com> References: <1583528201.853712216@emailsrvr.com> Message-ID: Hi Mark, Thanks for the transparency, as usual. I have a few thoughts, please read inline. On Fri, Mar 6, 2020 at 4:04 PM Mark Collier wrote: > upcoming event in Vancouver is no exception. The OpenDev tracks > > each morning will be programmed by volunteers from the community, and > the project > > teams will be organizing their own conversations as well each afternoon > M-W, and > > all day Thursday. > > > > But the larger question is here: should the show go on? 
> > > > The short answer is that as of now, the Vancouver and Berlin events are > still > > scheduled to happen in June (8-11) and October (19-23), respectively. > > > > However, we are willing to cancel or approach the events in a different > way (i.e. > > virtual) if the facts indicate that is the best path, and we know the > facts are > > changing rapidly. One of the most critical inputs we need is to hear > from each of > > you. We know that many of you rely on the twice-annual events to get > together and > > make rapid progress on the software, which is one reason we are not > making any > > decisions in haste. We also know that many of you may be unable or > unwilling to > > travel in June, and that is critical information to hear as we get > closer to the > > event so that we can make the most informed decision. > I believe that we, as a community should show the example and our strengths by cancelling the Vancouver event and organize a virtual event like some other big events are doing. There is an opportunity for the OSF to show leadership in Software communities and acknowledge the risk of spread during that meeting; not only for the people attending it but for also those in contact with these people later. I'm not a doctor nor I know much about the virus; but I'm not interested to travel and take the risk to 1) catch the virus and 2) spread it at home and in my country; and as a community member, I feel like our responsibility is also to maintain ourselves healthy. In my opinion, the sooner we cancel, the better we can focus on organizing the virtual meetings, and also we can influence more communities to take that kind of decisions. Thanks Mark for starting that discussion, it's a perfect sign of how healthy is our community; and hopefully it will continue to be. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguel at mlavalle.com Wed Mar 11 23:36:51 2020 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 11 Mar 2020 18:36:51 -0500 Subject: FW: 2020 OSF Events & coronavirus In-Reply-To: References: <1583528201.853712216@emailsrvr.com> Message-ID: Hi, At Verizon Media we haven't been told specifically we won't attend the Vancouver event. However, all international travel is cancelled and in-country trips are highly restricted Regards Miguel On Wed, Mar 11, 2020 at 5:47 PM Emilien Macchi wrote: > Hi Mark, > > Thanks for the transparency, as usual. I have a few thoughts, please read > inline. > > On Fri, Mar 6, 2020 at 4:04 PM Mark Collier wrote: > >> upcoming event in Vancouver is no exception. The OpenDev tracks >> > each morning will be programmed by volunteers from the community, and >> the project >> > teams will be organizing their own conversations as well each afternoon >> M-W, and >> > all day Thursday. >> > >> > But the larger question is here: should the show go on? >> > >> > The short answer is that as of now, the Vancouver and Berlin events are >> still >> > scheduled to happen in June (8-11) and October (19-23), respectively. >> > >> > However, we are willing to cancel or approach the events in a different >> way (i.e. >> > virtual) if the facts indicate that is the best path, and we know the >> facts are >> > changing rapidly. One of the most critical inputs we need is to hear >> from each of >> > you. We know that many of you rely on the twice-annual events to get >> together and >> > make rapid progress on the software, which is one reason we are not >> making any >> > decisions in haste. 
We also know that many of you may be unable or >> unwilling to >> > travel in June, and that is critical information to hear as we get >> closer to the >> > event so that we can make the most informed decision. >> > > I believe that we, as a community should show the example and our > strengths by cancelling the Vancouver event and organize a virtual event > like some other big events are doing. > There is an opportunity for the OSF to show leadership in Software > communities and acknowledge the risk of spread during that meeting; not > only for the people attending it but for also those in contact with these > people later. > > I'm not a doctor nor I know much about the virus; but I'm not interested > to travel and take the risk to 1) catch the virus and 2) spread it at home > and in my country; and as a community member, I feel like our > responsibility is also to maintain ourselves healthy. > > In my opinion, the sooner we cancel, the better we can focus on organizing > the virtual meetings, and also we can influence more communities to take > that kind of decisions. > > Thanks Mark for starting that discussion, it's a perfect sign of how > healthy is our community; and hopefully it will continue to be. > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Thu Mar 12 00:00:37 2020 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 12 Mar 2020 08:00:37 +0800 Subject: Re: [cyborg] Proposing core reviewers In-Reply-To: <8232aae9fd2fcd78bbcf039dc1cc680cba417ca0.camel@redhat.com> References: <8232aae9fd2fcd78bbcf039dc1cc680cba417ca0.camel@redhat.com> Message-ID: Hi Sean, This is a good point about giving contributors clarity; I wasn't thinking about that. Re "be restored": my thought was that anyone who comes back should not come back as a core member automatically; they should follow the process and be nominated/elected again. If a previously inactive core could be removed and then simply restored on return, that would also be problematic :) That is also why I suggested not to preemptively remove inactive cores. On Thu, Mar 12, 2020 at 12:37 AM Sean Mooney wrote: > On Thu, 2020-03-12 at 00:17 +0800, Zhipeng Huang wrote: > > Big +1 for Brin and shogo's nomination and well deserved :) > > > > I'm a little bit concerned over the 18 months period. The original rule we > > setup is volunteer step down, since this is a small team we want to > > acknowledge everyone that has made significant contributions. Some of the > > inactive core reviewers like Justin Kilpatrick have moved on a long time > > ago, and I don't see people like him could do any harm to the project. > > > > But if the core reviewer has a size limit in the system, that would be > > reasonable to replace the inactive ones with the new recruits :) > it is generally considerd best pratice to maintian the core team adding or > removing > people based on there activity. if a core is removed due to in activity > and they > come back they can always be restored but it give a bad perception if a > project has > like 20 core but only 2 are active. as a new contibutor you dont know > which ones are > active and it can be frustrating to reach out to them and get no responce. > also just form a project healt point of view it make the project look like > its more diverse > or more active then it actully is which is also not generally a good thing.
> > that said core can step down if they feel like they can contribute time > anymore > when ever they like so and if a core is steping a way for a few months but > intends to > come back they can also say that in advance and there is no harm in > leaving them > for a cycle or two but in general after a period of in activity (usally > more then a full release/6months) > i think its good to reduce back down the core team. > > > > Just my two cents > > > > On Wed, Mar 11, 2020 at 10:19 PM Nadathur, Sundar < sundar.nadathur at intel.com> > > wrote: > > > > > Hello all, > > > Brin Zhang has been actively contributing to Cyborg in various > > > areas, adding new features, improving quality, reviewing patches, and > > > generally helping others in the community. Despite the relatively short time, > > > he has been one of the most prolific contributors, and brings an enthusiastic > > > and active mindset. I would like to thank him and acknowledge his significant > > > contributions by proposing him as a core reviewer for Cyborg. > > > > > > Shogo Saito has been active in Cyborg since Train release. He has been > > > driving the Cyborg client improvements, including its revamp to use > > > OpenStackSDK. Previously he was instrumental in the transition to Python 3, > > > testing and fixing issues in the process. As he has access to real FPGA > > > hardware, he brings a users' perspective and also tests Cyborg with real > > > hardware. I would like to thank and acknowledge him for his steady valuable > > > contributions, and propose him as a core reviewer for Cyborg. > > > > > > Some of the currently listed core reviewers have not been participating > > > for a lengthy period of time. It is proposed that those who have had no > > > contributions for the past 18 months – i.e. no participation in meetings, > > > no code contributions and no reviews – be removed from the list of core > > > reviewers. > > > > > > If no objections are made known by March 20, I will make the changes > > > proposed above. > > > > > > Thanks. > > > > > > Regards, > > > Sundar -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Mar 12 00:13:32 2020 From: satish.txt at gmail.com (Satish Patel) Date: Wed, 11 Mar 2020 20:13:32 -0400 Subject: Neutron RabbitMQ issues In-Reply-To: <825e802d-5a6f-4e96-dcf5-9b10332ebf3e@civo.com> References: <825e802d-5a6f-4e96-dcf5-9b10332ebf3e@civo.com> Message-ID: I am also dealing with some sort of RabbitMQ performance issue, but it's not as bad as yours. This is my favorite video on the subject; not sure whether you have seen it before, but posting it here anyway - https://www.youtube.com/watch?v=bpmgxrPOrZw On Wed, Mar 11, 2020 at 10:24 AM Grant Morley wrote: > > Hi all, > > We are currently experiencing some fairly major issues with our > OpenStack cluster. It all appears to be with Neutron and RabbitMQ. We > are seeing a lot of time out messages in responses to replies and > because of this instance creation or anything to do with instances and > networking is broken. > > We are running OpenStack Queens.
> > We have already tuned Rabbit for Neutron by doing the following on neutron: > > heartbeat_timeout_threshold = 0 > rpc_conn_pool_size = 300 > rpc_thread_pool_size = 2048 > rpc_response_timeout = 3600 > rpc_poll_timeout = 60 > > ## Rpc all > executor_thread_pool_size = 64 > rpc_response_timeout = 3600 > > What we are seeing in the error logs for neutron for all services > (l3-agent, dhcp, linux-bridge etc ) are these timeouts: > > https://pastebin.com/Fjh23A5a > > We have manually tried to get everything in sync by forcing fail-over of > the networking which seems to get routers in sync. > > We are also seeing that there are a lot of "unacknowledged" messages in > RabbitMQ for 'q-plugin' in the neutron queues. > > Some times restarting of the services on neutron gets these back > acknowledged again, however the timeouts come back. > > The RabbitMQ servers themselves are not loaded at all. All memory, file > descriptors and errlang processes have plenty of resources available. > > We are also seeing a lot of rpc issues: > > Timeout in RPC method release_dhcp_port. Waiting for 1523 seconds before > next attempt. If the server is not down, consider increasing the > rpc_response_timeout option as Neutron server(s) may be overloaded and > unable to respond quickly enough.: MessagingTimeout: Timed out waiting > for a reply to message ID 965fa44ab4f6462fa378a1cf7259aad4 > 2020-03-10 19:02:33.548 16242 ERROR neutron.common.rpc > [req-a858afbb-5083-4e21-a309-6ee53582c4d9 - - - - -] Timeout in RPC > method release_dhcp_port. Waiting for 3347 seconds before next attempt. > If the server is not down, consider increasing the rpc_response_timeout > option as Neutron server(s) may be overloaded and unable to respond > quickly enough.: MessagingTimeout: Timed out waiting for a reply to > message ID 7937465f15634fbfa443fe1758a12a9c > > Does anyone know if there is anymore tuning to be done at all? Upgrading > for us at the moment to a newer version isn't really an option > unfortunately. > > Because of our setup, we also have roughly 800 routers enabled and I > know that will be putting a load on the system. However these problems > have only started to happen roughly 1 week ago and have steadily got worse. > > If anyone has any use cases for this or any more recommendations that > would be great. > > Many thanks, > > From sundar.nadathur at intel.com Thu Mar 12 00:40:42 2020 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Thu, 12 Mar 2020 00:40:42 +0000 Subject: [cyborg] Proposing core reviewers In-Reply-To: <8232aae9fd2fcd78bbcf039dc1cc680cba417ca0.camel@redhat.com> References: <8232aae9fd2fcd78bbcf039dc1cc680cba417ca0.camel@redhat.com> Message-ID: > From: Sean Mooney > Sent: Wednesday, March 11, 2020 9:37 AM > > On Thu, 2020-03-12 at 00:17 +0800, Zhipeng Huang wrote: > > Big +1 for Brin and shogo's nomination and well deserved :) > > > > I'm a little bit concerned over the 18 months period. The original > > rule we setup is volunteer step down, since this is a small team we > > want to acknowledge everyone that has made significant contributions. > > Some of the inactive core reviewers like Justin Kilpatrick have moved > > on a long time ago, and I don't see people like him could do any harm to > the project. > > > > But if the core reviewer has a size limit in the system, that would be > > reasonable to replace the inactive ones with the new recruits :) > it is generally considerd best pratice to maintian the core team adding or > removing people based on there activity. 
if a core is removed due to in > activity and they come back they can always be restored but it give a bad > perception if a project has like 20 core but only 2 are active. as a new > contibutor you dont know which ones are active and it can be frustrating to > reach out to them and get no responce. > also just form a project healt point of view it make the project look like its > more diverse or more active then it actully is which is also not generally a > good thing. > > that said core can step down if they feel like they can contribute time > anymore when ever they like so and if a core is steping a way for a few > months but intends to come back they can also say that in advance and there > is no harm in leaving them for a cycle or two but in general after a period of > in activity (usally more then a full release/6months) i think its good to reduce > back down the core team. > > > > Just my two cents As of now, Cyborg core team officially has 12 members [1]. That is hardly small. Justin Kilpatrick seems to be gone for good; he didn't respond to my emails. Rushil Chugh confirmed that he is not working on OpenStack anymore and asked to step down as core. With due thanks to him for his contributions, I'll go ahead. Those are the two cores I had in mind. Agree with Sean that it is better to keep the list of core reviewers up to date. With all the changes in Cyborg over the past 18 months, it will be tough for a person to jump in after a long hiatus and resume as a core reviewer. Even if they want to come back, it is better for them to come up to speed first. Given this background, if there is any objection to the removal of these two cores, please let me know. [1] https://review.opendev.org/#/admin/groups/1243,members Regards, Sundar > > On Wed, Mar 11, 2020 at 10:19 PM Nadathur, Sundar > > > > wrote: > > > > > Hello all, > > > Brin Zhang has been actively contributing to Cyborg in various > > > areas, adding new features, improving quality, reviewing patches, > > > and generally helping others in the community. Despite the > > > relatively short time, he has been one of the most prolific > > > contributors, and brings an enthusiastic and active mindset. I would > > > like to thank him and acknowledge his significant contributions by > proposing him as a core reviewer for Cyborg. > > > > > > Shogo Saito has been active in Cyborg since Train release. He has > > > been driving the Cyborg client improvements, including its revamp to > > > use OpenStackSDK. Previously he was instrumental in the transition > > > to Python 3, testing and fixing issues in the process. As he has > > > access to real FPGA hardware, he brings a users’ perspective and > > > also tests Cyborg with real hardware. I would like to thank and > > > acknowledge him for his steady valuable contributions, and propose him > as a core reviewer for Cyborg. > > > > > > Some of the currently listed core reviewers have not been > > > participating for a lengthy period of time. It is proposed that > > > those who have had no contributions for the past 18 months – i.e. no > > > participation in meetings, no code contributions and no reviews – be > > > removed from the list of core reviewers. > > > > > > If no objections are made known by March 20, I will make the changes > > > proposed above. > > > > > > Thanks. 
> > > > > > Regards, > > > Sundar From fungi at yuggoth.org Thu Mar 12 00:43:26 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 12 Mar 2020 00:43:26 +0000 Subject: [all] IRC channel cleanup (was: A call for consolidation and simplification) In-Reply-To: References: Message-ID: <20200312004325.p7zwr7pkc6aeuezc@yuggoth.org> On 2020-03-11 16:15:32 +0100 (+0100), Thierry Carrez wrote: [...] > Do we really need 180 IRC channels [...] There are around 110 we currently seem to deem worth logging with the "openstack" meetbot (note that not all are OpenStack community channels): A quick survey of logs suggests these have seen no comment from a human in 6 months (all are OpenStack-related): #scientific-wg #openstack-women #openstack-sprint #openstack-net-bgpvpn #openstack-heat-translator #openstack-forum #openstack-dragonflow #congress And these have averaged fewer than one comment from a human per month since September: #openstack-outreachy #murano #openstack-tricircle #openstack-performance #openstack-ec2api #openstack-browbeat Does anyone object to us ceasing logging of the above 14 channels? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amy at demarco.com Thu Mar 12 00:49:42 2020 From: amy at demarco.com (Amy Marrich) Date: Wed, 11 Mar 2020 19:49:42 -0500 Subject: [all] IRC channel cleanup (was: A call for consolidation and simplification) In-Reply-To: <20200312004325.p7zwr7pkc6aeuezc@yuggoth.org> References: <20200312004325.p7zwr7pkc6aeuezc@yuggoth.org> Message-ID: You can go ahead and archive #openstack-women as anyone should now be on #openstack-diversity. Thanks, Amy (spotz) On Wed, Mar 11, 2020 at 7:45 PM Jeremy Stanley wrote: > On 2020-03-11 16:15:32 +0100 (+0100), Thierry Carrez wrote: > [...] > > Do we really need 180 IRC channels > [...] > > There are around 110 we currently seem to deem worth logging with > the "openstack" meetbot (note that not all are OpenStack community > channels): > > https://opendev.org/opendev/system-config/src/commit/c24853076ddc59932a0760ddc2dcafdc6958340e/hiera/common.yaml#L102-L214 > > > > A quick survey of logs suggests these have seen no comment from a > human in 6 months (all are OpenStack-related): > > #scientific-wg > #openstack-women > #openstack-sprint > #openstack-net-bgpvpn > #openstack-heat-translator > #openstack-forum > #openstack-dragonflow > #congress > > And these have averaged fewer than one comment from a human per > month since September: > > #openstack-outreachy > #murano > #openstack-tricircle > #openstack-performance > #openstack-ec2api > #openstack-browbeat > > Does anyone object to us ceasing logging of the above 14 channels? > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Mar 12 01:03:33 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 12 Mar 2020 01:03:33 +0000 Subject: [all] Removing defunct meeting records (was: A call for consolidation and simplification) In-Reply-To: References: Message-ID: <20200312010333.auxhcao54e6gbf42@yuggoth.org> On 2020-03-11 16:15:32 +0100 (+0100), Thierry Carrez wrote: [...] > we have too many meetings (76, in case you were wondering), too > much energy spent running them, too much frustration when nobody > joins. [...] 
Here's a list of 25 currently defined meetings which have not been held in 2020 (though it's possible some are being held with a different meeting_id passed to #startmeeting than is listed in the meeting record):

CloudKitty Team Meeting
Congress Team Meeting
Containers Team Meeting
Documentation Team Meeting
First Contact SIG Meeting
Freezer Meeting
Glance Bug Squad Meeting
Group Based Policy Team Meeting
Heat (Orchestration) Team Meeting
I18N Team Meeting
Interop Working Group Meeting
Kuryr Project Office Hours
LOCI Development Meeting
Mistral Meeting
Networking VPP team meeting
OpenStack Charms
Placement Team Office Hour
PowerVM Driver Meeting
Public Cloud SIG
Searchlight Team Meeting
Telemetry Team Meeting
Trove (DBaaS) Team Meeting
Upgrades SIG
Vitrage Team Meeting
Zaqar Team Meeting

I recommend at least correcting inaccurate meeting_id entries in the YAML files here: https://opendev.org/opendev/irc-meetings/src/branch/master/meetings/ If there are meetings you know are not being held, please submit changes to remove their corresponding YAML files. I'll set myself a reminder to rerun this query again sometime soon and we can discuss bulk removing any which are presumed defunct at that time. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From eandersson at blizzard.com Thu Mar 12 01:05:21 2020 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Thu, 12 Mar 2020 01:05:21 +0000 Subject: RE: Neutron RabbitMQ issues In-Reply-To: References: <825e802d-5a6f-4e96-dcf5-9b10332ebf3e@civo.com> Message-ID: We are hitting something awfully similar. We have basically been hitting a few pretty serious bugs with RabbitMQ. The main one is that when a RabbitMQ server crashes, or gets split-brained, it does not always recover; the same can happen even when just one node is restarted. We sometimes end up with orphaned consumers that keep consuming messages, but the messages effectively go to /dev/null. Another issue is that sometimes bindings stop working: they are visibly there, but simply do not route traffic to the intended queues. e.g. https://github.com/rabbitmq/rabbitmq-server/issues/641 I wrote two quick scripts to audit these issues. http://paste.openstack.org/show/790569/ - Check whether you have orphaned consumers (may need pagination if you have a large deployment). http://paste.openstack.org/show/790570/ - Check whether the bindings are bad for a specific queue. The main issue seems to be that the number of queues + connections causes recovery after a node restart to leave bindings and/or queues in an "orphaned" state. Best Regards, Erik Olof Gunnar Andersson -----Original Message----- From: Satish Patel Sent: Wednesday, March 11, 2020 5:14 PM To: Grant Morley Cc: openstack-discuss at lists.openstack.org Subject: Re: Neutron RabbitMQ issues I am also dealing with some short of rabbitmq performance issue but its not as worst you your issue. This is my favorite video, not sure you have seen this before or not but anyway posting here - https://urldefense.com/v3/__https://www.youtube.com/watch?v=bpmgxrPOrZw__;!!Ci6f514n9QsL8ck!1rOR_L7ya6zmMgZ0owpfO7NvhsPOzbgyUplonob2awcg8hd80yCAT_ynvarUEZv4Mw$ On Wed, Mar 11, 2020 at 10:24 AM Grant Morley wrote: > > Hi all, > > We are currently experiencing some fairly major issues with our > OpenStack cluster. It all appears to be with Neutron and RabbitMQ.
We > are seeing a lot of time out messages in responses to replies and > because of this instance creation or anything to do with instances and > networking is broken. > > We are running OpenStack Queens. > > We have already tuned Rabbit for Neutron by doing the following on neutron: > > heartbeat_timeout_threshold = 0 > rpc_conn_pool_size = 300 > rpc_thread_pool_size = 2048 > rpc_response_timeout = 3600 > rpc_poll_timeout = 60 > > ## Rpc all > executor_thread_pool_size = 64 > rpc_response_timeout = 3600 > > What we are seeing in the error logs for neutron for all services > (l3-agent, dhcp, linux-bridge etc ) are these timeouts: > > https://urldefense.com/v3/__https://pastebin.com/Fjh23A5a__;!!Ci6f514n > 9QsL8ck!1rOR_L7ya6zmMgZ0owpfO7NvhsPOzbgyUplonob2awcg8hd80yCAT_ynvapLQK > 9aOA$ > > We have manually tried to get everything in sync by forcing fail-over > of the networking which seems to get routers in sync. > > We are also seeing that there are a lot of "unacknowledged" messages > in RabbitMQ for 'q-plugin' in the neutron queues. > > Some times restarting of the services on neutron gets these back > acknowledged again, however the timeouts come back. > > The RabbitMQ servers themselves are not loaded at all. All memory, > file descriptors and errlang processes have plenty of resources available. > > We are also seeing a lot of rpc issues: > > Timeout in RPC method release_dhcp_port. Waiting for 1523 seconds > before next attempt. If the server is not down, consider increasing > the rpc_response_timeout option as Neutron server(s) may be overloaded > and unable to respond quickly enough.: MessagingTimeout: Timed out > waiting for a reply to message ID 965fa44ab4f6462fa378a1cf7259aad4 > 2020-03-10 19:02:33.548 16242 ERROR neutron.common.rpc > [req-a858afbb-5083-4e21-a309-6ee53582c4d9 - - - - -] Timeout in RPC > method release_dhcp_port. Waiting for 3347 seconds before next attempt. > If the server is not down, consider increasing the > rpc_response_timeout option as Neutron server(s) may be overloaded and > unable to respond quickly enough.: MessagingTimeout: Timed out waiting > for a reply to message ID 7937465f15634fbfa443fe1758a12a9c > > Does anyone know if there is anymore tuning to be done at all? > Upgrading for us at the moment to a newer version isn't really an > option unfortunately. > > Because of our setup, we also have roughly 800 routers enabled and I > know that will be putting a load on the system. However these problems > have only started to happen roughly 1 week ago and have steadily got worse. > > If anyone has any use cases for this or any more recommendations that > would be great. > > Many thanks, > > From whayutin at redhat.com Thu Mar 12 02:42:52 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 11 Mar 2020 20:42:52 -0600 Subject: [tripleo] Missing tag in cron container image - no recheck please In-Reply-To: References: Message-ID: On Wed, Mar 11, 2020 at 8:12 AM Emilien Macchi wrote: > Hi folks, > > We seem to have an issue with container images, where one (at least) has a > missing tag: > https://bugs.launchpad.net/tripleo/+bug/1866927 > > It is causing most of our jobs to go red and fail on: > tripleo_common.image.exception.ImageNotFoundException: Not found image: > docker:// > docker.io/tripleomaster/centos-binary-cron:3621159be13b41f8ead2e873b357f4a5 > > Please refrain from approving or rechecking patches until we have > sorted this out. > > Thanks and stay tuned. > -- > Emilien Macchi > This issue has been resolved.. 
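To make the audit approach Erik describes above concrete, here is a minimal sketch (not his actual pasted scripts) of flagging queues that accumulate messages without active consumers via the RabbitMQ management plugin's HTTP API. The host, credentials and threshold are assumptions for illustration, and, as Erik notes, a large deployment will need pagination.

    # Rough sketch of the kind of queue audit described above, using
    # the RabbitMQ management HTTP API. Host, credentials and the
    # message threshold are placeholders.
    import requests

    MGMT = "http://rabbit1:15672/api/queues"   # assumed management endpoint
    AUTH = ("guest", "guest")                  # placeholder credentials

    for q in requests.get(MGMT, auth=AUTH).json():
        # Messages piling up on a queue with no consumers is a common
        # symptom of the orphaned-consumer / stale-binding state.
        if q.get("messages", 0) > 100 and q.get("consumers", 0) == 0:
            print("suspect queue: %s (%d messages, %d consumers)"
                  % (q["name"], q["messages"], q["consumers"]))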
-------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Thu Mar 12 03:29:39 2020 From: satish.txt at gmail.com (Satish Patel) Date: Wed, 11 Mar 2020 23:29:39 -0400 Subject: Neutron RabbitMQ issues In-Reply-To: References: <825e802d-5a6f-4e96-dcf5-9b10332ebf3e@civo.com> Message-ID: Totally agreed with you, I had similar issue when my cluster got split and not able to recover from that state then finally i have to re-build it from scratch to make it functional. There isn't any good guideline about rabbitmq capacity planning, every deployment is unique. Anyway thanks for those script i will hook them up with my monitoring system. On Wed, Mar 11, 2020 at 9:05 PM Erik Olof Gunnar Andersson wrote: > > We are hitting something awfully similar. > > We have basically been hitting a few pretty serious bugs with RabbitMQ. > > The main one is when a RabbitMQ server crashes, or gets split brain it does not always recover, or even when just one node is restarted. We sometimes end up with orphaned consumers that keep consuming messages, but goes to /dev/null pretty much. Another issue is that sometimes bindings stop working. They are visually there, but simply does not route traffic to the intended queues. > > e.g. https://github.com/rabbitmq/rabbitmq-server/issues/641 > > I wrote two quick scripts to audit these issues. > http://paste.openstack.org/show/790569/ - Check if you have orphaned consumers (may need pagination if you have a large deployment). > http://paste.openstack.org/show/790570/ - Check if the bindings are bad for a specific queue. > > The main issue seems to be the number of queues + connections causing the recovery after restarting a node to cause bindings and/or queues to get into an "orphaned" state. > > Best Regards, Erik Olof Gunnar Andersson > > -----Original Message----- > From: Satish Patel > Sent: Wednesday, March 11, 2020 5:14 PM > To: Grant Morley > Cc: openstack-discuss at lists.openstack.org > Subject: Re: Neutron RabbitMQ issues > > I am also dealing with some short of rabbitmq performance issue but its not as worst you your issue. > > This is my favorite video, not sure you have seen this before or not but anyway posting here - https://urldefense.com/v3/__https://www.youtube.com/watch?v=bpmgxrPOrZw__;!!Ci6f514n9QsL8ck!1rOR_L7ya6zmMgZ0owpfO7NvhsPOzbgyUplonob2awcg8hd80yCAT_ynvarUEZv4Mw$ > > On Wed, Mar 11, 2020 at 10:24 AM Grant Morley wrote: > > > > Hi all, > > > > We are currently experiencing some fairly major issues with our > > OpenStack cluster. It all appears to be with Neutron and RabbitMQ. We > > are seeing a lot of time out messages in responses to replies and > > because of this instance creation or anything to do with instances and > > networking is broken. > > > > We are running OpenStack Queens. 
> > > > We have already tuned Rabbit for Neutron by doing the following on neutron: > > > > heartbeat_timeout_threshold = 0 > > rpc_conn_pool_size = 300 > > rpc_thread_pool_size = 2048 > > rpc_response_timeout = 3600 > > rpc_poll_timeout = 60 > > > > ## Rpc all > > executor_thread_pool_size = 64 > > rpc_response_timeout = 3600 > > > > What we are seeing in the error logs for neutron for all services > > (l3-agent, dhcp, linux-bridge etc ) are these timeouts: > > > > https://urldefense.com/v3/__https://pastebin.com/Fjh23A5a__;!!Ci6f514n > > 9QsL8ck!1rOR_L7ya6zmMgZ0owpfO7NvhsPOzbgyUplonob2awcg8hd80yCAT_ynvapLQK > > 9aOA$ > > > > We have manually tried to get everything in sync by forcing fail-over > > of the networking which seems to get routers in sync. > > > > We are also seeing that there are a lot of "unacknowledged" messages > > in RabbitMQ for 'q-plugin' in the neutron queues. > > > > Some times restarting of the services on neutron gets these back > > acknowledged again, however the timeouts come back. > > > > The RabbitMQ servers themselves are not loaded at all. All memory, > > file descriptors and errlang processes have plenty of resources available. > > > > We are also seeing a lot of rpc issues: > > > > Timeout in RPC method release_dhcp_port. Waiting for 1523 seconds > > before next attempt. If the server is not down, consider increasing > > the rpc_response_timeout option as Neutron server(s) may be overloaded > > and unable to respond quickly enough.: MessagingTimeout: Timed out > > waiting for a reply to message ID 965fa44ab4f6462fa378a1cf7259aad4 > > 2020-03-10 19:02:33.548 16242 ERROR neutron.common.rpc > > [req-a858afbb-5083-4e21-a309-6ee53582c4d9 - - - - -] Timeout in RPC > > method release_dhcp_port. Waiting for 3347 seconds before next attempt. > > If the server is not down, consider increasing the > > rpc_response_timeout option as Neutron server(s) may be overloaded and > > unable to respond quickly enough.: MessagingTimeout: Timed out waiting > > for a reply to message ID 7937465f15634fbfa443fe1758a12a9c > > > > Does anyone know if there is anymore tuning to be done at all? > > Upgrading for us at the moment to a newer version isn't really an > > option unfortunately. > > > > Because of our setup, we also have roughly 800 routers enabled and I > > know that will be putting a load on the system. However these problems > > have only started to happen roughly 1 week ago and have steadily got worse. > > > > If anyone has any use cases for this or any more recommendations that > > would be great. > > > > Many thanks, > > > > > From amotoki at gmail.com Thu Mar 12 03:48:54 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 12 Mar 2020 12:48:54 +0900 Subject: Add pytest, pytest-django and pytest-html to global requirements In-Reply-To: <20200311210827.GA90029@sinanju> References: <20200311190623.rrezlwz6kfoiuf4o@mthode.org> <2f761df2a3db8c080f5cab75710b825242ed79f9.camel@redhat.com> <4636a78af2b78b8d316ff5cd3d2b76a1a2173cd7.camel@redhat.com> <20200311210827.GA90029@sinanju> Message-ID: I am commenting only on horizon specific cases. I believe it helps understanding the situation and discussing the direction. > That being said horizon has always been an exception because django > has special requirements for testing (mainly they publish their testing > framework as an extension for a test frameworks other than stdlib unittest). In > the past it was needed a nose extension and now it looks like that has been > updated to be a pytest exception. 
I don't see a problem to just morph the old > exception that horizon uses nose to horizon uses pytest if it's really > necessary to test django. Previously we used nose as the test runner, but the Django (default) test runner is used now. This happened because Django support in nose looks unmaintained, the migration to the Django test runner was the simplest option, and we are confident that it will be maintained as long as the Django project is alive. The only reason we did/could not choose stestr is that there is no integration support for Django applications in stestr. IIUC, roughly speaking, what the Django testing framework provides is: (1) convenient methods for writing tests (for example, custom assertions, sending requests to the Django framework and so on) (2) setup of Django for testing, mainly including loading the Django settings (3) fixtures for Django database integration (which are not used in horizon) (4) the Django test runner (1) is useful for test writers and (2) is required as the Django settings module is common to Django projects. (1) and (2) are things re-implemented by individual projects like horizon, and (2) is what pytest-django and the nose Django plugin do. (2) is missing in stestr. We already have test runners for Django: the Django default test runner and pytest (with pytest-django), and they are maintained well, so from the horizon team perspective it is better to use an existing one. The downside of the Django default test runner is that there seems to be no good way to handle test results. It has no subunit support; it looks like stdout is the only place to see test results. I think this is the reason Oleksii proposed pytest usage. The above is my understanding of Django testing. I have no preference on a test runner in horizon. If someone can work on stestr Django integration, it would be great. It would provide more consistency with other OpenStack projects. If pytest(-django) is adopted I would like to see zuul role(s) along with the proposal. Akihiro Motoki (irc: amotoki) On Thu, Mar 12, 2020 at 6:10 AM Matthew Treinish wrote: > On Wed, Mar 11, 2020 at 10:32:32PM +0200, Oleksii Petrenko wrote: > > > > > Starting with stestr, could you explain why it was not good enough for > > > > > your use case? > > > > > > Stestr will not provide us with fixtures for django (for future use), > > also with the help of pytest, we probably would be able to unify html > > creation all across our projects. Also, xml exporting in different > > formats can help users with aggregating test statistics. > > The aggregated data view already exists: > > http://status.openstack.org/openstack-health/#/ > > We also have 2 different html views of a test run depending on the level of > detail you want: > > https://7dd927a4891851ac968e-517bfbb0b76f5445108257ba8a306671.ssl.cf5.rackcdn.com/712315/2/check/tempest-full-py3/c20f9f1/testr_results.html > and > https://7dd927a4891851ac968e-517bfbb0b76f5445108257ba8a306671.ssl.cf5.rackcdn.com/712315/2/check/tempest-full-py3/c20f9f1/controller/logs/stackviz/index.html#/stdin/timeline > > As for "xml exporting" I assume you're talking about xunitxml. There are > several limitations around it, especially for parallel test execution > which is why stestr is built around and uses subunit.
But, if you want to > generate xunitxml from subunit for any reason this is straightforward to > do, it's built into subunit: > > https://github.com/testing-cabal/subunit/blob/master/filters/subunit2junitxml > > > > > > more of a general question if they are test only deps that wont be used at runtime which i think is the case in all of > > > > the above do they enven need to be in Global-requirements? ignoring the fact that devstack installes all test- > > > > requireemtes when it in stall packages which is a different topic if this is only used for generating html report for > > > > tests then it seams liek we would not need to corrdiate the software version. > > pytest is needed to generate coverage reports. > > I don't understand this either, we have coverage jobs already running on most > projects. The reports get published as part of the job artifacts: > > https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_fd1/706509/2/check/openstack-tox-cover/fd1df05/cover/index.html > > > Also as I pointed out on the review, this is not the first time we've > discussed this. Since I started working on OpenStack why not runner or > framework X (at one point it was nose, then it switched to pytest) has > been brought up by someone. We tried to write it down in the project > testing interface: > > https://governance.openstack.org/tc/reference/pti/python.html#python-test-running > > Basically, by using a unittest based runner anybody can use their preferred > test runner locally. stestr is used for CI because of the parallel execution > and subunit integration to leverage all the infra tooling built around it. > > That being said horizon has always been an exception because django > has special requirements for testing (mainly they publish their testing > framework as an extension for a test frameworks other than stdlib unittest). In > the past it was needed a nose extension and now it looks like that has been > updated to be a pytest exception. I don't see a problem to just morph the old > exception that horizon uses nose to horizon uses pytest if it's really > necessary to test django. > > If you do end up using pytest because there is no other choice for django > testing, you can convert the xunitxml to subunit to integrate it into all > those existing tools I mentioned before with either: > > https://github.com/mtreinish/health-helm/blob/master/junitxml2subunit.py > or > https://github.com/mtreinish/junitxml2subunit > > (do note stackviz and subunit2sql/openstack-health won't be really useful > with xunitxml to subunit conversion because xunitxml doesn't track > execution timestamps) > > -Matt Treinish From arnaud.morin at gmail.com Thu Mar 12 06:46:49 2020 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Thu, 12 Mar 2020 06:46:49 +0000 Subject: [neutron][largescale-sig] Debugging and tracking missing flows with l2pop In-Reply-To: <6A0F6E0F-9D6E-4ED2-B4AC-F862885220B4@syntaxhighlighted.com> References: <6A0F6E0F-9D6E-4ED2-B4AC-F862885220B4@syntaxhighlighted.com> Message-ID: <20200312064649.GI29109@sync> Hey Krzysztof, In my company we don't use l2pop. I remember that it has some downsides when scaling a lot (more than 1k computes in a region) but I don't remember the details. Anyway, our agent is based on an OVS agent, which is also using OpenFlow rules. We do monitor the OpenFlow rules outside of neutron with custom tools.
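To give the idea, a very stripped-down sketch of such a check could look like this (this is not our actual tooling, and it assumes admin credentials sourced on the compute plus the default br-int integration bridge):

    #!/bin/bash
    # MACs neutron believes are bound to this hypervisor
    openstack port list --host "$(hostname)" -f value -c mac_address | sort -u > /tmp/expected_macs
    # MACs that actually appear as destinations in the openflow tables
    ovs-ofctl dump-flows br-int | grep -o 'dl_dst=[0-9a-f:]*' | cut -d= -f2 | sort -u > /tmp/flow_macs
    # ports neutron knows about but which have no flow at all on this host
    comm -23 /tmp/expected_macs /tmp/flow_macs

The real tooling does more than that (rule-level comparison, leak detection in the other direction), but the principle is the same: build the expected state from the neutron side and diff it against what OVS really has.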
We do that mainly for 2 reasons: - we want to make sure that neutron won't leak any rule, as this could be very harmful - we want to make sure that neutron did not miss any rule when configuring a specific port, which could lead to a broken network connection for our clients. We track the missing openflow rules on the compute itself, because we don't want to rely on a centralized system for that. So, to do that, we found a way to pull information about ports on the compute itself, from the neutron server and database. Cheers, -- Arnaud Morin On 11.03.20 - 14:29, Krzysztof Klimonda wrote: > Hi, > > (This is stein deployment with 14.0.2 neutron release) > > I’ve just spent some time debugging a missing connection between two VMs running on OS stein with ovs+l2pop enabled and the direct cause was missing flows in table 20 and a very incomplete flood flow in table 22. Restarting neutron-openvswitch-agent on that host has fixed the issue. > > Last time we’ve encountered missing flood flows (in another pike-based deployment), we tracked it down to https://review.opendev.org/#/c/600151/ and since then it was stable. > > My initial thought was that we were hitting the same bug - a couple of VMs are scheduled on the same compute, 3 ports are activated at the same time, and the flood entry is not broadcasted to other computes. However that issue was only affecting one of the computes, and it was the only one missing both MAC entries in table 20 and VXLAN tunnels in table 22. > > The only other idea I have is that the compute with missing flows have not received them from rabbitmq, but there I see nothing in logs that would suggest that agent was disconnected from rabbitmq. > > So at this point I have three questions: > > - what would be a good place to look next to track down those missing flows > - for other operators, how stable do you find l2pop in general? and if you have problems with missing flows in your environment, do you try to monitor your deployment for that? > > -Chris From zhipengh512 at gmail.com Thu Mar 12 07:55:07 2020 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 12 Mar 2020 15:55:07 +0800 Subject: [cyborg] Proposing core reviewers In-Reply-To: References: <8232aae9fd2fcd78bbcf039dc1cc680cba417ca0.camel@redhat.com> Message-ID: I have no particular objection to removing these particular two previously active cores. My concern, however, is that when we start to set a new precedent, we should do it right, which means we should have an agreed set of metrics that provides the objective qualification for the "core removal" process. The original proposed qualification is "18 months of no participation in meetings, no code contributions and no reviews", and I would like us to clarify the following: - Is it a consecutive 18-month period over which the construed "absence criteria" must be met? - For the "absence criteria", could we settle upon a set of exhaustive metrics: no meetings, no code contributions, no reviews, no email discussion participation, anything more? - If there were a set of agreed "absence criteria", what is the logical connection between these pre-conditions? Is it an "AND" (all of the pre-conditions shall be satisfied) or just an "OR" (only one of the pre-conditions suffices)? Once we have a concrete rule set up, we are good to go with a current core reviewer vote for the record of removal, as far as I understand :) Due process is very important.
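As a side note, if we do settle on a review-based criterion, checking it can be scripted against the Gerrit REST API rather than computed by hand. A rough, untested sketch (double-check the exact query operators against our Gerrit version):

    #!/bin/bash
    # Count changes in openstack/cyborg that a given person reviewed
    # during the last 18 months. REVIEWER is a hypothetical address.
    REVIEWER="someone@example.com"
    # Gerrit prefixes JSON responses with a )]}' line, hence the tail
    curl -s "https://review.opendev.org/changes/?q=project:openstack/cyborg+reviewedby:${REVIEWER}+-age:18month" \
        | tail -n +2 | python -c 'import json,sys; print(len(json.load(sys.stdin)))'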
On Thu, Mar 12, 2020 at 8:40 AM Nadathur, Sundar wrote: > > > From: Sean Mooney > > Sent: Wednesday, March 11, 2020 9:37 AM > > > > On Thu, 2020-03-12 at 00:17 +0800, Zhipeng Huang wrote: > > > Big +1 for Brin and shogo's nomination and well deserved :) > > > > > > I'm a little bit concerned over the 18 months period. The original > > > rule we setup is volunteer step down, since this is a small team we > > > want to acknowledge everyone that has made significant contributions. > > > Some of the inactive core reviewers like Justin Kilpatrick have moved > > > on a long time ago, and I don't see people like him could do any harm > to > > the project. > > > > > > But if the core reviewer has a size limit in the system, that would be > > > reasonable to replace the inactive ones with the new recruits :) > > it is generally considerd best pratice to maintian the core team adding > or > > removing people based on there activity. if a core is removed due to in > > activity and they come back they can always be restored but it give a bad > > perception if a project has like 20 core but only 2 are active. as a new > > contibutor you dont know which ones are active and it can be frustrating > to > > reach out to them and get no responce. > > also just form a project healt point of view it make the project look > like its > > more diverse or more active then it actully is which is also not > generally a > > good thing. > > > > that said core can step down if they feel like they can contribute time > > anymore when ever they like so and if a core is steping a way for a few > > months but intends to come back they can also say that in advance and > there > > is no harm in leaving them for a cycle or two but in general after a > period of > > in activity (usally more then a full release/6months) i think its good > to reduce > > back down the core team. > > > > > > Just my two cents > > As of now, Cyborg core team officially has 12 members [1]. That is hardly > small. > > Justin Kilpatrick seems to be gone for good; he didn't respond to my > emails. Rushil Chugh confirmed that he is not working on OpenStack anymore > and asked to step down as core. With due thanks to him for his > contributions, I'll go ahead. > > Those are the two cores I had in mind. Agree with Sean that it is better > to keep the list of core reviewers up to date. With all the changes in > Cyborg over the past 18 months, it will be tough for a person to jump in > after a long hiatus and resume as a core reviewer. Even if they want to > come back, it is better for them to come up to speed first. > > Given this background, if there is any objection to the removal of these > two cores, please let me know. > > [1] https://review.opendev.org/#/admin/groups/1243,members > > Regards, > Sundar > > > > On Wed, Mar 11, 2020 at 10:19 PM Nadathur, Sundar > > > > > > wrote: > > > > > > > Hello all, > > > > Brin Zhang has been actively contributing to Cyborg in various > > > > areas, adding new features, improving quality, reviewing patches, > > > > and generally helping others in the community. Despite the > > > > relatively short time, he has been one of the most prolific > > > > contributors, and brings an enthusiastic and active mindset. I would > > > > like to thank him and acknowledge his significant contributions by > > proposing him as a core reviewer for Cyborg. > > > > > > > > Shogo Saito has been active in Cyborg since Train release. 
He has > > > > > been driving the Cyborg client improvements, including its revamp to > > > > > use OpenStackSDK. Previously he was instrumental in the transition > > > > > to Python 3, testing and fixing issues in the process. As he has > > > > > access to real FPGA hardware, he brings a users’ perspective and > > > > > also tests Cyborg with real hardware. I would like to thank and > > > > > acknowledge him for his steady valuable contributions, and propose him > > as a core reviewer for Cyborg. > > > > > > > > > > Some of the currently listed core reviewers have not been > > > > > participating for a lengthy period of time. It is proposed that > > > > > those who have had no contributions for the past 18 months – i.e. no > > > > > participation in meetings, no code contributions and no reviews – be > > > > > removed from the list of core reviewers. > > > > > > > > > > If no objections are made known by March 20, I will make the changes > > > > > proposed above. > > > > > > > > > > Thanks. > > > > > > > > > > Regards, > > > > > Sundar -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Mar 12 09:54:44 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 12 Mar 2020 10:54:44 +0100 Subject: [all] Removing defunct meeting records In-Reply-To: <20200312010333.auxhcao54e6gbf42@yuggoth.org> References: <20200312010333.auxhcao54e6gbf42@yuggoth.org> Message-ID: Jeremy Stanley wrote: > On 2020-03-11 16:15:32 +0100 (+0100), Thierry Carrez wrote: > [...] >> we have too many meetings (76, in case you were wondering), too >> much energy spent running them, too much frustration when nobody >> joins. > [...] > > Here's a list of 25 currently defined meetings which have not been > held in 2020 (though it's possible some are being held with a > different meeting_id passed to #startmeeting than is listed in the > meeting record): > > CloudKitty Team Meeting > Congress Team Meeting > Containers Team Meeting > Documentation Team Meeting > First Contact SIG Meeting > Freezer Meeting > Glance Bug Squad Meeting > Group Based Policy Team Meeting > Heat (Orchestration) Team Meeting > I18N Team Meeting > Interop Working Group Meeting > Kuryr Project Office Hours > LOCI Development Meeting > Mistral Meeting > Networking VPP team meeting > OpenStack Charms > Placement Team Office Hour > PowerVM Driver Meeting > Public Cloud SIG > Searchlight Team Meeting > Telemetry Team Meeting > Trove (DBaaS) Team Meeting > Upgrades SIG > Vitrage Team Meeting > Zaqar Team Meeting Note that I already filed for removal of those which did not happen for over a year: https://review.opendev.org/#/q/topic:abandoned-meetings-q1-2020 -- Thierry Carrez (ttx)
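For the record, the removal itself is just deleting the meeting's definition file from the opendev/irc-meetings repository, roughly like this (the yaml filename below is made up, each meeting has its own file):

    git clone https://opendev.org/opendev/irc-meetings
    cd irc-meetings
    git rm meetings/example-team-meeting.yaml   # hypothetical filename
    git commit -m "Remove defunct example team meeting"
    git review -t abandoned-meetings-q1-2020

So if your team still needs one of the meetings listed above, the easiest way to keep it is to comment on (or -1) the corresponding change under that topic.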
From ignaziocassano at gmail.com Thu Mar 12 10:38:44 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 12 Mar 2020 11:38:44 +0100 Subject: [qeeens][neutron] migrating from iptables_hybrid to openvswitch Message-ID: Hello All, I am facing some problems migrating from the iptables_hybrid firewall to the openvswitch firewall on centos 7 queens. I am doing this because I want to enable security group logs, which require the openvswitch firewall. I would like to migrate without restarting my instances. I started by moving all instances off compute node 1. Then I configured the openvswitch firewall on compute node 1. Instances migrated from compute node 2 to compute node 1 without problems. Once compute node 2 was empty, I migrated it to openvswitch. But now instances do not migrate from node 1 to node 2 because migration requires the presence of the qbr bridge on node 2. This happened because migrating instances from node 2 with iptables_hybrid to compute node 1 with openvswitch does not put the tap under br-int as required by the openvswitch firewall; the qbr bridge is still present on compute node 1. Once I enabled openvswitch on compute node 2, migration from compute node 1 fails because it expects qbr on compute node 2. So I think I should move the tap interfaces from qbr to br-int on the fly on compute node 1 before migrating to compute node 2, but that is a huge amount of work on a lot of instances. Any workaround, please ? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Mar 12 11:48:26 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 12 Mar 2020 11:48:26 +0000 Subject: [cyborg] Proposing core reviewers In-Reply-To: References: <8232aae9fd2fcd78bbcf039dc1cc680cba417ca0.camel@redhat.com> Message-ID: On Thu, 2020-03-12 at 15:55 +0800, Zhipeng Huang wrote: > I have no particular objection about removing these particular two previous > active cores, however I do concern that when we start to build a new > precedence, we should do it right which means we should have an agreed set > of metrics that provides the objective qualification of the "core removal" > process. > > The original proposed qualification is "18 months no participation in > meetings, no code contributions and no reviews", I would like that we could > make the clarification that: > > - Is it a consecutive 18 months period with the construed "absence > criteria" met ? I would think 18 months is slightly too long; it certainly should not be longer than that. Between 12 and 18 feels right to me. After about 2 cycles things can have changed significantly, after 3 even more so. 6 months feels way too short, but 12 to 18 I think is about right. > - For the "absence criteria", could we settle upon a set of exhaustive > metrics: no meeting, no code contribution, no review, no email discussion > participation, anything more ? The only metric for retaining the core reviewer role should be code reviews. Code contribution without review should not be a consideration for keeping core reviewer status. Meetings, IRC and email are also not really relevant, with one exception: if cyborg were to do a virtual pre-PTG where spec designs were discussed and reviewed on the mailing list via email (placement and nova have tried this in the last cycle or two), then I would consider that the same as gerrit review. I should qualify this: the metric should not be based on the number of reviews alone; how detailed and well reasoned the comments are should be a large factor. A large number of +1s with no comment is generally an anti-pattern when considering someone as a core. Asking questions to clarify the design choices and to confirm that the author's intent and your understanding are in sync is perfectly valid to do while also voting +1 because you believe the patch is correct, and it should be encouraged over a bare +1 with no comment. The +1/-1 ratio should also be a factor.
If someone always +1s and never votes -1, they are likely not reviewing critically. Other factors such as email participation, meeting attendance, IRC presence or other community participation are supporting factors that suggest a good candidate for becoming a core, but on their own they should not be a valid criterion for granting or retaining the core reviewer role in a project. My understanding of the above is derived from the definition of what a core reviewer is https://docs.openstack.org/project-team-guide/open-development.html#core-reviewers the review criteria https://docs.openstack.org/project-team-guide/open-development.html#reviews-guidelines https://docs.openstack.org/project-team-guide/review-the-openstack-way.html and my general experience with contributing to different openstack projects. The core reviewer role within a project, while similar in some respects to a maintainer role in other opensource models, is not the same. A maintainer's role tends to focus more on code authorship in addition to review, which is not a factor in the core reviewer role in openstack. If you never write a single line of code but make detailed and technically correct reviews in openstack, that makes you an amazing core reviewer. Conversely, closing 100 bugs in a release with commits while doing no code review would make you a good maintainer but a bad core reviewer; you would be an invaluable contributor for all your bug fixing work, but not a good reviewer, which is the focus of the core reviewer role. > - If there were a set of agreed "absence criteria"s, what are the logical > connection between these pre-conditions ? Is it an "AND" (all of the > pre-conditions shall be satisfied) or just "OR" (only one of the > pre-conditions satisfies) > > Once we have a concrete rule setup, we are good to go with a current core > reviewer vote for the record of removing, as far as I understand :) Well, any core can step down without any kind of vote at any time. They just need to go to https://review.opendev.org/#/admin/groups/1243,members , tick their name, remove themselves, and tell the rest of the team so they know. Unless the person being removed objects, or one of the core team objects to the 2 people being proposed, I don't think there is a reason to wait in this case, but that is up to the core team to decide. > > Due process is very important. Actually, I would think that having concrete rules like this is probably not useful, but if you do write them down you should stick with them. > > On Thu, Mar 12, 2020 at 8:40 AM Nadathur, Sundar > wrote: > > > > > > From: Sean Mooney > > > Sent: Wednesday, March 11, 2020 9:37 AM > > > > > > On Thu, 2020-03-12 at 00:17 +0800, Zhipeng Huang wrote: > > > > Big +1 for Brin and shogo's nomination and well deserved :) > > > > > > > > I'm a little bit concerned over the 18 months period. The original > > > > rule we setup is volunteer step down, since this is a small team we > > > > want to acknowledge everyone that has made significant > contributions. > > > > Some of the inactive core reviewers like Justin Kilpatrick have > moved > > > > on a long time ago, and I don't see people like him could do any > harm > > > > > > to > > > > the project. > > > > > > > > But if the core reviewer has a size limit in the system, that > would be > > > > reasonable to replace the inactive ones with the new recruits :) > > > > > > it is generally considerd best pratice to maintian the core team > adding > > > > > > or > > > removing people based on there activity.
if a core is removed due to in > > > activity and they come back they can always be restored but it give a bad > > > perception if a project has like 20 core but only 2 are active. as a new > > > contibutor you dont know which ones are active and it can be frustrating > > > > to > > > reach out to them and get no responce. > > > also just form a project healt point of view it make the project look > > > > like its > > > more diverse or more active then it actully is which is also not > > > > generally a > > > good thing. > > > > > > that said core can step down if they feel like they can contribute time > > > anymore when ever they like so and if a core is steping a way for a few > > > months but intends to come back they can also say that in advance and > > > > there > > > is no harm in leaving them for a cycle or two but in general after a > > > > period of > > > in activity (usally more then a full release/6months) i think its good > > > > to reduce > > > back down the core team. > > > > > > > > Just my two cents > > > > As of now, Cyborg core team officially has 12 members [1]. That is hardly > > small. > > > > Justin Kilpatrick seems to be gone for good; he didn't respond to my > > emails. Rushil Chugh confirmed that he is not working on OpenStack anymore > > and asked to step down as core. With due thanks to him for his > > contributions, I'll go ahead. > > > > Those are the two cores I had in mind. Agree with Sean that it is better > > to keep the list of core reviewers up to date. With all the changes in > > Cyborg over the past 18 months, it will be tough for a person to jump in > > after a long hiatus and resume as a core reviewer. Even if they want to > > come back, it is better for them to come up to speed first. > > > > Given this background, if there is any objection to the removal of these > > two cores, please let me know. > > > > [1] https://review.opendev.org/#/admin/groups/1243,members > > > > Regards, > > Sundar > > > > > > On Wed, Mar 11, 2020 at 10:19 PM Nadathur, Sundar > > > > > > > > wrote: > > > > > > > > > Hello all, > > > > > Brin Zhang has been actively contributing to Cyborg in various > > > > > areas, adding new features, improving quality, reviewing patches, > > > > > and generally helping others in the community. Despite the > > > > > relatively short time, he has been one of the most prolific > > > > > contributors, and brings an enthusiastic and active mindset. I would > > > > > like to thank him and acknowledge his significant contributions by > > > > > > proposing him as a core reviewer for Cyborg. > > > > > > > > > > Shogo Saito has been active in Cyborg since Train release. He has > > > > > been driving the Cyborg client improvements, including its revamp to > > > > > use OpenStackSDK. Previously he was instrumental in the transition > > > > > to Python 3, testing and fixing issues in the process. As he has > > > > > access to real FPGA hardware, he brings a users’ perspective and > > > > > also tests Cyborg with real hardware. I would like to thank and > > > > > acknowledge him for his steady valuable contributions, and propose > > > > him > > > as a core reviewer for Cyborg. > > > > > > > > > > Some of the currently listed core reviewers have not been > > > > > participating for a lengthy period of time. It is proposed that > > > > > those who have had no contributions for the past 18 months – i.e. no > > > > > participation in meetings, no code contributions and no reviews – be > > > > > removed from the list of core reviewers. 
> > > > > > > > > > If no objections are made known by March 20, I will make the changes > > > > > > proposed above. > > > > > > > > > > > > Thanks. > > > > > > > > > > > > Regards, > > > > > > Sundar > > > > > > > > > > > > -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Mar 12 11:55:55 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 12 Mar 2020 12:55:55 +0100 Subject: [all][dev][qa] cirros 0.5.1 Message-ID: Hiya Folks, as you might have noticed, cirros 0.5.1 has been released. This build seems to be passing the current devstack gate. [1] Big thanks to hrw and smoser for letting cirros 0.5.1 happen (and cirros having a bright future yet again). Also thanks to mordred for cleaning up SDK testing to pass. :-) I think it would be nice to merge this in Ussuri still, preferably before April. On the other hand, we all know that the devstack gate is not super comprehensive and hence I would like to ask teams whose tests depend on interaction with the guest OS to test their gates on this patch (or help me help you do that). I deliberately marked it W-1 to avoid merging too early. Let the discussion begin. :-) [1] https://review.opendev.org/711492 -yoctozepto From mats.karlsson at apistraining.com Thu Mar 12 12:12:38 2020 From: mats.karlsson at apistraining.com (Mats Karlsson) Date: Thu, 12 Mar 2020 12:12:38 +0000 Subject: Missing OVA in GitHub Message-ID: Hi, I'm new to OpenStack and found out that there is a VM with OpenStack at https://github.com/openstack/upstream-institute-virtual-environment But it looks like the VM (http://bit.ly/vm-2019-shanghai-v1) is missing, and I can't file a support issue in that repo, so that's why I'm asking here. Is this a known issue ? Regards Mats Karlsson Trainer Rosenlundsgatan 54 SE-118 63 Stockholm, Sweden M: +46 766 967 835 T: +46 8 555 105 15 E:mats.karlsson at apistraining.com W: www.apistraining.com Connect with us: LinkedIn Youtube Facebook Twitter Instagram -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 1657 bytes Desc: image002.jpg URL: From james.denton at rackspace.com Thu Mar 12 12:30:42 2020 From: james.denton at rackspace.com (James Denton) Date: Thu, 12 Mar 2020 12:30:42 +0000 Subject: [qeeens][neutron] migrating from iptables_hybrid to openvswitch In-Reply-To: References: Message-ID: <803A7B19-7B9E-423C-9358-B0138332A105@rackspace.com> Hi Ignazio, I tested a process that converted iptables_hybrid to openvswitch in-place, but not without a hard reboot of the VM and some massaging of the existing bridges/veths. Since you are live-migrating, though, you might be able to get around that. Regardless, to make this work, I had to update the port’s vif_details in the Neutron DB and set ‘ovs_hybrid_plug’ to false. Something like this: > use neutron; > update ml2_port_bindings set vif_details='{"port_filter": true, "bridge_name": "br-int", "datapath_type": "system", "ovs_hybrid_plug": false}' where port_id='3d88982a-6b39-4f7e-8772-69367c442939' limit 1; So, perhaps making that change prior to moving the VM back to the other compute node will do the trick.
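If you want to sanity-check a port after flipping that flag, something along these lines on the compute node should confirm the wiring (sketch only; the tap device name is derived from the first 11 characters of the port UUID, so adjust for your port):

    # with hybrid plug the tap hangs off a qbrXXX linux bridge; after the
    # conversion it should be plugged straight into the integration bridge
    ovs-vsctl port-to-br tap3d88982a-6b    # expect: br-int
    # and the binding details should now show ovs_hybrid_plug: false
    openstack port show 3d88982a-6b39-4f7e-8772-69367c442939 -c binding_vif_details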
Good luck! James From: Ignazio Cassano Date: Thursday, March 12, 2020 at 6:41 AM To: openstack-discuss Subject: [qeeens][neutron] migrating from iptables_hybrid to openvswitch CAUTION: This message originated externally, please use caution when clicking on links or opening attachments! Hello All, I am facing some problems migrating from iptables_hybrid frirewall to openvswitch firewall on centos 7 queens, I am doing this because I want enable security groups logs which require openvswitch firewall. I would like to migrate without restarting my instances. I startded moving all instances from compute node 1. Then I configured openvswitch firewall on compute node 1, Instances migrated from compute node 2 to compute node 1 without problems. Once the compute node 2 was empty, I migrated it to openvswitch.
> > But now instances does not migrate from node 1 to node 2 because it > requires the presence of qbr bridge on node 2 > > > > This happened because migrating instances from node2 with iptables_hybrid > to compute node 1 with openvswitch, does not put the tap under br-int as > requested by openvswich firewall, but qbr is still present on compute node > 1. > > Once I enabled openvswitch on compute node 2, migration from compute node > 1 fails because it exprects qbr on compute node 2 . > > So I think I should moving on the fly tap interfaces from qbr to br-int on > compute node 1 before migrating to compute node 2 but it is a huge work on > a lot of instances. > > > > Any workaround, please ? > > > > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kklimonda at syntaxhighlighted.com Thu Mar 12 13:12:30 2020 From: kklimonda at syntaxhighlighted.com (Krzysztof Klimonda) Date: Thu, 12 Mar 2020 14:12:30 +0100 Subject: [neutron][largescale-sig] Debugging and tracking missing flows with l2pop In-Reply-To: <20200312064649.GI29109@sync> References: <6A0F6E0F-9D6E-4ED2-B4AC-F862885220B4@syntaxhighlighted.com> <20200312064649.GI29109@sync> Message-ID: <2319D79E-5C11-4E7F-A710-977807B894B9@syntaxhighlighted.com> Thanks. Do your tools query neutron for ports, or do you query the database directly? I’m a bit concerned about having ~100 nodes query neutron for a list of ports and flows every minute or so, and how much extra load will that add on our neutron-server. What do you mean by neutron leaking rules? Is it security group rules that you are concerned about? -Chris > On 12 Mar 2020, at 07:46, Arnaud Morin wrote: > > Hey Krzysztof, > > In my company we dont use l2pop, I remember that it has some downsides > when scaling a lot (more that 1k computes in a region) but I dont > remember the details. > > Anyway, our agent is based on an OVS Agent, which is also using OpenFlow > rules. > We do monitor the openflow rules out of neutron with custom tools. > We do that mainly for 2 reasons: > - we want to make sure that neutron wont leak any rule, this could be > very harmful > - we want to make sure that neutron did not miss any rule when > configuring a specific port, which could lead a broken network > connection for our clients. > > We track the missing openflow rules on the compute itself, because we > dont want to rely on a centralized system for that. So, to do that, we > found a way to pull information about ports on the compute itself, from > neutron server and database. > > Cheers, > > -- > Arnaud Morin > > On 11.03.20 - 14:29, Krzysztof Klimonda wrote: >> Hi, >> >> (This is stein deployment with 14.0.2 neutron release) >> >> I’ve just spent some time debugging a missing connection between two VMs running on OS stein with ovs+l2pop enabled and the direct cause was missing flows in table 20 and a very incomplete flood flow in table 22. Restarting neutron-openvswitch-agent on that host has fixed the issue. >> >> Last time we’ve encountered missing flood flows (in another pike-based deployment), we tracked it down to https://review.opendev.org/#/c/600151/ and since then it was stable. >> >> My initial thought was that we were hitting the same bug - a couple of VMs are scheduled on the same compute, 3 ports are activated at the same time, and the flood entry is not broadcasted to other computes. However that issue was only affecting one of the computes, and it was the only one missing both MAC entries in table 20 and VXLAN tunnels in table 22. 
>> >> The only other idea I have is that the compute with missing flows have not received them from rabbitmq, but there I see nothing in logs that would suggest that agent was disconnected from rabbitmq. >> >> So at this point I have three questions: >> >> - what would be a good place to look next to track down those missing flows >> - for other operators, how stable do you find l2pop in general? and if you have problems with missing flows in your environment, do you try to monitor your deployment for that? >> >> -Chris From zhipengh512 at gmail.com Thu Mar 12 13:20:56 2020 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 12 Mar 2020 21:20:56 +0800 Subject: [cyborg] Proposing core reviewers In-Reply-To: References: <8232aae9fd2fcd78bbcf039dc1cc680cba417ca0.camel@redhat.com> Message-ID: I like what Sean proposed, and a cycle bound time criteria (2 cycles or 12 months) would be very good, and if we center the quality criteria on meaningful reviews would largely reduced the burden of unnecessary computations. I agree that we should document this and stick to it. For me "12 months + no meaningful review" would be a good enough concrete criteria, for removing the non-active core reviewer in a non-voluntarily step down fashion. On Thu, Mar 12, 2020 at 7:48 PM Sean Mooney wrote: > On Thu, 2020-03-12 at 15:55 +0800, Zhipeng Huang wrote: > > I have no particular objection about removing these particular two > previous > > active cores, however I do concern that when we start to build a new > > precedence, we should do it right which means we should have an agreed > set > > of metrics that provides the objective qualification of the "core > removal" > > process. > > > > The original proposed qualification is "18 months no participation in > > meetings, no code contributions and no reviews", I would like that we > could > > make the clarification that: > > > > - Is it a consecutive 18 months period with the construed "absence > > criteria" met ? > i would think 18 months is slightly too long, it certely should not be > longer then that. > between 12 and 18 feels right to me. after about 2 cycle things can have > changed significantly > after 3 even more so. 6 monts feels way to sort but 12 to 18 i think is > about right. > > - For the "absence criteria", could we settle upon a set of exhaustive > > metrics: no meeting, no code contribution, no review, no email discussion > > participation, anything more ? > the only metric for being a core rerviewer for being a core review should > be based on well code reviews. > code contibution without review should not be a consideration to keep core > reviewer status. > meeting, irc and email is also not really relevent with one exception. if > cyborg was to do a virtual pre-ptg where specs > desigin was disucssed and review on the mailing list via eamil, placmenet > and nova have tried this in the last cycle or > two, then i would consider that the same as gerrit review. > > i should qulify this that the metric should not be based on the number of > reviews alone but rather how how detailed > and well reasoned the comments are should be a large factor. a large > number of +1 with no comment is generally an anti > patteren fro considering some as a core. asking questions to clarify the > design choices and confrim the authors intent > and your understanding are in sync and make sense is perfectly valid to do > while also +1 because you belive in your view > the patch is correct and should be encurraged over +1 and no comment. 
> > the +1/-1 ratio should also be a factor. its if someone always +1s and > never -1 they likely are not reviewing correctly > > other factors such as email participation, meeting attendence, irc > presence or other community partisatpation are > supporting factors that suggest a good candiate for becomeing a core but > on there own should not be a vaild critia > for granting or retaining core reviewer role in a project. > > my understanding of the above is derived form the definition of what a > core review is > > https://docs.openstack.org/project-team-guide/open-development.html#core-reviewers > the review critia > https://docs.openstack.org/project-team-guide/open-development.html#reviews-guidelines > https://docs.openstack.org/project-team-guide/review-the-openstack-way.html > and my general experience with contributing to different openstack project. > The core reviewer role withing a project while similar in some repects to > a maintianer role in other opensouce models > is not the same. a maintainers role tends to focus more on code authorship > in addtion to review which is not a factor in > the core reviewer role in openstack. if you never write a singel line of > code but make detail and technically correct > reviews in openstack that makes you an amazing core reviewer. conversly > closing 100 bugs in a release with commits and > doing no code review would make you a good maintainer but a bad core > reviewer, you would be an invaluable contibutor for > all your bug fixing work but no a good reviewer which s the focus of the > core reviewer role. > > > - If there were a set of agreed "absence criteria"s, what are the logical > > connection between these pre-conditions ? Is it an "AND" (all of the > > pre-conditions shall be satisfied) or just "OR" (only one of the > > pre-conditions satisfies) > > > > Once we have a concrete rule setup, we are good to go with a current core > > reviewer vote for the record of removing, as far as I understand :) > well any core can step down without any kind of vote at any time. > they just need to go to > https://review.opendev.org/#/admin/groups/1243,members > tick there name and remove them selvs and well tell the rest of the team > so they know. > > unless the person being removed object or there is an object to the 2 > people being proposed by one > of the core team i don't think there is a reason to wait in this case but > that is up to the core team to decide. > > > > Due process is very important. > actually i would think that haveing concreate rules like this probably are > not useful but if you > do write them down you should stick with them. > > > > On Thu, Mar 12, 2020 at 8:40 AM Nadathur, Sundar < > sundar.nadathur at intel.com> > > wrote: > > > > > > > > > From: Sean Mooney > > > > Sent: Wednesday, March 11, 2020 9:37 AM > > > > > > > > On Thu, 2020-03-12 at 00:17 +0800, Zhipeng Huang wrote: > > > > > Big +1 for Brin and shogo's nomination and well deserved :) > > > > > > > > > > I'm a little bit concerned over the 18 months period. The original > > > > > rule we setup is volunteer step down, since this is a small team we > > > > > want to acknowledge everyone that has made significant > contributions. > > > > > Some of the inactive core reviewers like Justin Kilpatrick have > moved > > > > > on a long time ago, and I don't see people like him could do any > harm > > > > > > to > > > > the project. 
> > > > > > > > > > But if the core reviewer has a size limit in the system, that > would be > > > > > reasonable to replace the inactive ones with the new recruits :) > > > > > > > > it is generally considerd best pratice to maintian the core team > adding > > > > > > or > > > > removing people based on there activity. if a core is removed due to > in > > > > activity and they come back they can always be restored but it give > a bad > > > > perception if a project has like 20 core but only 2 are active. as > a new > > > > contibutor you dont know which ones are active and it can be > frustrating > > > > > > to > > > > reach out to them and get no responce. > > > > also just form a project healt point of view it make the project look > > > > > > like its > > > > more diverse or more active then it actully is which is also not > > > > > > generally a > > > > good thing. > > > > > > > > that said core can step down if they feel like they can contribute > time > > > > anymore when ever they like so and if a core is steping a way for a > few > > > > months but intends to come back they can also say that in advance and > > > > > > there > > > > is no harm in leaving them for a cycle or two but in general after a > > > > > > period of > > > > in activity (usally more then a full release/6months) i think its > good > > > > > > to reduce > > > > back down the core team. > > > > > > > > > > Just my two cents > > > > > > As of now, Cyborg core team officially has 12 members [1]. That is > hardly > > > small. > > > > > > Justin Kilpatrick seems to be gone for good; he didn't respond to my > > > emails. Rushil Chugh confirmed that he is not working on OpenStack > anymore > > > and asked to step down as core. With due thanks to him for his > > > contributions, I'll go ahead. > > > > > > Those are the two cores I had in mind. Agree with Sean that it is > better > > > to keep the list of core reviewers up to date. With all the changes in > > > Cyborg over the past 18 months, it will be tough for a person to jump > in > > > after a long hiatus and resume as a core reviewer. Even if they want to > > > come back, it is better for them to come up to speed first. > > > > > > Given this background, if there is any objection to the removal of > these > > > two cores, please let me know. > > > > > > [1] https://review.opendev.org/#/admin/groups/1243,members > > > > > > Regards, > > > Sundar > > > > > > > > On Wed, Mar 11, 2020 at 10:19 PM Nadathur, Sundar > > > > > > > > > > wrote: > > > > > > > > > > > Hello all, > > > > > > Brin Zhang has been actively contributing to Cyborg in > various > > > > > > areas, adding new features, improving quality, reviewing patches, > > > > > > and generally helping others in the community. Despite the > > > > > > relatively short time, he has been one of the most prolific > > > > > > contributors, and brings an enthusiastic and active mindset. I > would > > > > > > like to thank him and acknowledge his significant contributions > by > > > > > > > > proposing him as a core reviewer for Cyborg. > > > > > > > > > > > > Shogo Saito has been active in Cyborg since Train release. He has > > > > > > been driving the Cyborg client improvements, including its > revamp to > > > > > > use OpenStackSDK. Previously he was instrumental in the > transition > > > > > > to Python 3, testing and fixing issues in the process. As he has > > > > > > access to real FPGA hardware, he brings a users’ perspective and > > > > > > also tests Cyborg with real hardware. 
I would like to thank and > > > > > > acknowledge him for his steady valuable contributions, and > propose > > > > > > him > > > > as a core reviewer for Cyborg. > > > > > > > > > > > > Some of the currently listed core reviewers have not been > > > > > > participating for a lengthy period of time. It is proposed that > > > > > > those who have had no contributions for the past 18 months – > i.e. no > > > > > > participation in meetings, no code contributions and no reviews > – be > > > > > > removed from the list of core reviewers. > > > > > > > > > > > > If no objections are made known by March 20, I will make the > changes > > > > > > proposed above. > > > > > > > > > > > > Thanks. > > > > > > > > > > > > Regards, > > > > > > Sundar > > > > > > > > > > > > -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Thu Mar 12 13:23:41 2020 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Thu, 12 Mar 2020 13:23:41 +0000 Subject: [tripleo] In-Reply-To: References: <1B1A2143-A018-408D-9515-A367CEA952B5@inaugust.com> Message-ID: <87h7yt7llu.tristanC@fedora> On Tue, Mar 10, 2020 at 19:38 Emilien Macchi wrote: > On Tue, Mar 10, 2020 at 10:41 AM Monty Taylor wrote: > >> Yay! >> >> When you have brainspace after firefighting (always fun) - maybe we should >> find a time to talk about whether our image building and publishing >> automation could help you out here. No rush - this is one of those “we’ve >> got some tools we might be able to leverage to help” - just ping me >> whenever. >> > > Hey Monty, > > The CI team is presently busy with CentOS 8 fires but I would be happy to > help and work together on convergence. > Maybe I can start by explaining how our process works, then you can do the > same and we see where we can collaborate. > > The TL;DR is that we have built TripleO CLI and Ansible roles to consume > Kolla tooling and build our images. > For what its worth, we, the software factory project team, would like to investigate using zuul pipeline to periodically update, test and promote a collection of images. Note that the goal is to update and promote only valid layers (instead of a doing a full rebuild each time). We actually plan to work on that story[0] in the upcoming weeks, it seems like zuul-jobs already feature most of the image building roles we would need, but we might require some modifications to be able to detect if a layer needs to be tested (e.g. looks for "Nothing to do." in stdout) Perhaps we can adapt the zuul-jobs role in such a way that it would support the update use-case as well as using TripleO CLI and roles. Cheers, -Tristan [0] https://tree.taiga.io/project/morucci-software-factory/us/3419 -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From Arkady.Kanevsky at dell.com Thu Mar 12 14:05:25 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Thu, 12 Mar 2020 14:05:25 +0000 Subject: FW: 2020 OSF Events & coronavirus In-Reply-To: References: <1583528201.853712216@emailsrvr.com> Message-ID: <168c8eed885645daaa82340403231e65@AUSX13MPS308.AMER.DELL.COM> Agree that going virtual makes most sense given current status From: Emilien Macchi Sent: Wednesday, March 11, 2020 5:46 PM To: Mark Collier Cc: openstack-discuss; Jonathan Bryce Subject: Re: FW: 2020 OSF Events & coronavirus [EXTERNAL EMAIL] Hi Mark, Thanks for the transparency, as usual. I have a few thoughts, please read inline. On Fri, Mar 6, 2020 at 4:04 PM Mark Collier > wrote: upcoming event in Vancouver is no exception. The OpenDev tracks > each morning will be programmed by volunteers from the community, and the project > teams will be organizing their own conversations as well each afternoon M-W, and > all day Thursday. > > But the larger question is here: should the show go on? > > The short answer is that as of now, the Vancouver and Berlin events are still > scheduled to happen in June (8-11) and October (19-23), respectively. > > However, we are willing to cancel or approach the events in a different way (i.e. > virtual) if the facts indicate that is the best path, and we know the facts are > changing rapidly. One of the most critical inputs we need is to hear from each of > you. We know that many of you rely on the twice-annual events to get together and > make rapid progress on the software, which is one reason we are not making any > decisions in haste. We also know that many of you may be unable or unwilling to > travel in June, and that is critical information to hear as we get closer to the > event so that we can make the most informed decision. I believe that we, as a community should show the example and our strengths by cancelling the Vancouver event and organize a virtual event like some other big events are doing. There is an opportunity for the OSF to show leadership in Software communities and acknowledge the risk of spread during that meeting; not only for the people attending it but for also those in contact with these people later. I'm not a doctor nor I know much about the virus; but I'm not interested to travel and take the risk to 1) catch the virus and 2) spread it at home and in my country; and as a community member, I feel like our responsibility is also to maintain ourselves healthy. In my opinion, the sooner we cancel, the better we can focus on organizing the virtual meetings, and also we can influence more communities to take that kind of decisions. Thanks Mark for starting that discussion, it's a perfect sign of how healthy is our community; and hopefully it will continue to be. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Thu Mar 12 14:37:03 2020 From: mark at stackhpc.com (Mark Goddard) Date: Thu, 12 Mar 2020 14:37:03 +0000 Subject: [ironic] proposing Iury Gregory for bifrost-core, ironic-inspector-core, sushy-core In-Reply-To: References: Message-ID: On Wed, 11 Mar 2020 at 18:55, Julia Kreger wrote: > > Iury has been working hard across the ironic community and has been > quite active in changing and improving our CI, as well as reviewing > code contributions and helpfully pointing out issues or items that > need to be fixed. 
I feel that he is on track to join ironic-core in > the next few months, but first I propose we add him to bifrost-core, > ironic-inspector-core, and sushy-core. > > Any objections? > +1 From thierry at openstack.org Thu Mar 12 14:37:54 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 12 Mar 2020 15:37:54 +0100 Subject: [all][tc] Moving PTL role to "Maintainers" In-Reply-To: <2e142636-0070-704c-c5f7-1e035bc9d406@openstack.org> References: <2e142636-0070-704c-c5f7-1e035bc9d406@openstack.org> Message-ID: Thierry Carrez wrote: > [...] > So one solution might be: > > - Define multiple roles (release liaison, event liaison, meeting > chair...) and allow them to be filled by the team as they want, for the > duration they want, replaced when they want (would just need +1 from > previous and new holder of the role) > > - Use the TC as a governance safety valve to resolve any conflict > (instead of PTL elections) Proposed as a strawman at: https://review.opendev.org/#/c/712696/ Feel free to comment on how crazy that is there... -- Thierry Carrez (ttx) From sbauza at redhat.com