From radoslaw.piliszek at gmail.com Sun May 2 10:09:17 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 2 May 2021 12:09:17 +0200 Subject: [all][qa][cinder][octavia][murano] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> Message-ID: Dears, I have scraped the Zuul API to get names of jobs that *could* run on master branch and are still on bionic. [1] "Could" because I could not establish from the API whether they are included in any pipelines or not really (e.g., there are lots of transitive jobs there that have their nodeset overridden in children and children are likely used in pipelines, not them). [1] https://paste.ubuntu.com/p/N3JQ4dsfqR/ -yoctozepto On Fri, Apr 30, 2021 at 12:28 AM Ghanshyam Mann wrote: > > Hello Everyone, > > As per the testing runtime since Victoria [1], we need to move our CI/CD to Ubuntu Focal 20.04 but > it seems there are few jobs still running on Bionic. As devstack team is planning to drop the Bionic support > you need to move those to Focal otherwise they will start failing. We are planning to merge the devstack patch > by 2nd week of May. > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > I have not listed all the job but few of them which were failing with ' rtslib-fb-targetctl error' are below: > > Cinder- cinder-plugin-ceph-tempest-mn-aa > - https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 > > python-cinderclient - python-cinderclient-functional-py36 > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182 > > Murani- murano-dashboard-sanity-check > -https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 > > Also if your 3rd party CI is still running on Bionic, you can plan to migrate it to Focal before devstack patch merge. > > [1] https://governance.openstack.org/tc/reference/runtimes/victoria.html > > -gmann > From fungi at yuggoth.org Sun May 2 16:13:59 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 2 May 2021 16:13:59 +0000 Subject: [all][qa][cinder][octavia][murano][ionfra][tact-sig] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> Message-ID: <20210502161358.rfma44rw2sloes2b@yuggoth.org> On 2021-05-02 12:09:17 +0200 (+0200), Radosław Piliszek wrote: > I have scraped the Zuul API to get names of jobs that *could* run > on master branch and are still on bionic. [...] On a related note, I've proposed https://review.opendev.org/789098 to switch OpenDev's default nodeset to ubuntu-focal, and added it as a topic for discussion in Tuesday's meeting. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Sun May 2 16:47:47 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Sun, 2 May 2021 18:47:47 +0200 Subject: [all][qa][cinder][octavia][murano][ionfra][tact-sig] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <20210502161358.rfma44rw2sloes2b@yuggoth.org> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <20210502161358.rfma44rw2sloes2b@yuggoth.org> Message-ID: On Sun, May 2, 2021 at 6:17 PM Jeremy Stanley wrote: > > On 2021-05-02 12:09:17 +0200 (+0200), Radosław Piliszek wrote: > > I have scraped the Zuul API to get names of jobs that *could* run > > on master branch and are still on bionic. > [...] > > On a related note, I've proposed https://review.opendev.org/789098 > to switch OpenDev's default nodeset to ubuntu-focal, and added it as > a topic for discussion in Tuesday's meeting. Good thinking. I've made the listing more detailed and I am now showing where the nodeset originates from (in brackets). [1] 878 jobs (remember, there are lots of transitive / non-in-any-pipeline jobs there) take their nodeset from base so they are going to be affected then. [1] https://paste.ubuntu.com/p/D8HtjRCmkd/ -yoctozepto From premkumar at aarnanetworks.com Mon May 3 07:45:56 2021 From: premkumar at aarnanetworks.com (Premkumar Subramaniyan) Date: Mon, 3 May 2021 13:15:56 +0530 Subject: Openstack Stack issues In-Reply-To: <0a850443-ab52-7066-deaa-05a161a5f6cf@redhat.com> References: <0a850443-ab52-7066-deaa-05a161a5f6cf@redhat.com> Message-ID: Hi Zane, How can I bring up the heat service. root at aio1:/etc/systemd/system# service heat-api status Unit heat-api.service could not be found. root at aio1:/etc/systemd/system# service heat-api restart Failed to restart heat-api.service: Unit heat-api.service not found. root at aio1:/etc/systemd/system# service heat-api-cfn status Unit heat-api-cfn.service could not be found. root at aio1:/etc/systemd/system# service heat-api-cloudwatch status Unit heat-api-cloudwatch.service could not be found. root at aio1:/etc/systemd/system# service heat-engine status Unit heat-engine.service could not be found. Warm Regards, Premkumar Subramaniyan Technical staff M: +91 9940743669 *CRN Top 10 Coolest Edge Computing Startups of 2020 * On Fri, Apr 30, 2021 at 10:54 PM Zane Bitter wrote: > On 30/04/21 1:06 am, Premkumar Subramaniyan wrote: > > Hi, > > > > I am using the Openstack *USURI *version in *Centos7*. Due to some > > issues my disk size is full,I freed up the space. Afte that some service > > went down. After that I have issues in creating the stack and list > > stack. > > It looks like heat-api at least is still down. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucasagomes at gmail.com Mon May 3 07:56:11 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 3 May 2021 08:56:11 +0100 Subject: [neutron] Bug Deputy Report April 26 - May 3 Message-ID: Hi, This is the Neutron bug report of the week of 2021-04-26. 
Critical: * https://bugs.launchpad.net/neutron/+bug/1926109 - "SSH timeout (wait timeout) due to potential paramiko issue" Unassigned High: * https://bugs.launchpad.net/neutron/+bug/1926476 - "Deprecation of pyroute2.IPDB in favor of pyroute2.NDB" Assigned to: ralonsoh * https://bugs.launchpad.net/neutron/+bug/1926638 - " Neutron - "neutron-tempest-plugin-designate-scenario" gate fails all the time" Assigned to: Dr. Jens Harbott * https://bugs.launchpad.net/neutron/+bug/1926653 - "[ovn] ml2/ovn may time out connecting to ovsdb server and stays dead in the water" Assigned to: flaviof * https://bugs.launchpad.net/neutron/+bug/1926693 - " Logic to obtain hypervisor hostname is not completely compatible with libvirt" Assigned to: Takashi Kajinami * https://bugs.launchpad.net/neutron/+bug/1926780 - "Multicast traffic scenario test is failing sometimes on OVN job" Unassigned Medium: * https://bugs.launchpad.net/neutron/+bug/1926273 - "Port can be created with an invalid MAC address" Assigned to: ralonsoh * https://bugs.launchpad.net/neutron/+bug/1926417 - "ObjectChangeHandler excessive thread usage" Assigned to: Szymon Wróblewski * https://bugs.launchpad.net/neutron/+bug/1926515 - " DHCP for VM fails when removing security group default rules" Assigned to: slaweq Low: * https://bugs.launchpad.net/neutron/+bug/1926149 - " [OVN] Neutron server is filter by "subnet:segment_id", but not when OVS is used" Assigned to: ralonsoh Wishlist: * https://bugs.launchpad.net/neutron/+bug/1926787 - " [DB] Neutron quota request implementation can end in a lock status" Unassigned Needs further triage: * https://bugs.launchpad.net/neutron/+bug/1926428 - " allocate_dynamic_segment() returns different segment dicts if segment exists" Unassigned * https://bugs.launchpad.net/neutron/+bug/1926531 - " SNAT namespace prematurely created then deleted on hosts, resulting in removal of RFP/FPR link to FIP namespace" Unassigned * https://bugs.launchpad.net/neutron/+bug/1926838 - "[OVN] infinite loop in ovsdb_monitor" Unassigned -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon May 3 07:56:34 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 3 May 2021 09:56:34 +0200 Subject: [release] Meeting Time Poll In-Reply-To: References: Message-ID: Just a friendly reminder to allow everyone to vote, the poll will be closed tonight. Le lun. 26 avr. 2021 à 11:16, Herve Beraud a écrit : > Hello everyone, > > As Thierry proposed during our PTG here is our new poll [1] about our > meeting time. > > Indeed, we have a few regular attendees of the Release Management meeting > who have conflicts > with the previously chosen meeting time. As a result, we would like to > find a new time to hold the meeting. I've created a Doodle poll [1] for > everyone to give their input on times. It's mostly limited to times that > reasonably overlap the working day in the US and Europe since that's where > most of > our attendees are located. > > If you attend the Release Management meeting, please fill out the poll so > we can hopefully find a time that works better for everyone. > > For the sake of organization and to allow everyone to schedule his agenda > accordingly, the poll will be closed on May 3rd. On that date, I will > announce the time of this meeting and the date on which it will take effect > . > > Notice that potentially that will force us to move our meeting on another > day than Thursdays. 
> > I'll soon initiate our meeting tracking etherpad for Xena, and since we > are at the beginning of a new series so we don't have a lot of topics to > discuss, so I think that it could be worth waiting until next week to > initiate our first meeting. Let me know if you are ok with that. That will > allow us to plan it accordingly to the chosen meeting time. > > Thanks! > > [1] https://doodle.com/poll/2kcdh83r3hmwmxie > > Le mer. 7 avr. 2021 à 12:14, Herve Beraud a écrit : > >> Greetings, >> >> The poll is now terminated, everybody voted and we reached a consensus, >> our new meeting time is at 2pm UTC on Thursdays. >> >> https://doodle.com/poll/ip6tg4fvznz7p3qx >> >> It will take effect from our next meeting, i.e tomorrow. >> >> I'm going to update our agenda accordingly. >> >> Thanks to everyone for your vote. >> >> Le mer. 31 mars 2021 à 17:55, Herve Beraud a écrit : >> >>> Hello deliveryers, >>> >>> Don't forget to vote for our new meeting time. >>> >>> Thank you >>> >>> Le ven. 26 mars 2021 à 13:43, Herve Beraud a >>> écrit : >>> >>>> Hello >>>> >>>> We have a few regular attendees of the Release Management meeting who >>>> have conflicts >>>> with the current meeting time. As a result, we would like to find a >>>> new time to hold the meeting. I've created a Doodle poll[1] for >>>> everyone to give their input on times. It's mostly limited to times that >>>> reasonably overlap the working day in the US and Europe since that's where >>>> most of >>>> our attendees are located. >>>> >>>> If you attend the Release Management meeting, please fill out the poll >>>> so we can hopefully find a time that works better for everyone. >>>> >>>> For the sake of organization and to allow everyone to schedule his >>>> agenda accordingly, the poll will be closed on April 5th. On that >>>> date, I will announce the time of this meeting and the date on which it >>>> will take effect. >>>> >>>> Thanks! 
>>>> >>>> [1] https://doodle.com/poll/ip6tg4fvznz7p3qx >>>> -- >>>> Hervé Beraud >>>> Senior Software Engineer at Red Hat >>>> irc: hberaud >>>> https://github.com/4383/ >>>> https://twitter.com/4383hberaud >>>> -----BEGIN PGP SIGNATURE----- >>>> >>>> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >>>> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >>>> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >>>> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >>>> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >>>> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >>>> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >>>> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >>>> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >>>> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >>>> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >>>> v6rDpkeNksZ9fFSyoY2o >>>> =ECSj >>>> -----END PGP SIGNATURE----- >>>> >>>> >>> >>> -- >>> Hervé Beraud >>> Senior Software Engineer at Red Hat >>> irc: hberaud >>> https://github.com/4383/ >>> https://twitter.com/4383hberaud >>> -----BEGIN PGP SIGNATURE----- >>> >>> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >>> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >>> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >>> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >>> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >>> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >>> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >>> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >>> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >>> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >>> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >>> v6rDpkeNksZ9fFSyoY2o >>> =ECSj >>> -----END PGP SIGNATURE----- >>> >>> >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 
> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon May 3 08:19:01 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 3 May 2021 10:19:01 +0200 Subject: [release][ironic] ironic-python-agent-builder release model change In-Reply-To: References: Message-ID: Hello, At first glance that makes sense. If I correctly understand the story, the ironic-python-agent [1] and the ironic-python-agent-builder [2] were within the same repo at the origin, correct? Does someone else use the ironic-python-agent-builder? [1] https://opendev.org/openstack/releases/src/branch/master/deliverables/xena/ironic-python-agent.yaml [2] https://opendev.org/openstack/releases/src/branch/master/deliverables/_independent/ironic-python-agent-builder.yaml Le ven. 30 avr. 2021 à 16:34, Iury Gregory a écrit : > Hi Riccardo, > > Thanks for raising this! > I do like the idea of having stable branches for the ipa-builder +1 > > Em seg., 26 de abr. de 2021 às 12:03, Riccardo Pittau > escreveu: > >> Hello fellow openstackers! >> >> During the recent xena ptg, the ironic community had a discussion about >> the need to move the ironic-python-agent-builder project from an >> independent model to the standard release model. >> When we initially split the builder from ironic-python-agent, we decided >> against it, but considering some problems we encountered during the road, >> the ironic community seems to be in favor of the change. >> The reasons for this are mainly to strictly align the image building >> project to ironic-python-agent releases, and ease dealing with the >> occasional upgrade of tinycore linux, the base image used to build the >> "tinyipa" ironic-python-agent ramdisk. >> >> We'd like to involve the release team to ask for advice, not only on the >> process, but also considering that we need to ask to cut the first branch >> for the wallaby stable release, and we know we're a bit late for that! :) >> >> Thank you in advance for your help! 
>> >> Riccardo >> > > > -- > > > *Att[]'sIury Gregory Melo Ferreira * > *MSc in Computer Science at UFCG* > *Part of the ironic-core and puppet-manager-core team in OpenStack* > *Software Engineer at Red Hat Czech* > *Social*: https://www.linkedin.com/in/iurygregory > *E-mail: iurygregory at gmail.com * > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon May 3 08:57:25 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 3 May 2021 10:57:25 +0200 Subject: [largescale-sig] Next meeting: May 5, 15utc Message-ID: Hi everyone, Our next Large Scale SIG meeting will be this Wednesday in #openstack-meeting-3 on IRC, at 15UTC. You can doublecheck how it translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210505T15 A number of topics have already been added to the agenda, including discussing our OpenInfra.Live show on May 20. Feel free to add other topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From syedammad83 at gmail.com Mon May 3 08:55:11 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Mon, 3 May 2021 13:55:11 +0500 Subject: [wallaby][magnum] Cluster Deployment Unhealthy Message-ID: Hi, I have upgraded my magnum environment from victoria to wallaby. The upgrade went successfully. When I am trying to deploy a cluster from template, the status of cluster shows UNHEALTHY but create complete. I have logged into the master nodes and found no error message in heat logs. The nodes status still sees NotReady. [root at k8s-cluster-iomfrpuadezp-master-0 kubernetes]# kubectl get nodes --all-namespaces NAME STATUS ROLES AGE VERSION k8s-cluster-iomfrpuadezp-master-0 NotReady master 14m v1.18.16 k8s-cluster-iomfrpuadezp-node-0 NotReady 9m51s v1.18.16 Also there is no pods running in kube-system namespace. [root at k8s-cluster-iomfrpuadezp-master-0 kubernetes]# kubectl get pods --all-namespaces No resources found I have checked the logs, the flannel was deployed. 
+ printf 'Starting to run calico-service\n' + set -e + set +x + '[' flannel = calico ']' + printf 'Finished running calico-service\n' + set -e + set +x Finished running calico-service + '[' flannel = flannel ']' + _prefix=quay.io/coreos/ + FLANNEL_DEPLOY=/srv/magnum/kubernetes/manifests/flannel-deploy.yaml + '[' -f /srv/magnum/kubernetes/manifests/flannel-deploy.yaml ']' + echo 'Writing File: /srv/magnum/kubernetes/manifests/flannel-deploy.yaml' Writing File: /srv/magnum/kubernetes/manifests/flannel-deploy.yaml ++ dirname /srv/magnum/kubernetes/manifests/flannel-deploy.yaml + mkdir -p /srv/magnum/kubernetes/manifests + set +x + '[' '' = 0 ']' + /usr/bin/kubectl apply -f /srv/magnum/kubernetes/manifests/flannel-deploy.yaml --namespace=kube-system podsecuritypolicy.policy/psp.flannel.unprivileged created clusterrole.rbac.authorization.k8s.io/flannel created clusterrolebinding.rbac.authorization.k8s.io/flannel created serviceaccount/flannel created configmap/kube-flannel-cfg created daemonset.apps/kube-flannel-ds created I tried to deploy the flannel again, but it showing unchanged. [root at k8s-cluster-iomfrpuadezp-master-0 heat-config-script]# kubectl apply -f /srv/magnum/kubernetes/manifests/flannel-deploy.yaml --namespace=kube-system podsecuritypolicy.policy/psp.flannel.unprivileged configured clusterrole.rbac.authorization.k8s.io/flannel unchanged clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged serviceaccount/flannel unchanged configmap/kube-flannel-cfg unchanged daemonset.apps/kube-flannel-ds unchanged The other thing I have noticed that cluster deployment still uses old parameters of victoria like heat_agent_tag and others. Its not using latest default tags of wallaby release. I am using magnum on ubuntu 20.04. The other components in stack are already upgraded to wallaby release. -- Regards, Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Mon May 3 10:42:31 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Mon, 3 May 2021 12:42:31 +0200 Subject: [all][stable] Ocata - End of Life In-Reply-To: References: Message-ID: <21a2b0c3-8291-c4b7-fc3e-7299a765d1b2@est.tech> Hi, some update here: * Last week I've tried the new tooling (gerrit ACLs) and deleted cinder's ocata-eol tagged stable/ocata branch and it worked as it should so now I'll continue to *delete* all other ocata-eol tagged branches. * The next phase will be the deletion of *pike-eol* tagged branches. So this is a *warning* again for downstream and upstream CI maintainers. Thanks, Előd On 2021. 04. 20. 21:31, Előd Illés wrote: > Hi, > > Sorry, this will be long :) as there are 3 topics around old stable > branches and 'End of Life'. > > 1. Deletion of ocata-eol tagged branches > > With the introduction of Extended Maintenance process [1][2] some cycles > ago, the 'End of Life' (EOL) process also changed: > * branches were no longer EOL tagged and "mass-deleted" at the end of >   maintenance phase > * EOL'ing became a project decision > * if a project decides to cease maintenance of a branch that is in >   Extended Maintenance, then they can tag their branch with $series-eol > > However, the EOL-tagging process was not automated or redefined > process-wise, so that meant the branches that were tagged as EOL were > not deleted. Now (after some changing in tooling) Release Management > team finally will start to delete EOL-tagged branches. 
> > In this mail I'm sending a *WARNING* to consumers of old stable > branches, especially *ocata*, as we will start deleting the > *ocata-eol* tagged branches in a *week*. (And also newer *-eol branches > later on) > > > 2. Ocata branch > > Beyond the 1st topic we must clarify the future of Ocata stable branch > in general: tempest jobs became broken about ~ a year ago. That means > that projects had two ways forward: > > a. drop tempest testing to unblock gate > b. simply don't support ocata branch anymore > > As far as I see the latter one happened and stable/ocata became > unmaintained probably for every projects. > > So my questions are regarding this: > * Is any project still using/maintaining their stable/ocata branch? > * If not: can Release Team initiate a mass-EOL-tagging of stable/ocata? > > > 3. The 'next' old stable branches > > Some projects still support their Pike, Queens and Rocky branches. > These branches use Xenial and py2.7 and both are out of support. This > results broken gates time to time. Especially nowadays. These issues > suggest that these branches are closer and closer to being unmaintained. > So I call the attention of interested parties, who are for example > still consuming these stable branches and using them downstream to put > effort on maintaining the branches and their CI/gates. > > It is a good practice for stable maintainers to check if there are > failures in their projects' periodic-stable jobs [3], as those are > good indicators of the health of their stable branches. And if there > are, then try to fix it as soon as possible. > > > [1] > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html > [2] > https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases > [3] > http://lists.openstack.org/pipermail/openstack-stable-maint/2021-April/date.html > > > Thanks, > > Előd > > > From artem.goncharov at gmail.com Mon May 3 11:04:38 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Mon, 3 May 2021 13:04:38 +0200 Subject: [release] How to create feature branch Message-ID: <325FC430-60D4-4B8B-BE41-6CFE1CF9A13B@gmail.com> Hi all, During PTG we agreed to proceed with the R1 as a feature branch for the OpenStackSDK to finally prepare the Big Bang. I have went through some documents I was able to find (namely https://docs.opendev.org/opendev/infra-manual/latest/drivers.html#branches ) but haven’t found the working way to get it done (in Gerrit seems I lack required privileges and it is not clear now how to request those). Thus the question: what is the right way for a project to create/request a feature branch? Thanks, Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon May 3 11:31:41 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 3 May 2021 13:31:41 +0200 Subject: [release] How to create feature branch In-Reply-To: <325FC430-60D4-4B8B-BE41-6CFE1CF9A13B@gmail.com> References: <325FC430-60D4-4B8B-BE41-6CFE1CF9A13B@gmail.com> Message-ID: Hello, Here is an example of feature branch creation https://opendev.org/openstack/releases/commit/6f23a48c7a163ae4494e146e7224aead026d02ce Le lun. 3 mai 2021 à 13:07, Artem Goncharov a écrit : > Hi all, > > During PTG we agreed to proceed with the R1 as a feature branch for the > OpenStackSDK to finally prepare the Big Bang. 
I have went through some > documents I was able to find (namely > https://docs.opendev.org/opendev/infra-manual/latest/drivers.html#branches) > but haven’t found the working way to get it done (in Gerrit seems I lack > required privileges and it is not clear now how to request those). Thus the > question: what is the right way for a project to create/request a feature > branch? > > Thanks, > Artem > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Mon May 3 12:02:55 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Mon, 3 May 2021 14:02:55 +0200 Subject: [release] How to create feature branch In-Reply-To: References: <325FC430-60D4-4B8B-BE41-6CFE1CF9A13B@gmail.com> Message-ID: Ah, cool. Thanks > On 3. May 2021, at 13:31, Herve Beraud wrote: > > Hello, > > Here is an example of feature branch creation https://opendev.org/openstack/releases/commit/6f23a48c7a163ae4494e146e7224aead026d02ce > Le lun. 3 mai 2021 à 13:07, Artem Goncharov > a écrit : > Hi all, > > During PTG we agreed to proceed with the R1 as a feature branch for the OpenStackSDK to finally prepare the Big Bang. I have went through some documents I was able to find (namely https://docs.opendev.org/opendev/infra-manual/latest/drivers.html#branches ) but haven’t found the working way to get it done (in Gerrit seems I lack required privileges and it is not clear now how to request those). Thus the question: what is the right way for a project to create/request a feature branch? > > Thanks, > Artem > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kevin at cloudnull.com Mon May 3 12:55:34 2021 From: kevin at cloudnull.com (Carter, Kevin) Date: Mon, 3 May 2021 07:55:34 -0500 Subject: =?UTF-8?Q?Re=3A_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tripleo=2D?= =?UTF-8?Q?core?= In-Reply-To: References: Message-ID: Absolutely +1 On Thu, Apr 29, 2021 at 11:09 James Slagle wrote: > I'm proposing we formally promote Cédric to full tripleo-core duties. He > is already in the gerrit group with the understanding that his +2 is for > validations. His experience and contributions have grown a lot since then, > and I'd like to see that +2 expanded to all of TripleO. > > If there are no objections, we'll consider the change official at the end > of next week. > > > -- > -- James Slagle > -- > -- Kevin Carter IRC: Cloudnull -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Mon May 3 13:24:49 2021 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 3 May 2021 07:24:49 -0600 Subject: Openstack Stack issues In-Reply-To: References: <0a850443-ab52-7066-deaa-05a161a5f6cf@redhat.com> Message-ID: On Mon, May 3, 2021 at 1:53 AM Premkumar Subramaniyan < premkumar at aarnanetworks.com> wrote: > Hi Zane, > > How can I bring up the heat service. > > root at aio1:/etc/systemd/system# service heat-api status > Unit heat-api.service could not be found. > root at aio1:/etc/systemd/system# service heat-api restart > Failed to restart heat-api.service: Unit heat-api.service not found. > root at aio1:/etc/systemd/system# service heat-api-cfn status > Unit heat-api-cfn.service could not be found. > root at aio1:/etc/systemd/system# service heat-api-cloudwatch status > Unit heat-api-cloudwatch.service could not be found. > root at aio1:/etc/systemd/system# service heat-engine status > Unit heat-engine.service could not be found. > > How did you install openstack? I believe Train was the last version with centos7 support on RDO. > Warm Regards, > Premkumar Subramaniyan > Technical staff > M: +91 9940743669 > > *CRN Top 10 Coolest Edge Computing Startups of 2020 > * > > > On Fri, Apr 30, 2021 at 10:54 PM Zane Bitter wrote: > >> On 30/04/21 1:06 am, Premkumar Subramaniyan wrote: >> > Hi, >> > >> > I am using the Openstack *USURI *version in *Centos7*. Due to some >> > issues my disk size is full,I freed up the space. Afte that some >> service >> > went down. After that I have issues in creating the stack and list >> > stack. >> >> It looks like heat-api at least is still down. >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From premkumar at aarnanetworks.com Mon May 3 13:37:56 2021 From: premkumar at aarnanetworks.com (Premkumar Subramaniyan) Date: Mon, 3 May 2021 19:07:56 +0530 Subject: Openstack Stack issues In-Reply-To: References: <0a850443-ab52-7066-deaa-05a161a5f6cf@redhat.com> Message-ID: Hi Alex, My Current version is Ussuri. Having the issues in both centos7 and ubuntu 18.04. After restarting the machine. This is document i followed to bring the openstack AIO https://docs.openstack.org/openstack-ansible/ussuri/user/aio/quickstart.html Warm Regards, Premkumar Subramaniyan Technical staff M: +91 9940743669 *CRN Top 10 Coolest Edge Computing Startups of 2020 * On Mon, May 3, 2021 at 6:55 PM Alex Schultz wrote: > > > On Mon, May 3, 2021 at 1:53 AM Premkumar Subramaniyan < > premkumar at aarnanetworks.com> wrote: > >> Hi Zane, >> >> How can I bring up the heat service. 
>> >> root at aio1:/etc/systemd/system# service heat-api status >> Unit heat-api.service could not be found. >> root at aio1:/etc/systemd/system# service heat-api restart >> Failed to restart heat-api.service: Unit heat-api.service not found. >> root at aio1:/etc/systemd/system# service heat-api-cfn status >> Unit heat-api-cfn.service could not be found. >> root at aio1:/etc/systemd/system# service heat-api-cloudwatch status >> Unit heat-api-cloudwatch.service could not be found. >> root at aio1:/etc/systemd/system# service heat-engine status >> Unit heat-engine.service could not be found. >> >> > How did you install openstack? I believe Train was the last version with > centos7 support on RDO. > > >> Warm Regards, >> Premkumar Subramaniyan >> Technical staff >> M: +91 9940743669 >> >> *CRN Top 10 Coolest Edge Computing Startups of 2020 >> * >> >> >> On Fri, Apr 30, 2021 at 10:54 PM Zane Bitter wrote: >> >>> On 30/04/21 1:06 am, Premkumar Subramaniyan wrote: >>> > Hi, >>> > >>> > I am using the Openstack *USURI *version in *Centos7*. Due to some >>> > issues my disk size is full,I freed up the space. Afte that some >>> service >>> > went down. After that I have issues in creating the stack and list >>> > stack. >>> >>> It looks like heat-api at least is still down. >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon May 3 13:50:09 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 03 May 2021 08:50:09 -0500 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> Message-ID: <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> ---- On Sun, 02 May 2021 05:09:17 -0500 Radosław Piliszek wrote ---- > Dears, > > I have scraped the Zuul API to get names of jobs that *could* run on > master branch and are still on bionic. [1] > "Could" because I could not establish from the API whether they are > included in any pipelines or not really (e.g., there are lots of > transitive jobs there that have their nodeset overridden in children > and children are likely used in pipelines, not them). > > [1] https://paste.ubuntu.com/p/N3JQ4dsfqR/ Thanks for the list. We need to only worried about jobs using devstack master branch. Along with non-devstack jobs. there are many stable testing jobs also on the master gate which is all good to pin the bionic nodeset, for example - 'neutron-tempest-plugin-api-ussuri'. >From the list, I see few more projects (other than listed in the subject of this email) jobs, so tagging them now: sahara, networking-sfc, manila, magnum, kuryr. -gmann > > -yoctozepto > > On Fri, Apr 30, 2021 at 12:28 AM Ghanshyam Mann wrote: > > > > Hello Everyone, > > > > As per the testing runtime since Victoria [1], we need to move our CI/CD to Ubuntu Focal 20.04 but > > it seems there are few jobs still running on Bionic. As devstack team is planning to drop the Bionic support > > you need to move those to Focal otherwise they will start failing. We are planning to merge the devstack patch > > by 2nd week of May. 
> > > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > > > I have not listed all the job but few of them which were failing with ' rtslib-fb-targetctl error' are below: > > > > Cinder- cinder-plugin-ceph-tempest-mn-aa > > - https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 > > > > python-cinderclient - python-cinderclient-functional-py36 > > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > > > Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182 > > > > Murani- murano-dashboard-sanity-check > > -https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 > > > > Also if your 3rd party CI is still running on Bionic, you can plan to migrate it to Focal before devstack patch merge. > > > > [1] https://governance.openstack.org/tc/reference/runtimes/victoria.html > > > > -gmann > > > > From ruslanas at lpic.lt Mon May 3 13:54:29 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 3 May 2021 16:54:29 +0300 Subject: Openstack Stack issues In-Reply-To: References: <0a850443-ab52-7066-deaa-05a161a5f6cf@redhat.com> Message-ID: Yeah Alex, with TripleO it is a limitation, but for ansible deployment, there is no limit :) Premkumar, are you running containerized deployment or baremetal? If you are running containerized, then you need to check docker ps -a or podman ps -a and see what containers failed to start using: grep -v Exited\ \(0 else you can try relaunch ansible deployment again, it should bring up missing services. On Mon, 3 May 2021 at 16:40, Premkumar Subramaniyan < premkumar at aarnanetworks.com> wrote: > Hi Alex, > My Current version is Ussuri. Having the issues in both centos7 and > ubuntu 18.04. After restarting the machine. > > This is document i followed to bring the openstack AIO > > https://docs.openstack.org/openstack-ansible/ussuri/user/aio/quickstart.html > > > Warm Regards, > Premkumar Subramaniyan > Technical staff > M: +91 9940743669 > > *CRN Top 10 Coolest Edge Computing Startups of 2020 > * > > > On Mon, May 3, 2021 at 6:55 PM Alex Schultz wrote: > >> >> >> On Mon, May 3, 2021 at 1:53 AM Premkumar Subramaniyan < >> premkumar at aarnanetworks.com> wrote: >> >>> Hi Zane, >>> >>> How can I bring up the heat service. >>> >>> root at aio1:/etc/systemd/system# service heat-api status >>> Unit heat-api.service could not be found. >>> root at aio1:/etc/systemd/system# service heat-api restart >>> Failed to restart heat-api.service: Unit heat-api.service not found. >>> root at aio1:/etc/systemd/system# service heat-api-cfn status >>> Unit heat-api-cfn.service could not be found. >>> root at aio1:/etc/systemd/system# service heat-api-cloudwatch status >>> Unit heat-api-cloudwatch.service could not be found. >>> root at aio1:/etc/systemd/system# service heat-engine status >>> Unit heat-engine.service could not be found. >>> >>> >> How did you install openstack? I believe Train was the last version with >> centos7 support on RDO. >> >> >>> Warm Regards, >>> Premkumar Subramaniyan >>> Technical staff >>> M: +91 9940743669 >>> >>> *CRN Top 10 Coolest Edge Computing Startups of 2020 >>> * >>> >>> >>> On Fri, Apr 30, 2021 at 10:54 PM Zane Bitter wrote: >>> >>>> On 30/04/21 1:06 am, Premkumar Subramaniyan wrote: >>>> > Hi, >>>> > >>>> > I am using the Openstack *USURI *version in *Centos7*. Due to >>>> some >>>> > issues my disk size is full,I freed up the space. 
Afte that some >>>> service >>>> > went down. After that I have issues in creating the stack and list >>>> > stack. >>>> >>>> It looks like heat-api at least is still down. >>>> >>>> -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon May 3 13:55:11 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 3 May 2021 13:55:11 +0000 Subject: [release][infra] How to create feature branch In-Reply-To: <325FC430-60D4-4B8B-BE41-6CFE1CF9A13B@gmail.com> References: <325FC430-60D4-4B8B-BE41-6CFE1CF9A13B@gmail.com> Message-ID: <20210503135511.irzg2o3kyv7pb7g2@yuggoth.org> On 2021-05-03 13:04:38 +0200 (+0200), Artem Goncharov wrote: > During PTG we agreed to proceed with the R1 as a feature branch > for the OpenStackSDK to finally prepare the Big Bang. I have went > through some documents I was able to find (namely > ) > but haven’t found the working way to get it done (in Gerrit seems > I lack required privileges and it is not clear now how to request > those). Thus the question: what is the right way for a project to > create/request a feature branch? In addition to the other responses about creating branches through OpenStack's release automation, you probably still want to pay attention to the Merge Commits section of that guide since you'll eventually want to be able to merge between your feature branch and master. Propose a change to the OpenStackSDK ACL in this file for additional permissions you need: https://opendev.org/openstack/project-config/src/branch/master/gerrit/acls/openstack/openstacksdk.config -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ltoscano at redhat.com Mon May 3 14:00:35 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Mon, 03 May 2021 16:00:35 +0200 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> Message-ID: <23546532.ouqheUzb2q@whitebase.usersys.redhat.com> On Monday, 3 May 2021 15:50:09 CEST Ghanshyam Mann wrote: > ---- On Sun, 02 May 2021 05:09:17 -0500 Radosław Piliszek > wrote ---- > > Dears, > > > > I have scraped the Zuul API to get names of jobs that *could* run on > > master branch and are still on bionic. [1] > > "Could" because I could not establish from the API whether they are > > included in any pipelines or not really (e.g., there are lots of > > transitive jobs there that have their nodeset overridden in children > > and children are likely used in pipelines, not them). > > > > [1] https://paste.ubuntu.com/p/N3JQ4dsfqR/ > > Thanks for the list. We need to only worried about jobs using devstack > master branch. Along with non-devstack jobs. there are many stable testing > jobs also on the master gate which is all good to pin the bionic nodeset, > for example - 'neutron-tempest-plugin-api-ussuri'. > > From the list, I see few more projects (other than listed in the subject of > this email) jobs, so tagging them now: sahara, networking-sfc, manila, > magnum, kuryr. The sahara-image-elements-* and sahara-extra-* jobs shouldn't depend (much) on the underlying platform, so no pinning should be done. 
The other sahara jobs (-ussuri, etc) pin the correct nodeset. -- Luigi From aschultz at redhat.com Mon May 3 14:01:00 2021 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 3 May 2021 08:01:00 -0600 Subject: Openstack Stack issues In-Reply-To: References: <0a850443-ab52-7066-deaa-05a161a5f6cf@redhat.com> Message-ID: On Mon, May 3, 2021 at 7:54 AM Ruslanas Gžibovskis wrote: > Yeah Alex, with TripleO it is a limitation, but for ansible deployment, > there is no limit :) > > There are no centos7 RDO packages for Ussuri which is why I asked. Containers might work but you may run into other issues. > Premkumar, are you running containerized deployment or baremetal? > > If you are running containerized, then you need to check docker ps -a or > podman ps -a and see what containers failed to start using: grep -v Exited\ > \(0 > else you can try relaunch ansible deployment again, it should bring up > missing services. > > > On Mon, 3 May 2021 at 16:40, Premkumar Subramaniyan < > premkumar at aarnanetworks.com> wrote: > >> Hi Alex, >> My Current version is Ussuri. Having the issues in both centos7 and >> ubuntu 18.04. After restarting the machine. >> >> This is document i followed to bring the openstack AIO >> >> https://docs.openstack.org/openstack-ansible/ussuri/user/aio/quickstart.html >> >> >> Warm Regards, >> Premkumar Subramaniyan >> Technical staff >> M: +91 9940743669 >> >> *CRN Top 10 Coolest Edge Computing Startups of 2020 >> * >> >> >> On Mon, May 3, 2021 at 6:55 PM Alex Schultz wrote: >> >>> >>> >>> On Mon, May 3, 2021 at 1:53 AM Premkumar Subramaniyan < >>> premkumar at aarnanetworks.com> wrote: >>> >>>> Hi Zane, >>>> >>>> How can I bring up the heat service. >>>> >>>> root at aio1:/etc/systemd/system# service heat-api status >>>> Unit heat-api.service could not be found. >>>> root at aio1:/etc/systemd/system# service heat-api restart >>>> Failed to restart heat-api.service: Unit heat-api.service not found. >>>> root at aio1:/etc/systemd/system# service heat-api-cfn status >>>> Unit heat-api-cfn.service could not be found. >>>> root at aio1:/etc/systemd/system# service heat-api-cloudwatch status >>>> Unit heat-api-cloudwatch.service could not be found. >>>> root at aio1:/etc/systemd/system# service heat-engine status >>>> Unit heat-engine.service could not be found. >>>> >>>> >>> How did you install openstack? I believe Train was the last version >>> with centos7 support on RDO. >>> >>> >>>> Warm Regards, >>>> Premkumar Subramaniyan >>>> Technical staff >>>> M: +91 9940743669 >>>> >>>> *CRN Top 10 Coolest Edge Computing Startups of 2020 >>>> * >>>> >>>> >>>> On Fri, Apr 30, 2021 at 10:54 PM Zane Bitter >>>> wrote: >>>> >>>>> On 30/04/21 1:06 am, Premkumar Subramaniyan wrote: >>>>> > Hi, >>>>> > >>>>> > I am using the Openstack *USURI *version in *Centos7*. Due to >>>>> some >>>>> > issues my disk size is full,I freed up the space. Afte that some >>>>> service >>>>> > went down. After that I have issues in creating the stack and list >>>>> > stack. >>>>> >>>>> It looks like heat-api at least is still down. >>>>> >>>>> > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From artem.goncharov at gmail.com Mon May 3 14:05:58 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Mon, 3 May 2021 16:05:58 +0200 Subject: [release][infra] How to create feature branch In-Reply-To: <20210503135511.irzg2o3kyv7pb7g2@yuggoth.org> References: <325FC430-60D4-4B8B-BE41-6CFE1CF9A13B@gmail.com> <20210503135511.irzg2o3kyv7pb7g2@yuggoth.org> Message-ID: <90FFB432-4E0D-4DFE-954F-B5BB15B68BDA@gmail.com> Thanks Jeremy > On 3. May 2021, at 15:55, Jeremy Stanley wrote: > > On 2021-05-03 13:04:38 +0200 (+0200), Artem Goncharov wrote: >> During PTG we agreed to proceed with the R1 as a feature branch >> for the OpenStackSDK to finally prepare the Big Bang. I have went >> through some documents I was able to find (namely >> ) >> but haven’t found the working way to get it done (in Gerrit seems >> I lack required privileges and it is not clear now how to request >> those). Thus the question: what is the right way for a project to >> create/request a feature branch? > > In addition to the other responses about creating branches through > OpenStack's release automation, you probably still want to pay > attention to the Merge Commits section of that guide since you'll > eventually want to be able to merge between your feature branch and > master. Propose a change to the OpenStackSDK ACL in this file for > additional permissions you need: > > https://opendev.org/openstack/project-config/src/branch/master/gerrit/acls/openstack/openstacksdk.config > > -- > Jeremy Stanley From premkumar at aarnanetworks.com Mon May 3 14:13:13 2021 From: premkumar at aarnanetworks.com (Premkumar Subramaniyan) Date: Mon, 3 May 2021 19:43:13 +0530 Subject: Openstack Stack issues In-Reply-To: References: <0a850443-ab52-7066-deaa-05a161a5f6cf@redhat.com> Message-ID: Hi Ruslanas I am running in barmetal. root at aio1:~# lxc-ls -1 aio1_cinder_api_container-845d8e39 aio1_galera_container-efc46f93 aio1_glance_container-611c15ef aio1_heat_api_container-da2feba5 aio1_horizon_container-1d6b0098 aio1_keystone_container-d2986dca aio1_memcached_container-ff56f467 aio1_neutron_server_container-261222e4 aio1_nova_api_container-670ab083 aio1_placement_container-32a0e966 aio1_rabbit_mq_container-fdacf98f aio1_repo_container-8dc59ab6 aio1_utility_container-924a5576 Relaunch means I need to run this one openstack-ansible setup-openstack.yml. If yes means, If I run this one my whole openstack itself going to crash. I need some document where I can check all the service status and restart the service. The only problem is the heat stack is down . Warm Regards, Premkumar Subramaniyan Technical staff M: +91 9940743669 *CRN Top 10 Coolest Edge Computing Startups of 2020 * On Mon, May 3, 2021 at 7:24 PM Ruslanas Gžibovskis wrote: > Yeah Alex, with TripleO it is a limitation, but for ansible deployment, > there is no limit :) > > Premkumar, are you running containerized deployment or baremetal? > > If you are running containerized, then you need to check docker ps -a or > podman ps -a and see what containers failed to start using: grep -v Exited\ > \(0 > else you can try relaunch ansible deployment again, it should bring up > missing services. > > > On Mon, 3 May 2021 at 16:40, Premkumar Subramaniyan < > premkumar at aarnanetworks.com> wrote: > >> Hi Alex, >> My Current version is Ussuri. Having the issues in both centos7 and >> ubuntu 18.04. After restarting the machine. 
>> >> This is document i followed to bring the openstack AIO >> >> https://docs.openstack.org/openstack-ansible/ussuri/user/aio/quickstart.html >> >> >> Warm Regards, >> Premkumar Subramaniyan >> Technical staff >> M: +91 9940743669 >> >> *CRN Top 10 Coolest Edge Computing Startups of 2020 >> * >> >> >> On Mon, May 3, 2021 at 6:55 PM Alex Schultz wrote: >> >>> >>> >>> On Mon, May 3, 2021 at 1:53 AM Premkumar Subramaniyan < >>> premkumar at aarnanetworks.com> wrote: >>> >>>> Hi Zane, >>>> >>>> How can I bring up the heat service. >>>> >>>> root at aio1:/etc/systemd/system# service heat-api status >>>> Unit heat-api.service could not be found. >>>> root at aio1:/etc/systemd/system# service heat-api restart >>>> Failed to restart heat-api.service: Unit heat-api.service not found. >>>> root at aio1:/etc/systemd/system# service heat-api-cfn status >>>> Unit heat-api-cfn.service could not be found. >>>> root at aio1:/etc/systemd/system# service heat-api-cloudwatch status >>>> Unit heat-api-cloudwatch.service could not be found. >>>> root at aio1:/etc/systemd/system# service heat-engine status >>>> Unit heat-engine.service could not be found. >>>> >>>> >>> How did you install openstack? I believe Train was the last version >>> with centos7 support on RDO. >>> >>> >>>> Warm Regards, >>>> Premkumar Subramaniyan >>>> Technical staff >>>> M: +91 9940743669 >>>> >>>> *CRN Top 10 Coolest Edge Computing Startups of 2020 >>>> * >>>> >>>> >>>> On Fri, Apr 30, 2021 at 10:54 PM Zane Bitter >>>> wrote: >>>> >>>>> On 30/04/21 1:06 am, Premkumar Subramaniyan wrote: >>>>> > Hi, >>>>> > >>>>> > I am using the Openstack *USURI *version in *Centos7*. Due to >>>>> some >>>>> > issues my disk size is full,I freed up the space. Afte that some >>>>> service >>>>> > went down. After that I have issues in creating the stack and list >>>>> > stack. >>>>> >>>>> It looks like heat-api at least is still down. >>>>> >>>>> > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Mon May 3 14:41:38 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 3 May 2021 11:41:38 -0300 Subject: [CLOUDKITTY] Missed CloudKitty meeting today Message-ID: Hello guys, I would like to apologize for missing the CloudKitty meeting today. I was concentrating on some work, and my alarm for the meeting did not ring. I still have to summarize the PTL meeting. I will do so today. Again, sorry for the inconvenience; see you guys at our next meeting. -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Mon May 3 14:50:44 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Mon, 3 May 2021 16:50:44 +0200 Subject: [CLOUDKITTY] Missed CloudKitty meeting today In-Reply-To: References: Message-ID: Hi Rafael, No worries: today is a bank holiday in the United Kingdom, so it probably would have been just you and me. Best wishes, Pierre On Mon, 3 May 2021 at 16:42, Rafael Weingärtner wrote: > > Hello guys, > I would like to apologize for missing the CloudKitty meeting today. I was concentrating on some work, and my alarm for the meeting did not ring. > > I still have to summarize the PTL meeting. I will do so today. > > Again, sorry for the inconvenience; see you guys at our next meeting. 
> > -- > Rafael Weingärtner From gmann at ghanshyammann.com Mon May 3 14:52:55 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 03 May 2021 09:52:55 -0500 Subject: [all][qa][cinder][octavia][murano] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <4829611.ejJDZkT8p0@whitebase.usersys.redhat.com> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <4829611.ejJDZkT8p0@whitebase.usersys.redhat.com> Message-ID: <17932b7c6b0.d6180768892890.5344872938882170861@ghanshyammann.com> ---- On Fri, 30 Apr 2021 03:53:32 -0500 Luigi Toscano wrote ---- > On Friday, 30 April 2021 00:25:12 CEST Ghanshyam Mann wrote: > > Hello Everyone, > > > > As per the testing runtime since Victoria [1], we need to move our CI/CD to > > Ubuntu Focal 20.04 but it seems there are few jobs still running on Bionic. > > As devstack team is planning to drop the Bionic support you need to move > > those to Focal otherwise they will start failing. We are planning to merge > > the devstack patch by 2nd week of May. > > > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > > > I have not listed all the job but few of them which were failing with ' > > rtslib-fb-targetctl error' are below: > > > > Cinder- cinder-plugin-ceph-tempest-mn-aa > > - > > https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f6 > > 69eec72fee7ce/.zuul.yaml#L166 > > Looking at this job, I suspect the idea was just to use the proper nodeset > with an exiting job, and at the time the default nodeset was the bionic one. I > suspect we may avoid future bumps for this job (and probably others) by > defining a set of nodeset to track the default nodeset used by devstack. > > We would just need openstack-single-node-devstackdefault, openstack-two-nodes- > devstackdefault. > > Unfortunately the nodeset definitions don't support inheritance or aliases, so > that would mean duplicating some definition in the devstack repository, but > - it would be just one additional place to maintain > - aliasing could be added to zuul in the future maybe. > > What do you think? Currently, we have nodeset defined in devstack with distro suffix which makes it clear in term of checking the nodeset definition or so for example: "openstack-single-node-focal" and then base jobs like 'devstack-minimal' [1] or 'devstack-multinode'[2] keep moving their nodeset to the latest one as per testing runtime. Any derived job from these base jobs does not need to explicitly set the nodeset (unless there are a different sets of configurations you want) and can rely on the base job's nodeset. When we modify the base job's ndoeset we do this as part of a large effort like community-wide goal with proper testing and when all jobs are good to migrate to new nodeset. So specific jobs using devstack base jobs do not need to add safer guards as such. For this 'cinder-plugin-ceph-tempest-mn-aa' job which is multinode job but derived from a single node devstack-plugin-ceph-tempest-py3 can be modified to derived from 'devstack-plugin-ceph-multinode-tempest-py3' and remove the nodeset from its definition. 
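For illustration, the change on the cinder side would look roughly like this in its .zuul.yaml (untested sketch; only the parent changes and the explicit nodeset is dropped, anything else already set on the job stays as it is):

  - job:
      name: cinder-plugin-ceph-tempest-mn-aa
      parent: devstack-plugin-ceph-multinode-tempest-py3
      # no nodeset override here: the job simply follows whatever nodeset
      # the devstack base jobs define for the current testing runtime

That way the job should pick up Focal now and follow any future default nodeset bump automatically.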
[1] https://opendev.org/openstack/devstack/src/commit/166c88b610d2007535367ebe2cf464df9273e6c5/.zuul.yaml#L395 [2] https://opendev.org/openstack/devstack/src/commit/166c88b610d2007535367ebe2cf464df9273e6c5/.zuul.yaml#L570 -gmann > > > > > python-cinderclient - python-cinderclient-functional-py36 > > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > > > Octavia- > > https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul > > .d/jobs.yaml#L182 > > > > Murani- murano-dashboard-sanity-check > > -https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6 > > 650450273004a41575d2d68/.zuul.yaml#L15 > > For the record, this is the last legacy job left voting in the gates, but it > is a bit tricky to port, as it tries to run horizon integration tests with a > custom setup. It may be ported by just wrapping the old scripts in the > meantime, but I suspect it's broken anyway now. > > > -- > Luigi > > > > From ruslanas at lpic.lt Mon May 3 15:13:18 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Mon, 3 May 2021 18:13:18 +0300 Subject: Openstack Stack issues In-Reply-To: References: <0a850443-ab52-7066-deaa-05a161a5f6cf@redhat.com> Message-ID: ok, so relaunch it from lxd then :) And try checking lxd container output also, still interesting what is in your lxd containers. maybe you have podman/docker there... connect to lxd and run podman or docker ps -a On Mon, 3 May 2021 at 17:13, Premkumar Subramaniyan < premkumar at aarnanetworks.com> wrote: > Hi Ruslanas > I am running in barmetal. > root at aio1:~# lxc-ls -1 > aio1_cinder_api_container-845d8e39 > aio1_galera_container-efc46f93 > aio1_glance_container-611c15ef > aio1_heat_api_container-da2feba5 > aio1_horizon_container-1d6b0098 > aio1_keystone_container-d2986dca > aio1_memcached_container-ff56f467 > aio1_neutron_server_container-261222e4 > aio1_nova_api_container-670ab083 > aio1_placement_container-32a0e966 > aio1_rabbit_mq_container-fdacf98f > aio1_repo_container-8dc59ab6 > aio1_utility_container-924a5576 > > Relaunch means I need to run this one openstack-ansible > setup-openstack.yml. > If yes means, If I run this one my whole openstack itself going to crash. > I need some document where I can check all the service status and restart > the service. > The only problem is the heat stack is down . > > > Warm Regards, > Premkumar Subramaniyan > Technical staff > M: +91 9940743669 > > *CRN Top 10 Coolest Edge Computing Startups of 2020 > * > > > On Mon, May 3, 2021 at 7:24 PM Ruslanas Gžibovskis > wrote: > >> Yeah Alex, with TripleO it is a limitation, but for ansible deployment, >> there is no limit :) >> >> Premkumar, are you running containerized deployment or baremetal? >> >> If you are running containerized, then you need to check docker ps -a or >> podman ps -a and see what containers failed to start using: grep -v Exited\ >> \(0 >> else you can try relaunch ansible deployment again, it should bring up >> missing services. >> >> >> On Mon, 3 May 2021 at 16:40, Premkumar Subramaniyan < >> premkumar at aarnanetworks.com> wrote: >> >>> Hi Alex, >>> My Current version is Ussuri. Having the issues in both centos7 and >>> ubuntu 18.04. After restarting the machine. 
>>> >>> This is document i followed to bring the openstack AIO >>> >>> https://docs.openstack.org/openstack-ansible/ussuri/user/aio/quickstart.html >>> >>> >>> Warm Regards, >>> Premkumar Subramaniyan >>> Technical staff >>> M: +91 9940743669 >>> >>> *CRN Top 10 Coolest Edge Computing Startups of 2020 >>> * >>> >>> >>> On Mon, May 3, 2021 at 6:55 PM Alex Schultz wrote: >>> >>>> >>>> >>>> On Mon, May 3, 2021 at 1:53 AM Premkumar Subramaniyan < >>>> premkumar at aarnanetworks.com> wrote: >>>> >>>>> Hi Zane, >>>>> >>>>> How can I bring up the heat service. >>>>> >>>>> root at aio1:/etc/systemd/system# service heat-api status >>>>> Unit heat-api.service could not be found. >>>>> root at aio1:/etc/systemd/system# service heat-api restart >>>>> Failed to restart heat-api.service: Unit heat-api.service not found. >>>>> root at aio1:/etc/systemd/system# service heat-api-cfn status >>>>> Unit heat-api-cfn.service could not be found. >>>>> root at aio1:/etc/systemd/system# service heat-api-cloudwatch status >>>>> Unit heat-api-cloudwatch.service could not be found. >>>>> root at aio1:/etc/systemd/system# service heat-engine status >>>>> Unit heat-engine.service could not be found. >>>>> >>>>> >>>> How did you install openstack? I believe Train was the last version >>>> with centos7 support on RDO. >>>> >>>> >>>>> Warm Regards, >>>>> Premkumar Subramaniyan >>>>> Technical staff >>>>> M: +91 9940743669 >>>>> >>>>> *CRN Top 10 Coolest Edge Computing Startups of 2020 >>>>> * >>>>> >>>>> >>>>> On Fri, Apr 30, 2021 at 10:54 PM Zane Bitter >>>>> wrote: >>>>> >>>>>> On 30/04/21 1:06 am, Premkumar Subramaniyan wrote: >>>>>> > Hi, >>>>>> > >>>>>> > I am using the Openstack *USURI *version in *Centos7*. Due to >>>>>> some >>>>>> > issues my disk size is full,I freed up the space. Afte that some >>>>>> service >>>>>> > went down. After that I have issues in creating the stack and list >>>>>> > stack. >>>>>> >>>>>> It looks like heat-api at least is still down. >>>>>> >>>>>> >> >> -- >> Ruslanas Gžibovskis >> +370 6030 7030 >> > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kgiusti at gmail.com Mon May 3 15:37:56 2021 From: kgiusti at gmail.com (Ken Giusti) Date: Mon, 3 May 2021 11:37:56 -0400 Subject: [oslo] Stepping down as Oslo core Message-ID: Hi all, The time has come to face the fact that I'm unable to give my role as an Oslo core contributor the attention and effort it deserves. I'd say that it has been great to work will all of you - and it most certainly has - but that would be a bit premature because this isn't "good bye". I'll still be kicking around the oslo.messaging project, helping out as time permits. Cheers -- Ken Giusti (kgiusti at gmail.com) -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Mon May 3 15:48:53 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 4 May 2021 00:48:53 +0900 Subject: [puppet][glare] Propose retiring puppet-glare In-Reply-To: References: Message-ID: Adding [glare] tag just in case any folks from the project are still around and have any feedback about this. 
On Sun, Apr 25, 2021 at 10:55 PM Takashi Kajinami wrote: > Hello, > > > I'd like to propose retiring puppet-galre project, because the Glare[1] > project > looks inactive for a while based on the following three points > - No actual development is made for 2 years > - No release was made since the last Rocky release > - setup.cfg is not maintained and the python versions listed are very > outdated > > [1] https://opendev.org/x/glare > > I'll wait for 1-2 weeks to hear opinions from others. > If anybody is interested in keeping the puppet-glare project or has > intention to > maintain Glare itself then please let me know. > > Thank you, > Takashi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon May 3 16:03:21 2021 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 3 May 2021 11:03:21 -0500 Subject: [oslo] Stepping down as Oslo core In-Reply-To: References: Message-ID: <71a631b2-6abe-77bf-b477-6de2dde2ba37@nemebean.com> Thanks for all your work over the years! Glad to hear you'll still be around so I can keep referring all messaging questions to you. ;-) On 5/3/21 10:37 AM, Ken Giusti wrote: > Hi all, > > The time has come to face the fact that I'm unable to give my role as an > Oslo core contributor the attention and effort it deserves. > > I'd say that it has been great to work will all of you - and it most > certainly has - but that would be a bit premature because this isn't > "good bye".  I'll still be kicking around the oslo.messaging project, > helping out as time permits. > > Cheers > > -- > Ken Giusti  (kgiusti at gmail.com ) From wakorins at gmail.com Mon May 3 09:17:04 2021 From: wakorins at gmail.com (Wada Akor) Date: Mon, 3 May 2021 10:17:04 +0100 Subject: Rocky Linux for Openstack Message-ID: Good day , Please I want to know when will openstack provide information on how to installation of openstack on Rocky Linux be available. Thanks & Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Mon May 3 16:15:51 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 3 May 2021 12:15:51 -0400 Subject: [ptg][cinder] xena virtual PTG summary Message-ID: <93920379-95f3-eb89-6edc-5457677d10b4@gmail.com> Sorry for the delay, it's posted here: https://wiki.openstack.org/wiki/CinderXenaPTGSummary The wiki page contains links to all the recordings. cheers, brian From hberaud at redhat.com Mon May 3 16:20:14 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 3 May 2021 18:20:14 +0200 Subject: [barbican][oslo][nova][glance][cinder] cursive library status In-Reply-To: References: <35dfc43f-6613-757b-ed7b-b6530df21289@gmail.com> Message-ID: Hello, Do you've some updates to share with us about this topic? Here is the process to follow if you rename a project to move out from “openstack” namespace to any other namespace and vice versa: https://docs.openstack.org/project-team-guide/repository.html#project-renames Please let us know if you have any questions or concerns Le mer. 13 janv. 2021 à 14:44, Brian Rosmaita a écrit : > On 1/11/21 10:57 AM, Moises Guimaraes de Medeiros wrote: > > Hi Brian, > > > > During Oslo's last weekly meeting [1] we decided that Oslo can take > > cursive under its umbrella with collaboration of Barbican folks. I just > > waited a bit with this confirmation as the Barbican PTL was on PTO and I > > wanted to confirm with him. > > > > What are the next steps from here? > > Thanks so much for following up! 
I think you need to do something like > these patches from Ghanshyam to move devstack-plugin-nfs from x/ to > openstack/ and bring it under QA governance: > > https://review.opendev.org/c/openstack/project-config/+/711834 > https://review.opendev.org/c/openstack/governance/+/711835 > > LMK if you want me to propose the patches, my intent is to get this > issue solved, not to make more work for you! > > cheers, > brian > > > > > [1]: > > > http://eavesdrop.openstack.org/meetings/oslo/2021/oslo.2021-01-04-16.00.log.html#l-64 > > < > http://eavesdrop.openstack.org/meetings/oslo/2021/oslo.2021-01-04-16.00.log.html#l-64 > > > > > > Thanks, > > Moisés > > > > On Fri, Dec 18, 2020 at 10:06 PM Douglas Mendizabal > > wrote: > > > > On 12/16/20 12:50 PM, Ben Nemec wrote: > > > > > > > > > On 12/16/20 12:02 PM, Brian Rosmaita wrote: > > >> Hello Barbican team, > > >> > > >> Apologies for not including barbican in the previous thread on > this > > >> topic: > > >> > > >> > > > http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019430.html > > < > http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019430.html > > > > > > >> > > >> > > >> The situation is that cursive is used by Nova, Glance, and > > Cinder and > > >> we'd like to move it out of the 'x' namespace into openstack > > >> governance. The question is then what team would oversee it. > It > > >> seems like a good fit for Oslo, and the Oslo team seems OK with > > that, > > >> but since barbican-core is currently included in cursive-core, > > it make > > >> sense to give the Barbican team first dibs. > > >> > > >> From the consuming teams' side, I don't think we have a > > preference as > > >> long as it's clear who we need to bother about approvals if a > > bugfix > > >> is posted for review. > > >> > > >> Thus my ask is that the Barbican team indicate whether they'd > > like to > > >> move cursive to the 'openstack' namespace under their > > governance, or > > >> whether they'd prefer Oslo to oversee the library. > > > > > > Note that this is not necessarily an either/or thing. Castellan > > is under > > > Oslo governance but is co-owned by the Oslo and Barbican teams. > > We could > > > do a similar thing with Cursive. > > > > > > > Hi Brian and Ben, > > > > Sorry I missed the original thread. Given that the end of the year > is > > around the corner, most of the Barbican team is out on PTO and we > > haven't had a chance to discuss this in our weekly meeting. > > > > That said, I doubt anyone would object to moving cursive into the > > openstack namespace. > > > > I personally do not mind the Oslo team taking over maintenace, and I > am > > also willing to help review patches if the Oslo team would like to > > co-own this library just like we currently do for Castellan. 
> > > > - Douglas Mendizábal (redrobot) > > > > > > > > > > -- > > > > Moisés Guimarães > > > > Software Engineer > > > > Red Hat > > > > > > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From premkumar at aarnanetworks.com Mon May 3 16:24:11 2021 From: premkumar at aarnanetworks.com (Premkumar Subramaniyan) Date: Mon, 3 May 2021 21:54:11 +0530 Subject: Openstack Stack issues In-Reply-To: References: <0a850443-ab52-7066-deaa-05a161a5f6cf@redhat.com> Message-ID: podman or docker ps -a both are not installed in the lxc container root at aio1:~# lxc-attach aio1_heat_api_container-da2feba5 root at aio1-heat-api-container-da2feba5:~# podman ps -a bash: podman: command not found root at aio1-heat-api-container-da2feba5:~# docker ps bash: docker: command not found root at aio1-heat-api-container-da2feba5:~# Warm Regards, Premkumar Subramaniyan Technical staff M: +91 9940743669 *CRN Top 10 Coolest Edge Computing Startups of 2020 * On Mon, May 3, 2021 at 8:43 PM Ruslanas Gžibovskis wrote: > ok, so relaunch it from lxd then :) > And try checking lxd container output also, still interesting what is in > your lxd containers. maybe you have podman/docker there... > connect to lxd and run podman or docker ps -a > > > > On Mon, 3 May 2021 at 17:13, Premkumar Subramaniyan < > premkumar at aarnanetworks.com> wrote: > >> Hi Ruslanas >> I am running in barmetal. >> root at aio1:~# lxc-ls -1 >> aio1_cinder_api_container-845d8e39 >> aio1_galera_container-efc46f93 >> aio1_glance_container-611c15ef >> aio1_heat_api_container-da2feba5 >> aio1_horizon_container-1d6b0098 >> aio1_keystone_container-d2986dca >> aio1_memcached_container-ff56f467 >> aio1_neutron_server_container-261222e4 >> aio1_nova_api_container-670ab083 >> aio1_placement_container-32a0e966 >> aio1_rabbit_mq_container-fdacf98f >> aio1_repo_container-8dc59ab6 >> aio1_utility_container-924a5576 >> >> Relaunch means I need to run this one openstack-ansible >> setup-openstack.yml. >> If yes means, If I run this one my whole openstack itself going to >> crash. >> I need some document where I can check all the service status and restart >> the service. >> The only problem is the heat stack is down . >> >> >> Warm Regards, >> Premkumar Subramaniyan >> Technical staff >> M: +91 9940743669 >> >> *CRN Top 10 Coolest Edge Computing Startups of 2020 >> * >> >> >> On Mon, May 3, 2021 at 7:24 PM Ruslanas Gžibovskis >> wrote: >> >>> Yeah Alex, with TripleO it is a limitation, but for ansible deployment, >>> there is no limit :) >>> >>> Premkumar, are you running containerized deployment or baremetal? 
>>> >>> If you are running containerized, then you need to check docker ps -a or >>> podman ps -a and see what containers failed to start using: grep -v Exited\ >>> \(0 >>> else you can try relaunch ansible deployment again, it should bring up >>> missing services. >>> >>> >>> On Mon, 3 May 2021 at 16:40, Premkumar Subramaniyan < >>> premkumar at aarnanetworks.com> wrote: >>> >>>> Hi Alex, >>>> My Current version is Ussuri. Having the issues in both centos7 >>>> and ubuntu 18.04. After restarting the machine. >>>> >>>> This is document i followed to bring the openstack AIO >>>> >>>> https://docs.openstack.org/openstack-ansible/ussuri/user/aio/quickstart.html >>>> >>>> >>>> Warm Regards, >>>> Premkumar Subramaniyan >>>> Technical staff >>>> M: +91 9940743669 >>>> >>>> *CRN Top 10 Coolest Edge Computing Startups of 2020 >>>> * >>>> >>>> >>>> On Mon, May 3, 2021 at 6:55 PM Alex Schultz >>>> wrote: >>>> >>>>> >>>>> >>>>> On Mon, May 3, 2021 at 1:53 AM Premkumar Subramaniyan < >>>>> premkumar at aarnanetworks.com> wrote: >>>>> >>>>>> Hi Zane, >>>>>> >>>>>> How can I bring up the heat service. >>>>>> >>>>>> root at aio1:/etc/systemd/system# service heat-api status >>>>>> Unit heat-api.service could not be found. >>>>>> root at aio1:/etc/systemd/system# service heat-api restart >>>>>> Failed to restart heat-api.service: Unit heat-api.service not found. >>>>>> root at aio1:/etc/systemd/system# service heat-api-cfn status >>>>>> Unit heat-api-cfn.service could not be found. >>>>>> root at aio1:/etc/systemd/system# service heat-api-cloudwatch status >>>>>> Unit heat-api-cloudwatch.service could not be found. >>>>>> root at aio1:/etc/systemd/system# service heat-engine status >>>>>> Unit heat-engine.service could not be found. >>>>>> >>>>>> >>>>> How did you install openstack? I believe Train was the last version >>>>> with centos7 support on RDO. >>>>> >>>>> >>>>>> Warm Regards, >>>>>> Premkumar Subramaniyan >>>>>> Technical staff >>>>>> M: +91 9940743669 >>>>>> >>>>>> *CRN Top 10 Coolest Edge Computing Startups of 2020 >>>>>> * >>>>>> >>>>>> >>>>>> On Fri, Apr 30, 2021 at 10:54 PM Zane Bitter >>>>>> wrote: >>>>>> >>>>>>> On 30/04/21 1:06 am, Premkumar Subramaniyan wrote: >>>>>>> > Hi, >>>>>>> > >>>>>>> > I am using the Openstack *USURI *version in *Centos7*. Due to >>>>>>> some >>>>>>> > issues my disk size is full,I freed up the space. Afte that some >>>>>>> service >>>>>>> > went down. After that I have issues in creating the stack and list >>>>>>> > stack. >>>>>>> >>>>>>> It looks like heat-api at least is still down. >>>>>>> >>>>>>> >>> >>> -- >>> Ruslanas Gžibovskis >>> +370 6030 7030 >>> >> > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon May 3 16:26:41 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 3 May 2021 18:26:41 +0200 Subject: [oslo] Stepping down as Oslo core In-Reply-To: <71a631b2-6abe-77bf-b477-6de2dde2ba37@nemebean.com> References: <71a631b2-6abe-77bf-b477-6de2dde2ba37@nemebean.com> Message-ID: Thank you Ken for all your help and your great contributions on Oslo! Le lun. 3 mai 2021 à 18:06, Ben Nemec a écrit : > Thanks for all your work over the years! Glad to hear you'll still be > around so I can keep referring all messaging questions to you. ;-) > > On 5/3/21 10:37 AM, Ken Giusti wrote: > > Hi all, > > > > The time has come to face the fact that I'm unable to give my role as an > > Oslo core contributor the attention and effort it deserves. 
> > > > I'd say that it has been great to work will all of you - and it most > > certainly has - but that would be a bit premature because this isn't > > "good bye". I'll still be kicking around the oslo.messaging project, > > helping out as time permits. > > > > Cheers > > > > -- > > Ken Giusti (kgiusti at gmail.com ) > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From feilong at catalyst.net.nz Mon May 3 17:21:41 2021 From: feilong at catalyst.net.nz (feilong) Date: Tue, 4 May 2021 05:21:41 +1200 Subject: [wallaby][magnum] Cluster Deployment Unhealthy In-Reply-To: References: Message-ID: Hi Ammad, What's the error of your kubelet? If the node is in not ready, then you should be able to see some errors from the kubelet log. On 3/05/21 8:55 pm, Ammad Syed wrote: > Hi, > > I have upgraded my magnum environment from victoria to wallaby. The > upgrade went successfully. When I am trying to deploy a cluster from > template, the status of cluster shows UNHEALTHY but create complete.  > > I have logged into the master nodes and found no error message in heat > logs. The nodes status still sees NotReady. > > [root at k8s-cluster-iomfrpuadezp-master-0 kubernetes]# kubectl get nodes > --all-namespaces > NAME                                STATUS     ROLES    AGE     VERSION > k8s-cluster-iomfrpuadezp-master-0   NotReady   master   14m     v1.18.16 > k8s-cluster-iomfrpuadezp-node-0     NotReady     9m51s   v1.18.16 > > Also there is no pods running in kube-system namespace.  > > [root at k8s-cluster-iomfrpuadezp-master-0 kubernetes]# kubectl get pods > --all-namespaces > No resources found > > I have checked the logs, the flannel was deployed. 
> > + printf 'Starting to run calico-service\n' > + set -e > + set +x > + '[' flannel = calico ']' > + printf 'Finished running calico-service\n' > + set -e > + set +x > Finished running calico-service > + '[' flannel = flannel ']' > + _prefix=quay.io/coreos/ > + FLANNEL_DEPLOY=/srv/magnum/kubernetes/manifests/flannel-deploy.yaml > + '[' -f /srv/magnum/kubernetes/manifests/flannel-deploy.yaml ']' > + echo 'Writing File: > /srv/magnum/kubernetes/manifests/flannel-deploy.yaml' > Writing File: /srv/magnum/kubernetes/manifests/flannel-deploy.yaml > ++ dirname /srv/magnum/kubernetes/manifests/flannel-deploy.yaml > + mkdir -p /srv/magnum/kubernetes/manifests > + set +x > + '[' '' = 0 ']' > + /usr/bin/kubectl apply -f > /srv/magnum/kubernetes/manifests/flannel-deploy.yaml > --namespace=kube-system > podsecuritypolicy.policy/psp.flannel.unprivileged created > clusterrole.rbac.authorization.k8s.io/flannel > created > clusterrolebinding.rbac.authorization.k8s.io/flannel > created > serviceaccount/flannel created > configmap/kube-flannel-cfg created > daemonset.apps/kube-flannel-ds created > > I tried to deploy the flannel again, but it showing unchanged. > > [root at k8s-cluster-iomfrpuadezp-master-0 heat-config-script]# kubectl > apply -f /srv/magnum/kubernetes/manifests/flannel-deploy.yaml > --namespace=kube-system > podsecuritypolicy.policy/psp.flannel.unprivileged configured > clusterrole.rbac.authorization.k8s.io/flannel > unchanged > clusterrolebinding.rbac.authorization.k8s.io/flannel > unchanged > serviceaccount/flannel unchanged > configmap/kube-flannel-cfg unchanged > daemonset.apps/kube-flannel-ds unchanged > > The other thing I have noticed that cluster deployment still uses old > parameters of victoria like heat_agent_tag and others. Its not using > latest default tags of wallaby release.  > > I am using magnum on ubuntu 20.04. The other components in stack are > already upgraded to wallaby release.  > > -- > Regards, > Ammad Ali -- Cheers & Best regards, Feilong Wang (王飞龙) ------------------------------------------------------ Senior Cloud Software Engineer Tel: +64-48032246 Email: flwang at catalyst.net.nz Catalyst IT Limited Level 6, Catalyst House, 150 Willis Street, Wellington ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon May 3 17:23:01 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 3 May 2021 19:23:01 +0200 Subject: [release] Meeting Time Poll In-Reply-To: References: Message-ID: So, almost everybody voted for your new meeting time and we have ex aequo results in terms of number of votes. The later our meeting will be, the better we would be able to handle the last minute emergencies, so, I prefer to select Friday 14:00 UTC rather than Wednesday 16:00 UTC. Wednesday is a bit early from our weekly process point of view, so the Friday date looks better and fits well with our needs. I'll update our pad and calendar to refer to this new meeting time. Thank you everyone for your votes. [1] https://doodle.com/poll/2kcdh83r3hmwmxie Le lun. 3 mai 2021 à 09:56, Herve Beraud a écrit : > Just a friendly reminder to allow everyone to vote, the poll will be > closed tonight. > > Le lun. 26 avr. 2021 à 11:16, Herve Beraud a écrit : > >> Hello everyone, >> >> As Thierry proposed during our PTG here is our new poll [1] about our >> meeting time. 
>> >> Indeed, we have a few regular attendees of the Release Management meeting >> who have conflicts >> with the previously chosen meeting time. As a result, we would like to >> find a new time to hold the meeting. I've created a Doodle poll [1] for >> everyone to give their input on times. It's mostly limited to times that >> reasonably overlap the working day in the US and Europe since that's where >> most of >> our attendees are located. >> >> If you attend the Release Management meeting, please fill out the poll >> so we can hopefully find a time that works better for everyone. >> >> For the sake of organization and to allow everyone to schedule his agenda >> accordingly, the poll will be closed on May 3rd. On that date, I will >> announce the time of this meeting and the date on which it will take effect >> . >> >> Notice that potentially that will force us to move our meeting on another >> day than Thursdays. >> >> I'll soon initiate our meeting tracking etherpad for Xena, and since we >> are at the beginning of a new series so we don't have a lot of topics to >> discuss, so I think that it could be worth waiting until next week to >> initiate our first meeting. Let me know if you are ok with that. That will >> allow us to plan it accordingly to the chosen meeting time. >> >> Thanks! >> >> [1] https://doodle.com/poll/2kcdh83r3hmwmxie >> >> Le mer. 7 avr. 2021 à 12:14, Herve Beraud a écrit : >> >>> Greetings, >>> >>> The poll is now terminated, everybody voted and we reached a consensus, >>> our new meeting time is at 2pm UTC on Thursdays. >>> >>> https://doodle.com/poll/ip6tg4fvznz7p3qx >>> >>> It will take effect from our next meeting, i.e tomorrow. >>> >>> I'm going to update our agenda accordingly. >>> >>> Thanks to everyone for your vote. >>> >>> Le mer. 31 mars 2021 à 17:55, Herve Beraud a >>> écrit : >>> >>>> Hello deliveryers, >>>> >>>> Don't forget to vote for our new meeting time. >>>> >>>> Thank you >>>> >>>> Le ven. 26 mars 2021 à 13:43, Herve Beraud a >>>> écrit : >>>> >>>>> Hello >>>>> >>>>> We have a few regular attendees of the Release Management meeting who >>>>> have conflicts >>>>> with the current meeting time. As a result, we would like to find a >>>>> new time to hold the meeting. I've created a Doodle poll[1] for >>>>> everyone to give their input on times. It's mostly limited to times that >>>>> reasonably overlap the working day in the US and Europe since that's where >>>>> most of >>>>> our attendees are located. >>>>> >>>>> If you attend the Release Management meeting, please fill out the >>>>> poll so we can hopefully find a time that works better for everyone. >>>>> >>>>> For the sake of organization and to allow everyone to schedule his >>>>> agenda accordingly, the poll will be closed on April 5th. On that >>>>> date, I will announce the time of this meeting and the date on which it >>>>> will take effect. >>>>> >>>>> Thanks! 
>>>>> >>>>> [1] https://doodle.com/poll/ip6tg4fvznz7p3qx
-- 
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From bharat at stackhpc.com Mon May 3 18:13:04 2021
From: bharat at stackhpc.com (Bharat Kunwar)
Date: Mon, 3 May 2021 19:13:04 +0100
Subject: [wallaby][magnum] Cluster Deployment Unhealthy
In-Reply-To: 
References: 
Message-ID: <53CEE388-90D9-4309-842F-FAE5C4B08651@stackhpc.com>
Can you try the calico plugin? The flannel plug-in has been unmaintained for a while.
Sent from my iPhone
> On 3 May 2021, at 18:25, feilong wrote:
> Hi Ammad,
> What's the error of your kubelet? If the node is in not ready, then you should be able to see some errors from the kubelet log.
>> On 3/05/21 8:55 pm, Ammad Syed wrote:
>> Hi,
>> I have upgraded my magnum environment from victoria to wallaby. The upgrade went successfully. When I am trying to deploy a cluster from template, the status of cluster shows UNHEALTHY but create complete.
>> >> I have logged into the master nodes and found no error message in heat logs. The nodes status still sees NotReady. >> >> [root at k8s-cluster-iomfrpuadezp-master-0 kubernetes]# kubectl get nodes --all-namespaces >> NAME STATUS ROLES AGE VERSION >> k8s-cluster-iomfrpuadezp-master-0 NotReady master 14m v1.18.16 >> k8s-cluster-iomfrpuadezp-node-0 NotReady 9m51s v1.18.16 >> >> Also there is no pods running in kube-system namespace. >> >> [root at k8s-cluster-iomfrpuadezp-master-0 kubernetes]# kubectl get pods --all-namespaces >> No resources found >> >> I have checked the logs, the flannel was deployed. >> >> + printf 'Starting to run calico-service\n' >> + set -e >> + set +x >> + '[' flannel = calico ']' >> + printf 'Finished running calico-service\n' >> + set -e >> + set +x >> Finished running calico-service >> + '[' flannel = flannel ']' >> + _prefix=quay.io/coreos/ >> + FLANNEL_DEPLOY=/srv/magnum/kubernetes/manifests/flannel-deploy.yaml >> + '[' -f /srv/magnum/kubernetes/manifests/flannel-deploy.yaml ']' >> + echo 'Writing File: /srv/magnum/kubernetes/manifests/flannel-deploy.yaml' >> Writing File: /srv/magnum/kubernetes/manifests/flannel-deploy.yaml >> ++ dirname /srv/magnum/kubernetes/manifests/flannel-deploy.yaml >> + mkdir -p /srv/magnum/kubernetes/manifests >> + set +x >> + '[' '' = 0 ']' >> + /usr/bin/kubectl apply -f /srv/magnum/kubernetes/manifests/flannel-deploy.yaml --namespace=kube-system >> podsecuritypolicy.policy/psp.flannel.unprivileged created >> clusterrole.rbac.authorization.k8s.io/flannel created >> clusterrolebinding.rbac.authorization.k8s.io/flannel created >> serviceaccount/flannel created >> configmap/kube-flannel-cfg created >> daemonset.apps/kube-flannel-ds created >> >> I tried to deploy the flannel again, but it showing unchanged. >> >> [root at k8s-cluster-iomfrpuadezp-master-0 heat-config-script]# kubectl apply -f /srv/magnum/kubernetes/manifests/flannel-deploy.yaml --namespace=kube-system >> podsecuritypolicy.policy/psp.flannel.unprivileged configured >> clusterrole.rbac.authorization.k8s.io/flannel unchanged >> clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged >> serviceaccount/flannel unchanged >> configmap/kube-flannel-cfg unchanged >> daemonset.apps/kube-flannel-ds unchanged >> >> The other thing I have noticed that cluster deployment still uses old parameters of victoria like heat_agent_tag and others. Its not using latest default tags of wallaby release. >> >> I am using magnum on ubuntu 20.04. The other components in stack are already upgraded to wallaby release. >> >> -- >> Regards, >> Ammad Ali > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Mon May 3 18:29:37 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Mon, 3 May 2021 23:29:37 +0500 Subject: [wallaby][magnum] Cluster Deployment Unhealthy In-Reply-To: <53CEE388-90D9-4309-842F-FAE5C4B08651@stackhpc.com> References: <53CEE388-90D9-4309-842F-FAE5C4B08651@stackhpc.com> Message-ID: Hi Bharat / Feilong, The problem look like in ubuntu wallaby repo's magnum DEB package. The package does not have all the changes that a wallaby release should contain. 
Like heat_container_agent_tag that should be wallaby-stable-1. In that package it was still pointing to victoria-dev. I have cloned stable/wallaby from https://opendev.org/openstack/magnum.git and replaced magnum directory in /lib/python3/dist-packages with downloaded files in stable/magnum branch. Now everything is working a expected. [root at k8s-cluster-2zcsd5n6qnre-master-0 ~]# kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-cluster-2zcsd5n6qnre-master-0 Ready master 5m43s v1.19.10 k8s-cluster-2zcsd5n6qnre-node-0 Ready 39s v1.19.10 [root at k8s-cluster-2zcsd5n6qnre-master-0 ~]# kubectl get pods -n kube-system NAME READY STATUS RESTARTS AGE coredns-57999d5467-2l46z 1/1 Running 0 5m43s coredns-57999d5467-rn8d9 1/1 Running 0 5m43s csi-cinder-controllerplugin-0 5/5 Running 0 5m36s csi-cinder-nodeplugin-nmjnb 2/2 Running 0 30s dashboard-metrics-scraper-7b59f7d4df-mzmb6 1/1 Running 0 5m41s k8s-keystone-auth-knhk7 1/1 Running 0 5m39s kube-dns-autoscaler-f57cd985f-7dqj5 1/1 Running 0 5m43s kube-flannel-ds-8d7pc 1/1 Running 0 5m42s kube-flannel-ds-pcncq 1/1 Running 0 60s kubernetes-dashboard-7fb447bf79-x6kvj 1/1 Running 0 5m41s npd-hcw25 1/1 Running 0 30s openstack-cloud-controller-manager-x759f 1/1 Running 0 5m45s - Ammad On Mon, May 3, 2021 at 11:13 PM Bharat Kunwar wrote: > Can you try the calico plugin? The flannel plug-in has been unmaintained > for a while. > > Sent from my iPhone > > On 3 May 2021, at 18:25, feilong wrote: > >  > > Hi Ammad, > > What's the error of your kubelet? If the node is in not ready, then you > should be able to see some errors from the kubelet log. > > > On 3/05/21 8:55 pm, Ammad Syed wrote: > > Hi, > > I have upgraded my magnum environment from victoria to wallaby. The > upgrade went successfully. When I am trying to deploy a cluster from > template, the status of cluster shows UNHEALTHY but create complete. > > I have logged into the master nodes and found no error message in heat > logs. The nodes status still sees NotReady. > > [root at k8s-cluster-iomfrpuadezp-master-0 kubernetes]# kubectl get nodes > --all-namespaces > NAME STATUS ROLES AGE VERSION > k8s-cluster-iomfrpuadezp-master-0 NotReady master 14m v1.18.16 > k8s-cluster-iomfrpuadezp-node-0 NotReady 9m51s v1.18.16 > > Also there is no pods running in kube-system namespace. > > [root at k8s-cluster-iomfrpuadezp-master-0 kubernetes]# kubectl get pods > --all-namespaces > No resources found > > I have checked the logs, the flannel was deployed. 
> > + printf 'Starting to run calico-service\n' > + set -e > + set +x > + '[' flannel = calico ']' > + printf 'Finished running calico-service\n' > + set -e > + set +x > Finished running calico-service > + '[' flannel = flannel ']' > + _prefix=quay.io/coreos/ > + FLANNEL_DEPLOY=/srv/magnum/kubernetes/manifests/flannel-deploy.yaml > + '[' -f /srv/magnum/kubernetes/manifests/flannel-deploy.yaml ']' > + echo 'Writing File: /srv/magnum/kubernetes/manifests/flannel-deploy.yaml' > Writing File: /srv/magnum/kubernetes/manifests/flannel-deploy.yaml > ++ dirname /srv/magnum/kubernetes/manifests/flannel-deploy.yaml > + mkdir -p /srv/magnum/kubernetes/manifests > + set +x > + '[' '' = 0 ']' > + /usr/bin/kubectl apply -f > /srv/magnum/kubernetes/manifests/flannel-deploy.yaml --namespace=kube-system > podsecuritypolicy.policy/psp.flannel.unprivileged created > clusterrole.rbac.authorization.k8s.io/flannel created > clusterrolebinding.rbac.authorization.k8s.io/flannel created > serviceaccount/flannel created > configmap/kube-flannel-cfg created > daemonset.apps/kube-flannel-ds created > > I tried to deploy the flannel again, but it showing unchanged. > > [root at k8s-cluster-iomfrpuadezp-master-0 heat-config-script]# kubectl > apply -f /srv/magnum/kubernetes/manifests/flannel-deploy.yaml > --namespace=kube-system > podsecuritypolicy.policy/psp.flannel.unprivileged configured > clusterrole.rbac.authorization.k8s.io/flannel unchanged > clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged > serviceaccount/flannel unchanged > configmap/kube-flannel-cfg unchanged > daemonset.apps/kube-flannel-ds unchanged > > The other thing I have noticed that cluster deployment still uses old > parameters of victoria like heat_agent_tag and others. Its not using latest > default tags of wallaby release. > > I am using magnum on ubuntu 20.04. The other components in stack are > already upgraded to wallaby release. > > -- > Regards, > Ammad Ali > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.matulis at canonical.com Mon May 3 19:18:36 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Mon, 3 May 2021 15:18:36 -0400 Subject: [charms] OpenStack Charms 21.04 release is now available Message-ID: The 21.04 release of the OpenStack Charms is now available. This release brings several new features to the existing OpenStack Charms deployments for Queens, Stein, Ussuri, Victoria, Wallaby and many stable combinations of Ubuntu + OpenStack. Please see the Release Notes for full details: https://docs.openstack.org/charm-guide/latest/2104.html == Highlights == * OpenStack Wallaby OpenStack Wallaby is now supported on Ubuntu 20.04 LTS (via UCA) and Ubuntu 21.04 natively. * Ceph Pacific The Pacific release of Ceph is now supported, starting with OpenStack Wallaby. * Denying service restarts and charm hook calls The deferred service events feature can be enabled on a per-charm basis to avoid sudden service interruptions caused by maintenance and operational procedures applied to the cloud. This allows operators to apply the service restarts and hook calls at their prerogative. 
* Cloud operational improvements Improvements have been implemented at the operational level through the addition of scaling down actions for the hacluster and nova-compute charms. It is also now possible to query a nova-compute unit's hypervisor node name, list all hypervisor node names, and mark individual Ceph OSDs as 'out' or 'in'. * Sync Juju availability zones with Nova availability zones The nova-cloud-controller charm has a new action that synchronises Juju availability zones with Nova availability zones. This effectively configures host aggregates in Nova using the AZs already known to the nova-compute application. * Tech-preview charms Two tech-preview charms are now available for the deployment of OpenStack Magnum. The list of Manila charms is also expanded by two tech-preview charms. The new charms are magnum, magnum-dashboard, manila-dashboard, and manila-netapp. * Documentation updates Ongoing improvements to the OpenStack Charms Deployment Guide, the OpenStack Charm Guide, and the charm READMEs. A central documentation presence is also now available: https://ubuntu.com/openstack/docs . == OpenStack Charms team == The OpenStack Charms team can be contacted on the #openstack-charms IRC channel on Freenode. == Thank you == Lots of thanks to the below 37 contributors who squashed 54 bugs, enabled new features, and improved the documentation! Alex Kavanagh Alvaro Uria Arif Ali Aurelien Lourot Bartosz Woronicz Bayani Carbone Billy Olsen Chris MacNaughton Corey Bryant Cornellius Metto Cory Johns Dariusz Smigiel David Ames Dmitrii Shcherbakov Edward Hope-Morley Erlon R. Felipe Reyes Frode Nordahl Garrett Thompson Hemanth Nakkina Hernan Garcia Ionut Balutoiu James Page James Vaughn Liam Young Marius Oprin Martin Kalcok Mauricio Faria Nicolas Bock Nobuto Murata Pedro Guimaraes Peter Matulis Przemysław Lal Robert Gildein Rodrigo Barbieri Seyeong Kim Yanos Angelopoulos -- OpenStack Charms Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Mon May 3 20:47:59 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 3 May 2021 17:47:59 -0300 Subject: [CloudKitty] April 2021 - PTG Summary Message-ID: Hello, CloudKitty community, Below is my summary of the CloudKitty session during the PTG meeting in April. - CloudKitty community -- we started discussing the CloudKitty community and how to get more people engaged with the project. If we were having physical summits, it would be easier to onboard people. However, that is not our case right now. Therefore, we agreed that if we improve the user stories of CloudKitty showcasing different use cases might help increase people's awareness. These use cases could be added to the following page: https://docs.openstack.org/cloudkitty/latest/#what-can-be-done-with-cloudkitty-what-can-t. Furthermore, we also agreed to monitor Storyboard more: https://storyboard.openstack.org/#!/project_group/cloudkitty, to engage with people reporting bugs. - Review of past releases -- after discussing the CloudKitty community, we moved on to revisit the past release (Wallaby). We agreed that it might be interesting to add more effort to merge things at the beginning of the release cycle; then, we work from the middle to the end to stabilize things. This would allow/enable us to add more new features and bug fixes into the release. 
- CloudKitty Next release main focus -- we discussed that besides the already ongoing efforts to fix bugs, and create new features, we will try to work towards the following new features: - Reprocessing API - Reprocessing with data still on Backend - Reprocessing without data on the backend (for cases when the data has already been deleted) will not be addressed now. - V2 CI testing: https://review.opendev.org/c/openstack/cloudkitty-tempest-plugin/+/685344 - Date to start a new price for a metric Those are the main topics we covered during the PTG. If I forgot something, please let me know. Now, we just gotta work to keep evolving this awesome billing stack :) Link for the PTG Etherpad: https://etherpad.opendev.org/p/apr2021-ptg-cloudkitty -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Mon May 3 21:07:51 2021 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 3 May 2021 17:07:51 -0400 Subject: Rocky Linux for Openstack In-Reply-To: References: Message-ID: Rocky Linux claims to be 100% compatible with Red Hat's OS family [1], so I don't see any reason why you couldn't use RPMs from RDO: https://docs.openstack.org/install-guide/environment-packages-rdo.html#enable-the-openstack-repository [1] Source: https://rockylinux.org On Mon, May 3, 2021 at 12:11 PM Wada Akor wrote: > Good day , > Please I want to know when will openstack provide information on how to > installation of openstack on Rocky Linux be available. > > Thanks & Regards > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Mon May 3 21:08:20 2021 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 3 May 2021 17:08:20 -0400 Subject: [puppet][glare] Propose retiring puppet-glare In-Reply-To: References: Message-ID: +1 - I don't think anyone is using it anyway. On Mon, May 3, 2021 at 11:50 AM Takashi Kajinami wrote: > Adding [glare] tag just in case any folks from the project are still around > and have any feedback about this. > > On Sun, Apr 25, 2021 at 10:55 PM Takashi Kajinami > wrote: > >> Hello, >> >> >> I'd like to propose retiring puppet-galre project, because the Glare[1] >> project >> looks inactive for a while based on the following three points >> - No actual development is made for 2 years >> - No release was made since the last Rocky release >> - setup.cfg is not maintained and the python versions listed are very >> outdated >> >> [1] https://opendev.org/x/glare >> >> I'll wait for 1-2 weeks to hear opinions from others. >> If anybody is interested in keeping the puppet-glare project or has >> intention to >> maintain Glare itself then please let me know. >> >> Thank you, >> Takashi >> > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Mon May 3 21:09:38 2021 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 3 May 2021 17:09:38 -0400 Subject: =?UTF-8?Q?Re=3A_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tripleo=2D?= =?UTF-8?Q?core?= In-Reply-To: References: Message-ID: Not a core anymore but I worked with Cédric long enough to figure that he'll be a great core member, so +1. On Mon, May 3, 2021 at 8:56 AM Carter, Kevin wrote: > Absolutely +1 > > On Thu, Apr 29, 2021 at 11:09 James Slagle wrote: > >> I'm proposing we formally promote Cédric to full tripleo-core duties. He >> is already in the gerrit group with the understanding that his +2 is for >> validations. 
His experience and contributions have grown a lot since then, >> and I'd like to see that +2 expanded to all of TripleO. >> >> If there are no objections, we'll consider the change official at the end >> of next week. >> >> >> -- >> -- James Slagle >> -- >> > -- > Kevin Carter > IRC: Cloudnull > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Mon May 3 21:44:35 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Mon, 3 May 2021 21:44:35 +0000 Subject: [TC][Interop] process change proposal for interoperability Message-ID: Team, Based on guidance from the Board of Directors, Interop WG is changing its process. Please, review https://review.opendev.org/c/osf/interop/+/787646 Guidelines will no longer need to be approved by the board. It is approved by "committee" consisting of representatives from Interop WG, refstack, TC and Foundation marketplace administrator. Will be happy to discuss and provide more details. Looking for TC review and approval of the proposal before I present it to the board. Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Mon May 3 21:50:47 2021 From: helena at openstack.org (helena at openstack.org) Date: Mon, 3 May 2021 17:50:47 -0400 (EDT) Subject: [ptg] Summaries blogpost Message-ID: <1620078647.028332740@apps.rackspace.com> Hello everyone! Congrats to everyone on an awesome PTG week and thank you to everyone who submitted project recaps via the mailing list. I have collected all the posts and linked them to this blog post [1]. If you would still like to add a project summary feel free to reach out and I can get that added. Cheers, Helena [1] [ https://www.openstack.org/blog/xena-vptg-summaries/ ]( https://www.openstack.org/blog/xena-vptg-summaries/ ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Mon May 3 22:15:17 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 3 May 2021 17:15:17 -0500 Subject: [openstack-helm] Meeting Cancelled Message-ID: Hey team, Since there are no agenda items [0] for the IRC meeting today May 4th, the meeting is cancelled. Our next meeting will be May 11th. Thanks [0] https://etherpad.opendev.org/p/openstack-helm-weekly-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon May 3 23:59:07 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 03 May 2021 18:59:07 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 30th April, 21: Reading: 5 min Message-ID: <17934abd729.c6a882a3966573.7534248887661164759@ghanshyammann.com> Hello Everyone, Here is last week's summary of the Technical Committee activities, Sorry for missing to send this on Friday 30th April. 1. What we completed this week: ========================= Project updates: ------------------- ** None Other updates: ------------------ ** Updated TC onboarding guide for correcting the meeting times etc[1]. 2. TC Meetings: ============ * TC held this week meeting on Thursday; you can find the full meeting logs in the below link: - http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-04-29-15.00.log.html * We will have next week's meeting on May 6th, Thursday 15:00 UTC[2]. 3. 
Activities In progress: ================== TC Tracker for Xena cycle ------------------------------ TC is using the below etherpad for Xena cycle working item. We will be checking and updating the status biweekly in same etherpad. - https://etherpad.opendev.org/p/tc-xena-tracker Open Reviews ----------------- * Four open reviews for ongoing activities[3]. Starting the 'Y' release naming process --------------------------------------------- * Governance patch is up to finalize the dates for Y release naming process[4] * https://wiki.openstack.org/wiki/Release_Naming/Y_Proposals Others --------- * Swift and Ironic applying for "assert:supports-standalone" tag * Reduce office hours to one per week[5] * Update Nodejs Runtime to Nodejs14 From Nodejs10 for Xena Cycle[6] PTG ----- TC had discussions on various topics in virtual PTG. I have summarized the PTG discussion on ML[7]. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[8]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [9] 3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [10] 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://governance.openstack.org/tc/reference/tc-guide.html [2] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [3] https://review.opendev.org/q/project:openstack/governance+status:open [4] https://review.opendev.org/c/openstack/governance/+/789385 [5] https://review.opendev.org/c/openstack/governance/+/788618 [6] https://review.opendev.org/c/openstack/governance/+/788306 [7] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/022120.html [8] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [9] http://eavesdrop.openstack.org/#Technical_Committee_Meeting [10] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours -gmann From tpb at dyncloud.net Tue May 4 00:11:11 2021 From: tpb at dyncloud.net (Tom Barron) Date: Mon, 3 May 2021 20:11:11 -0400 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> Message-ID: <20210504001111.52o2fgjeyizhiwts@barron.net> On 03/05/21 08:50 -0500, Ghanshyam Mann wrote: > ---- On Sun, 02 May 2021 05:09:17 -0500 Radosław Piliszek wrote ---- > > Dears, > > > > I have scraped the Zuul API to get names of jobs that *could* run on > > master branch and are still on bionic. [1] > > "Could" because I could not establish from the API whether they are > > included in any pipelines or not really (e.g., there are lots of > > transitive jobs there that have their nodeset overridden in children > > and children are likely used in pipelines, not them). > > > > [1] https://paste.ubuntu.com/p/N3JQ4dsfqR/ The manila-image-elements and manila-test-image jobs listed here are not pinned and are running with bionic but I made reviews with them pinned to focal [2] [3] and they run fine. So I think manila is OK w.r.t. dropping bionic support. 
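For anyone chasing similar unpinned jobs in other repos, the change in those reviews is essentially just an explicit nodeset on the affected job definitions, roughly like this (illustrative snippet with a made-up job name, not the literal diff):

- job:
    name: your-bionic-era-job    # placeholder name only
    nodeset: ubuntu-focal        # previously picked up ubuntu-bionic via the old default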
[2] https://review.opendev.org/c/openstack/manila-image-elements/+/789296 [3] https://review.opendev.org/c/openstack/manila-test-image/+/789409 > >Thanks for the list. We need to only worried about jobs using devstack master branch. Along with >non-devstack jobs. there are many stable testing jobs also on the master gate which is all good to >pin the bionic nodeset, for example - 'neutron-tempest-plugin-api-ussuri'. > >From the list, I see few more projects (other than listed in the subject of this email) jobs, so tagging them >now: sahara, networking-sfc, manila, magnum, kuryr. > >-gmann > > > > > -yoctozepto > > > > On Fri, Apr 30, 2021 at 12:28 AM Ghanshyam Mann wrote: > > > > > > Hello Everyone, > > > > > > As per the testing runtime since Victoria [1], we need to move our CI/CD to Ubuntu Focal 20.04 but > > > it seems there are few jobs still running on Bionic. As devstack team is planning to drop the Bionic support > > > you need to move those to Focal otherwise they will start failing. We are planning to merge the devstack patch > > > by 2nd week of May. > > > > > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > > > > > I have not listed all the job but few of them which were failing with ' rtslib-fb-targetctl error' are below: > > > > > > Cinder- cinder-plugin-ceph-tempest-mn-aa > > > - https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 > > > > > > python-cinderclient - python-cinderclient-functional-py36 > > > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > > > > > Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182 > > > > > > Murani- murano-dashboard-sanity-check > > > -https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 > > > > > > Also if your 3rd party CI is still running on Bionic, you can plan to migrate it to Focal before devstack patch merge. > > > > > > [1] https://governance.openstack.org/tc/reference/runtimes/victoria.html > > > > > > -gmann > > > > > > > > From skaplons at redhat.com Tue May 4 06:05:57 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 04 May 2021 08:05:57 +0200 Subject: [neutron] Team meeting 4.05.2021 cancelled Message-ID: <19672546.RN8jRIFG5x@p1> Hi, Sorry for the late notice but I'm on PTO today and I will not be able to chair our team meeting so let's cancel it this week. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Tue May 4 06:06:20 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 04 May 2021 08:06:20 +0200 Subject: [neutron] CI meeting 4.05.2021 cancelled Message-ID: <5838604.LaasPRylO0@p1> Hi, Sorry for the late notice but I'm on PTO today and I will not be able to chair our team meeting so let's cancel it this week. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. 
URL: From skaplons at redhat.com Tue May 4 06:23:42 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 04 May 2021 08:23:42 +0200 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> Message-ID: <2950268.F7XZ1j6D0K@p1> Hi, Dnia poniedziałek, 3 maja 2021 15:50:09 CEST Ghanshyam Mann pisze: > ---- On Sun, 02 May 2021 05:09:17 -0500 Radosław Piliszek wrote ---- > > > Dears, > > > > I have scraped the Zuul API to get names of jobs that *could* run on > > master branch and are still on bionic. [1] > > "Could" because I could not establish from the API whether they are > > included in any pipelines or not really (e.g., there are lots of > > transitive jobs there that have their nodeset overridden in children > > and children are likely used in pipelines, not them). > > > > [1] https://paste.ubuntu.com/p/N3JQ4dsfqR/ > > Thanks for the list. We need to only worried about jobs using devstack master branch. Along with > non-devstack jobs. there are many stable testing jobs also on the master gate which is all good to > pin the bionic nodeset, for example - 'neutron-tempest-plugin-api-ussuri'. > > From the list, I see few more projects (other than listed in the subject of this email) jobs, so tagging them > now: sahara, networking-sfc, manila, magnum, kuryr. > > -gmann > > > -yoctozepto > > > > On Fri, Apr 30, 2021 at 12:28 AM Ghanshyam Mann wrote: > > > Hello Everyone, > > > > > > As per the testing runtime since Victoria [1], we need to move our CI/ CD to Ubuntu Focal 20.04 but > > > it seems there are few jobs still running on Bionic. As devstack team is planning to drop the Bionic support > > > you need to move those to Focal otherwise they will start failing. We are planning to merge the devstack patch > > > by 2nd week of May. > > > > > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > > > > > I have not listed all the job but few of them which were failing with ' rtslib-fb-targetctl error' are below: > > > > > > Cinder- cinder-plugin-ceph-tempest-mn-aa > > > - https://opendev.org/openstack/cinder/src/commit/ 7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 > > > > > > python-cinderclient - python-cinderclient-functional-py36 > > > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > > > > > Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/ branch/master/zuul.d/jobs.yaml#L182 > > > > > > Murani- murano-dashboard-sanity-check > > > -https://opendev.org/openstack/murano-dashboard/src/commit/ b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 > > > > > > Also if your 3rd party CI is still running on Bionic, you can plan to migrate it to Focal before devstack patch merge. > > > > > > [1] https://governance.openstack.org/tc/reference/runtimes/ victoria.html > > > > > > -gmann I checked neutron-* jobs on that list. All with "legacy" in the name are some old jobs which may be run on some stable branches only. Also neutron-tempest-plugin jobs on that list are for older stable branches and I think they should be still running on Bionic. In overall I think we are good on the Neutron with dropping support for Bionic in the master branch. 
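For anyone doing a similar audit of their own repos, a quick local check is usually enough to spot job definitions that still pin the old nodeset. A rough sketch, assuming the Zuul config lives in .zuul.yaml or zuul.d/ at the repository root:

    # job definitions that still reference the Bionic nodeset
    grep -rn "ubuntu-bionic" .zuul.yaml zuul.d/ 2>/dev/null

    # for comparison, the ones already moved to Focal
    grep -rn "ubuntu-focal" .zuul.yaml zuul.d/ 2>/dev/null

Jobs that only run on older stable branches can keep an explicit bionic nodeset override; anything targeting master that shows up in the first command is a candidate to move before the devstack patch merges.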
-- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From marios at redhat.com Tue May 4 06:47:07 2021 From: marios at redhat.com (Marios Andreou) Date: Tue, 4 May 2021 09:47:07 +0300 Subject: [ptg] Summaries blogpost In-Reply-To: <1620078647.028332740@apps.rackspace.com> References: <1620078647.028332740@apps.rackspace.com> Message-ID: On Tue, May 4, 2021 at 12:52 AM helena at openstack.org wrote: > > Hello everyone! hello o/ > > > > Congrats to everyone on an awesome PTG week and thank you to everyone who submitted project recaps via the mailing list. I have collected all the posts and linked them to this blog post [1]. If you would still like to add a project summary feel free to reach out and I can get that added. > > Thanks for taking the time to put that together it is a great resource to have for future reference regards, marios > > Cheers, > > Helena > > > > [1] https://www.openstack.org/blog/xena-vptg-summaries/ From jonathan.rosser at rd.bbc.co.uk Tue May 4 07:21:31 2021 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Tue, 4 May 2021 08:21:31 +0100 Subject: [openstack-ansible] Re: Openstack Stack issues In-Reply-To: References: <0a850443-ab52-7066-deaa-05a161a5f6cf@redhat.com> Message-ID: <5ef9fab1-a3e2-f2ea-81bc-16c915d69bb2@rd.bbc.co.uk> You would attach to the heat container as you've already done, then use: systemctl status heat-api journalctl -u heat-api to see the service status and log for heat-api. Remember that on an openstack-ansible deployment the lxc containers are machine containers, not application containers so the docker/podman patterns do not apply. If you join the IRC channel #openstack-ansible we can help out further. Regards, Jonathan. On 03/05/2021 17:24, Premkumar Subramaniyan wrote: > > podman or docker ps -a both are not installed in the lxc container > root at aio1:~# lxc-attach aio1_heat_api_container-da2feba5 > root at aio1-heat-api-container-da2feba5:~# podman ps -a > bash: podman: command not found > root at aio1-heat-api-container-da2feba5:~# docker ps > bash: docker: command not found > root at aio1-heat-api-container-da2feba5:~# > > > Warm Regards, > Premkumar Subramaniyan > Technical staff > M: +91 9940743669 > > _/CRN Top 10 Coolest Edge Computing Startups of 2020 > /_ > > > On Mon, May 3, 2021 at 8:43 PM Ruslanas Gžibovskis > wrote: > > ok, so relaunch it from lxd then :) > And try checking lxd container output also, still interesting what > is in your lxd containers. maybe you have podman/docker there... > connect to lxd and run podman or docker ps -a > > > > On Mon, 3 May 2021 at 17:13, Premkumar Subramaniyan > > > wrote: > > Hi Ruslanas > I am running in barmetal. > root at aio1:~# lxc-ls -1 > aio1_cinder_api_container-845d8e39 > aio1_galera_container-efc46f93 > aio1_glance_container-611c15ef > aio1_heat_api_container-da2feba5 > aio1_horizon_container-1d6b0098 > aio1_keystone_container-d2986dca > aio1_memcached_container-ff56f467 > aio1_neutron_server_container-261222e4 > aio1_nova_api_container-670ab083 > aio1_placement_container-32a0e966 > aio1_rabbit_mq_container-fdacf98f > aio1_repo_container-8dc59ab6 > aio1_utility_container-924a5576 > > Relaunch means I need to run this one openstack-ansible > setup-openstack.yml. > If yes  means, If I run this one my whole openstack > itself going to crash. 
> I need some document where I can check all the service status > and restart the service. > The only problem is the heat stack is down . > > > Warm Regards, > Premkumar Subramaniyan > Technical staff > M: +91 9940743669 > > _/CRN Top 10 Coolest Edge Computing Startups of 2020 > /_ > > > On Mon, May 3, 2021 at 7:24 PM Ruslanas Gžibovskis > > wrote: > > Yeah Alex, with TripleO it is a limitation, but for > ansible deployment, there is no limit :) > > Premkumar, are you running containerized deployment or > baremetal? > > If you are running containerized, then you need to check > docker ps -a or podman ps -a and see what containers > failed to start using: grep -v Exited\ \(0 > else you can try relaunch ansible deployment again, it > should bring up missing services. > > > On Mon, 3 May 2021 at 16:40, Premkumar Subramaniyan > > wrote: > > Hi Alex, >      My Current version is Ussuri. Having the issues > in both centos7 and ubuntu 18.04. After restarting the > machine. > > This is document i followed to bring the openstack AIO > https://docs.openstack.org/openstack-ansible/ussuri/user/aio/quickstart.html > > > > Warm Regards, > Premkumar Subramaniyan > Technical staff > M: +91 9940743669 > > _/CRN Top 10 Coolest Edge Computing Startups of 2020 > /_ > > > On Mon, May 3, 2021 at 6:55 PM Alex Schultz > > wrote: > > > > On Mon, May 3, 2021 at 1:53 AM Premkumar > Subramaniyan > wrote: > > Hi Zane, > > How can I bring up the heat service. > > root at aio1:/etc/systemd/system#  service > heat-api status > Unit heat-api.service could not be found. > root at aio1:/etc/systemd/system#  service > heat-api restart > Failed to restart heat-api.service: Unit > heat-api.service not found. > root at aio1:/etc/systemd/system# service > heat-api-cfn status > Unit heat-api-cfn.service could not be found. > root at aio1:/etc/systemd/system# service > heat-api-cloudwatch status > Unit heat-api-cloudwatch.service could not be > found. > root at aio1:/etc/systemd/system# service > heat-engine status > Unit heat-engine.service could not be found. > > > How did you install openstack?  I believe Train > was the last version with centos7 support on RDO. > > Warm Regards, > Premkumar Subramaniyan > Technical staff > M: +91 9940743669 > > _/CRN Top 10 Coolest Edge Computing Startups > of 2020 > /_ > > > On Fri, Apr 30, 2021 at 10:54 PM Zane Bitter > > wrote: > > On 30/04/21 1:06 am, Premkumar > Subramaniyan wrote: > > Hi, > > > >     I am using the Openstack *USURI > *version in *Centos7*. Due to some > > issues my disk size is full,I freed up > the space. Afte that some service > > went down. After that I have issues in > creating the stack and list > > stack. > > It looks like heat-api at least is still down. > > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue May 4 07:40:31 2021 From: eblock at nde.ag (Eugen Block) Date: Tue, 04 May 2021 07:40:31 +0000 Subject: [Ussuri][neutron] How to accomplish what allow_same_net_traffic did Message-ID: <20210504074031.Horde.rn6vNPcSL96Dag2o23f-3yX@webmail.nde.ag> Hi *, I was wondering how other operators deal with this. Our cloud started somewhere in Kilo or Liberty version and in older versions the option allow_same_net_traffic allowed to control whether instances in our shared network could connect to each other between different projects. 
That option worked for us but is now deprecated and the Pike release notes [1] state: > Given that there are other better documented and better tested ways > to approach this, such as through use of neutron’s native port > filtering or security groups, this functionality has been removed. > > Users should instead rely on one of these alternatives. Does that mean all security groups need to be changed in a way that this specific shared network is not reachable? That would be a lot of work if you have many projects. Is there any easier way? Regards, Eugen [1] https://docs.openstack.org/releasenotes/nova/pike.html From elfosardo at gmail.com Tue May 4 07:47:22 2021 From: elfosardo at gmail.com (Riccardo Pittau) Date: Tue, 4 May 2021 09:47:22 +0200 Subject: [release][ironic] ironic-python-agent-builder release model change In-Reply-To: References: Message-ID: Hey Hervé, Thank you for your reply. That's correct, ironic-python-agent and ironic-python-agent-builder were originally part of the same repository. We decided to split them at some point, and we'd like to keep them separated, but we realized that it would make sense to keep them synced by branch. At my knowledge, ipa-builder is also used by TripleO to generate the ipa ramdisks. Current projects can keep using the master branch as usual, it's unlikely this change will break anything. In any case, there should be enough time to adapt to the new branched model during the xena cycle, and I (and the ironic community) will be available to provide help if needed. There were no big changes on ipa-builder since the wallaby release, and of course we will be able to backport any bugfix, so I don't see issues in cutting the stable branch now. Thanks, Riccardo On Mon, May 3, 2021 at 10:19 AM Herve Beraud wrote: > Hello, > > At first glance that makes sense. > > If I correctly understand the story, the ironic-python-agent [1] and the > ironic-python-agent-builder [2] were within the same repo at the origin, > correct? > > Does someone else use the ironic-python-agent-builder? > > [1] > https://opendev.org/openstack/releases/src/branch/master/deliverables/xena/ironic-python-agent.yaml > [2] > https://opendev.org/openstack/releases/src/branch/master/deliverables/_independent/ironic-python-agent-builder.yaml > > Le ven. 30 avr. 2021 à 16:34, Iury Gregory a > écrit : > >> Hi Riccardo, >> >> Thanks for raising this! >> I do like the idea of having stable branches for the ipa-builder +1 >> >> Em seg., 26 de abr. de 2021 às 12:03, Riccardo Pittau < >> elfosardo at gmail.com> escreveu: >> >>> Hello fellow openstackers! >>> >>> During the recent xena ptg, the ironic community had a discussion about >>> the need to move the ironic-python-agent-builder project from an >>> independent model to the standard release model. >>> When we initially split the builder from ironic-python-agent, we decided >>> against it, but considering some problems we encountered during the road, >>> the ironic community seems to be in favor of the change. >>> The reasons for this are mainly to strictly align the image building >>> project to ironic-python-agent releases, and ease dealing with the >>> occasional upgrade of tinycore linux, the base image used to build the >>> "tinyipa" ironic-python-agent ramdisk. >>> >>> We'd like to involve the release team to ask for advice, not only on the >>> process, but also considering that we need to ask to cut the first branch >>> for the wallaby stable release, and we know we're a bit late for that! 
:) >>> >>> Thank you in advance for your help! >>> >>> Riccardo >>> >> >> >> -- >> >> >> *Att[]'sIury Gregory Melo Ferreira * >> *MSc in Computer Science at UFCG* >> *Part of the ironic-core and puppet-manager-core team in OpenStack* >> *Software Engineer at Red Hat Czech* >> *Social*: https://www.linkedin.com/in/iurygregory >> *E-mail: iurygregory at gmail.com * >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue May 4 08:03:43 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 4 May 2021 10:03:43 +0200 Subject: [CLOUDKITTY] Missed CloudKitty meeting today In-Reply-To: <121819A3-C3C0-4978-86F0-B98F3EA3626E@vscaler.com> References: <121819A3-C3C0-4978-86F0-B98F3EA3626E@vscaler.com> Message-ID: Hi Mariusz, Here's the link to the PTG Etherpad: https://etherpad.opendev.org/p/apr2021-ptg-cloudkitty You can also check Rafael's summary, which he sent to the list yesterday. As for review priorities, I would quite like to see my patches to the stable branches merged: https://review.opendev.org/q/owner:pierre%2540stackhpc.com+project:openstack/cloudkitty+status:open Some of them require CI fixes. The more we wait to merge them, the more likely it is that some jobs will break again. Cheers, Pierre On Tue, 4 May 2021 at 09:53, Mariusz Karpiarz wrote: > > Pierre's right. I would have not participated myself due to yesterday being a bank holiday. > > However we still can discuss review priorities. I should be able to spend some time on Cloudkitty this week, so which patches do you want me to look into? > Also, what's the link to the Xena vPTG Etherpad? > > > On 03/05/2021, 15:51, "Pierre Riteau" wrote: > > Hi Rafael, > > No worries: today is a bank holiday in the United Kingdom, so it > probably would have been just you and me. > > Best wishes, > Pierre > > On Mon, 3 May 2021 at 16:42, Rafael Weingärtner > wrote: > > > > Hello guys, > > I would like to apologize for missing the CloudKitty meeting today. I was concentrating on some work, and my alarm for the meeting did not ring. > > > > I still have to summarize the PTL meeting. I will do so today. > > > > Again, sorry for the inconvenience; see you guys at our next meeting. 
> > > > -- > > Rafael Weingärtner > From skaplons at redhat.com Tue May 4 08:07:21 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 04 May 2021 10:07:21 +0200 Subject: [Ussuri][neutron] How to accomplish what allow_same_net_traffic did In-Reply-To: <20210504074031.Horde.rn6vNPcSL96Dag2o23f-3yX@webmail.nde.ag> References: <20210504074031.Horde.rn6vNPcSL96Dag2o23f-3yX@webmail.nde.ag> Message-ID: <4932756.pbuIlyIbIF@p1> Hi, Dnia wtorek, 4 maja 2021 09:40:31 CEST Eugen Block pisze: > Hi *, > > I was wondering how other operators deal with this. Our cloud started > somewhere in Kilo or Liberty version and in older versions the option > allow_same_net_traffic allowed to control whether instances in our > shared network could connect to each other between different projects. > That option worked for us but is now deprecated and the Pike release > > notes [1] state: > > Given that there are other better documented and better tested ways > > to approach this, such as through use of neutron’s native port > > filtering or security groups, this functionality has been removed. > > > > > Users should instead rely on one of these alternatives. > > Does that mean all security groups need to be changed in a way that > this specific shared network is not reachable? That would be a lot of > work if you have many projects. Is there any easier way? > > Regards, > Eugen > > [1] https://docs.openstack.org/releasenotes/nova/pike.html I don't know about this option TBH but from the quick search it looks for me that it's Nova's option. So you are probably using nova-network still. Is that correct? If yes, I think You need to migrate to Neutron in never versions and in Neutron each "default" SG has got rule to allow ingress traffic from all other ports which uses same SG. If that will not help You, I think that You will need to add own rules to Your SGs to achieve that. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From dtantsur at redhat.com Tue May 4 08:15:44 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 4 May 2021 10:15:44 +0200 Subject: [ironic] Announcing deprecation of the iSCSI deploy interface In-Reply-To: References: Message-ID: Hi all, We're about to remove the iSCSI deploy interface. Please make sure that you update your deployments **before** upgrading to Xena. Please let us know if you have any troubles with migrating to a different deploy interface. Dmitry On Wed, Sep 2, 2020 at 11:40 AM Dmitry Tantsur wrote: > Hi all, > > Following up to the previous mailing list [1] and virtual meetup [2] > discussions, I would like to announce the plans to deprecate the 'iscsi' > deploy interface. > > This is the updated plan discussed on the virtual meetup: > 1) In the Victoria cycle (i.e. right now): > - Fill in the detected feature gaps [3]. > - Switch off the iscsi deploy interface by default. > - Change [agent]image_dowload_source to HTTP by default. > - Give the direct deploy a higher priority, so that it's used by default > unless disabled. > - Mark it as deprecated in the code (causing warnings when enabled). > - Release a major version of ironic to highlight the defaults changes. > 2) In the W cycle: > - Keep the iscsi deploy deprecated. > - Listen to operators' feedback. > 3) In the X cycle > - Remove the iscsi deploy completely from ironic and IPA. 
> - Remove support code from ironic-lib with a major version bump. > > Please let us know if you have any questions or concerns. > > Dmitry > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016681.html > [2] https://etherpad.opendev.org/p/Ironic-Victoria-midcycle > [3] https://storyboard.openstack.org/#!/story/2008075 > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue May 4 08:29:46 2021 From: eblock at nde.ag (Eugen Block) Date: Tue, 04 May 2021 08:29:46 +0000 Subject: [Ussuri][neutron] How to accomplish what allow_same_net_traffic did In-Reply-To: <4932756.pbuIlyIbIF@p1> References: <20210504074031.Horde.rn6vNPcSL96Dag2o23f-3yX@webmail.nde.ag> <4932756.pbuIlyIbIF@p1> Message-ID: <20210504082946.Horde.X1SwB7c1TyhqsdxTQisDUfi@webmail.nde.ag> Hi, > I don't know about this option TBH but from the quick search it looks for me > that it's Nova's option. So you are probably using nova-network > still. Is that correct? I should have mentioned that we did switch from nova-network to neutron a couple of releases ago. We just noticed recently that the cross-project traffic was not filtered anymore. > If yes, I think You need to migrate to Neutron in never versions and > in Neutron each "default" SG has got rule to allow ingress traffic from all > other ports which uses same SG. > If that will not help You, I think that You will need to add own > rules to Your SGs to achieve that. That is my impression at the moment, too. If there's no easier way we'll have to adjust our SGs. Thanks! Eugen Zitat von Slawek Kaplonski : > Hi, > > Dnia wtorek, 4 maja 2021 09:40:31 CEST Eugen Block pisze: >> Hi *, >> >> I was wondering how other operators deal with this. Our cloud started >> somewhere in Kilo or Liberty version and in older versions the option >> allow_same_net_traffic allowed to control whether instances in our >> shared network could connect to each other between different projects. >> That option worked for us but is now deprecated and the Pike release >> >> notes [1] state: >> > Given that there are other better documented and better tested ways >> > to approach this, such as through use of neutron’s native port >> > filtering or security groups, this functionality has been removed. >> > >> > > Users should instead rely on one of these alternatives. >> >> Does that mean all security groups need to be changed in a way that >> this specific shared network is not reachable? That would be a lot of >> work if you have many projects. Is there any easier way? >> >> Regards, >> Eugen >> >> [1] https://docs.openstack.org/releasenotes/nova/pike.html > > I don't know about this option TBH but from the quick search it looks for me > that it's Nova's option. So you are probably using nova-network > still. Is that > correct? If yes, I think You need to migrate to Neutron in never versions and > in Neutron each "default" SG has got rule to allow ingress traffic from all > other ports which uses same SG. > If that will not help You, I think that You will need to add own > rules to Your > SGs to achieve that. 
> > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From tim.bell at cern.ch Tue May 4 08:37:42 2021 From: tim.bell at cern.ch (Tim Bell) Date: Tue, 4 May 2021 10:37:42 +0200 Subject: Rocky Linux for Openstack In-Reply-To: References: Message-ID: > On 3 May 2021, at 23:07, Emilien Macchi wrote: > > Rocky Linux claims to be 100% compatible with Red Hat's OS family [1], so I don't see any reason why you couldn't use RPMs from RDO: > https://docs.openstack.org/install-guide/environment-packages-rdo.html#enable-the-openstack-repository > > [1] Source: https://rockylinux.org I wonder if there would be some compatibility problems with using RDO on a RHEL compatible OS. If RDO is built against CentOS Stream [1], could it potentially have some dependencies on python packages which are due to be released in the next RHEL minor update (since Stream is on the latest version) ? Tim [1] https://lists.rdoproject.org/pipermail/users/2021-January/000967.html > On Mon, May 3, 2021 at 12:11 PM Wada Akor > wrote: > Good day , > Please I want to know when will openstack provide information on how to installation of openstack on Rocky Linux be available. > > Thanks & Regards > > > -- > Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From premkumar at aarnanetworks.com Tue May 4 09:36:57 2021 From: premkumar at aarnanetworks.com (Premkumar Subramaniyan) Date: Tue, 4 May 2021 15:06:57 +0530 Subject: [openstack-ansible] Re: Openstack Stack issues In-Reply-To: <5ef9fab1-a3e2-f2ea-81bc-16c915d69bb2@rd.bbc.co.uk> References: <0a850443-ab52-7066-deaa-05a161a5f6cf@redhat.com> <5ef9fab1-a3e2-f2ea-81bc-16c915d69bb2@rd.bbc.co.uk> Message-ID: Hi Jonathan, *Thanks for the detailed explanation. Could please share the link to join in the IRC channel #openstack-ansible* After reinstallation, the heat stack problem solved. No, I have issues in the nova-compute. Stack creation failed due this any idea for this. fatal: [aio1 -> 172.29.238.211]: FAILED! 
=> {"attempts": 8, "changed": false, "cmd": ["/openstack/venvs/nova-21.2.4/bin/nova-status", "upgrade", "check"], "delta": "0:00:02.559395", "end": "2021-05-04 07:32:41.648653", "failed_when_result": true, "msg": "non-zero return code", "rc": 255, "start": "2021-05-04 07:32:39.089258", "stderr": "", "stderr_lines": [], "stdout": "Error:\nTraceback (most recent call last):\n File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/nova/cmd/status.py\", line 478, in main\n ret = fn(*fn_args, **fn_kwargs)\n File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/oslo_upgradecheck/upgradecheck.py\", line 102, in check\n result = func(self)\n File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/nova/cmd/status.py\", line 165, in _check_placement\n versions = self._placement_get(\"/\")\n File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/nova/cmd/status.py\", line 155, in _placement_get\n return client.get(path, raise_exc=True).json()\n File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 386, in get\n return self.request(url, 'GET', **kwargs)\n File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 248, in request\n return self.session.request(url, method, **kwargs)\n File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/keystoneauth1/session.py\", line 968, in request\n raise exceptions.from_response(resp, method, url)\nkeystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)", "stdout_lines": ["Error:", "Traceback (most recent call last):", " File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/nova/cmd/status.py\", line 478, in main", " ret = fn(*fn_args, **fn_kwargs)", " File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/oslo_upgradecheck/upgradecheck.py\", line 102, in check", " result = func(self)", " File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/nova/cmd/status.py\", line 165, in _check_placement", " versions = self._placement_get(\"/\")", " File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/nova/cmd/status.py\", line 155, in _placement_get", " return client.get(path, raise_exc=True).json()", " File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 386, in get", " return self.request(url, 'GET', **kwargs)", " File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 248, in request", " return self.session.request(url, method, **kwargs)", " File \"/openstack/venvs/nova-21.2.4/lib/python3.6/site-packages/keystoneauth1/session.py\", line 968, in request", " raise exceptions.from_response(resp, method, url)", "keystoneauth1.exceptions.http.ServiceUnavailable: Service Unavailable (HTTP 503)"]} NO MORE HOSTS LEFT Warm Regards, Premkumar Subramaniyan Technical staff M: +91 9940743669 *CRN Top 10 Coolest Edge Computing Startups of 2020 * On Tue, May 4, 2021 at 12:56 PM Jonathan Rosser < jonathan.rosser at rd.bbc.co.uk> wrote: > You would attach to the heat container as you've already done, then use: > > systemctl status heat-api > journalctl -u heat-api > > to see the service status and log for heat-api. Remember that on an > openstack-ansible deployment the lxc containers are machine containers, not > application containers so the docker/podman patterns do not apply. > > If you join the IRC channel #openstack-ansible we can help out further. > > Regards, > Jonathan. 
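For the failure above, the 503 is keystoneauth being unable to reach the placement API rather than a problem in nova itself, so the first thing to confirm is that the placement backend is actually up and healthy behind haproxy. A rough sketch of the usual checks on an AIO, assuming the container name from the listing earlier in this thread and the standard placement port (8778); replace <internal_lb_vip_address> with the value from openstack_user_config.yml:

    # is the placement container running and its API listening?
    lxc-ls -1 | grep placement
    lxc-attach -n aio1_placement_container-32a0e966 -- ss -tlnp | grep 8778

    # does the API answer through the internal VIP / haproxy?
    curl -i http://<internal_lb_vip_address>:8778/

    # haproxy's view of its backends (hatop is normally present on the haproxy host)
    hatop -s /var/run/haproxy.stat

If the placement backend shows as DOWN, restarting the service inside its container and re-running the playbook is usually enough for the nova-status upgrade check to pass.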
> On 03/05/2021 17:24, Premkumar Subramaniyan wrote: > > > podman or docker ps -a both are not installed in the lxc container > root at aio1:~# lxc-attach aio1_heat_api_container-da2feba5 > root at aio1-heat-api-container-da2feba5:~# podman ps -a > bash: podman: command not found > root at aio1-heat-api-container-da2feba5:~# docker ps > bash: docker: command not found > root at aio1-heat-api-container-da2feba5:~# > > > Warm Regards, > Premkumar Subramaniyan > Technical staff > M: +91 9940743669 > > *CRN Top 10 Coolest Edge Computing Startups of 2020 > * > > > On Mon, May 3, 2021 at 8:43 PM Ruslanas Gžibovskis > wrote: > >> ok, so relaunch it from lxd then :) >> And try checking lxd container output also, still interesting what is in >> your lxd containers. maybe you have podman/docker there... >> connect to lxd and run podman or docker ps -a >> >> >> >> On Mon, 3 May 2021 at 17:13, Premkumar Subramaniyan < >> premkumar at aarnanetworks.com> wrote: >> >>> Hi Ruslanas >>> I am running in barmetal. >>> root at aio1:~# lxc-ls -1 >>> aio1_cinder_api_container-845d8e39 >>> aio1_galera_container-efc46f93 >>> aio1_glance_container-611c15ef >>> aio1_heat_api_container-da2feba5 >>> aio1_horizon_container-1d6b0098 >>> aio1_keystone_container-d2986dca >>> aio1_memcached_container-ff56f467 >>> aio1_neutron_server_container-261222e4 >>> aio1_nova_api_container-670ab083 >>> aio1_placement_container-32a0e966 >>> aio1_rabbit_mq_container-fdacf98f >>> aio1_repo_container-8dc59ab6 >>> aio1_utility_container-924a5576 >>> >>> Relaunch means I need to run this one openstack-ansible >>> setup-openstack.yml. >>> If yes means, If I run this one my whole openstack itself going to >>> crash. >>> I need some document where I can check all the service status and >>> restart the service. >>> The only problem is the heat stack is down . >>> >>> >>> Warm Regards, >>> Premkumar Subramaniyan >>> Technical staff >>> M: +91 9940743669 >>> >>> *CRN Top 10 Coolest Edge Computing Startups of 2020 >>> * >>> >>> >>> On Mon, May 3, 2021 at 7:24 PM Ruslanas Gžibovskis >>> wrote: >>> >>>> Yeah Alex, with TripleO it is a limitation, but for ansible deployment, >>>> there is no limit :) >>>> >>>> Premkumar, are you running containerized deployment or baremetal? >>>> >>>> If you are running containerized, then you need to check docker ps -a >>>> or podman ps -a and see what containers failed to start using: grep -v >>>> Exited\ \(0 >>>> else you can try relaunch ansible deployment again, it should bring up >>>> missing services. >>>> >>>> >>>> On Mon, 3 May 2021 at 16:40, Premkumar Subramaniyan < >>>> premkumar at aarnanetworks.com> wrote: >>>> >>>>> Hi Alex, >>>>> My Current version is Ussuri. Having the issues in both centos7 >>>>> and ubuntu 18.04. After restarting the machine. >>>>> >>>>> This is document i followed to bring the openstack AIO >>>>> >>>>> https://docs.openstack.org/openstack-ansible/ussuri/user/aio/quickstart.html >>>>> >>>>> >>>>> Warm Regards, >>>>> Premkumar Subramaniyan >>>>> Technical staff >>>>> M: +91 9940743669 >>>>> >>>>> *CRN Top 10 Coolest Edge Computing Startups of 2020 >>>>> * >>>>> >>>>> >>>>> On Mon, May 3, 2021 at 6:55 PM Alex Schultz >>>>> wrote: >>>>> >>>>>> >>>>>> >>>>>> On Mon, May 3, 2021 at 1:53 AM Premkumar Subramaniyan < >>>>>> premkumar at aarnanetworks.com> wrote: >>>>>> >>>>>>> Hi Zane, >>>>>>> >>>>>>> How can I bring up the heat service. >>>>>>> >>>>>>> root at aio1:/etc/systemd/system# service heat-api status >>>>>>> Unit heat-api.service could not be found. 
>>>>>>> root at aio1:/etc/systemd/system# service heat-api restart >>>>>>> Failed to restart heat-api.service: Unit heat-api.service not found. >>>>>>> root at aio1:/etc/systemd/system# service heat-api-cfn status >>>>>>> Unit heat-api-cfn.service could not be found. >>>>>>> root at aio1:/etc/systemd/system# service heat-api-cloudwatch status >>>>>>> Unit heat-api-cloudwatch.service could not be found. >>>>>>> root at aio1:/etc/systemd/system# service heat-engine status >>>>>>> Unit heat-engine.service could not be found. >>>>>>> >>>>>>> >>>>>> How did you install openstack? I believe Train was the last version >>>>>> with centos7 support on RDO. >>>>>> >>>>>> >>>>>>> Warm Regards, >>>>>>> Premkumar Subramaniyan >>>>>>> Technical staff >>>>>>> M: +91 9940743669 >>>>>>> >>>>>>> *CRN Top 10 Coolest Edge Computing Startups of 2020 >>>>>>> * >>>>>>> >>>>>>> >>>>>>> On Fri, Apr 30, 2021 at 10:54 PM Zane Bitter >>>>>>> wrote: >>>>>>> >>>>>>>> On 30/04/21 1:06 am, Premkumar Subramaniyan wrote: >>>>>>>> > Hi, >>>>>>>> > >>>>>>>> > I am using the Openstack *USURI *version in *Centos7*. Due to >>>>>>>> some >>>>>>>> > issues my disk size is full,I freed up the space. Afte that some >>>>>>>> service >>>>>>>> > went down. After that I have issues in creating the stack and >>>>>>>> list >>>>>>>> > stack. >>>>>>>> >>>>>>>> It looks like heat-api at least is still down. >>>>>>>> >>>>>>>> >>>> >>>> -- >>>> Ruslanas Gžibovskis >>>> +370 6030 7030 >>>> >>> >> >> -- >> Ruslanas Gžibovskis >> +370 6030 7030 >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue May 4 09:53:01 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 4 May 2021 11:53:01 +0200 Subject: Rocky Linux for Openstack In-Reply-To: References: Message-ID: On Tue, May 4, 2021 at 10:39 AM Tim Bell wrote: > > > > On 3 May 2021, at 23:07, Emilien Macchi wrote: > > Rocky Linux claims to be 100% compatible with Red Hat's OS family [1], so I don't see any reason why you couldn't use RPMs from RDO: > https://docs.openstack.org/install-guide/environment-packages-rdo.html#enable-the-openstack-repository > > [1] Source: https://rockylinux.org > > > I wonder if there would be some compatibility problems with using RDO on a RHEL compatible OS. If RDO is built against CentOS Stream [1], could it potentially have some dependencies on python packages which are due to be released in the next RHEL minor update (since Stream is on the latest version) ? > > Tim I feel the same. I guess Rocky Linux would need to come up with their own OpenStack release process. Or, perhaps, collaborate with RDO so that it supports both distros. ;-) -yoctozepto From mark at stackhpc.com Tue May 4 11:37:37 2021 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 4 May 2021 12:37:37 +0100 Subject: [kolla][ptg] Kolla PTG summary Message-ID: Hi, Thank you to everyone who attended the Kolla Xena PTG. Thank you also to those who helped out with chairing and driving discussions, since I had a cold and my brain was far from 100%. Here is a short summary of the discussions. See the Etherpad [1] for full notes. # Wallaby Retrospective On the positive side, we merged a number of useful features this cycle, and eventually mitigated the Dockerhub pull limit issues in CI by switching to quay.io. On the negative side, we feel that we have lost some review bandwidth recently. 
We really need more people in the community helping out with reviews, and ideally moving to become members of the core team. The barrier for entry is probably lower than you think, and the workload does not have to be too heavy - every little helps. PLEASE get in touch if you are interested in helping out. # Review Wallaby PTG actions Many of these are incomplete and still relevant. As usual, it's easier to think of things to do than find time to do them. I've added them at the end of the actions section of the etherpad, and we can revisit them throughout the Xena cycle. # General topics ## Deprecations In the continuing drive to reduce the maintenance overhead of the project, we discussed which components or features might need to be deprecated next. With Tripleo out of the picture, RHEL support is not tested, so it will be deprecated in Wallaby for removal in Xena. We discussed standardising on source image type and dropping support for binary images, but agreed (once again) that it would be too disruptive for users, and the repository model may be easier for users to mirror. ## Release process We agreed to document some more of our release process. We also agreed to try a new approach in Xena, where we begin the cycle deploying the previous release of OpenStack (Wallaby) on our master branches, to avoid firefighting caused by breakage in OpenStack projects. This should provide us with more stability while we add our own features. The downside is that we may build up some technical debt to converge with the new release. Some details are still to be decided. ## Elasticsearch -> OpenSearch The recent Elasticsearch licence change may be a blocker in some organisations, and Amazon's OpenSearch appears to provide an alternative. The final OSS release of Elastic (7.10) EOL is 2022-05-10, so we have time to consider our options. Christian offered to investigate a possible migration. ## Reflection The team reflected a little on how things are going. We agreed that the 4 most active core team members form quite a tight unit that effectively makes decisions and keeps the project moving forward, with the help of other core team members and other community members. We also agreed that we are quite vulnerable to the loss of any of those 4, especially when we consider how many patches are authored and approved by any 3 of those 4, and their areas of expertise. ## Future leadership I have decided that this will be my last cycle as Kolla PTL. I still enjoy the role, but I think it is healthy to rotate the leadership from time to time. We have at least one person lined up for nomination, so watch this space! # Kolla (images) topics ## CentOS Stream 8 We have added support for CentOS Stream 8 (and dropped CentOS Linux 8) in the Wallaby release. Since CentOS Linux 8 will be EOL at the end of this year (THANK YOU VERY MUCH FOR THAT BY THE WAY), we need to consider which stable releases may need to have CentOS Stream 8 support backported. RDO will not support Train on stream. Ussuri will move to Extended Maintenance before the end of 2021, however we know that many users are still a few releases behind upstream. In the end, we agreed to start with backporting support to Victoria, then consider Ussuri once that is done. We agreed to keep CentOS Linux 8 as the default base image due to backwards compatibility promises. Stream images will have a victoria-centos8s tag to differentiate them, similarly to how we handled CentOS 8. ## Official plugins We agreed to remove non-official plugins from our images. 
Often these projects branch late and break our release processes. We agreed to provide example config snippets and documentation to help adding these plugins back into images. # Kolla Ansible ## Glance metadef APIs (OSSN-0088) Glance team discussed this issue at the PTG, we have an action to follow up on their decision. ## Opinionated, hardened configuration Should we provide an opinionated, hardened configuration? For example TLS by default, conform to OpenStack security checklist, etc. Some nods of approval, but no one offering to implement it. ## Running kolla-ansible bootstrap-servers without stress Currently, running kolla-ansible bootstrap-servers on an existing system can restart docker on all nodes concurrently. This can take out clustered services such as MariaDB and RabbitMQ. There are various options to improve this, including a serial restart, or an intelligent parallel restart (including waiting for services to be up). We decided on an action to investigate enabling Docker live restore by default, and what its shortcomings may be. ## Removal/cleanup of services We again discussed how to do a more fine grained cleanup of services. Ideally this would be per-service, and could include things like dropping DBs and Keystone endpoints. yoctozepto may have time to look into this. ## Cinder active/active bug We are still affected by [2]. We have a fairly good plan to resolve it, but some details around migration of existing clusters need ironing out. yoctozepto and mnasiadka to continue looking at it. ## More fine grained skipping of tasks, e.g. allow to skip service registration We agreed to abandon the tag based approach in favour of pushing an existing effort [3] to make a split genconfig, deploy-containers command pair work. This achieves one of the main gains of skipping the bootstrap and registration steps, and is useful for other reasons. ## Letsencrypt We agreed to split the existing patches [4] into the contentious and non-contentious parts. Namely, separate HAProxy automatic certificate reload from certbot support. The HAProxy reload is currently implemented via supervisord and cron in the container, which does not fit well with our single process container model. We will look into an HAProxy upgrade for automatic certificate rotation detection as a possible solution. ## ProxySQL kevko outlined the proposed ProxySQL patch chain [5], as well as some more details on the nature of the problem being solved. In Wallaby we added initial support for multiple MariaDB clusters. ProxySQL builds on this, providing sharding at the database schema level between different OpenStack projects (nova, neutron, etc.). # Kayobe ## Running host configure during upgrades Pierre proposed a change [6] to our documented upgrade procedure to include a 'host configure' step. Sometimes this is necessary to upgrade or reconfigure Docker, or other parts of the host setup. We agreed that this should be done, but that we should also improve the 'kayobe host upgrade' commands to avoid a full host configure run. ## Support provisioning infrastructure VMs We walked through the proposed feature [7], including the various milestones proposed. An MVP if simply provisioning infra VMs on the seed hypervisor was agreed. ## Ubuntu We discussed what has been achieved [8] in Wallaby so far, what still needs to be done before the release, and what we have left to look at in Xena and beyond. We are on track to have most features available in Wallaby. We discussed whether to backport this work to Victoria. 
It will depend on how cleanly the code applies. ## Multiple environments Pierre implemented the majority of this long-awaited feature [9] in Wallaby. We still require CI testing, and the ability to share common Kolla configuration between environments. These will be looked at in Xena. ## Multiple host images Pierre started work [10] on this feature. The aim is to describe multiple root disk images with different properties, then map these images to different overcloud hosts. # Priorities We usually vote for community priorities, however this cycle it was felt that they only have a minimal effect on activity. Let's see how it goes without them. Thanks, Mark [1] https://etherpad.opendev.org/p/kolla-xena-ptg [2] https://bugs.launchpad.net/kolla-ansible/+bug/1904062 [3] https://review.opendev.org/c/openstack/kolla-ansible/+/773246 [4] https://review.opendev.org/q/topic:%22bp%252Fletsencrypt-https%22+(status:open%20OR%20status:merged) [5] https://review.opendev.org/q/hashtag:%22proxysql%22+(status:open%20OR%20status:merged [6] https://review.opendev.org/c/openstack/kayobe/+/783053 [7] https://storyboard.openstack.org/#!/story/2008741 [8] https://storyboard.openstack.org/#!/story/2004960 [9] https://storyboard.openstack.org/#!/story/2002009 [10] https://storyboard.openstack.org/#!/story/2002098 From hberaud at redhat.com Tue May 4 12:11:02 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 4 May 2021 14:11:02 +0200 Subject: [release][ironic] ironic-python-agent-builder release model change In-Reply-To: References: Message-ID: We can consider this context as an exception [1] to switch back this deliverable to the cycle-with-intermediary model [2]. However, before we start anything, I prefer to discuss that first more deeply to see the possible side effects from a release management point of view, hence, I added this topic to our next meeting (Friday 2pm UTC - May 7). Feel free to join us to discuss it and plan how we will proceed. [1] https://releases.openstack.org/reference/release_models.html#openstack-related-libraries [2] https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary [3] https://etherpad.opendev.org/p/xena-relmgt-tracking Le mar. 4 mai 2021 à 09:47, Riccardo Pittau a écrit : > Hey Hervé, > > Thank you for your reply. > > That's correct, ironic-python-agent and ironic-python-agent-builder were > originally part of the same repository. > We decided to split them at some point, and we'd like to keep them > separated, but we realized that it would make sense to keep them synced by > branch. > > At my knowledge, ipa-builder is also used by TripleO to generate the ipa > ramdisks. > Current projects can keep using the master branch as usual, it's unlikely > this change will break anything. > In any case, there should be enough time to adapt to the new branched > model during the xena cycle, and I (and the ironic community) will be > available to provide help if needed. > There were no big changes on ipa-builder since the wallaby release, and of > course we will be able to backport any bugfix, so I don't see issues in > cutting the stable branch now. > > Thanks, > > Riccardo > > > > On Mon, May 3, 2021 at 10:19 AM Herve Beraud wrote: > >> Hello, >> >> At first glance that makes sense. >> >> If I correctly understand the story, the ironic-python-agent [1] and the >> ironic-python-agent-builder [2] were within the same repo at the origin, >> correct? >> >> Does someone else use the ironic-python-agent-builder? 
>> >> [1] >> https://opendev.org/openstack/releases/src/branch/master/deliverables/xena/ironic-python-agent.yaml >> [2] >> https://opendev.org/openstack/releases/src/branch/master/deliverables/_independent/ironic-python-agent-builder.yaml >> >> Le ven. 30 avr. 2021 à 16:34, Iury Gregory a >> écrit : >> >>> Hi Riccardo, >>> >>> Thanks for raising this! >>> I do like the idea of having stable branches for the ipa-builder +1 >>> >>> Em seg., 26 de abr. de 2021 às 12:03, Riccardo Pittau < >>> elfosardo at gmail.com> escreveu: >>> >>>> Hello fellow openstackers! >>>> >>>> During the recent xena ptg, the ironic community had a discussion about >>>> the need to move the ironic-python-agent-builder project from an >>>> independent model to the standard release model. >>>> When we initially split the builder from ironic-python-agent, we >>>> decided against it, but considering some problems we encountered during the >>>> road, the ironic community seems to be in favor of the change. >>>> The reasons for this are mainly to strictly align the image building >>>> project to ironic-python-agent releases, and ease dealing with the >>>> occasional upgrade of tinycore linux, the base image used to build the >>>> "tinyipa" ironic-python-agent ramdisk. >>>> >>>> We'd like to involve the release team to ask for advice, not only on >>>> the process, but also considering that we need to ask to cut the first >>>> branch for the wallaby stable release, and we know we're a bit late for >>>> that! :) >>>> >>>> Thank you in advance for your help! >>>> >>>> Riccardo >>>> >>> >>> >>> -- >>> >>> >>> *Att[]'sIury Gregory Melo Ferreira * >>> *MSc in Computer Science at UFCG* >>> *Part of the ironic-core and puppet-manager-core team in OpenStack* >>> *Software Engineer at Red Hat Czech* >>> *Social*: https://www.linkedin.com/in/iurygregory >>> *E-mail: iurygregory at gmail.com * >>> >> >> >> -- >> Hervé Beraud >> Senior Software Engineer at Red Hat >> irc: hberaud >> https://github.com/4383/ >> https://twitter.com/4383hberaud >> -----BEGIN PGP SIGNATURE----- >> >> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> v6rDpkeNksZ9fFSyoY2o >> =ECSj >> -----END PGP SIGNATURE----- >> >> -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ 
hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mariusz.karpiarz at vscaler.com Tue May 4 07:53:35 2021 From: mariusz.karpiarz at vscaler.com (Mariusz Karpiarz) Date: Tue, 4 May 2021 07:53:35 +0000 Subject: [CLOUDKITTY] Missed CloudKitty meeting today In-Reply-To: References: Message-ID: <121819A3-C3C0-4978-86F0-B98F3EA3626E@vscaler.com> Pierre's right. I would have not participated myself due to yesterday being a bank holiday. However we still can discuss review priorities. I should be able to spend some time on Cloudkitty this week, so which patches do you want me to look into? Also, what's the link to the Xena vPTG Etherpad? On 03/05/2021, 15:51, "Pierre Riteau" wrote: Hi Rafael, No worries: today is a bank holiday in the United Kingdom, so it probably would have been just you and me. Best wishes, Pierre On Mon, 3 May 2021 at 16:42, Rafael Weingärtner wrote: > > Hello guys, > I would like to apologize for missing the CloudKitty meeting today. I was concentrating on some work, and my alarm for the meeting did not ring. > > I still have to summarize the PTL meeting. I will do so today. > > Again, sorry for the inconvenience; see you guys at our next meeting. > > -- > Rafael Weingärtner From wakorins at gmail.com Tue May 4 09:59:50 2021 From: wakorins at gmail.com (Wada Akor) Date: Tue, 4 May 2021 10:59:50 +0100 Subject: Rocky Linux for Openstack In-Reply-To: References: Message-ID: I agree with you. That's what I was thinking too. On Tue, May 4, 2021, 10:53 Radosław Piliszek wrote: > On Tue, May 4, 2021 at 10:39 AM Tim Bell wrote: > > > > > > > > On 3 May 2021, at 23:07, Emilien Macchi wrote: > > > > Rocky Linux claims to be 100% compatible with Red Hat's OS family [1], > so I don't see any reason why you couldn't use RPMs from RDO: > > > https://docs.openstack.org/install-guide/environment-packages-rdo.html#enable-the-openstack-repository > > > > [1] Source: https://rockylinux.org > > > > > > I wonder if there would be some compatibility problems with using RDO on > a RHEL compatible OS. If RDO is built against CentOS Stream [1], could it > potentially have some dependencies on python packages which are due to be > released in the next RHEL minor update (since Stream is on the latest > version) ? > > > > Tim > > I feel the same. > I guess Rocky Linux would need to come up with their own OpenStack > release process. > Or, perhaps, collaborate with RDO so that it supports both distros. ;-) > > -yoctozepto > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mariusz.karpiarz at vscaler.com Tue May 4 10:33:52 2021 From: mariusz.karpiarz at vscaler.com (Mariusz Karpiarz) Date: Tue, 4 May 2021 10:33:52 +0000 Subject: [CLOUDKITTY] Missed CloudKitty meeting today In-Reply-To: References: <121819A3-C3C0-4978-86F0-B98F3EA3626E@vscaler.com> Message-ID: <03011137-166B-42EA-B0FB-54CEC7EC748E@vscaler.com> Thanks, Pierre! On 04/05/2021, 09:04, "Pierre Riteau" wrote: Hi Mariusz, Here's the link to the PTG Etherpad: https://etherpad.opendev.org/p/apr2021-ptg-cloudkitty You can also check Rafael's summary, which he sent to the list yesterday. 
As for review priorities, I would quite like to see my patches to the stable branches merged: https://review.opendev.org/q/owner:pierre%2540stackhpc.com+project:openstack/cloudkitty+status:open Some of them require CI fixes. The more we wait to merge them, the more likely it is that some jobs will break again. Cheers, Pierre On Tue, 4 May 2021 at 09:53, Mariusz Karpiarz wrote: > > Pierre's right. I would have not participated myself due to yesterday being a bank holiday. > > However we still can discuss review priorities. I should be able to spend some time on Cloudkitty this week, so which patches do you want me to look into? > Also, what's the link to the Xena vPTG Etherpad? > > > On 03/05/2021, 15:51, "Pierre Riteau" wrote: > > Hi Rafael, > > No worries: today is a bank holiday in the United Kingdom, so it > probably would have been just you and me. > > Best wishes, > Pierre > > On Mon, 3 May 2021 at 16:42, Rafael Weingärtner > wrote: > > > > Hello guys, > > I would like to apologize for missing the CloudKitty meeting today. I was concentrating on some work, and my alarm for the meeting did not ring. > > > > I still have to summarize the PTL meeting. I will do so today. > > > > Again, sorry for the inconvenience; see you guys at our next meeting. > > > > -- > > Rafael Weingärtner > From ricolin at ricolky.com Tue May 4 13:08:36 2021 From: ricolin at ricolky.com (Rico Lin) Date: Tue, 4 May 2021 21:08:36 +0800 Subject: [tc][all] What should we have as community goals for Y series?( Starting community-wide goals ideas) Message-ID: Dear all, We're now in R-22 week for Xena cycle which sounds like a perfect time to start calling for community-wide goals ideas for Y-series. According to the goal process schedule [1], we need to find potential goals, and champions before Xena milestone-1 and provide proper discussion in the community right after that to give a clear view and detail on each goal. And if we would like to keep up with the schedule, we should start right away to identify potential goals. So please help to provide ideas for Y series community-wide goals in [2]. Community-wide goals are important in terms of solving and improving a technical area across OpenStack as a whole. It has a lot more benefits to be considered from users as well from a developer's perspective. See [3] for more details about community-wide goals and processes. Also, you can refer to the backlogs of community-wide goals from this[4] and victoria cycle goals[5] (also ussuri[6]). We took cool-down cycle goal step for Xena cycle [7], so no selected goals for Xena. [1] https://governance.openstack.org/tc/goals/#goal-selection-schedule [2] https://etherpad.opendev.org/p/y-series-goals [3] https://governance.openstack.org/tc/goals/index.html [4] https://etherpad.openstack.org/p/community-goals [5] https://etherpad.openstack.org/p/YVR-v-series-goals [6] https://etherpad.openstack.org/p/PVG-u-series-goals [7] https://review.opendev.org/c/openstack/governance/+/770616 *Rico Lin* OIF Board director, OpenStack TC, Multi-arch SIG chair, Heat PTL, Senior Software Engineer at EasyStack -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From aschultz at redhat.com Tue May 4 13:35:49 2021 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 4 May 2021 07:35:49 -0600 Subject: Rocky Linux for Openstack In-Reply-To: References: Message-ID: On Tue, May 4, 2021 at 2:42 AM Tim Bell wrote: > > > On 3 May 2021, at 23:07, Emilien Macchi wrote: > > Rocky Linux claims to be 100% compatible with Red Hat's OS family [1], so > I don't see any reason why you couldn't use RPMs from RDO: > > https://docs.openstack.org/install-guide/environment-packages-rdo.html#enable-the-openstack-repository > > [1] Source: https://rockylinux.org > > > I wonder if there would be some compatibility problems with using RDO on a > RHEL compatible OS. If RDO is built against CentOS Stream [1], could it > potentially have some dependencies on python packages which are due to be > released in the next RHEL minor update (since Stream is on the latest > version) ? > > Generally RDO isn't specifying a version when building packages so it's likely to be compatible. CentOS Stream will just have things that'll show up in the next version of Rocky Linux. For stable releases it's likely to just be compatible, whereas master might run into issues if new base os dependencies get added. > Tim > [1] https://lists.rdoproject.org/pipermail/users/2021-January/000967.html > > On Mon, May 3, 2021 at 12:11 PM Wada Akor wrote: > >> Good day , >> Please I want to know when will openstack provide information on how to >> installation of openstack on Rocky Linux be available. >> >> Thanks & Regards >> > > > -- > Emilien Macchi > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue May 4 13:49:55 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 4 May 2021 06:49:55 -0700 Subject: [release][ironic] ironic-python-agent-builder release model change In-Reply-To: References: Message-ID: On Tue, May 4, 2021 at 5:16 AM Herve Beraud wrote: > > We can consider this context as an exception [1] to switch back this deliverable to the cycle-with-intermediary model [2]. > Ultimately it is not a library, it is a set of tools where having a tagged release makes more sense so we can fix breakpoints when integrations in other aspects break on us, such as kernel versioning from distributions where chroot suddenly no longer works as expected. > However, before we start anything, I prefer to discuss that first more deeply to see the possible side effects from a release management point of view, hence, I added this topic to our next meeting (Friday 2pm UTC - May 7). Feel free to join us to discuss it and plan how we will proceed. > I believe there is a contextual disconnect. The ironic contributor context is we have suffered CI pain over the past year where code handling distributions has broken several times. Largely these breaks have had to be navigated around with specific version pinning, and settings changes, because we were unable to fix something a release or two back inside the tooling, in this case ipa-b. In other words, if we had that capability and those branches/tags, we would have been less in a world of pain because we wouldn't have had to make changes on every job/branch across every repository where we were hitting issues with ipa-b. Reduction of pain is why this change is being proposed. If the release team would please convey their concerns of possible side effects in writing on this thread, I would greatly appreciate it because I'll be on a flight on Friday during the release team meeting. 
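(To make the pain point above concrete: the workaround pattern being described is roughly a per-job pin like the sketch below, repeated in every consuming job and on every branch. The job and variable names are invented for illustration only and are not taken from an actual Ironic or ipa-builder change.)

- job:
    name: example-ipa-ramdisk-build
    vars:
      # Hypothetical knob: pin the builder to a known-good tag because
      # there is no stable branch to carry a backported fix; with stable
      # branches this pin, and its copies across branches and repos,
      # would not be needed.
      ipa_builder_version: "2.5.0"
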
Ultimately I see Riccardo trying to do the needful of the lazy consensus that has been obtained within the Ironic community. I don't see this is anything that requires an explicit blessing by the release team, but that perception is based upon my context. -Julia > [1] https://releases.openstack.org/reference/release_models.html#openstack-related-libraries > [2] https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary > [3] https://etherpad.opendev.org/p/xena-relmgt-tracking > > Le mar. 4 mai 2021 à 09:47, Riccardo Pittau a écrit : >> >> Hey Hervé, >> >> Thank you for your reply. >> >> That's correct, ironic-python-agent and ironic-python-agent-builder were originally part of the same repository. >> We decided to split them at some point, and we'd like to keep them separated, but we realized that it would make sense to keep them synced by branch. >> >> At my knowledge, ipa-builder is also used by TripleO to generate the ipa ramdisks. >> Current projects can keep using the master branch as usual, it's unlikely this change will break anything. >> In any case, there should be enough time to adapt to the new branched model during the xena cycle, and I (and the ironic community) will be available to provide help if needed. >> There were no big changes on ipa-builder since the wallaby release, and of course we will be able to backport any bugfix, so I don't see issues in cutting the stable branch now. >> >> Thanks, >> >> Riccardo >> >> >> >> On Mon, May 3, 2021 at 10:19 AM Herve Beraud wrote: >>> >>> Hello, >>> >>> At first glance that makes sense. >>> >>> If I correctly understand the story, the ironic-python-agent [1] and the ironic-python-agent-builder [2] were within the same repo at the origin, correct? >>> >>> Does someone else use the ironic-python-agent-builder? >>> >>> [1] https://opendev.org/openstack/releases/src/branch/master/deliverables/xena/ironic-python-agent.yaml >>> [2] https://opendev.org/openstack/releases/src/branch/master/deliverables/_independent/ironic-python-agent-builder.yaml >>> >>> Le ven. 30 avr. 2021 à 16:34, Iury Gregory a écrit : >>>> >>>> Hi Riccardo, >>>> >>>> Thanks for raising this! >>>> I do like the idea of having stable branches for the ipa-builder +1 >>>> >>>> Em seg., 26 de abr. de 2021 às 12:03, Riccardo Pittau escreveu: >>>>> >>>>> Hello fellow openstackers! >>>>> >>>>> During the recent xena ptg, the ironic community had a discussion about the need to move the ironic-python-agent-builder project from an independent model to the standard release model. >>>>> When we initially split the builder from ironic-python-agent, we decided against it, but considering some problems we encountered during the road, the ironic community seems to be in favor of the change. >>>>> The reasons for this are mainly to strictly align the image building project to ironic-python-agent releases, and ease dealing with the occasional upgrade of tinycore linux, the base image used to build the "tinyipa" ironic-python-agent ramdisk. >>>>> >>>>> We'd like to involve the release team to ask for advice, not only on the process, but also considering that we need to ask to cut the first branch for the wallaby stable release, and we know we're a bit late for that! :) >>>>> >>>>> Thank you in advance for your help! 
>>>>> >>>>> Riccardo >>>> >>>> >>>> >>>> -- >>>> Att[]'s >>>> Iury Gregory Melo Ferreira >>>> MSc in Computer Science at UFCG >>>> Part of the ironic-core and puppet-manager-core team in OpenStack >>>> Software Engineer at Red Hat Czech >>>> Social: https://www.linkedin.com/in/iurygregory >>>> E-mail: iurygregory at gmail.com >>> >>> >>> >>> -- >>> Hervé Beraud >>> Senior Software Engineer at Red Hat >>> irc: hberaud >>> https://github.com/4383/ >>> https://twitter.com/4383hberaud >>> -----BEGIN PGP SIGNATURE----- >>> >>> wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >>> Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >>> RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >>> F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >>> 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >>> glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >>> m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >>> hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >>> qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >>> F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >>> B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >>> v6rDpkeNksZ9fFSyoY2o >>> =ECSj >>> -----END PGP SIGNATURE----- >>> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From gmann at ghanshyammann.com Tue May 4 13:51:33 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 04 May 2021 08:51:33 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 6th at 1500 UTC Message-ID: <17937a5f288.1169270675465.2499892921963661204@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for May 6th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, May 5th, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From rafaelweingartner at gmail.com Tue May 4 13:53:43 2021 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Tue, 4 May 2021 10:53:43 -0300 Subject: [CLOUDKITTY] Missed CloudKitty meeting today In-Reply-To: <03011137-166B-42EA-B0FB-54CEC7EC748E@vscaler.com> References: <121819A3-C3C0-4978-86F0-B98F3EA3626E@vscaler.com> <03011137-166B-42EA-B0FB-54CEC7EC748E@vscaler.com> Message-ID: Thanks, I'll check them out. I have also already updated my patches. Could you guys review them? On Tue, May 4, 2021 at 7:33 AM Mariusz Karpiarz < mariusz.karpiarz at vscaler.com> wrote: > Thanks, Pierre! 
> > On 04/05/2021, 09:04, "Pierre Riteau" wrote: > > Hi Mariusz, > > Here's the link to the PTG Etherpad: > https://etherpad.opendev.org/p/apr2021-ptg-cloudkitty > You can also check Rafael's summary, which he sent to the list > yesterday. > > As for review priorities, I would quite like to see my patches to the > stable branches merged: > > https://review.opendev.org/q/owner:pierre%2540stackhpc.com+project:openstack/cloudkitty+status:open > > Some of them require CI fixes. The more we wait to merge them, the > more likely it is that some jobs will break again. > > Cheers, > Pierre > > On Tue, 4 May 2021 at 09:53, Mariusz Karpiarz > wrote: > > > > Pierre's right. I would have not participated myself due to > yesterday being a bank holiday. > > > > However we still can discuss review priorities. I should be able to > spend some time on Cloudkitty this week, so which patches do you want me to > look into? > > Also, what's the link to the Xena vPTG Etherpad? > > > > > > On 03/05/2021, 15:51, "Pierre Riteau" wrote: > > > > Hi Rafael, > > > > No worries: today is a bank holiday in the United Kingdom, so it > > probably would have been just you and me. > > > > Best wishes, > > Pierre > > > > On Mon, 3 May 2021 at 16:42, Rafael Weingärtner > > wrote: > > > > > > Hello guys, > > > I would like to apologize for missing the CloudKitty meeting > today. I was concentrating on some work, and my alarm for the meeting did > not ring. > > > > > > I still have to summarize the PTL meeting. I will do so today. > > > > > > Again, sorry for the inconvenience; see you guys at our next > meeting. > > > > > > -- > > > Rafael Weingärtner > > > > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue May 4 13:55:06 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 04 May 2021 08:55:06 -0500 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <20210504001111.52o2fgjeyizhiwts@barron.net> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> <20210504001111.52o2fgjeyizhiwts@barron.net> Message-ID: <17937a935ab.c1df754d5757.2201956277196352904@ghanshyammann.com> ---- On Mon, 03 May 2021 19:11:11 -0500 Tom Barron wrote ---- > On 03/05/21 08:50 -0500, Ghanshyam Mann wrote: > > ---- On Sun, 02 May 2021 05:09:17 -0500 Radosław Piliszek wrote ---- > > > Dears, > > > > > > I have scraped the Zuul API to get names of jobs that *could* run on > > > master branch and are still on bionic. [1] > > > "Could" because I could not establish from the API whether they are > > > included in any pipelines or not really (e.g., there are lots of > > > transitive jobs there that have their nodeset overridden in children > > > and children are likely used in pipelines, not them). > > > > > > [1] https://paste.ubuntu.com/p/N3JQ4dsfqR/ > > The manila-image-elements and manila-test-image jobs listed here are > not pinned and are running with bionic but I made reviews with them > pinned to focal [2] [3] and they run fine. So I think manila is OK > w.r.t. dropping bionic support. > > [2] https://review.opendev.org/c/openstack/manila-image-elements/+/789296 > > [3] https://review.opendev.org/c/openstack/manila-test-image/+/789409 Thanks, Tom for testing. Please merge these patches before devstack patch merge. -gmann > > > > >Thanks for the list. 
We need to only worried about jobs using devstack master branch. Along with > >non-devstack jobs. there are many stable testing jobs also on the master gate which is all good to > >pin the bionic nodeset, for example - 'neutron-tempest-plugin-api-ussuri'. > > > >From the list, I see few more projects (other than listed in the subject of this email) jobs, so tagging them > >now: sahara, networking-sfc, manila, magnum, kuryr. > > > >-gmann > > > > > > > > -yoctozepto > > > > > > On Fri, Apr 30, 2021 at 12:28 AM Ghanshyam Mann wrote: > > > > > > > > Hello Everyone, > > > > > > > > As per the testing runtime since Victoria [1], we need to move our CI/CD to Ubuntu Focal 20.04 but > > > > it seems there are few jobs still running on Bionic. As devstack team is planning to drop the Bionic support > > > > you need to move those to Focal otherwise they will start failing. We are planning to merge the devstack patch > > > > by 2nd week of May. > > > > > > > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > > > > > > > I have not listed all the job but few of them which were failing with ' rtslib-fb-targetctl error' are below: > > > > > > > > Cinder- cinder-plugin-ceph-tempest-mn-aa > > > > - https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 > > > > > > > > python-cinderclient - python-cinderclient-functional-py36 > > > > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > > > > > > > Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182 > > > > > > > > Murani- murano-dashboard-sanity-check > > > > -https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 > > > > > > > > Also if your 3rd party CI is still running on Bionic, you can plan to migrate it to Focal before devstack patch merge. > > > > > > > > [1] https://governance.openstack.org/tc/reference/runtimes/victoria.html > > > > > > > > -gmann > > > > > > > > > > > > > > From amy at demarco.com Tue May 4 13:56:41 2021 From: amy at demarco.com (Amy Marrich) Date: Tue, 4 May 2021 08:56:41 -0500 Subject: Rocky Linux for Openstack In-Reply-To: References: Message-ID: In theory the RDO instructions should work but as mentioned there may be libraries or other packages where the versions on the Operating System level may vary and causee an issue. Rocky Linux can work with the Docs SiG if someone would like to provide instructions specifically for installing OpenStack on Rocky Linux and if they would like to contribute beyond that, including testing on that platform, the RPM Packaging SiG might be the right place to start those conversations. RDO meets Wednesdays at 14:00 UTC in the #RDO channel on Freenode just add an agenda item[0] if you'd like to learn more on that front. Thanks, Amy (spotz) 0 - .https://etherpad.opendev.org/p/RDO-Meeting On Tue, May 4, 2021 at 8:40 AM Alex Schultz wrote: > > > On Tue, May 4, 2021 at 2:42 AM Tim Bell wrote: > >> >> >> On 3 May 2021, at 23:07, Emilien Macchi wrote: >> >> Rocky Linux claims to be 100% compatible with Red Hat's OS family [1], so >> I don't see any reason why you couldn't use RPMs from RDO: >> >> https://docs.openstack.org/install-guide/environment-packages-rdo.html#enable-the-openstack-repository >> >> [1] Source: https://rockylinux.org >> >> >> I wonder if there would be some compatibility problems with using RDO on >> a RHEL compatible OS. 
If RDO is built against CentOS Stream [1], could it >> potentially have some dependencies on python packages which are due to be >> released in the next RHEL minor update (since Stream is on the latest >> version) ? >> >> > Generally RDO isn't specifying a version when building packages so it's > likely to be compatible. CentOS Stream will just have things that'll show > up in the next version of Rocky Linux. For stable releases it's likely to > just be compatible, whereas master might run into issues if new base os > dependencies get added. > > >> Tim >> [1] https://lists.rdoproject.org/pipermail/users/2021-January/000967.html >> >> On Mon, May 3, 2021 at 12:11 PM Wada Akor wrote: >> >>> Good day , >>> Please I want to know when will openstack provide information on how to >>> installation of openstack on Rocky Linux be available. >>> >>> Thanks & Regards >>> >> >> >> -- >> Emilien Macchi >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue May 4 13:58:41 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 04 May 2021 08:58:41 -0500 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <2950268.F7XZ1j6D0K@p1> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> <2950268.F7XZ1j6D0K@p1> Message-ID: <17937ac7ad8.dab64d1f6035.2421869115471104088@ghanshyammann.com> ---- On Tue, 04 May 2021 01:23:42 -0500 Slawek Kaplonski wrote ---- > Hi, > > Dnia poniedziałek, 3 maja 2021 15:50:09 CEST Ghanshyam Mann pisze: > > ---- On Sun, 02 May 2021 05:09:17 -0500 Radosław Piliszek > wrote ---- > > > > > Dears, > > > > > > I have scraped the Zuul API to get names of jobs that *could* run on > > > master branch and are still on bionic. [1] > > > "Could" because I could not establish from the API whether they are > > > included in any pipelines or not really (e.g., there are lots of > > > transitive jobs there that have their nodeset overridden in children > > > and children are likely used in pipelines, not them). > > > > > > [1] https://paste.ubuntu.com/p/N3JQ4dsfqR/ > > > > Thanks for the list. We need to only worried about jobs using devstack > master branch. Along with > > non-devstack jobs. there are many stable testing jobs also on the master > gate which is all good to > > pin the bionic nodeset, for example - 'neutron-tempest-plugin-api-ussuri'. > > > > From the list, I see few more projects (other than listed in the subject of > this email) jobs, so tagging them > > now: sahara, networking-sfc, manila, magnum, kuryr. > > > > -gmann > > > > > -yoctozepto > > > > > > On Fri, Apr 30, 2021 at 12:28 AM Ghanshyam Mann > wrote: > > > > Hello Everyone, > > > > > > > > As per the testing runtime since Victoria [1], we need to move our CI/ > CD to Ubuntu Focal 20.04 but > > > > it seems there are few jobs still running on Bionic. As devstack team > is planning to drop the Bionic support > > > > you need to move those to Focal otherwise they will start failing. We > are planning to merge the devstack patch > > > > by 2nd week of May. 
> > > > > > > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > > > > > > > I have not listed all the job but few of them which were failing with ' > rtslib-fb-targetctl error' are below: > > > > > > > > Cinder- cinder-plugin-ceph-tempest-mn-aa > > > > - https://opendev.org/openstack/cinder/src/commit/ > 7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 > > > > > > > > python-cinderclient - python-cinderclient-functional-py36 > > > > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > > > > > > > Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/ > branch/master/zuul.d/jobs.yaml#L182 > > > > > > > > Murani- murano-dashboard-sanity-check > > > > -https://opendev.org/openstack/murano-dashboard/src/commit/ > b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 > > > > > > > > Also if your 3rd party CI is still running on Bionic, you can plan to > migrate it to Focal before devstack patch merge. > > > > > > > > [1] https://governance.openstack.org/tc/reference/runtimes/ > victoria.html > > > > > > > > -gmann > > I checked neutron-* jobs on that list. All with "legacy" in the name are some > old jobs which may be run on some stable branches only. > Also neutron-tempest-plugin jobs on that list are for older stable branches > and I think they should be still running on Bionic. > In overall I think we are good on the Neutron with dropping support for Bionic > in the master branch. Thanks, Slawek for checking and confirmation, I saw this job 'networking-sfc-tempest-multinode'[1] in networking-sfc which might be running on master? [1] https://opendev.org/openstack/networking-sfc/src/branch/master/zuul.d/jobs.yaml#L16 -gmann > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From akekane at redhat.com Tue May 4 14:26:37 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Tue, 4 May 2021 19:56:37 +0530 Subject: [glance][ptg] Glance Xena PTG summary Message-ID: Hello Everyone, Apologies for delay in sending the PTG summary and Thank you to everyone who attended the Glance Xena PTG. We had extremely eventful discussions around Secure RBAC and some new ideas for improvement in Glance. Here is a short summary of the discussions. See the Etherpad [1] for full notes. Tuesday, April 20 # Wallaby Retrospective On the positive note, we merged a number of useful features this cycle. We managed to implement a project scope of secure RBAC for images API and Distributed image import stuff. On the other side we had usual problems of review bandwidth and we were not able to keep our focus on reducing/managing the glance bugs backlog. We really need more people in the community helping out with reviews, and ideally moving to become members of the core team. We are happy to onboard new members with appropriate help. # Bug squashing per milestone Unfortunately (due to lack of contributors) glance community was unable to keep track of its bug backlog for the past couple of cycles. This cycle our main focus is to revisit old bugs and reduce the bug backlogs for glance, glance_store and python-glanceclient. We agreed to discuss existing bugs in our weekly meeting after every two weeks for 15-20 minutes. # Interop WG interlock During this session we discussed the state of existing glance tempest coverage and what action needs to be taken if there are any API changes or new API is introduced. Wednesday, April 21 # Secure RBAC - OSSN-0088 This entire day we discussed implementing Secure RBAC in glance. 
We also decided to discuss with Lance/Gmann whether it is fine to add deprecation warnings for OSSN-0088 on master branch or we should add those directly to stable/wallaby branch where we have defaulted some metaded APIs to admin only. # Glance policy - revisit, restructure We also discussed to revisit and restructure our policy layer. At the moment glance is injecting policies at different layers and most of the policies are injected closed to the database layer. This approach is causing problems in implementing the secure RBAC for location/tasks APIs. During this cycle we are going to experiment on restructuring the policy layer of glance (approach will be to work on restructuring modiffy_image policy and then submit the spec on the basis of that finding before moving forward). # Secure RBAC - Hardening project scope, Implementing System scope/personas During discussion on this topic we identified that to implement system scope in glance we first need to restructure the glance policy layer. Which means we need to keep our focus on restructuring the glance policy layer in this cycle. Also at the moment only publicize_image policy is an appropriate candidate for system scope. So we need to identify whether there are any other APIs which can also use system scope. Thursday, April 22 # Native Image Encryption As this work has dependency on Barbican which is yet to be completed, we decided to revisit the progress of the same around Milestone 2 and decide whether we are ready to implement this feature in Xena cycle or postpone it to next cycle. # Multi-format images We need to identify regression on Nova if we decide to implement the same. I need to connect with dansmith to understand more about it. If there are no side effects then we will be working on design/specification for this feature in this cycle and implement the same in the next cycle. Erno also suggested that we should improve the image conversion plugin based on multiple stores support. # Cinder - Glance cross project discussion During this discussion Rajat (cinder driver maintainer for glance) walked us through the current state of cinder driver of glance and how we could add support for the new attachment API for cinder driver. Friday, April 23 # Cache-API We already agreed on implementation design about the same, the only reason it is pending is we shifted our focus on RBAC in the last cycle. So it is decided to wait for a couple of weeks in this cycle if we get any new contributor to work on it or else implement the same during milestone 1. # Glance Quotas This topic was raised on the fly by belmoreira during the PTG so we discussed the same. We decided to assess the use of keystone's unified limits and put up a design/specification in glance to add quotas for images. Apart from above topics during Open discussion we also discussed some of the swift related bugs which we will be addressing during this cycle. You will find the detailed information about the same in the PTG etherpad [1] along with the recordings of the sessions. I would once again like to thank everyone for joining us in the PTG. [1] https://etherpad.opendev.org/p/xena-glance-ptg Thanks and Regards, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pierre at stackhpc.com Tue May 4 14:34:08 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 4 May 2021 16:34:08 +0200 Subject: [release][ironic] ironic-python-agent-builder release model change In-Reply-To: References: Message-ID: Hi Hervé, Kayobe (part of the Kolla project) also uses ironic-python-agent-builder to build IPA images. We use its master branch [1], since it is the only one available. However, this has caused issues. For example, IPA builds broke on Ussuri [2] after ipa-builder started to use a feature of diskimage-builder that isn't available in the maximum DIB version allowed by Ussuri upper constraints. This wouldn't have happened had we been able to use stable branches. Of course we could use tags, but they can break too and can't be fixed. So as far as Kayobe is concerned, we fully support the proposed release model change and see no downside. On a related note, stable branches for diskimage-builder would be useful too. [1] https://opendev.org/openstack/kayobe/src/branch/master/ansible/group_vars/all/ipa#L20 [2] https://review.opendev.org/c/openstack/kayobe/+/775936 On Mon, 3 May 2021 at 10:20, Herve Beraud wrote: > > Hello, > > At first glance that makes sense. > > If I correctly understand the story, the ironic-python-agent [1] and the ironic-python-agent-builder [2] were within the same repo at the origin, correct? > > Does someone else use the ironic-python-agent-builder? > > [1] https://opendev.org/openstack/releases/src/branch/master/deliverables/xena/ironic-python-agent.yaml > [2] https://opendev.org/openstack/releases/src/branch/master/deliverables/_independent/ironic-python-agent-builder.yaml > > Le ven. 30 avr. 2021 à 16:34, Iury Gregory a écrit : >> >> Hi Riccardo, >> >> Thanks for raising this! >> I do like the idea of having stable branches for the ipa-builder +1 >> >> Em seg., 26 de abr. de 2021 às 12:03, Riccardo Pittau escreveu: >>> >>> Hello fellow openstackers! >>> >>> During the recent xena ptg, the ironic community had a discussion about the need to move the ironic-python-agent-builder project from an independent model to the standard release model. >>> When we initially split the builder from ironic-python-agent, we decided against it, but considering some problems we encountered during the road, the ironic community seems to be in favor of the change. >>> The reasons for this are mainly to strictly align the image building project to ironic-python-agent releases, and ease dealing with the occasional upgrade of tinycore linux, the base image used to build the "tinyipa" ironic-python-agent ramdisk. >>> >>> We'd like to involve the release team to ask for advice, not only on the process, but also considering that we need to ask to cut the first branch for the wallaby stable release, and we know we're a bit late for that! :) >>> >>> Thank you in advance for your help! 
>>> >>> Riccardo >> >> >> >> -- >> Att[]'s >> Iury Gregory Melo Ferreira >> MSc in Computer Science at UFCG >> Part of the ironic-core and puppet-manager-core team in OpenStack >> Software Engineer at Red Hat Czech >> Social: https://www.linkedin.com/in/iurygregory >> E-mail: iurygregory at gmail.com > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From tpb at dyncloud.net Tue May 4 14:36:08 2021 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 4 May 2021 10:36:08 -0400 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <17937a935ab.c1df754d5757.2201956277196352904@ghanshyammann.com> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> <20210504001111.52o2fgjeyizhiwts@barron.net> <17937a935ab.c1df754d5757.2201956277196352904@ghanshyammann.com> Message-ID: <20210504143608.mcmov6clb6vgkrpl@barron.net> On 04/05/21 08:55 -0500, Ghanshyam Mann wrote: > ---- On Mon, 03 May 2021 19:11:11 -0500 Tom Barron wrote ---- > > On 03/05/21 08:50 -0500, Ghanshyam Mann wrote: > > > ---- On Sun, 02 May 2021 05:09:17 -0500 Radosław Piliszek wrote ---- > > > > Dears, > > > > > > > > I have scraped the Zuul API to get names of jobs that *could* run on > > > > master branch and are still on bionic. [1] > > > > "Could" because I could not establish from the API whether they are > > > > included in any pipelines or not really (e.g., there are lots of > > > > transitive jobs there that have their nodeset overridden in children > > > > and children are likely used in pipelines, not them). > > > > > > > > [1] https://paste.ubuntu.com/p/N3JQ4dsfqR/ > > > > The manila-image-elements and manila-test-image jobs listed here are > > not pinned and are running with bionic but I made reviews with them > > pinned to focal [2] [3] and they run fine. So I think manila is OK > > w.r.t. dropping bionic support. > > > > [2] https://review.opendev.org/c/openstack/manila-image-elements/+/789296 > > > > [3] https://review.opendev.org/c/openstack/manila-test-image/+/789409 > >Thanks, Tom for testing. Please merge these patches before devstack patch merge. > >-gmann Dumb question probably, but ... Do we need to pin the nodepool for these jobs, or will they just start picking up focal? -- Tom > > > > > > > > >Thanks for the list. We need to only worried about jobs using devstack master branch. Along with > > >non-devstack jobs. 
there are many stable testing jobs also on the master gate which is all good to > > >pin the bionic nodeset, for example - 'neutron-tempest-plugin-api-ussuri'. > > > > > >From the list, I see few more projects (other than listed in the subject of this email) jobs, so tagging them > > >now: sahara, networking-sfc, manila, magnum, kuryr. > > > > > >-gmann > > > > > > > > > > > -yoctozepto > > > > > > > > On Fri, Apr 30, 2021 at 12:28 AM Ghanshyam Mann wrote: > > > > > > > > > > Hello Everyone, > > > > > > > > > > As per the testing runtime since Victoria [1], we need to move our CI/CD to Ubuntu Focal 20.04 but > > > > > it seems there are few jobs still running on Bionic. As devstack team is planning to drop the Bionic support > > > > > you need to move those to Focal otherwise they will start failing. We are planning to merge the devstack patch > > > > > by 2nd week of May. > > > > > > > > > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > > > > > > > > > I have not listed all the job but few of them which were failing with ' rtslib-fb-targetctl error' are below: > > > > > > > > > > Cinder- cinder-plugin-ceph-tempest-mn-aa > > > > > - https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 > > > > > > > > > > python-cinderclient - python-cinderclient-functional-py36 > > > > > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > > > > > > > > > Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182 > > > > > > > > > > Murani- murano-dashboard-sanity-check > > > > > -https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 > > > > > > > > > > Also if your 3rd party CI is still running on Bionic, you can plan to migrate it to Focal before devstack patch merge. > > > > > > > > > > [1] https://governance.openstack.org/tc/reference/runtimes/victoria.html > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > From zigo at debian.org Tue May 4 14:41:21 2021 From: zigo at debian.org (Thomas Goirand) Date: Tue, 4 May 2021 16:41:21 +0200 Subject: [horizon] Support for Angular 1.8.x in Horizon (fixing Debian Bullseye) Message-ID: <0dbeab98-a93e-efba-c71f-dbf22596f585@debian.org> Hi, In Debian Bullseye, we've noticed that the ssh keypair and Glance image panels are broken. We have python3-xstatic-angular that used to depends on libjs-angularjs, and that libjs-angularjs moved to 1.8.2. Therefore, Horizon in Bullseye appears broken. I have re-embedded Angula within the python3-xstatic-angular and ask the Debian release team for an unblock, but due to the fact that the Debian policy is to *not* allow twice the same library with different versions, I have little hope for this unblock request to be approved. See the discussion here: https://bugs.debian.org/988054 So my question is: how hard would it be to fix Horizon so that it could work with libjs-angularjs 1.8.2 ? Is there any patch already available for this? Cheers, Thomas Goirand (zigo) From hberaud at redhat.com Tue May 4 15:03:26 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 4 May 2021 17:03:26 +0200 Subject: [release][ironic] ironic-python-agent-builder release model change In-Reply-To: References: Message-ID: Thank you everyone for all these details. To simplify things I proposed a patch to host the discussion between teams. That will allow everyone to follow the advancement in an async way (@Julia let me know if that fits your needs). 
Feel free to vote and leave comments directly on this patch. https://review.opendev.org/c/openstack/releases/+/789587 Usually we avoid adding new deliverables to a stable series but I think that here we are in a special case and since switching back to cwi isn't something usual I think that we can do an exception for this one especially if that can help to solve several issues. My patch propose to: - change the deliverable model - release and cut this deliverable for Wallaby - init this deliverable for Xena Hervé Le mar. 4 mai 2021 à 16:34, Pierre Riteau a écrit : > Hi Hervé, > > Kayobe (part of the Kolla project) also uses > ironic-python-agent-builder to build IPA images. We use its master > branch [1], since it is the only one available. However, this has > caused issues. For example, IPA builds broke on Ussuri [2] after > ipa-builder started to use a feature of diskimage-builder that isn't > available in the maximum DIB version allowed by Ussuri upper > constraints. This wouldn't have happened had we been able to use > stable branches. Of course we could use tags, but they can break too > and can't be fixed. > > So as far as Kayobe is concerned, we fully support the proposed > release model change and see no downside. > > On a related note, stable branches for diskimage-builder would be useful > too. > > [1] > https://opendev.org/openstack/kayobe/src/branch/master/ansible/group_vars/all/ipa#L20 > [2] https://review.opendev.org/c/openstack/kayobe/+/775936 > > On Mon, 3 May 2021 at 10:20, Herve Beraud wrote: > > > > Hello, > > > > At first glance that makes sense. > > > > If I correctly understand the story, the ironic-python-agent [1] and the > ironic-python-agent-builder [2] were within the same repo at the origin, > correct? > > > > Does someone else use the ironic-python-agent-builder? > > > > [1] > https://opendev.org/openstack/releases/src/branch/master/deliverables/xena/ironic-python-agent.yaml > > [2] > https://opendev.org/openstack/releases/src/branch/master/deliverables/_independent/ironic-python-agent-builder.yaml > > > > Le ven. 30 avr. 2021 à 16:34, Iury Gregory a > écrit : > >> > >> Hi Riccardo, > >> > >> Thanks for raising this! > >> I do like the idea of having stable branches for the ipa-builder +1 > >> > >> Em seg., 26 de abr. de 2021 às 12:03, Riccardo Pittau < > elfosardo at gmail.com> escreveu: > >>> > >>> Hello fellow openstackers! > >>> > >>> During the recent xena ptg, the ironic community had a discussion > about the need to move the ironic-python-agent-builder project from an > independent model to the standard release model. > >>> When we initially split the builder from ironic-python-agent, we > decided against it, but considering some problems we encountered during the > road, the ironic community seems to be in favor of the change. > >>> The reasons for this are mainly to strictly align the image building > project to ironic-python-agent releases, and ease dealing with the > occasional upgrade of tinycore linux, the base image used to build the > "tinyipa" ironic-python-agent ramdisk. > >>> > >>> We'd like to involve the release team to ask for advice, not only on > the process, but also considering that we need to ask to cut the first > branch for the wallaby stable release, and we know we're a bit late for > that! :) > >>> > >>> Thank you in advance for your help! 
> >>> > >>> Riccardo > >> > >> > >> > >> -- > >> Att[]'s > >> Iury Gregory Melo Ferreira > >> MSc in Computer Science at UFCG > >> Part of the ironic-core and puppet-manager-core team in OpenStack > >> Software Engineer at Red Hat Czech > >> Social: https://www.linkedin.com/in/iurygregory > >> E-mail: iurygregory at gmail.com > > > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From elfosardo at gmail.com Tue May 4 15:35:48 2021 From: elfosardo at gmail.com (Riccardo Pittau) Date: Tue, 4 May 2021 17:35:48 +0200 Subject: [release][ironic] ironic-python-agent-builder release model change In-Reply-To: References: Message-ID: Thanks Hervé! I'm also not sure I will be able to join the discussion this Friday, but I'll do my best, so having that patch already up greatly helps. Thanks also everyone who chipped in to clarify and provide better context :) Riccardo On Tue, May 4, 2021 at 5:03 PM Herve Beraud wrote: > Thank you everyone for all these details. > > To simplify things I proposed a patch to host the discussion between > teams. That will allow everyone to follow the advancement in an async way > (@Julia let me know if that fits your needs). Feel free to vote and leave > comments directly on this patch. > > https://review.opendev.org/c/openstack/releases/+/789587 > > Usually we avoid adding new deliverables to a stable series but I think > that here we are in a special case and since switching back to cwi isn't > something usual I think that we can do an exception for this one especially > if that can help to solve several issues. 
> > My patch propose to: > - change the deliverable model > - release and cut this deliverable for Wallaby > - init this deliverable for Xena > > Hervé > > Le mar. 4 mai 2021 à 16:34, Pierre Riteau a écrit : > >> Hi Hervé, >> >> Kayobe (part of the Kolla project) also uses >> ironic-python-agent-builder to build IPA images. We use its master >> branch [1], since it is the only one available. However, this has >> caused issues. For example, IPA builds broke on Ussuri [2] after >> ipa-builder started to use a feature of diskimage-builder that isn't >> available in the maximum DIB version allowed by Ussuri upper >> constraints. This wouldn't have happened had we been able to use >> stable branches. Of course we could use tags, but they can break too >> and can't be fixed. >> >> So as far as Kayobe is concerned, we fully support the proposed >> release model change and see no downside. >> >> On a related note, stable branches for diskimage-builder would be useful >> too. >> >> [1] >> https://opendev.org/openstack/kayobe/src/branch/master/ansible/group_vars/all/ipa#L20 >> [2] https://review.opendev.org/c/openstack/kayobe/+/775936 >> >> On Mon, 3 May 2021 at 10:20, Herve Beraud wrote: >> > >> > Hello, >> > >> > At first glance that makes sense. >> > >> > If I correctly understand the story, the ironic-python-agent [1] and >> the ironic-python-agent-builder [2] were within the same repo at the >> origin, correct? >> > >> > Does someone else use the ironic-python-agent-builder? >> > >> > [1] >> https://opendev.org/openstack/releases/src/branch/master/deliverables/xena/ironic-python-agent.yaml >> > [2] >> https://opendev.org/openstack/releases/src/branch/master/deliverables/_independent/ironic-python-agent-builder.yaml >> > >> > Le ven. 30 avr. 2021 à 16:34, Iury Gregory a >> écrit : >> >> >> >> Hi Riccardo, >> >> >> >> Thanks for raising this! >> >> I do like the idea of having stable branches for the ipa-builder +1 >> >> >> >> Em seg., 26 de abr. de 2021 às 12:03, Riccardo Pittau < >> elfosardo at gmail.com> escreveu: >> >>> >> >>> Hello fellow openstackers! >> >>> >> >>> During the recent xena ptg, the ironic community had a discussion >> about the need to move the ironic-python-agent-builder project from an >> independent model to the standard release model. >> >>> When we initially split the builder from ironic-python-agent, we >> decided against it, but considering some problems we encountered during the >> road, the ironic community seems to be in favor of the change. >> >>> The reasons for this are mainly to strictly align the image building >> project to ironic-python-agent releases, and ease dealing with the >> occasional upgrade of tinycore linux, the base image used to build the >> "tinyipa" ironic-python-agent ramdisk. >> >>> >> >>> We'd like to involve the release team to ask for advice, not only on >> the process, but also considering that we need to ask to cut the first >> branch for the wallaby stable release, and we know we're a bit late for >> that! :) >> >>> >> >>> Thank you in advance for your help! 
>> >>> >> >>> Riccardo >> >> >> >> >> >> >> >> -- >> >> Att[]'s >> >> Iury Gregory Melo Ferreira >> >> MSc in Computer Science at UFCG >> >> Part of the ironic-core and puppet-manager-core team in OpenStack >> >> Software Engineer at Red Hat Czech >> >> Social: https://www.linkedin.com/in/iurygregory >> >> E-mail: iurygregory at gmail.com >> > >> > >> > >> > -- >> > Hervé Beraud >> > Senior Software Engineer at Red Hat >> > irc: hberaud >> > https://github.com/4383/ >> > https://twitter.com/4383hberaud >> > -----BEGIN PGP SIGNATURE----- >> > >> > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ >> > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ >> > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP >> > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G >> > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g >> > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw >> > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ >> > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 >> > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y >> > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 >> > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O >> > v6rDpkeNksZ9fFSyoY2o >> > =ECSj >> > -----END PGP SIGNATURE----- >> > >> >> > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue May 4 18:25:20 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 04 May 2021 13:25:20 -0500 Subject: [TC][Interop] process change proposal for interoperability In-Reply-To: References: Message-ID: <17938a09cc2.c0a8b8f420793.887269094838742367@ghanshyammann.com> ---- On Mon, 03 May 2021 16:44:35 -0500 Kanevsky, Arkady wrote ---- > > Team, > Based on guidance from the Board of Directors, Interop WG is changing its process. > Please, review https://review.opendev.org/c/osf/interop/+/787646 > Guidelines will no longer need to be approved by the board. > It is approved by “committee” consisting of representatives from Interop WG, refstack, TC and Foundation marketplace administrator. Thanks, Arkady to push it on ML. 
For members who are new to this discussion, please check this https://review.opendev.org/c/osf/interop/+/787646 Also about the new 'Joint Committee's proposal in https://review.opendev.org/c/osf/interop/+/784622/5/doc/source/process/CoreDefinition.rst#223 With my TC hats on, I am not so clear about the new 'Joint Committee' proposal. As per the bylaws of the Foundation, section 4.1(b)(iii))[1], all the capabilities must be a subset of the OpenStack Technical Committee, which is what we have in the current process. All interop capabilities are discussed with the project teams (for which trademark program is) under OpenStack TC and QA (Tempest + Tempest plugins) provides the test coverage and maintenance of tests. In the new process, making TC a decision making and co-ownership group in Interop WG is not very clear to me. InteropWG is a Board's working group and does not come under the OpenStack TC governance as such. Does the new process mean we are adding InteropWG under TC? or under the joint ownership of Board and TC? What I see as InteropWG is, a group of people working on the trademark program and all technical help can be asked from the OpenInfra project (OpenStack) to draft the guidelines and test coverage/maintenance. But keep all the approval and decision-making authority to InteropWG or Board. This is how it is currently. Also, we can keep encouraging the community members to join this group to fill the required number of resources. [1] https://www.openstack.org/legal/bylaws-of-the-openstack-foundation/ -gmann > > Will be happy to discuss and provide more details. > Looking for TC review and approval of the proposal before I present it to the board. > Thanks, > > Arkady Kanevsky, Ph.D. > SP Chief Technologist & DE > Dell Technologies office of CTO > Dell Inc. One Dell Way, MS PS2-91 > Round Rock, TX 78682, USA > Phone: 512 7204955 > > From gmann at ghanshyammann.com Tue May 4 18:32:16 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 04 May 2021 13:32:16 -0500 Subject: [all][tc] Nodejs version change in Xena cycle testing runtime Message-ID: <17938a6f632.d7f706ae21013.5850287225668168318@ghanshyammann.com> Hello Everyone, We have done a change in Xena cycle testing runtime on bumping the Nodejs version from Nodejs10 to Nodejs14[1]. Nodejs10 is going to EOL by April 2021 [2]. If you have any jobs running on Nodejs10, please upgrade it to Nodejs14. Vishal from the Horizon team will drive this effort and looking forward to coordination on code/review. 
[1] https://review.opendev.org/c/openstack/governance/+/788306 [21] https://nodejs.org/en/about/releases/ -gmann From skaplons at redhat.com Tue May 4 20:09:02 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 04 May 2021 22:09:02 +0200 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <17937ac7ad8.dab64d1f6035.2421869115471104088@ghanshyammann.com> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <2950268.F7XZ1j6D0K@p1> <17937ac7ad8.dab64d1f6035.2421869115471104088@ghanshyammann.com> Message-ID: <2822760.diVoI4oBLF@p1> Hi, Dnia wtorek, 4 maja 2021 15:58:41 CEST Ghanshyam Mann pisze: > ---- On Tue, 04 May 2021 01:23:42 -0500 Slawek Kaplonski wrote ---- > > > Hi, > > > > Dnia poniedziałek, 3 maja 2021 15:50:09 CEST Ghanshyam Mann pisze: > > > ---- On Sun, 02 May 2021 05:09:17 -0500 Radosław Piliszek > > > > wrote ---- > > > > > > Dears, > > > > > > > > I have scraped the Zuul API to get names of jobs that *could* run on > > > > master branch and are still on bionic. [1] > > > > "Could" because I could not establish from the API whether they are > > > > included in any pipelines or not really (e.g., there are lots of > > > > transitive jobs there that have their nodeset overridden in children > > > > and children are likely used in pipelines, not them). > > > > > > > > [1] https://paste.ubuntu.com/p/N3JQ4dsfqR/ > > > > > > Thanks for the list. We need to only worried about jobs using devstack > > > > master branch. Along with > > > > > non-devstack jobs. there are many stable testing jobs also on the master > > > > gate which is all good to > > > > > pin the bionic nodeset, for example - 'neutron-tempest-plugin-api-ussuri'. > > > > > > From the list, I see few more projects (other than listed in the subject of > > > > this email) jobs, so tagging them > > > > > now: sahara, networking-sfc, manila, magnum, kuryr. > > > > > > -gmann > > > > > > > -yoctozepto > > > > > > > > On Fri, Apr 30, 2021 at 12:28 AM Ghanshyam Mann > > > > wrote: > > > > > Hello Everyone, > > > > > > > > > > As per the testing runtime since Victoria [1], we need to move our CI/ > > > > CD to Ubuntu Focal 20.04 but > > > > > > > it seems there are few jobs still running on Bionic. As devstack team > > > > is planning to drop the Bionic support > > > > > > > you need to move those to Focal otherwise they will start failing. We > > > > are planning to merge the devstack patch > > > > > > > by 2nd week of May. 
> > > > > > > > > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > > > > > > > > > I have not listed all the job but few of them which were failing with ' > > > > rtslib-fb-targetctl error' are below: > > > > > Cinder- cinder-plugin-ceph-tempest-mn-aa > > > > > - https://opendev.org/openstack/cinder/src/commit/ > > > > 7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 > > > > > > > python-cinderclient - python-cinderclient-functional-py36 > > > > > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > > > > > > > > > Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/ > > > > branch/master/zuul.d/jobs.yaml#L182 > > > > > > > Murani- murano-dashboard-sanity-check > > > > > -https://opendev.org/openstack/murano-dashboard/src/commit/ > > > > b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 > > > > > > > Also if your 3rd party CI is still running on Bionic, you can plan to > > > > migrate it to Focal before devstack patch merge. > > > > > > > [1] https://governance.openstack.org/tc/reference/runtimes/ > > > > victoria.html > > > > > > > -gmann > > > > I checked neutron-* jobs on that list. All with "legacy" in the name are some > > old jobs which may be run on some stable branches only. > > Also neutron-tempest-plugin jobs on that list are for older stable branches > > and I think they should be still running on Bionic. > > In overall I think we are good on the Neutron with dropping support for Bionic > > in the master branch. > > Thanks, Slawek for checking and confirmation, I saw this job 'networking-sfc-tempest- multinode'[1] > in networking-sfc which might be running on master? > > [1] https://opendev.org/openstack/networking-sfc/src/branch/master/zuul.d/ jobs.yaml#L16 > Yes. You're right. I checked "neutron*" jobs but not "networking*" ones. This is running on master branch as non-voting job. I just proposed patch [1] to change that. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From Arkady.Kanevsky at dell.com Tue May 4 21:21:43 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Tue, 4 May 2021 21:21:43 +0000 Subject: [TC][Interop] process change proposal for interoperability In-Reply-To: <17938a09cc2.c0a8b8f420793.887269094838742367@ghanshyammann.com> References: <17938a09cc2.c0a8b8f420793.887269094838742367@ghanshyammann.com> Message-ID: Comments inline -----Original Message----- From: Ghanshyam Mann Sent: Tuesday, May 4, 2021 1:25 PM To: Kanevsky, Arkady Cc: openstack-discuss Subject: Re: [TC][Interop] process change proposal for interoperability [EXTERNAL EMAIL] ---- On Mon, 03 May 2021 16:44:35 -0500 Kanevsky, Arkady wrote ---- > > Team, > Based on guidance from the Board of Directors, Interop WG is changing its process. > Please, review https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/787646__;Kw!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCectuuGGT$ [review[.]opendev[.]org] > Guidelines will no longer need to be approved by the board. > It is approved by “committee” consisting of representatives from Interop WG, refstack, TC and Foundation marketplace administrator. Thanks, Arkady to push it on ML. 
For members who are new to this discussion, please check this https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/787646__;Kw!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCectuuGGT$ [review[.]opendev[.]org] Also about the new 'Joint Committee's proposal in https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/784622/5/doc/source/process/CoreDefinition.rst*223__;KyM!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCeV9JbdRH$ [review[.]opendev[.]org] With my TC hats on, I am not so clear about the new 'Joint Committee' proposal. As per the bylaws of the Foundation, section 4.1(b)(iii))[1], all the capabilities must be a subset of the OpenStack Technical Committee, which is what we have in the current process. All interop capabilities are discussed with the project teams (for which trademark program is) under OpenStack TC and QA (Tempest + Tempest plugins) provides the test coverage and maintenance of tests. [ak] My view that Interop WG is still under board, as it is part of Marketplace and Logo trademark program. But guidelines doc no longer need to go thru board for approval. That is why new process is created. Thus, the purpose of new "approval committee" of all parties that impact and impacted by guidelines. In the new process, making TC a decision making and co-ownership group in Interop WG is not very clear to me. InteropWG is a Board's working group and does not come under the OpenStack TC governance as such. Does the new process mean we are adding InteropWG under TC? or under the joint ownership of Board and TC? What I see as InteropWG is, a group of people working on the trademark program and all technical help can be asked from the OpenInfra project (OpenStack) to draft the guidelines and test coverage/maintenance. But keep all the approval and decision-making authority to InteropWG or Board. This is how it is currently. Also, we can keep encouraging the community members to join this group to fill the required number of resources. [ak] Board as it is now Open Infrastructure board, does not feel that it need to be involved on routing operation of Interop WG, including approval of new guidelines. Thus, the change in the process. Board wants to delegate its approval authority to us. [1] https://urldefense.com/v3/__https://www.openstack.org/legal/bylaws-of-the-openstack-foundation/__;!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCeURuNcUW$ [openstack[.]org] -gmann > > Will be happy to discuss and provide more details. > Looking for TC review and approval of the proposal before I present it to the board. > Thanks, > > Arkady Kanevsky, Ph.D. > SP Chief Technologist & DE > Dell Technologies office of CTO > Dell Inc. 
One Dell Way, MS PS2-91 > Round Rock, TX 78682, USA > Phone: 512 7204955 > > From gouthampravi at gmail.com Tue May 4 21:23:26 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 4 May 2021 14:23:26 -0700 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <20210504143608.mcmov6clb6vgkrpl@barron.net> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> <20210504001111.52o2fgjeyizhiwts@barron.net> <17937a935ab.c1df754d5757.2201956277196352904@ghanshyammann.com> <20210504143608.mcmov6clb6vgkrpl@barron.net> Message-ID: On Tue, May 4, 2021 at 7:40 AM Tom Barron wrote: > On 04/05/21 08:55 -0500, Ghanshyam Mann wrote: > > ---- On Mon, 03 May 2021 19:11:11 -0500 Tom Barron > wrote ---- > > > On 03/05/21 08:50 -0500, Ghanshyam Mann wrote: > > > > ---- On Sun, 02 May 2021 05:09:17 -0500 Radosław Piliszek < > radoslaw.piliszek at gmail.com> wrote ---- > > > > > Dears, > > > > > > > > > > I have scraped the Zuul API to get names of jobs that *could* run > on > > > > > master branch and are still on bionic. [1] > > > > > "Could" because I could not establish from the API whether they are > > > > > included in any pipelines or not really (e.g., there are lots of > > > > > transitive jobs there that have their nodeset overridden in > children > > > > > and children are likely used in pipelines, not them). > > > > > > > > > > [1] https://paste.ubuntu.com/p/N3JQ4dsfqR/ > > > > > > The manila-image-elements and manila-test-image jobs listed here are > > > not pinned and are running with bionic but I made reviews with them > > > pinned to focal [2] [3] and they run fine. So I think manila is OK > > > w.r.t. dropping bionic support. > > > > > > [2] > https://review.opendev.org/c/openstack/manila-image-elements/+/789296 > > > > > > [3] https://review.opendev.org/c/openstack/manila-test-image/+/789409 > > > >Thanks, Tom for testing. Please merge these patches before devstack patch > merge. > > > >-gmann > > Dumb question probably, but ... > > Do we need to pin the nodepool for these jobs, or will they just start > picking up focal? > The jobs that were using the bionic nodes inherited from the "unittests" job and are agnostic to the platform for the most part. The unittest job inherits from the base jobs that fungi's modifying here: https://review.opendev.org/c/opendev/base-jobs/+/789097/ and here: https://review.opendev.org/c/opendev/base-jobs/+/789098 ; so no need to pin a nodeset - we'll get the changes transparently when the patches merge. > > -- Tom > > > > > > > > > > > > >Thanks for the list. We need to only worried about jobs using > devstack master branch. Along with > > > >non-devstack jobs. there are many stable testing jobs also on the > master gate which is all good to > > > >pin the bionic nodeset, for example - > 'neutron-tempest-plugin-api-ussuri'. > > > > > > > >From the list, I see few more projects (other than listed in the > subject of this email) jobs, so tagging them > > > >now: sahara, networking-sfc, manila, magnum, kuryr. 
> > > > > > > >-gmann > > > > > > > > > > > > > > -yoctozepto > > > > > > > > > > On Fri, Apr 30, 2021 at 12:28 AM Ghanshyam Mann < > gmann at ghanshyammann.com> wrote: > > > > > > > > > > > > Hello Everyone, > > > > > > > > > > > > As per the testing runtime since Victoria [1], we need to move > our CI/CD to Ubuntu Focal 20.04 but > > > > > > it seems there are few jobs still running on Bionic. As devstack > team is planning to drop the Bionic support > > > > > > you need to move those to Focal otherwise they will start > failing. We are planning to merge the devstack patch > > > > > > by 2nd week of May. > > > > > > > > > > > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > > > > > > > > > > > I have not listed all the job but few of them which were failing > with ' rtslib-fb-targetctl error' are below: > > > > > > > > > > > > Cinder- cinder-plugin-ceph-tempest-mn-aa > > > > > > - > https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 > > > > > > > > > > > > python-cinderclient - python-cinderclient-functional-py36 > > > > > > - > https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > > > > > > > > > > > Octavia- > https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182 > > > > > > > > > > > > Murani- murano-dashboard-sanity-check > > > > > > - > https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 > > > > > > > > > > > > Also if your 3rd party CI is still running on Bionic, you can > plan to migrate it to Focal before devstack patch merge. > > > > > > > > > > > > [1] > https://governance.openstack.org/tc/reference/runtimes/victoria.html > > > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue May 4 22:06:55 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 4 May 2021 22:06:55 +0000 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> <20210504001111.52o2fgjeyizhiwts@barron.net> <17937a935ab.c1df754d5757.2201956277196352904@ghanshyammann.com> <20210504143608.mcmov6clb6vgkrpl@barron.net> Message-ID: <20210504220655.hd5q4zzlpe2s7t4k@yuggoth.org> On 2021-05-04 14:23:26 -0700 (-0700), Goutham Pacha Ravi wrote: [...] > The unittest job inherits from the base jobs that fungi's > modifying here: > https://review.opendev.org/c/opendev/base-jobs/+/789097/ and here: > https://review.opendev.org/c/opendev/base-jobs/+/789098 ; so no > need to pin a nodeset - we'll get the changes transparently when > the patches merge. [...] Specifically 789098, yeah, which is now scheduled for approval two weeks from today: http://lists.opendev.org/pipermail/service-announce/2021-May/000019.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Tue May 4 22:48:02 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 04 May 2021 17:48:02 -0500 Subject: [TC][Interop] process change proposal for interoperability In-Reply-To: References: <17938a09cc2.c0a8b8f420793.887269094838742367@ghanshyammann.com> Message-ID: <1793991208a.c6eddff726720.3252842048256808928@ghanshyammann.com> ---- On Tue, 04 May 2021 16:21:43 -0500 Kanevsky, Arkady wrote ---- > Comments inline > > -----Original Message----- > From: Ghanshyam Mann > Sent: Tuesday, May 4, 2021 1:25 PM > To: Kanevsky, Arkady > Cc: openstack-discuss > Subject: Re: [TC][Interop] process change proposal for interoperability > > > [EXTERNAL EMAIL] > > ---- On Mon, 03 May 2021 16:44:35 -0500 Kanevsky, Arkady wrote ---- > > Team, > Based on guidance from the Board of Directors, Interop WG is changing its process. > > Please, review https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/787646__;Kw!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCectuuGGT$ [review[.]opendev[.]org] > Guidelines will no longer need to be approved by the board. > > It is approved by “committee” consisting of representatives from Interop WG, refstack, TC and Foundation marketplace administrator. > > Thanks, Arkady to push it on ML. > > For members who are new to this discussion, please check this https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/787646__;Kw!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCectuuGGT$ [review[.]opendev[.]org] Also about the new 'Joint Committee's proposal in https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/784622/5/doc/source/process/CoreDefinition.rst*223__;KyM!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCeV9JbdRH$ [review[.]opendev[.]org] > > With my TC hats on, I am not so clear about the new 'Joint Committee' proposal. As per the bylaws of the Foundation, section 4.1(b)(iii))[1], all the capabilities must be a subset of the OpenStack Technical Committee, which is what we have in the current process. All interop capabilities are discussed with the project teams (for which trademark program is) under OpenStack TC and QA (Tempest + Tempest plugins) provides the test coverage and maintenance of tests. > > [ak] My view that Interop WG is still under board, as it is part of Marketplace and Logo trademark program. > But guidelines doc no longer need to go thru board for approval. > That is why new process is created. Thus, the purpose of new "approval committee" of all parties that impact and impacted by guidelines. > > In the new process, making TC a decision making and co-ownership group in Interop WG is not very clear to me. InteropWG is a Board's working group and does not come under the OpenStack TC governance as such. Does the new process mean we are adding InteropWG under TC? or under the joint ownership of Board and TC? > > What I see as InteropWG is, a group of people working on the trademark program and all technical help can be asked from the OpenInfra project > (OpenStack) to draft the guidelines and test coverage/maintenance. But keep all the approval and decision-making authority to InteropWG or Board. > This is how it is currently. Also, we can keep encouraging the community members to join this group to fill the required number of resources. 
> > [ak] Board as it is now Open Infrastructure board, does not feel that it need to be involved on routing operation of Interop WG, including approval of new guidelines. > Thus, the change in the process. Board wants to delegate its approval authority to us. Yes, I agree that Board does not need to review the guidelines as such. But here InteropWG will continue to do those as a single ownership body and take help from OpenStack Project Team, OpenStack Technical Committee for any query or so instead of co-owning the guidelines. OpenStack TC can own the test coverage of those which is what we are doing currently but for smooth working, I feel guidelines ownership should be 100% to InteropWG. -gmann > > [1] https://urldefense.com/v3/__https://www.openstack.org/legal/bylaws-of-the-openstack-foundation/__;!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCeURuNcUW$ [openstack[.]org] > > -gmann > > > > > Will be happy to discuss and provide more details. > > Looking for TC review and approval of the proposal before I present it to the board. > > Thanks, > > > > Arkady Kanevsky, Ph.D. > > SP Chief Technologist & DE > > Dell Technologies office of CTO > > Dell Inc. One Dell Way, MS PS2-91 > > Round Rock, TX 78682, USA > > Phone: 512 7204955 > > > > > From gouthampravi at gmail.com Tue May 4 23:27:24 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Tue, 4 May 2021 16:27:24 -0700 Subject: [TC][Interop] process change proposal for interoperability In-Reply-To: <1793991208a.c6eddff726720.3252842048256808928@ghanshyammann.com> References: <17938a09cc2.c0a8b8f420793.887269094838742367@ghanshyammann.com> <1793991208a.c6eddff726720.3252842048256808928@ghanshyammann.com> Message-ID: On Tue, May 4, 2021 at 3:52 PM Ghanshyam Mann wrote: > ---- On Tue, 04 May 2021 16:21:43 -0500 Kanevsky, Arkady < > Arkady.Kanevsky at dell.com> wrote ---- > > Comments inline > > > > -----Original Message----- > > From: Ghanshyam Mann > > Sent: Tuesday, May 4, 2021 1:25 PM > > To: Kanevsky, Arkady > > Cc: openstack-discuss > > Subject: Re: [TC][Interop] process change proposal for interoperability > > > > > > [EXTERNAL EMAIL] > > > > ---- On Mon, 03 May 2021 16:44:35 -0500 Kanevsky, Arkady < > Arkady.Kanevsky at dell.com> wrote ---- > > Team, > Based on guidance > from the Board of Directors, Interop WG is changing its process. > > > Please, review > https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/787646__;Kw!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCectuuGGT$ > [review[.]opendev[.]org] > Guidelines will no longer need to be approved > by the board. > > > It is approved by “committee” consisting of representatives from > Interop WG, refstack, TC and Foundation marketplace administrator. > > > > Thanks, Arkady to push it on ML. > > > > For members who are new to this discussion, please check this > https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/787646__;Kw!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCectuuGGT$ > [review[.]opendev[.]org] Also about the new 'Joint Committee's proposal in > https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/784622/5/doc/source/process/CoreDefinition.rst*223__;KyM!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCeV9JbdRH$ > [review[.]opendev[.]org] > > > > With my TC hats on, I am not so clear about the new 'Joint Committee' > proposal. 
As per the bylaws of the Foundation, section 4.1(b)(iii))[1], all > the capabilities must be a subset of the OpenStack Technical Committee, > which is what we have in the current process. All interop capabilities are > discussed with the project teams (for which trademark program is) under > OpenStack TC and QA (Tempest + Tempest plugins) provides the test coverage > and maintenance of tests. > > > > [ak] My view that Interop WG is still under board, as it is part of > Marketplace and Logo trademark program. > > But guidelines doc no longer need to go thru board for approval. > > That is why new process is created. Thus, the purpose of new "approval > committee" of all parties that impact and impacted by guidelines. > > > > In the new process, making TC a decision making and co-ownership group > in Interop WG is not very clear to me. InteropWG is a Board's working group > and does not come under the OpenStack TC governance as such. Does the new > process mean we are adding InteropWG under TC? or under the joint ownership > of Board and TC? > > > > What I see as InteropWG is, a group of people working on the trademark > program and all technical help can be asked from the OpenInfra project > > (OpenStack) to draft the guidelines and test coverage/maintenance. But > keep all the approval and decision-making authority to InteropWG or Board. > > This is how it is currently. Also, we can keep encouraging the > community members to join this group to fill the required number of > resources. > > > > [ak] Board as it is now Open Infrastructure board, does not feel that > it need to be involved on routing operation of Interop WG, including > approval of new guidelines. > > Thus, the change in the process. Board wants to delegate its approval > authority to us. > > Yes, I agree that Board does not need to review the guidelines as such. > But here InteropWG will continue to do those as a single ownership body and > take > help from OpenStack Project Team, OpenStack Technical Committee for any > query or so instead of co-owning the guidelines. OpenStack TC can own > the test coverage of those which is what we are doing currently but for > smooth working, I feel guidelines ownership should be 100% to InteropWG. > I agree the TC can impact the guidelines in an advisory role, as could anyone else. Is the concern here that we need technical purview specifically over test guidelines? If yes, could we have a list of interop liaisons (default to PTLs or tact-sig liaisons) from the project teams in this committee instead? The list of these liaisons can be dynamic and informal, and we can discuss guidelines over this ML as we've done in the past. > > -gmann > > > > > [1] > https://urldefense.com/v3/__https://www.openstack.org/legal/bylaws-of-the-openstack-foundation/__;!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCeURuNcUW$ > [openstack[.]org] > > > > -gmann > > > > > > > > Will be happy to discuss and provide more details. > > > Looking for TC review and approval of the proposal before I present > it to the board. > > > Thanks, > > > > > > Arkady Kanevsky, Ph.D. > > > SP Chief Technologist & DE > > > Dell Technologies office of CTO > > > Dell Inc. One Dell Way, MS PS2-91 > > > Round Rock, TX 78682, USA > > > Phone: 512 7204955 > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Wed May 5 00:31:08 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 04 May 2021 19:31:08 -0500 Subject: [TC][Interop] process change proposal for interoperability In-Reply-To: References: <17938a09cc2.c0a8b8f420793.887269094838742367@ghanshyammann.com> <1793991208a.c6eddff726720.3252842048256808928@ghanshyammann.com> Message-ID: <17939ef83d5.d7e2375f27281.3052373082493661243@ghanshyammann.com> ---- On Tue, 04 May 2021 18:27:24 -0500 Goutham Pacha Ravi wrote ---- > > > On Tue, May 4, 2021 at 3:52 PM Ghanshyam Mann wrote: > ---- On Tue, 04 May 2021 16:21:43 -0500 Kanevsky, Arkady wrote ---- > > Comments inline > > > > -----Original Message----- > > From: Ghanshyam Mann > > Sent: Tuesday, May 4, 2021 1:25 PM > > To: Kanevsky, Arkady > > Cc: openstack-discuss > > Subject: Re: [TC][Interop] process change proposal for interoperability > > > > > > [EXTERNAL EMAIL] > > > > ---- On Mon, 03 May 2021 16:44:35 -0500 Kanevsky, Arkady wrote ---- > > Team, > Based on guidance from the Board of Directors, Interop WG is changing its process. > > > Please, review https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/787646__;Kw!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCectuuGGT$ [review[.]opendev[.]org] > Guidelines will no longer need to be approved by the board. > > > It is approved by “committee” consisting of representatives from Interop WG, refstack, TC and Foundation marketplace administrator. > > > > Thanks, Arkady to push it on ML. > > > > For members who are new to this discussion, please check this https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/787646__;Kw!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCectuuGGT$ [review[.]opendev[.]org] Also about the new 'Joint Committee's proposal in https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/784622/5/doc/source/process/CoreDefinition.rst*223__;KyM!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCeV9JbdRH$ [review[.]opendev[.]org] > > > > With my TC hats on, I am not so clear about the new 'Joint Committee' proposal. As per the bylaws of the Foundation, section 4.1(b)(iii))[1], all the capabilities must be a subset of the OpenStack Technical Committee, which is what we have in the current process. All interop capabilities are discussed with the project teams (for which trademark program is) under OpenStack TC and QA (Tempest + Tempest plugins) provides the test coverage and maintenance of tests. > > > > [ak] My view that Interop WG is still under board, as it is part of Marketplace and Logo trademark program. > > But guidelines doc no longer need to go thru board for approval. > > That is why new process is created. Thus, the purpose of new "approval committee" of all parties that impact and impacted by guidelines. > > > > In the new process, making TC a decision making and co-ownership group in Interop WG is not very clear to me. InteropWG is a Board's working group and does not come under the OpenStack TC governance as such. Does the new process mean we are adding InteropWG under TC? or under the joint ownership of Board and TC? > > > > What I see as InteropWG is, a group of people working on the trademark program and all technical help can be asked from the OpenInfra project > > (OpenStack) to draft the guidelines and test coverage/maintenance. But keep all the approval and decision-making authority to InteropWG or Board. > > This is how it is currently. 
Also, we can keep encouraging the community members to join this group to fill the required number of resources. > > > > [ak] Board as it is now Open Infrastructure board, does not feel that it need to be involved on routing operation of Interop WG, including approval of new guidelines. > > Thus, the change in the process. Board wants to delegate its approval authority to us. > > Yes, I agree that Board does not need to review the guidelines as such. But here InteropWG will continue to do those as a single ownership body and take > help from OpenStack Project Team, OpenStack Technical Committee for any query or so instead of co-owning the guidelines. OpenStack TC can own > the test coverage of those which is what we are doing currently but for smooth working, I feel guidelines ownership should be 100% to InteropWG. > > I agree the TC can impact the guidelines in an advisory role, as could anyone else. Is the concern here that we need technical purview specifically over test guidelines? If yes, could we have a list of interop liaisons (default to PTLs or tact-sig liaisons) from the project teams in this committee instead? The list of these liaisons can be dynamic and informal, and we can discuss guidelines over this ML as we've done in the past. +1. Adding them as liaison is a good idea. Testing guidelines are QA's responsibility which is since starting for 5 services which tests are in Tempest and for adds-on it is Tempest plugins. We can add guidelines' test liaisons as QA PTL (Tempest team) for 5 services and adds-on project PTL for Tempest plugin tests. -gmann > -gmann > > > > > [1] https://urldefense.com/v3/__https://www.openstack.org/legal/bylaws-of-the-openstack-foundation/__;!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCeURuNcUW$ [openstack[.]org] > > > > -gmann > > > > > > > > Will be happy to discuss and provide more details. > > > Looking for TC review and approval of the proposal before I present it to the board. > > > Thanks, > > > > > > Arkady Kanevsky, Ph.D. > > > SP Chief Technologist & DE > > > Dell Technologies office of CTO > > > Dell Inc. One Dell Way, MS PS2-91 > > > Round Rock, TX 78682, USA > > > Phone: 512 7204955 > > > > > > > > > > From tpb at dyncloud.net Wed May 5 00:38:29 2021 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 4 May 2021 20:38:29 -0400 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <20210504220655.hd5q4zzlpe2s7t4k@yuggoth.org> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> <20210504001111.52o2fgjeyizhiwts@barron.net> <17937a935ab.c1df754d5757.2201956277196352904@ghanshyammann.com> <20210504143608.mcmov6clb6vgkrpl@barron.net> <20210504220655.hd5q4zzlpe2s7t4k@yuggoth.org> Message-ID: <20210505003829.xzpchggyeldjtjrt@barron.net> On 04/05/21 22:06 +0000, Jeremy Stanley wrote: >On 2021-05-04 14:23:26 -0700 (-0700), Goutham Pacha Ravi wrote: >[...] >> The unittest job inherits from the base jobs that fungi's >> modifying here: >> https://review.opendev.org/c/opendev/base-jobs/+/789097/ and here: >> https://review.opendev.org/c/opendev/base-jobs/+/789098 ; so no >> need to pin a nodeset - we'll get the changes transparently when >> the patches merge. >[...] 
> >Specifically 789098, yeah, which is now scheduled for approval two >weeks from today: > >http://lists.opendev.org/pipermail/service-announce/2021-May/000019.html >-- >Jeremy Stanley Thanks for filling in the details, I'll leave those pinning patches WIP/DNM (and eventually abandon) unless I hear something to the contrary. From gmann at ghanshyammann.com Wed May 5 00:39:17 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 04 May 2021 19:39:17 -0500 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> <20210504001111.52o2fgjeyizhiwts@barron.net> <17937a935ab.c1df754d5757.2201956277196352904@ghanshyammann.com> <20210504143608.mcmov6clb6vgkrpl@barron.net> Message-ID: <17939f6f8b6.f875ee3627306.6954932512782884355@ghanshyammann.com> ---- On Tue, 04 May 2021 16:23:26 -0500 Goutham Pacha Ravi wrote ---- > > > On Tue, May 4, 2021 at 7:40 AM Tom Barron wrote: > On 04/05/21 08:55 -0500, Ghanshyam Mann wrote: > > ---- On Mon, 03 May 2021 19:11:11 -0500 Tom Barron wrote ---- > > > On 03/05/21 08:50 -0500, Ghanshyam Mann wrote: > > > > ---- On Sun, 02 May 2021 05:09:17 -0500 Radosław Piliszek wrote ---- > > > > > Dears, > > > > > > > > > > I have scraped the Zuul API to get names of jobs that *could* run on > > > > > master branch and are still on bionic. [1] > > > > > "Could" because I could not establish from the API whether they are > > > > > included in any pipelines or not really (e.g., there are lots of > > > > > transitive jobs there that have their nodeset overridden in children > > > > > and children are likely used in pipelines, not them). > > > > > > > > > > [1] https://paste.ubuntu.com/p/N3JQ4dsfqR/ > > > > > > The manila-image-elements and manila-test-image jobs listed here are > > > not pinned and are running with bionic but I made reviews with them > > > pinned to focal [2] [3] and they run fine. So I think manila is OK > > > w.r.t. dropping bionic support. > > > > > > [2] https://review.opendev.org/c/openstack/manila-image-elements/+/789296 > > > > > > [3] https://review.opendev.org/c/openstack/manila-test-image/+/789409 > > > >Thanks, Tom for testing. Please merge these patches before devstack patch merge. > > > >-gmann > > Dumb question probably, but ... > > Do we need to pin the nodepool for these jobs, or will they just start > picking up focal? > > The jobs that were using the bionic nodes inherited from the "unittests" job and are agnostic to the platform for the most part. The unittest job inherits from the base jobs that fungi's modifying here: https://review.opendev.org/c/opendev/base->jobs/+/789097/ and here: https://review.opendev.org/c/opendev/base-jobs/+/789098 ; so no need to pin a nodeset - we'll get the changes transparently when the patches merge. Yeah, they will be running on focal via the parent job nodeset so all good here. For devstack based job too, manila-tempest-plugin-base job does not set any nodeset so it use the one devstack base job define which is Focal - https://opendev.org/openstack/manila-tempest-plugin/src/branch/master/zuul.d/manila-tempest-jobs.yaml#L2 -gmann > -- Tom > > > > > > > > > > > > >Thanks for the list. We need to only worried about jobs using devstack master branch. Along with > > > >non-devstack jobs. 
there are many stable testing jobs also on the master gate which is all good to > > > >pin the bionic nodeset, for example - 'neutron-tempest-plugin-api-ussuri'. > > > > > > > >From the list, I see few more projects (other than listed in the subject of this email) jobs, so tagging them > > > >now: sahara, networking-sfc, manila, magnum, kuryr. > > > > > > > >-gmann > > > > > > > > > > > > > > -yoctozepto > > > > > > > > > > On Fri, Apr 30, 2021 at 12:28 AM Ghanshyam Mann wrote: > > > > > > > > > > > > Hello Everyone, > > > > > > > > > > > > As per the testing runtime since Victoria [1], we need to move our CI/CD to Ubuntu Focal 20.04 but > > > > > > it seems there are few jobs still running on Bionic. As devstack team is planning to drop the Bionic support > > > > > > you need to move those to Focal otherwise they will start failing. We are planning to merge the devstack patch > > > > > > by 2nd week of May. > > > > > > > > > > > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > > > > > > > > > > > I have not listed all the job but few of them which were failing with ' rtslib-fb-targetctl error' are below: > > > > > > > > > > > > Cinder- cinder-plugin-ceph-tempest-mn-aa > > > > > > - https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 > > > > > > > > > > > > python-cinderclient - python-cinderclient-functional-py36 > > > > > > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > > > > > > > > > > > Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182 > > > > > > > > > > > > Murani- murano-dashboard-sanity-check > > > > > > -https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 > > > > > > > > > > > > Also if your 3rd party CI is still running on Bionic, you can plan to migrate it to Focal before devstack patch merge. > > > > > > > > > > > > [1] https://governance.openstack.org/tc/reference/runtimes/victoria.html > > > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > > > From Arkady.Kanevsky at dell.com Wed May 5 02:54:26 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Wed, 5 May 2021 02:54:26 +0000 Subject: [TC][Interop] process change proposal for interoperability In-Reply-To: <17939ef83d5.d7e2375f27281.3052373082493661243@ghanshyammann.com> References: <17938a09cc2.c0a8b8f420793.887269094838742367@ghanshyammann.com> <1793991208a.c6eddff726720.3252842048256808928@ghanshyammann.com> <17939ef83d5.d7e2375f27281.3052373082493661243@ghanshyammann.com> Message-ID: So we will reword that TC is advisory with PTLs of advisory and QA as +1/-1 voting for guidelines. Still believe that Refstack PTL and Foundation marketplace owner need to approve guidelines also. Still the same committee but with TC as advisory role on it. 
-----Original Message----- From: Ghanshyam Mann Sent: Tuesday, May 4, 2021 7:31 PM To: Goutham Pacha Ravi Cc: Kanevsky, Arkady; openstack-discuss Subject: Re: [TC][Interop] process change proposal for interoperability [EXTERNAL EMAIL] ---- On Tue, 04 May 2021 18:27:24 -0500 Goutham Pacha Ravi wrote ---- > > > On Tue, May 4, 2021 at 3:52 PM Ghanshyam Mann wrote: > ---- On Tue, 04 May 2021 16:21:43 -0500 Kanevsky, Arkady wrote ---- > > Comments inline > > > > -----Original Message----- > > From: Ghanshyam Mann > > Sent: Tuesday, May 4, 2021 1:25 PM > > To: Kanevsky, Arkady > > Cc: openstack-discuss > > Subject: Re: [TC][Interop] process change proposal for interoperability > > > > > > [EXTERNAL EMAIL] > > > > ---- On Mon, 03 May 2021 16:44:35 -0500 Kanevsky, Arkady wrote ---- > > Team, > Based on guidance from the Board of Directors, Interop WG is changing its process. > > > Please, review https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/787646__;Kw!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCectuuGGT$ [review[.]opendev[.]org] > Guidelines will no longer need to be approved by the board. > > > It is approved by “committee” consisting of representatives from Interop WG, refstack, TC and Foundation marketplace administrator. > > > > Thanks, Arkady to push it on ML. > > > > For members who are new to this discussion, please check this https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/787646__;Kw!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCectuuGGT$ [review[.]opendev[.]org] Also about the new 'Joint Committee's proposal in https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/784622/5/doc/source/process/CoreDefinition.rst*223__;KyM!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCeV9JbdRH$ [review[.]opendev[.]org] > > > > With my TC hats on, I am not so clear about the new 'Joint Committee' proposal. As per the bylaws of the Foundation, section 4.1(b)(iii))[1], all the capabilities must be a subset of the OpenStack Technical Committee, which is what we have in the current process. All interop capabilities are discussed with the project teams (for which trademark program is) under OpenStack TC and QA (Tempest + Tempest plugins) provides the test coverage and maintenance of tests. > > > > [ak] My view that Interop WG is still under board, as it is part of Marketplace and Logo trademark program. > > But guidelines doc no longer need to go thru board for approval. > > That is why new process is created. Thus, the purpose of new "approval committee" of all parties that impact and impacted by guidelines. > > > > In the new process, making TC a decision making and co-ownership group in Interop WG is not very clear to me. InteropWG is a Board's working group and does not come under the OpenStack TC governance as such. Does the new process mean we are adding InteropWG under TC? or under the joint ownership of Board and TC? > > > > What I see as InteropWG is, a group of people working on the trademark program and all technical help can be asked from the OpenInfra project > > (OpenStack) to draft the guidelines and test coverage/maintenance. But keep all the approval and decision-making authority to InteropWG or Board. > > This is how it is currently. Also, we can keep encouraging the community members to join this group to fill the required number of resources. 
> > > > [ak] Board as it is now Open Infrastructure board, does not feel that it need to be involved on routing operation of Interop WG, including approval of new guidelines. > > Thus, the change in the process. Board wants to delegate its approval authority to us. > > Yes, I agree that Board does not need to review the guidelines as such. But here InteropWG will continue to do those as a single ownership body and take > help from OpenStack Project Team, OpenStack Technical Committee for any query or so instead of co-owning the guidelines. OpenStack TC can own > the test coverage of those which is what we are doing currently but for smooth working, I feel guidelines ownership should be 100% to InteropWG. > > I agree the TC can impact the guidelines in an advisory role, as could anyone else. Is the concern here that we need technical purview specifically over test guidelines? If yes, could we have a list of interop liaisons (default to PTLs or tact-sig liaisons) from the project teams in this committee instead? The list of these liaisons can be dynamic and informal, and we can discuss guidelines over this ML as we've done in the past. +1. Adding them as liaison is a good idea. Testing guidelines are QA's responsibility which is since starting for 5 services which tests are in Tempest and for adds-on it is Tempest plugins. We can add guidelines' test liaisons as QA PTL (Tempest team) for 5 services and adds-on project PTL for Tempest plugin tests. -gmann > -gmann > > > > > [1] https://urldefense.com/v3/__https://www.openstack.org/legal/bylaws-of-the-openstack-foundation/__;!!LpKI!1RLybM_OpAIJeUqG0ocwMvigt0Oce8Ib6npr_5Cnl2Z2oJV39MEpI0jni7DCeURuNcUW$ [openstack[.]org] > > > > -gmann > > > > > > > > Will be happy to discuss and provide more details. > > > Looking for TC review and approval of the proposal before I present it to the board. > > > Thanks, > > > > > > Arkady Kanevsky, Ph.D. > > > SP Chief Technologist & DE > > > Dell Technologies office of CTO > > > Dell Inc. One Dell Way, MS PS2-91 > > > Round Rock, TX 78682, USA > > > Phone: 512 7204955 > > > > > > > > > > From swogatpradhan22 at gmail.com Wed May 5 05:02:26 2021 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Wed, 5 May 2021 10:32:26 +0530 Subject: [heat] [aodh] [autoscaling] [openstack victoria] How to setup aodh and heat to auto scale in openstack victoria based on cpu usage? Message-ID: Hi, How to set up aodh and heat to auto scale in openstack victoria based on cpu usage? As the metric cpu_util is now deprecated, how can someone use heat to auto scale up and down using cpu usage of the instance? I checked the ceilometer package and I could see the transformers are removed so i can't see any other way to perform arithmetic operations and generate cpu usage in percentage. Any insight is appreciated. with regards Swogat pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Wed May 5 08:38:48 2021 From: hjensas at redhat.com (Harald Jensas) Date: Wed, 5 May 2021 10:38:48 +0200 Subject: =?UTF-8?Q?Re=3a_Proposing_C=c3=a9dric_Jeanneret_=28Tengu=29_for_tri?= =?UTF-8?Q?pleo-core?= In-Reply-To: References: Message-ID: <104f28a5-1304-e3af-a875-11f25e418103@redhat.com> On 4/29/21 5:53 PM, James Slagle wrote: > I'm proposing we formally promote Cédric to full tripleo-core duties. He > is already in the gerrit group with the understanding that his +2 is for > validations. 
His experience and contributions have grown a lot since > then, and I'd like to see that +2 expanded to all of TripleO. > > If there are no objections, we'll consider the change official at the > end of next week. > +1 Absolutely! From mbultel at redhat.com Wed May 5 09:06:52 2021 From: mbultel at redhat.com (Mathieu Bultel) Date: Wed, 5 May 2021 11:06:52 +0200 Subject: =?UTF-8?Q?Re=3A_Proposing_C=C3=A9dric_Jeanneret_=28Tengu=29_for_tripleo=2D?= =?UTF-8?Q?core?= In-Reply-To: <104f28a5-1304-e3af-a875-11f25e418103@redhat.com> References: <104f28a5-1304-e3af-a875-11f25e418103@redhat.com> Message-ID: +1 On Wed, May 5, 2021 at 10:47 AM Harald Jensas wrote: > On 4/29/21 5:53 PM, James Slagle wrote: > > I'm proposing we formally promote Cédric to full tripleo-core duties. He > > is already in the gerrit group with the understanding that his +2 is for > > validations. His experience and contributions have grown a lot since > > then, and I'd like to see that +2 expanded to all of TripleO. > > > > If there are no objections, we'll consider the change official at the > > end of next week. > > > > +1 Absolutely! > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed May 5 10:09:42 2021 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 5 May 2021 12:09:42 +0200 Subject: [TC][Interop] process change proposal for interoperability In-Reply-To: <17938a09cc2.c0a8b8f420793.887269094838742367@ghanshyammann.com> References: <17938a09cc2.c0a8b8f420793.887269094838742367@ghanshyammann.com> Message-ID: <54e6d3e9-74f5-4ec3-5ec4-83a88a75375a@openstack.org> Ghanshyam Mann wrote: > With my TC hats on, I am not so clear about the new 'Joint Committee' proposal. As per the bylaws of the Foundation, section 4.1(b)(iii))[1], > all the capabilities must be a subset of the OpenStack Technical Committee, which is what we have in the current process. All interop > capabilities are discussed with the project teams (for which trademark program is) under OpenStack TC and QA (Tempest + Tempest plugins) > provides the test coverage and maintenance of tests. > > In the new process, making TC a decision making and co-ownership group in Interop WG is not very clear to me. InteropWG is a Board's working > group and does not come under the OpenStack TC governance as such. Does the new process mean we are adding InteropWG under TC? or under > the joint ownership of Board and TC? > > What I see as InteropWG is, a group of people working on the trademark program and all technical help can be asked from the OpenInfra project > (OpenStack) to draft the guidelines and test coverage/maintenance. But keep all the approval and decision-making authority to InteropWG or Board. > This is how it is currently. Also, we can keep encouraging the community members to join this group to fill the required number of resources. Yes I also had reservations about the approval committee, while trying to see who on Foundation staff would fill the "Foundation marketplace administrator" seat -- I posted them on the review. We have Foundation resources helping keeping the marketplace in sync with the trademark programs, and I think it's great that they are involved in the Interop workgroup. But I'm not sure of their added value when it comes to *approving* technical guidelines around OpenStack capabilities. 
For me, which technical capabilities to include is a discussion between the technical (which capabilities are technically mature, as provided by the TC or their representatives) and the ecosystem (which capabilities are desirable to create interoperability around, as provided by the Board of Directors or their representatives). This can be done in two ways: using a single committee with TC and Board representation, or as a two-step process (TC proposes, Board selects within that proposal). My understanding is that Arkady is proposing we do the former, to simplify overall process. I think it can work, as long as everyone is clear on their role... I'm just unsure this committee should include a "Foundation marketplace administrator" veto seat. -- Thierry Carrez (ttx) From E.Panter at mittwald.de Wed May 5 12:16:39 2021 From: E.Panter at mittwald.de (Erik Panter) Date: Wed, 5 May 2021 12:16:39 +0000 Subject: [docs] bug tracker for individual projects' search pages Message-ID: <11c6d311763842469d40f34fdddacf34@mittwald.de> Hi, I'm having issues with the individual projects' doc search pages (e.g. https://docs.openstack.org/neutron/victoria/search.html?q=test) since ussuri: When searching Ussuri and Victoria docs, malformed links are returned, and for Wallaby it doesn't return any results. It seems to affect multiple projects that provide the search page, so I am not sure where to report it. Can someone point me to the correct project/bug tracker? Thanks, Erik _____ Erik Panter Systementwickler | Infrastruktur Mittwald CM Service GmbH & Co. KG Königsberger Straße 4-6 32339 Espelkamp Tel.: 05772 / 293-900 Fax: 05772 / 293-333 Mobil: 0151 / 12345678 e.panter at mittwald.de https://www.mittwald.de Geschäftsführer: Robert Meyer, Florian Jürgens St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen Informationen zur Datenverarbeitung im Rahmen unserer Geschäftstätigkeit gemäß Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. From fungi at yuggoth.org Wed May 5 12:33:44 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 5 May 2021 12:33:44 +0000 Subject: [docs] bug tracker for individual projects' search pages In-Reply-To: <11c6d311763842469d40f34fdddacf34@mittwald.de> References: <11c6d311763842469d40f34fdddacf34@mittwald.de> Message-ID: <20210505123343.2wmfzefi2inai2jv@yuggoth.org> On 2021-05-05 12:16:39 +0000 (+0000), Erik Panter wrote: > I'm having issues with the individual projects' doc search pages > (e.g. https://docs.openstack.org/neutron/victoria/search.html?q=test) > since ussuri: When searching Ussuri and Victoria docs, malformed > links are returned, and for Wallaby it doesn't return any results. > > It seems to affect multiple projects that provide the search page, so > I am not sure where to report it. Can someone point me to the correct > project/bug tracker? Probably https://bugs.launchpad.net/openstack-doc-tools/+filebug is the place (that's used as a catch-all for projects like OpenStackDocsTheme). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed May 5 12:52:38 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 5 May 2021 12:52:38 +0000 Subject: [docs] bug tracker for individual projects' search pages In-Reply-To: <11c6d311763842469d40f34fdddacf34@mittwald.de> References: <11c6d311763842469d40f34fdddacf34@mittwald.de> Message-ID: <20210505125237.tq5ypjzvbi6necvx@yuggoth.org> On 2021-05-05 12:16:39 +0000 (+0000), Erik Panter wrote: > I'm having issues with the individual projects' doc search pages > (e.g. https://docs.openstack.org/neutron/victoria/search.html?q=test) > since ussuri: When searching Ussuri and Victoria docs, malformed > links are returned, and for Wallaby it doesn't return any results. [...] Looking at the way the result links are malformed, it appears ".html" in the URL is being replaced with "undefined" so perhaps this is something to do with a filename suffix variable which the Sphinx simple search expects but which we're failing to set in its conf.py? Yet even that doesn't explain the later versions just never completing the search result request. For comparison, other documentation we're publishing with similar jobs and configs (but without the OpenStackDocsTheme) is working fine for simple search: https://docs.opendev.org/opendev/infra-manual/latest/search.html?q=test -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dtantsur at redhat.com Wed May 5 13:05:06 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 5 May 2021 15:05:06 +0200 Subject: [ironic] RFC: abandon sushy-cli? Message-ID: Hi ironicers, sushy-cli was an attempt to create a redfish CLI based on sushy. The effort stopped long ago, and the project hasn't had a single meaningful change since Ussuri. There is an official Redfish CLI from DMTF, I don't think we have cycles to maintain an alternative one. If you would like to maintain sushy-cli, please speak up! Dmitry -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Wed May 5 13:12:36 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 5 May 2021 06:12:36 -0700 Subject: [ironic] RFC: abandon sushy-cli? In-Reply-To: References: Message-ID: +1 to abandoning it. On Wed, May 5, 2021 at 6:07 AM Dmitry Tantsur wrote: > > Hi ironicers, > > sushy-cli was an attempt to create a redfish CLI based on sushy. The effort stopped long ago, and the project hasn't had a single meaningful change since Ussuri. There is an official Redfish CLI from DMTF, I don't think we have cycles to maintain an alternative one. > > If you would like to maintain sushy-cli, please speak up! 
> > Dmitry > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill From aj at suse.com Wed May 5 13:19:16 2021 From: aj at suse.com (Andreas Jaeger) Date: Wed, 5 May 2021 15:19:16 +0200 Subject: [docs] bug tracker for individual projects' search pages In-Reply-To: <20210505125237.tq5ypjzvbi6necvx@yuggoth.org> References: <11c6d311763842469d40f34fdddacf34@mittwald.de> <20210505125237.tq5ypjzvbi6necvx@yuggoth.org> Message-ID: On 05.05.21 14:52, Jeremy Stanley wrote: > On 2021-05-05 12:16:39 +0000 (+0000), Erik Panter wrote: >> I'm having issues with the individual projects' doc search pages >> (e.g. https://docs.openstack.org/neutron/victoria/search.html?q=test) >> since ussuri: When searching Ussuri and Victoria docs, malformed >> links are returned, and for Wallaby it doesn't return any results. > [...] > > Looking at the way the result links are malformed, it appears > ".html" in the URL is being replaced with "undefined" so perhaps > this is something to do with a filename suffix variable which the > Sphinx simple search expects but which we're failing to set in its > conf.py? Yet even that doesn't explain the later versions just never > completing the search result request. > > For comparison, other documentation we're publishing with similar > jobs and configs (but without the OpenStackDocsTheme) is working > fine for simple search: > > https://docs.opendev.org/opendev/infra-manual/latest/search.html?q=test > Isn't this fixed by: https://review.opendev.org/c/openstack/openstackdocstheme/+/753283 Maybe a new release needs to be done? Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From fungi at yuggoth.org Wed May 5 13:47:16 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 5 May 2021 13:47:16 +0000 Subject: [docs] bug tracker for individual projects' search pages In-Reply-To: References: <11c6d311763842469d40f34fdddacf34@mittwald.de> <20210505125237.tq5ypjzvbi6necvx@yuggoth.org> Message-ID: <20210505134716.bnzyzdi4ytwsbqwg@yuggoth.org> On 2021-05-05 15:19:16 +0200 (+0200), Andreas Jaeger wrote: > On 05.05.21 14:52, Jeremy Stanley wrote: > > On 2021-05-05 12:16:39 +0000 (+0000), Erik Panter wrote: > > > I'm having issues with the individual projects' doc search pages > > > (e.g. https://docs.openstack.org/neutron/victoria/search.html?q=test) > > > since ussuri: When searching Ussuri and Victoria docs, malformed > > > links are returned, and for Wallaby it doesn't return any results. > > [...] > > > > Looking at the way the result links are malformed, it appears > > ".html" in the URL is being replaced with "undefined" so perhaps > > this is something to do with a filename suffix variable which the > > Sphinx simple search expects but which we're failing to set in its > > conf.py? [...] > Isn't this fixed by: > > https://review.opendev.org/c/openstack/openstackdocstheme/+/753283 > > Maybe a new release needs to be done? Good find! Yeah that seems to be included in 2.2.6 from October (2.2.7 is the latest release BTW), but we constrain the versions used by branch so only stable/wallaby and master are using it. For stable/victoria we build with 2.2.5 which explains why we see that behavior on victoria and earlier branch docs. 
That still doesn't explain the later versions just never completing the search result request, but does point out that there's a good chance it's a regression in the theme. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From skaplons at redhat.com Wed May 5 14:13:30 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 05 May 2021 16:13:30 +0200 Subject: [neutron][stadium][stable] Proposal to make stable/ocata and stable/pike branches EOL Message-ID: <15209060.0YdeOJI3E6@p1> Hi, I checked today that the stable/ocata and stable/pike branches in both Neutron and the neutron stadium projects have been pretty inactive for a long time. * according to [1], the last patch merged in Neutron for stable/pike was in July 2020 and for stable/ocata in October 2019, * for stadium projects, according to [2] it was September 2020. According to [3] and [4] there are no open patches for any of those branches for Neutron or any stadium project except neutron-lbaas. So based on that info I want to propose that we mark both those branches as EOL now. Before doing that, I would like to know if anyone would still like to keep those branches open. [1] https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ocata+OR+branch:stable/pike)+status:merged [2] https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networking-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged [3] https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ocata+OR+branch:stable/pike)+status:open [4] https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networking-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From senrique at redhat.com Wed May 5 14:28:36 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 5 May 2021 11:28:36 -0300 Subject: [cinder] Bug deputy report for week of 2021-05-05 Message-ID: Hello, This is a bug report from 2021-04-21 to 2021-05-05. You're welcome to join the next Cinder Bug Meeting later today.
Weekly on Wednesday at 1500 UTC on #openstack-cinder Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ------------------------------------------------------------------------------------------------------------ Critical:- High: - https://bugs.launchpad.net/cinder/+bug/1925818 "Retype of non-encrypted root volume to encrypted fails". Unassigned. - https://bugs.launchpad.net/python-cinderclient/+bug/1926331 "Allow cinderclient to handle system-scoped tokens". Unassigned. Medium: - https://bugs.launchpad.net/cinder/+bug/1926630 " Creating an encrypted volume type without a cipher accepted but will fail to create an encrypted volume". Assigned to Eric Harney. - https://bugs.launchpad.net/cinder/+bug/1925809 "[RBD] Cinder backup python3 encoding issue". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1924264 "volume creation with `--hint local_to_instance=UUID` is failing, because deprecated novaclient attribute `list_extensions` is used". Assigned to Pavlo Shchelokovskyy. - https://bugs.launchpad.net/cinder/+bug/1923866 "Netapp dataontap NFS manage operation fails with ipv6 reference". Unassigned. Low: - https://bugs.launchpad.net/cinder/+bug/1926408 "quota-show should raise exception when non-existent tenant id was given". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1926336 " The cinder API should fail early if a system-scoped token is provided without a project ID". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1924574 " [Storwize]: Configure to manage GMCV changed volumes on a separate child pool". Unassigned. - https://bugs.launchpad.net/python-cinderclient/+bug/1925737: "List volumes can not be filtered by bootable". Unassigned. Incomplete: - https://bugs.launchpad.net/cinder/+bug/1927186 "The value of allocated_capacity_gb is incorrect when the number of replicas of cinder-volume is more than one and all of them configured on the same backend". Unassigned. Cheers, Sofi -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed May 5 14:56:30 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 5 May 2021 14:56:30 +0000 Subject: [infra] Retiring the LimeSurvey instance at survey.openstack.org Message-ID: <20210505145629.gsxybvfjv4uif6rv@yuggoth.org> We are retiring our beta-test LimeSurvey service that was running on survey.openstack.org, as well as decommissioning the server itself, some time later today. Our survey service never saw production use. It was a compelling idea, but the prevalence of free surveys-as-a-service options (a number of which are built with open source software) meant that the only real benefit to running our own was that we didn't need to entrust the response data to third party organizations. As a result, there's been very limited use of the offering and in order to redirect our limited time and resources to services where we can provide the most benefit to our communities, we're ending this one. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jay.faulkner at verizonmedia.com Wed May 5 15:09:34 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Wed, 5 May 2021 08:09:34 -0700 Subject: [E] Re: [ironic] RFC: abandon sushy-cli? 
In-Reply-To: References: Message-ID: Full disclosure; this is not a project I've ever used or even reviewed a single patch for. That being said; I agree if there's evidence of disuse and it's not being maintained, let's lighten the load. +1. - Jay Faulkner On Wed, May 5, 2021 at 6:15 AM Julia Kreger wrote: > +1 to abandoning it. > > On Wed, May 5, 2021 at 6:07 AM Dmitry Tantsur wrote: > > > > Hi ironicers, > > > > sushy-cli was an attempt to create a redfish CLI based on sushy. The > effort stopped long ago, and the project hasn't had a single meaningful > change since Ussuri. There is an official Redfish CLI from DMTF, I don't > think we have cycles to maintain an alternative one. > > > > If you would like to maintain sushy-cli, please speak up! > > > > Dmitry > > > > -- > > Red Hat GmbH, > https://urldefense.proofpoint.com/v2/url?u=https-3A__de.redhat.com_&d=DwIFaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=NKR1jXf8to59hDGraABDUb4djWcsAXM11_v4c7uz0Tg&m=C4qtZ0p0KYTvlSTojIRDvCOAO5BvSLl2wYj6SLB05aE&s=7-oVXa6tu6ORpm2dJT_u7WGg5PPbp688pc6eYuuW7y8&e= > , Registered seat: Grasbrunn, > > Commercial register: Amtsgericht Muenchen, HRB 153243, > > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-francois.taltavull at elca.ch Wed May 5 15:26:59 2021 From: jean-francois.taltavull at elca.ch (Taltavull Jean-Francois) Date: Wed, 5 May 2021 15:26:59 +0000 Subject: [openstack-ansible] Keystone federation with OpenID needs shibboleth Message-ID: <23fe816aac2c4d32bfd21ee658ceb56e@elca.ch> Hi All, I'm trying to make keystone federation with openid connect work on an Ubuntu 20.04 + Victoria cloud deployed with OSA. Despite the fact that I use openid, shibboleth seems to be involved and I had to add "ShibCompatValidUser On" directive to the file "/etc/apache2/conf-available/shib.conf", by hand in the keystone lxc container, in order to successfully authenticate ("valid user: granted" an not "valid user: denied" in apache log file). Has anyone already experienced this use case ? Thanks and best regards, Jean-Francois From christian.rohmann at inovex.de Wed May 5 15:34:12 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Wed, 5 May 2021 17:34:12 +0200 Subject: Nova not updating to new size of an extended in-use / attached cinder volume (Ceph RBD) to guest In-Reply-To: <66b67bb4d7494601a87436bdc1d7b00b@binero.com> References: <37cf15be-48d6-fdbc-a003-7117bda10dbc@inovex.de> <20210215111834.7nw3bdqsccoik2ss@lyarwood-laptop.usersys.redhat.com> <23b84a12-91a7-b739-d88d-bbc630bd9d5f@inovex.de> <5f0d1404f3bb1774918288912a98195f1d48f361.camel@redhat.com> <66b67bb4d7494601a87436bdc1d7b00b@binero.com> Message-ID: <48679cb7-e7c1-4d19-2831-720bfa9ca5af@inovex.de> Hello all, On 16/02/2021 22:32, Christian Rohmann wrote: > > I have just been trying to reproduce the issue, but in all my new > attempts it just worked as expected: > >> [162262.926512] sd 0:0:0:1: Capacity data has changed >> [162262.932868] sd 0:0:0:1: [sdb] 6291456 512-byte logical blocks: >> (3.22 GB/3.00 GiB) >> [162262.933061] sdb: detected capacity change from 2147483648 to >> 3221225472 On 17/02/2021 09:43, Tobias Urdin wrote: > > Regarding the actual extending of in-use volumes, we had an issue > where cinder could not talk to os-server-external-events endpoint for > nova because it used the wrong endpoint when looking up in keystone. 
> We saw the error in cinder-volume.log except for that I can't remember > we did anything special. > > > Had to use newer microversion for cinder when using CLI. > cinder --os-volume-api-version 3.42 extend size in GB> > I am very sorry for the delay, but I was now finally able to reproduce the issue and with a VERY strange finding: 1) * When using the cinder client with a Volume API version >= 3.42  like you suggested Tobias, it works just fine using cloud admin credentials. * The volume is attached / in-use, but it is resized just fine, including the notification of the kernel on the VM. 2) * When attempting the same thing using the project users credentials the resize also works just fine, volume still attached and in-use, but then the VM is NOT notified * Also this does not seems to be related to nova or QEMU, but rather it appears there is no extend_volume event triggered or at least logged: --- cut --- Request ID  -- Action -- Start Time -- User ID -- Message req-a4065b2d-1b77-4c5a-bf53-a2967b574fa0     extend_volume May 5, 2021, 1:22 p.m.     784bd2a5b82c3b31eb56ee     - req-5965910b-874f-4c7a-ab61-32d1a080d1b2     attach_volume     May 5, 2021, 1:09 p.m.     4b2abc14e511a7c0b10c     - req-75ef5bd3-75d3-4146-84eb-1809789b6586     Create     May 5, 2021, 1:09 p.m.     4b2abc14e511a7c0b10c     - --- cut --- UserID "784bd2a5b82c3b31eb56ee" is the regular user creating and attaching the volume. But that user also did an extend_volume, which is not logged as an event. There also was no API errors reported back to the client, the resize did happen - just not propagated to the VM - so a stop and restart was required. But the admin user with id "4b2abc14e511a7c0b10c" doing a resize attempt caused an extend_volume and consequently did trigger a notification of the VM, just as expected and documented in regards to this feature. Does anybody have any idea what could cause this or where to look for more details? Regards Christian From jonathan.rosser at rd.bbc.co.uk Wed May 5 15:56:37 2021 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Wed, 5 May 2021 16:56:37 +0100 Subject: [openstack-ansible] Keystone federation with OpenID needs shibboleth In-Reply-To: <23fe816aac2c4d32bfd21ee658ceb56e@elca.ch> References: <23fe816aac2c4d32bfd21ee658ceb56e@elca.ch> Message-ID: Hi Jean-Francois, I have a similar deployment of Victoria on Ubuntu 18.04 using OIDC . On Ubuntu 18.04 libapache2-mod-auth-openidc and libapache2-mod-shib2 can't be co-installed as they require conflicting versions of libcurl - see the workaround here https://github.com/openstack/openstack-ansible-os_keystone/blob/master/vars/debian.yml#L58-L61 For Ubuntu 20.04 these packages are co-installable so whenever keystone is configured to be a SP both are installed, as here https://github.com/openstack/openstack-ansible-os_keystone/blob/master/vars/ubuntu-20.04.yml#L58-L60 A starting point would be checking what you've got keystone_sp.apache_mod set to in your config, as this drives how the apache config is constructed, here https://github.com/openstack/openstack-ansible-os_keystone/blob/master/tasks/main.yml#L51-L68 In particular, if keystone_sp.apache_mod is undefined in your config, the defaults assume mod_shib is required. You can also join us in the IRC channel #openstack-ansible we can debug further. Regards Jonathan. On 05/05/2021 16:26, Taltavull Jean-Francois wrote: > Hi All, > > I'm trying to make keystone federation with openid connect work on an Ubuntu 20.04 + Victoria cloud deployed with OSA. 
> > Despite the fact that I use openid, shibboleth seems to be involved and I had to add "ShibCompatValidUser On" directive to the file "/etc/apache2/conf-available/shib.conf", by hand in the keystone lxc container, in order to successfully authenticate ("valid user: granted" an not "valid user: denied" in apache log file). > > Has anyone already experienced this use case ? > > Thanks and best regards, > Jean-Francois > > > > From marios at redhat.com Wed May 5 16:15:37 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 5 May 2021 19:15:37 +0300 Subject: [TripleO] stable/wallaby branch for tripleo repos \o/ ping if problems :) Message-ID: Hello tripleo o/ now that https://review.opendev.org/c/openstack/releases/+/789558 merged (thanks elod & hberaud for reviews) we have a stable/wallaby branch for all tripleo repos (list at [1]). Note that for python-tripleoclient & tripleo-common the branch was previously created with [2] as part of our testing and preparation. Special thank you to all those involved from the tripleo-ci team - rlandy, baghyashris, ysandeep, ananya (frenzy_friday), pojadav and apologies to those I missed. Preparations started at least six weeks ago (i.e. the previous 2 sprints). If you are interested in our prep work there are some notes and pointers at [3]. So: * stable/wallaby for tripleo repos is now a thing \o/ * go ahead and post patches to stable/wallaby (you'll need those .gitreview things to merge first like [4]) * If there are issues with CI please reach out to tripleo-ci in freenode #tripleo or #oooq - especially ping the persons with |ruck or |rover in the irc nickname [5] I'll deal with the launchpad side of things next (moving ongoing bugs over from wallaby-rc1 to xena-1 etc), regards, marios [1] https://releases.openstack.org/teams/tripleo.html#wallaby [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021741.html [3] https://hackmd.io/2sxlx1XzTa-Te47_zLv42Q?view [4] https://review.opendev.org/c/openstack/tripleo-heat-templates/+/789889 [5] https://docs.openstack.org/tripleo-docs/latest/ci/ruck_rover_primer.html From marios at redhat.com Wed May 5 16:26:10 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 5 May 2021 19:26:10 +0300 Subject: [TripleO] stable/wallaby branching In-Reply-To: References: Message-ID: On Fri, Apr 9, 2021 at 7:02 PM Marios Andreou wrote: > > Hello TripleO, > > quick update on the plan for stable/wallaby branching. The goal is to > release tripleo stable/wallaby just after PTG i.e. last week of April. > > The tripleo-ci team have spent the previous sprint preparing and we > now have the integration and component pipelines in place [1][2]. As > of today we should also have the upstream check/gate multinode > branchful jobs. We are planning to use this current sprint to resolve > issues and ensure we have the CI coverage in place so we can safely > release all the tripleo things. > > As we usually do, we are going to first branch python-tripleoclient > and tripleo-common so we can exercise and sanity check the CI jobs. > The stable/wallaby for client and common will appear after we merge > [3]. > > *** PLEASE AVOID *** posting patches to stable/wallaby > python-tripleoclient or tripleo-common until the CI team has completed > our testing. Basically until we are ready to create a stable/wallaby > for all the tripleo things (which will be announced in due course). 
> update on this at http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022276.html > Obviously as always please speak up if you disagree with any of the > above or if something doesn't make sense or if you have any concerns > about the proposed timings > > regards, marios > > [1] https://review.rdoproject.org/zuul/builds?pipeline=openstack-periodic-integration-stable1 > [2] https://review.rdoproject.org/zuul/builds?pipeline=openstack-component-tripleo > [3] https://review.opendev.org/c/openstack/releases/+/785670 From marios at redhat.com Wed May 5 16:30:17 2021 From: marios at redhat.com (Marios Andreou) Date: Wed, 5 May 2021 19:30:17 +0300 Subject: [TripleO] stable/wallaby branch for tripleo repos \o/ ping if problems :) In-Reply-To: References: Message-ID: On Wed, May 5, 2021 at 7:15 PM Marios Andreou wrote: > > Hello tripleo o/ > > now that https://review.opendev.org/c/openstack/releases/+/789558 > merged (thanks elod & hberaud for reviews) we have a stable/wallaby > branch for all tripleo repos (list at [1]). Note that for > python-tripleoclient & tripleo-common the branch was previously > created with [2] as part of our testing and preparation. > > Special thank you to all those involved from the tripleo-ci team - > rlandy, baghyashris, ysandeep, ananya (frenzy_friday), pojadav and > apologies to those I missed. Preparations started at least six weeks > ago (i.e. the previous 2 sprints). If you are interested in our prep > work there are some notes and pointers at [3]. > > So: > > * stable/wallaby for tripleo repos is now a thing \o/ > > * go ahead and post patches to stable/wallaby (you'll need those > .gitreview things to merge first like [4]) > > * If there are issues with CI please reach out to tripleo-ci in > freenode #tripleo or #oooq - especially ping the persons with |ruck or > |rover in the irc nickname [5] and of course it didn't take long to have a broken content provider for stable/wallaby ;) we are blocked on that bug https://bugs.launchpad.net/tripleo/+bug/1927171 thanks to weshay|ruck for the ping and reporting that new bug > > I'll deal with the launchpad side of things next (moving ongoing bugs > over from wallaby-rc1 to xena-1 etc), > > regards, marios > > [1] https://releases.openstack.org/teams/tripleo.html#wallaby > [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021741.html > [3] https://hackmd.io/2sxlx1XzTa-Te47_zLv42Q?view > [4] https://review.opendev.org/c/openstack/tripleo-heat-templates/+/789889 > [5] https://docs.openstack.org/tripleo-docs/latest/ci/ruck_rover_primer.html From luke.camilleri at zylacomputing.com Wed May 5 16:33:53 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Wed, 5 May 2021 18:33:53 +0200 Subject: [Octavia][Victoria] No service listening on port 9443 in the amphora instance Message-ID: <038c74c9-1365-0c08-3b5b-93b4d175dcb3@zylacomputing.com> Hi there, i am trying to get Octavia running on a Victoria deployment on CentOS 8. It was a bit rough getting to the point to launch an instance mainly due to the load-balancer management network and the lack of documentation (https://docs.openstack.org/octavia/victoria/install/install.html) to deploy this oN CentOS. 
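(For anyone hitting the same wall, a minimal way to confirm whether the agent is actually baked into the image and listening is sketched here; it ties in with the suggestions further down this thread. The amphora address is the one from the worker log quoted below, and the login user, SSH key and image file name are placeholders, not fixed values.)

  # from the node running octavia-worker / health-manager: does anything answer TLS on 9443?
  openssl s_client -connect 172.16.4.46:9443 </dev/null

  # inside the amphora (SSH over the lb-mgmt-net): is the agent service present and listening?
  ssh -i <amphora_ssh_key> <login_user>@172.16.4.46
  sudo systemctl status amphora-agent
  sudo ss -tlnp | grep 9443

  # offline check of the built image: if the agent element was baked in, its config should exist
  virt-cat -a amphora-x64-haproxy.qcow2 /etc/octavia/amphora-agent.conf | head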
I will try to fix this once I have my deployment up and running to help others on the way installing and configuring this :-)

At this point a LB can be launched by the tenant and the instance is spawned in the Octavia project, and I can ping and SSH into the amphora instance from the Octavia node where the octavia-health-manager service is running, using the IP within the same subnet as the amphoras (172.16.0.0/12).

Unfortunately I keep on getting these errors in the worker log file (/var/log/octavia/worker.log):

2021-05-05 01:54:49.368 14521 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect to instance. Retrying.: requests.exceptions.ConnectionError: HTTPSConnectionPool(host='172.16.4.46', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))

2021-05-05 01:54:54.374 14521 ERROR octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries (currently set to 120) exhausted.  The amphora is unavailable. Reason: HTTPSConnectionPool(host='172.16.4.46', port=9443): Max retries exceeded with url: // (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))

2021-05-05 01:54:54.374 14521 ERROR octavia.controller.worker.v1.tasks.amphora_driver_tasks [-] Amphora compute instance failed to become reachable. This either means the compute driver failed to fully boot the instance inside the timeout interval or the instance is not reachable via the lb-mgmt-net.: octavia.amphorae.driver_exceptions.exceptions.TimeOutException: contacting the amphora timed out

Obviously the instance is then deleted and the task fails from the tenant's perspective.

The main issue here is that there is no service running on port 9443 on the amphora instance. I am assuming that this is in fact the amphora-agent service, which should be listening on this port, but the service does not seem to be up, or is not installed at all.

To create the image I have installed the CentOS package "openstack-octavia-diskimage-create", which provides the utility disk-image-create, but from what I can conclude the amphora-agent is not being installed (I thought this was done automatically by default :-( )

Can anyone let me know if the amphora-agent is what gets queried on port 9443?

If the agent is not installed/injected by default when building the amphora image?

The command to inject the amphora-agent into the amphora image when using the disk-image-create command?

Thanks in advance for any assistance

From whayutin at redhat.com Wed May 5 16:40:39 2021
From: whayutin at redhat.com (Wesley Hayutin)
Date: Wed, 5 May 2021 10:40:39 -0600
Subject: [TripleO] stable/wallaby branch for tripleo repos \o/ ping if problems :)
In-Reply-To: 
References: 
Message-ID: 

On Wed, May 5, 2021 at 10:31 AM Marios Andreou wrote:

> On Wed, May 5, 2021 at 7:15 PM Marios Andreou wrote:
> >
> > Hello tripleo o/
> >
> > now that https://review.opendev.org/c/openstack/releases/+/789558
> > merged (thanks elod & hberaud for reviews) we have a stable/wallaby
> > branch for all tripleo repos (list at [1]). Note that for
> > python-tripleoclient & tripleo-common the branch was previously
> > created with [2] as part of our testing and preparation.
> >
> > Special thank you to all those involved from the tripleo-ci team -
> > rlandy, baghyashris, ysandeep, ananya (frenzy_friday), pojadav and
> > apologies to those I missed.
Preparations started at least six weeks > > ago (i.e. the previous 2 sprints). If you are interested in our prep > > work there are some notes and pointers at [3]. > > > > So: > > > > * stable/wallaby for tripleo repos is now a thing \o/ > > > > * go ahead and post patches to stable/wallaby (you'll need those > > .gitreview things to merge first like [4]) > > > > * If there are issues with CI please reach out to tripleo-ci in > > freenode #tripleo or #oooq - especially ping the persons with |ruck or > > |rover in the irc nickname [5] > > and of course it didn't take long to have a broken content provider > for stable/wallaby ;) > we are blocked on that bug https://bugs.launchpad.net/tripleo/+bug/1927171 > > thanks to weshay|ruck for the ping and reporting that new bug > Also since the release I'm seeing https://bugs.launchpad.net/tripleo/+bug/1927258 In config-download, "The error was: ImportError: cannot import name 'heat'" I'm only seeing that in check atm, and perhaps there is a pending patch to wallaby. That happens. > > > > > I'll deal with the launchpad side of things next (moving ongoing bugs > > over from wallaby-rc1 to xena-1 etc), > > > > regards, marios > > > > [1] https://releases.openstack.org/teams/tripleo.html#wallaby > > [2] > http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021741.html > > [3] https://hackmd.io/2sxlx1XzTa-Te47_zLv42Q?view > > [4] > https://review.opendev.org/c/openstack/tripleo-heat-templates/+/789889 > > [5] > https://docs.openstack.org/tripleo-docs/latest/ci/ruck_rover_primer.html > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-francois.taltavull at elca.ch Wed May 5 16:41:42 2021 From: jean-francois.taltavull at elca.ch (Taltavull Jean-Francois) Date: Wed, 5 May 2021 16:41:42 +0000 Subject: [openstack-ansible] Keystone federation with OpenID needs shibboleth In-Reply-To: References: <23fe816aac2c4d32bfd21ee658ceb56e@elca.ch> Message-ID: <78e663e39e314d4087b1823b22cd3fa4@elca.ch> I've got keystone_sp.apache_mod = mod_auth_openidc > -----Original Message----- > From: Jonathan Rosser > Sent: mercredi, 5 mai 2021 17:57 > To: openstack-discuss at lists.openstack.org > Subject: Re: [openstack-ansible] Keystone federation with OpenID needs > shibboleth > > Hi Jean-Francois, > > I have a similar deployment of Victoria on Ubuntu 18.04 using OIDC . > > On Ubuntu 18.04 libapache2-mod-auth-openidc and libapache2-mod-shib2 can't > be co-installed as they require conflicting versions of libcurl - see the > workaround here > https://github.com/openstack/openstack-ansible- > os_keystone/blob/master/vars/debian.yml#L58-L61 > > For Ubuntu 20.04 these packages are co-installable so whenever keystone is > configured to be a SP both are installed, as here > https://github.com/openstack/openstack-ansible- > os_keystone/blob/master/vars/ubuntu-20.04.yml#L58-L60 > > A starting point would be checking what you've got keystone_sp.apache_mod > set to in your config, as this drives how the apache config is constructed, here > https://github.com/openstack/openstack-ansible- > os_keystone/blob/master/tasks/main.yml#L51-L68 > > In particular, if keystone_sp.apache_mod is undefined in your config, the > defaults assume mod_shib is required. > > You can also join us in the IRC channel #openstack-ansible we can debug further. > > Regards > Jonathan. 
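(A quick way to see what is actually enabled on the keystone host, since on Focal both modules end up installed; a sketch only - the container name is illustrative, and the last step is just something to test, not a recommended permanent change:)

  # list the loaded apache auth modules inside the keystone container
  lxc-attach -n <keystone_container> -- apache2ctl -M | egrep 'auth_openidc|shib'

  # see which conf snippets / modules are enabled alongside mod_auth_openidc
  lxc-attach -n <keystone_container> -- ls /etc/apache2/conf-enabled /etc/apache2/mods-enabled

  # if shib.conf is only getting in the way, disabling it is one thing to try
  lxc-attach -n <keystone_container> -- a2disconf shib
  lxc-attach -n <keystone_container> -- systemctl reload apache2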
> > On 05/05/2021 16:26, Taltavull Jean-Francois wrote: > > Hi All, > > > > I'm trying to make keystone federation with openid connect work on an > Ubuntu 20.04 + Victoria cloud deployed with OSA. > > > > Despite the fact that I use openid, shibboleth seems to be involved and I had to > add "ShibCompatValidUser On" directive to the file "/etc/apache2/conf- > available/shib.conf", by hand in the keystone lxc container, in order to > successfully authenticate ("valid user: granted" an not "valid user: denied" in > apache log file). > > > > Has anyone already experienced this use case ? > > > > Thanks and best regards, > > Jean-Francois > > > > > > > > From opensrloo at gmail.com Wed May 5 16:53:42 2021 From: opensrloo at gmail.com (Ruby Loo) Date: Wed, 5 May 2021 12:53:42 -0400 Subject: [keystone] release notes for Victoria & wallaby? Message-ID: Hi, Where might I find out what changed in keystone in the wallaby release? I don't see any release notes for wallaby (or victoria) here: https://docs.openstack.org/releasenotes/keystone/. Thanks in advance, --ruby -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed May 5 17:15:09 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 5 May 2021 17:15:09 +0000 Subject: [keystone] release notes for Victoria & wallaby? In-Reply-To: References: Message-ID: <20210505171508.hzo4guulzksk35zi@yuggoth.org> On 2021-05-05 12:53:42 -0400 (-0400), Ruby Loo wrote: > Where might I find out what changed in keystone in the wallaby release? I > don't see any release notes for wallaby (or victoria) here: > https://docs.openstack.org/releasenotes/keystone/. You can build them yourself after applying these, which the Keystone reviewers seem to have missed approving until your message: https://review.opendev.org/754296 https://review.opendev.org/783450 Or check again tomorrow after those merge and the publication jobs are rerun for them. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jonathan.rosser at rd.bbc.co.uk Wed May 5 17:19:26 2021 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Wed, 5 May 2021 18:19:26 +0100 Subject: [openstack-ansible] Keystone federation with OpenID needs shibboleth In-Reply-To: <78e663e39e314d4087b1823b22cd3fa4@elca.ch> References: <23fe816aac2c4d32bfd21ee658ceb56e@elca.ch> <78e663e39e314d4087b1823b22cd3fa4@elca.ch> Message-ID: <6aafc35f-a33b-78a5-8682-0607d13c1f31@rd.bbc.co.uk> Could you check which apache modules are enabled? The set is defined in the code here https://github.com/openstack/openstack-ansible-os_keystone/blob/master/vars/ubuntu-20.04.yml#L85-L95 On 05/05/2021 17:41, Taltavull Jean-Francois wrote: > I've got keystone_sp.apache_mod = mod_auth_openidc > > >> -----Original Message----- >> From: Jonathan Rosser >> Sent: mercredi, 5 mai 2021 17:57 >> To: openstack-discuss at lists.openstack.org >> Subject: Re: [openstack-ansible] Keystone federation with OpenID needs >> shibboleth >> >> Hi Jean-Francois, >> >> I have a similar deployment of Victoria on Ubuntu 18.04 using OIDC . 
>> >> On Ubuntu 18.04 libapache2-mod-auth-openidc and libapache2-mod-shib2 can't >> be co-installed as they require conflicting versions of libcurl - see the >> workaround here >> https://github.com/openstack/openstack-ansible- >> os_keystone/blob/master/vars/debian.yml#L58-L61 >> >> For Ubuntu 20.04 these packages are co-installable so whenever keystone is >> configured to be a SP both are installed, as here >> https://github.com/openstack/openstack-ansible- >> os_keystone/blob/master/vars/ubuntu-20.04.yml#L58-L60 >> >> A starting point would be checking what you've got keystone_sp.apache_mod >> set to in your config, as this drives how the apache config is constructed, here >> https://github.com/openstack/openstack-ansible- >> os_keystone/blob/master/tasks/main.yml#L51-L68 >> >> In particular, if keystone_sp.apache_mod is undefined in your config, the >> defaults assume mod_shib is required. >> >> You can also join us in the IRC channel #openstack-ansible we can debug further. >> >> Regards >> Jonathan. From johnsomor at gmail.com Wed May 5 18:25:32 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 5 May 2021 11:25:32 -0700 Subject: [Octavia][Victoria] No service listening on port 9443 in the amphora instance In-Reply-To: <038c74c9-1365-0c08-3b5b-93b4d175dcb3@zylacomputing.com> References: <038c74c9-1365-0c08-3b5b-93b4d175dcb3@zylacomputing.com> Message-ID: Hi Luke. Yes, the amphora-agent will listen on 9443 in the amphorae instances. It uses TLS mutual authentication, so you can get a TLS response, but it will not let you into the API without a valid certificate. A simple "openssl s_client" is usually enough to prove that it is listening and requesting the client certificate. I can't talk to the "openstack-octavia-diskimage-create" package you found in centos, but I can discuss how to build an amphora image using the OpenStack tools. If you get Octavia from git or via a release tarball, we provide a script to build the amphora image. This is how we build our images for the testing gates, etc. and is the recommended way (at least from the OpenStack Octavia community) to create amphora images. https://opendev.org/openstack/octavia/src/branch/master/diskimage-create For CentOS 8, the command would be: diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 (3 is the minimum disk size for centos images, you may want more if you are not offloading logs) I just did a run on a fresh centos 8 instance: git clone https://opendev.org/openstack/octavia python3 -m venv dib source dib/bin/activate pip3 install diskimage-builder PyYAML six sudo dnf install yum-utils ./diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 This built an image. Off and on we have had issues building CentOS images due to issues in the tools we rely on. If you run into issues with this image, drop us a note back. Michael On Wed, May 5, 2021 at 9:37 AM Luke Camilleri wrote: > > Hi there, i am trying to get Octavia running on a Victoria deployment on > CentOS 8. It was a bit rough getting to the point to launch an instance > mainly due to the load-balancer management network and the lack of > documentation > (https://docs.openstack.org/octavia/victoria/install/install.html) to > deploy this oN CentOS. 
I will try to fix this once I have my deployment > up and running to help others on the way installing and configuring this :-) > > At this point a LB can be launched by the tenant and the instance is > spawned in the Octavia project and I can ping and SSH into the amphora > instance from the Octavia node where the octavia-health-manager service > is running using the IP within the same subnet of the amphoras > (172.16.0.0/12). > > Unfortunately I keep on getting these errors in the log file of the > worker log (/var/log/octavia/worker.log): > > 2021-05-05 01:54:49.368 14521 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > to instance. Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.16.4.46', p > ort=9443): Max retries exceeded with url: // (Caused by > NewConnectionError(' at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] > Connection ref > used',)) > > 2021-05-05 01:54:54.374 14521 ERROR > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries > (currently set to 120) exhausted. The amphora is unavailable. Reason: > HTTPSConnectionPool(host='172.16 > .4.46', port=9443): Max retries exceeded with url: // (Caused by > NewConnectionError(' at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] Conne > ction refused',)) > > 2021-05-05 01:54:54.374 14521 ERROR > octavia.controller.worker.v1.tasks.amphora_driver_tasks [-] Amphora > compute instance failed to become reachable. This either means the > compute driver failed to fully boot the > instance inside the timeout interval or the instance is not reachable > via the lb-mgmt-net.: > octavia.amphorae.driver_exceptions.exceptions.TimeOutException: > contacting the amphora timed out > > obviously the instance is deleted then and the task fails from the > tenant's perspective. > > The main issue here is that there is no service running on port 9443 on > the amphora instance. I am assuming that this is in fact the > amphora-agent service that is running on the instance which should be > listening on this port 9443 but the service does not seem to be up or > not installed at all. > > To create the image I have installed the CentOS package > "openstack-octavia-diskimage-create" which provides the utility > disk-image-create but from what I can conclude the amphora-agent is not > being installed (thought this was done automatically by default :-( ) > > Can anyone let me know if the amphora-agent is what gets queried on port > 9443 ? > > If the agent is not installed/injected by default when building the > amphora image? > > The command to inject the amphora-agent into the amphora image when > using the disk-image-create command? > > Thanks in advance for any assistance > > From elod.illes at est.tech Wed May 5 18:35:48 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Wed, 5 May 2021 20:35:48 +0200 Subject: [neutron][stadium][stable] Proposal to make stable/ocata and stable/pike branches EOL In-Reply-To: <15209060.0YdeOJI3E6@p1> References: <15209060.0YdeOJI3E6@p1> Message-ID: <55ff12c8-1e9c-16b5-578b-834d1ccf2563@est.tech> Hi, Ocata is unfortunately unmaintained for a long time as some general test jobs are broken there, so as a stable-maint-core member I support to tag neutron's stable/ocata as End of Life. After the branch is tagged, please ping me and I can arrange the deletion of the branch. 
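(For reference, the kind of branch audit mentioned in this thread can be reproduced from the command line; a rough sketch - the project and branch below are only examples:)

  # last change merged on the branch
  git clone https://opendev.org/openstack/neutron && cd neutron
  git log -1 --format='%cd %h %s' origin/stable/ocata

  # open reviews still targeting the branch, via the Gerrit REST API
  # (the first line of the response is Gerrit's ")]}'" prefix, hence the tail)
  curl -s 'https://review.opendev.org/changes/?q=project:openstack/neutron+branch:stable/ocata+status:open' \
      | tail -n +2 | python3 -m json.tool | grep -E '"_number"|"subject"'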
For Pike, I volunteered at the PTG in 2020 to help with reviews there, I still keep that offer, however I am clearly not enough to keep it maintained, besides backports are not arriving for stable/pike in neutron. Anyway, if the gate is functional there, then I say we could keep it open (but as far as I see how gate situation is worsen now, as more and more things go wrong, I don't expect that will take long). If not, then I only ask that let's do the EOL'ing first with Ocata and when it is done, then continue with neutron's stable/pike. For the process please follow the steps here: https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life (with the only exception, that in the last step, instead of infra team, please turn to me/release team - patch for the documentation change is on the way: https://review.opendev.org/c/openstack/project-team-guide/+/789932 ) Thanks, Előd On 2021. 05. 05. 16:13, Slawek Kaplonski wrote: > > Hi, > > > I checked today that stable/ocata and stable/pike branches in both > Neutron and neutron stadium projects are pretty inactive since long time. > > * according to [1], last patch merged patch in Neutron for stable/pike > was in July 2020 and in ocata October 2019, > > * for stadium projects, according to [2] it was September 2020. > > > According to [3] and [4] there are no opened patches for any of those > branches for Neutron and any stadium project except neutron-lbaas. > > > So based on that info I want to propose that we will close both those > branches are EOL now and before doing that, I would like to know if > anyone would like to keep those branches to be open still. > > > [1] > https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > [2] > https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networking-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > [3] > https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ocata+OR+branch:stable/pike)+status:open > > > [4] > https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networking-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Wed May 5 19:29:34 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Wed, 5 May 2021 21:29:34 +0200 Subject: [ptl][release][stable][EM] Extended Maintenance - Train In-Reply-To: <07a09dbb-4baa-1d22-5605-a636b0f55fbc@est.tech> References: <07a09dbb-4baa-1d22-5605-a636b0f55fbc@est.tech> Message-ID: Reminder, in a week, Train will transition to Extended Maintenance, prepare your final train release as soon as possible. Előd On 2021. 04. 16. 19:40, Előd Illés wrote: > Hi, > > As Wallaby was released the day before yesterday and we are in a less > busy period, it is a good opportunity to call your attention to the > following: > > In less than a month Train is planned to transition to Extended > Maintenance phase [1] (planned date: 2021-05-12). > > I have generated the list of the current *open* and *unreleased* > changes in stable/train for the follows-policy tagged repositories [2] > (where there are such patches). 
These lists could help the teams who > are planning to do a *final* release on Train before moving > stable/train branches to Extended Maintenance. Feel free to edit and > extend these lists to track your progress! > > * At the transition date the Release Team will tag the*latest* (Train) > *releases* of repositories with *train-em* tag. > * After the transition stable/train will be still open for bug fixes, > but there won't be any official releases. > > NOTE: teams, please focus on wrapping up your libraries first if there > is any concern about the changes, in order to avoid broken releases! > > Thanks, > > Előd > > [1] https://releases.openstack.org/ > [2] https://etherpad.opendev.org/p/train-final-release-before-em > > > From skaplons at redhat.com Wed May 5 20:04:20 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 05 May 2021 22:04:20 +0200 Subject: [neutron][stadium][stable] Proposal to make stable/ocata and stable/pike branches EOL In-Reply-To: <55ff12c8-1e9c-16b5-578b-834d1ccf2563@est.tech> References: <15209060.0YdeOJI3E6@p1> <55ff12c8-1e9c-16b5-578b-834d1ccf2563@est.tech> Message-ID: <2358401.pfyFj45K73@p1> Hi, Dnia środa, 5 maja 2021 20:35:48 CEST Előd Illés pisze: > Hi, > > Ocata is unfortunately unmaintained for a long time as some general test > jobs are broken there, so as a stable-maint-core member I support to tag > neutron's stable/ocata as End of Life. After the branch is tagged, > please ping me and I can arrange the deletion of the branch. > > For Pike, I volunteered at the PTG in 2020 to help with reviews there, I > still keep that offer, however I am clearly not enough to keep it > maintained, besides backports are not arriving for stable/pike in > neutron. Anyway, if the gate is functional there, then I say we could > keep it open (but as far as I see how gate situation is worsen now, as > more and more things go wrong, I don't expect that will take long). If > not, then I only ask that let's do the EOL'ing first with Ocata and when > it is done, then continue with neutron's stable/pike. > > For the process please follow the steps here: > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life > (with the only exception, that in the last step, instead of infra team, > please turn to me/release team - patch for the documentation change is > on the way: > https://review.opendev.org/c/openstack/project-team-guide/+/789932 ) > > Thanks, Thx Elod for volunteering for maintaining stable/pike. Neutron stable team will also definitely be able to help with that if needed but my proposal came mostly from the fact that there is no any opened patches for that branch proposed since pretty long time already. Of course I will for now do EOL of Ocata and we can get back to Pike in few weeks/months to check how things are going there. > > Előd > > On 2021. 05. 05. 16:13, Slawek Kaplonski wrote: > > Hi, > > > > > > I checked today that stable/ocata and stable/pike branches in both > > Neutron and neutron stadium projects are pretty inactive since long time. > > > > * according to [1], last patch merged patch in Neutron for stable/pike > > was in July 2020 and in ocata October 2019, > > > > * for stadium projects, according to [2] it was September 2020. > > > > > > According to [3] and [4] there are no opened patches for any of those > > branches for Neutron and any stadium project except neutron-lbaas. 
> > > > > > So based on that info I want to propose that we will close both those > > branches are EOL now and before doing that, I would like to know if > > anyone would like to keep those branches to be open still. > > > > > > [1] > > https://review.opendev.org/q/project:%255Eopenstack/neutron+ (branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > > > > [2] > > https://review.opendev.org/q/(project:openstack/ ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ neutron-.*+OR+project:%255Eopenstack/networki > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged> > > > > [3] > > https://review.opendev.org/q/project:%255Eopenstack/neutron+ (branch:stable/ocata+OR+branch:stable/pike)+status:open > > > > > > [4] > > https://review.opendev.org/q/(project:openstack/ ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ neutron-.*+OR+project:%255Eopenstack/networki > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open> > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From johnsomor at gmail.com Wed May 5 20:44:29 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 5 May 2021 13:44:29 -0700 Subject: [neutron][stadium][stable] Proposal to make stable/ocata and stable/pike branches EOL In-Reply-To: <2358401.pfyFj45K73@p1> References: <15209060.0YdeOJI3E6@p1> <55ff12c8-1e9c-16b5-578b-834d1ccf2563@est.tech> <2358401.pfyFj45K73@p1> Message-ID: Slawek, The open patches for neutron-lbaas are clearly stale (untouched for over a year). I don't think there would be an issue closing those out in preparation to mark the branch EOL. Michael On Wed, May 5, 2021 at 1:09 PM Slawek Kaplonski wrote: > > Hi, > > Dnia środa, 5 maja 2021 20:35:48 CEST Előd Illés pisze: > > Hi, > > > > Ocata is unfortunately unmaintained for a long time as some general test > > jobs are broken there, so as a stable-maint-core member I support to tag > > neutron's stable/ocata as End of Life. After the branch is tagged, > > please ping me and I can arrange the deletion of the branch. > > > > For Pike, I volunteered at the PTG in 2020 to help with reviews there, I > > still keep that offer, however I am clearly not enough to keep it > > maintained, besides backports are not arriving for stable/pike in > > neutron. Anyway, if the gate is functional there, then I say we could > > keep it open (but as far as I see how gate situation is worsen now, as > > more and more things go wrong, I don't expect that will take long). If > > not, then I only ask that let's do the EOL'ing first with Ocata and when > > it is done, then continue with neutron's stable/pike. > > > > For the process please follow the steps here: > > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life > > (with the only exception, that in the last step, instead of infra team, > > please turn to me/release team - patch for the documentation change is > > on the way: > > https://review.opendev.org/c/openstack/project-team-guide/+/789932 ) > > > > Thanks, > > Thx Elod for volunteering for maintaining stable/pike. 
Neutron stable team > will also definitely be able to help with that if needed but my proposal came > mostly from the fact that there is no any opened patches for that branch > proposed since pretty long time already. > Of course I will for now do EOL of Ocata and we can get back to Pike in few > weeks/months to check how things are going there. > > > > > Előd > > > > On 2021. 05. 05. 16:13, Slawek Kaplonski wrote: > > > Hi, > > > > > > > > > I checked today that stable/ocata and stable/pike branches in both > > > Neutron and neutron stadium projects are pretty inactive since long time. > > > > > > * according to [1], last patch merged patch in Neutron for stable/pike > > > was in July 2020 and in ocata October 2019, > > > > > > * for stadium projects, according to [2] it was September 2020. > > > > > > > > > According to [3] and [4] there are no opened patches for any of those > > > branches for Neutron and any stadium project except neutron-lbaas. > > > > > > > > > So based on that info I want to propose that we will close both those > > > branches are EOL now and before doing that, I would like to know if > > > anyone would like to keep those branches to be open still. > > > > > > > > > [1] > > > https://review.opendev.org/q/project:%255Eopenstack/neutron+ > (branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > (branch:stable/ocata+OR+branch:stable/pike)+status:merged> > > > > > > [2] > > > https://review.opendev.org/q/(project:openstack/ > ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ > neutron-.*+OR+project:%255Eopenstack/networki > > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ > neutron-.*+OR+project:%255Eopenstack/networ > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged> > > > > > > [3] > > > https://review.opendev.org/q/project:%255Eopenstack/neutron+ > (branch:stable/ocata+OR+branch:stable/pike)+status:open > > > (branch:stable/ocata+OR+branch:stable/pike)+status:open> > > > > > > [4] > > > https://review.opendev.org/q/(project:openstack/ > ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ > neutron-.*+OR+project:%255Eopenstack/networki > > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open > > > ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ > neutron-.*+OR+project:%255Eopenstack/networ > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open> > > > > > > > > > -- > > > > > > Slawek Kaplonski > > > > > > Principal Software Engineer > > > > > > Red Hat > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat From gmann at ghanshyammann.com Thu May 6 00:38:21 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 05 May 2021 19:38:21 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 6th at 1500 UTC In-Reply-To: <17937a5f288.1169270675465.2499892921963661204@ghanshyammann.com> References: <17937a5f288.1169270675465.2499892921963661204@ghanshyammann.com> Message-ID: <1793f1c7906.d7b8c61995261.2393704339035993526@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting schedule on May 6th at 1500 UTC in #openstack-tc IRC channel. 
== Agenda for tomorrow's TC meeting ==

* Roll call
* Follow up on past action items
* Gate performance and heavy job configs (dansmith)
** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/
* Project Health checks (gmann)
* TC's context, name, and documenting formal responsibilities (TheJulia)
* Open Reviews
** https://review.opendev.org/q/project:openstack/governance+is:open

-gmann

---- On Tue, 04 May 2021 08:51:33 -0500 Ghanshyam Mann wrote ----
 > Hello Everyone,
 >
 > Technical Committee's next weekly meeting is scheduled for May 6th at 1500 UTC.
 >
 > If you would like to add topics for discussion, please add them to the below wiki page by
 > Wednesday, May 5th, at 2100 UTC.
 >
 > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
 >
 > -gmann
 >
 >

From malikobaidadil at gmail.com Thu May 6 04:06:58 2021
From: malikobaidadil at gmail.com (Malik Obaid)
Date: Thu, 6 May 2021 09:06:58 +0500
Subject: [wallaby][neutron][ovn] Distributed FIP
Message-ID: 

Hi,

I am using the OpenStack Wallaby release on Ubuntu 20.04.

I have enabled DVR and distributed FIP on 1 controller node and 2 compute nodes. We have mapped an interface (ens224) to the physical network (br-eth1) using the below command:

ovs-vsctl set open . external-ids:ovn-bridge-mappings=physnet1:br-eth1

The issue I am facing here is that when I analyze the traffic on the interface (ens224) while initiating a ping from a VM on compute node 1, it shows the traffic flowing through my controller as well as both compute nodes, rather than only flowing through compute node 1.

I would really appreciate any input in this regard.

Thank you.

Regards,
Malik Obaid
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From syedammad83 at gmail.com Thu May 6 04:58:15 2021
From: syedammad83 at gmail.com (Ammad Syed)
Date: Thu, 6 May 2021 09:58:15 +0500
Subject: [wallaby][magnum] magnum-conductor Service down
Message-ID: 

Hi,

I am experiencing a problem in magnum 12.0.0 where the conductor service randomly goes down. I am using Ubuntu 20.04.

root at controller ~(keystone_ubuntu20_new)# openstack coe service list
+----+------+------------------+-------+----------+-----------------+---------------------------+---------------------------+
| id | host | binary           | state | disabled | disabled_reason | created_at                | updated_at                |
+----+------+------------------+-------+----------+-----------------+---------------------------+---------------------------+
| 1  | None | magnum-conductor | down  | False    | None            | 2021-05-03T10:33:16+00:00 | 2021-05-05T21:48:39+00:00 |
+----+------+------------------+-------+----------+-----------------+---------------------------+---------------------------+

Checking the logs around the last updated timestamp in /var/log/syslog, I have found the below error.

May 5 21:49:29 orchestration magnum-conductor[10827]: /usr/lib/python3/dist-packages/magnum/drivers/common/driver.py:38: PkgResourcesDeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately.
May 5 21:49:29 orchestration magnum-conductor[10827]: yield entry_point, entry_point.load(require=False)
May 5 21:49:29 orchestration magnum-conductor[10827]: /usr/lib/python3/dist-packages/kubernetes/client/apis/__init__.py:10: DeprecationWarning: The package kubernetes.client.apis is renamed and deprecated, use kubernetes.client.api instead (please note that the trailing s was removed).
May 5 21:49:29 orchestration magnum-conductor[10827]: warnings.warn(
May 5 21:49:29 orchestration magnum-conductor[10827]: /usr/lib/python3/dist-packages/magnum/drivers/common/driver.py:38: PkgResourcesDeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately.
May 5 21:49:29 orchestration magnum-conductor[10827]: yield entry_point, entry_point.load(require=False)
May 5 21:49:29 orchestration magnum-conductor[10827]: Traceback (most recent call last):
May 5 21:49:29 orchestration magnum-conductor[10827]:   File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 476, in fire_timers
May 5 21:49:29 orchestration magnum-conductor[10827]:     timer()
May 5 21:49:29 orchestration magnum-conductor[10827]:   File "/usr/lib/python3/dist-packages/eventlet/hubs/timer.py", line 59, in __call__
May 5 21:49:29 orchestration magnum-conductor[10827]:     cb(*args, **kw)
May 5 21:49:29 orchestration magnum-conductor[10827]:   File "/usr/lib/python3/dist-packages/eventlet/semaphore.py", line 152, in _do_acquire
May 5 21:49:29 orchestration magnum-conductor[10827]:     waiter.switch()
May 5 21:49:29 orchestration magnum-conductor[10827]: greenlet.error: cannot switch to a different thread

There are no logs in magnum-conductor.log after the last updated timestamp of the conductor service. The service looks to be running if I check with ps -ef | grep magnum-conductor.

- Ammad
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From skaplons at redhat.com Thu May 6 06:16:49 2021
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 06 May 2021 08:16:49 +0200
Subject: [neutron][stadium][stable] Proposal to make stable/ocata and stable/pike branches EOL
In-Reply-To: 
References: <15209060.0YdeOJI3E6@p1> <2358401.pfyFj45K73@p1>
Message-ID: <4197142.oXMt3vC68W@p1>

Hi,

On Wednesday, 5 May 2021 at 22:44:29 CEST, Michael Johnson wrote:
> Slawek,
>
> The open patches for neutron-lbaas are clearly stale (untouched for
> over a year). I don't think there would be an issue closing those out
> in preparation to mark the branch EOL.

Thx for checking them and for that info.

>
> Michael
>
> On Wed, May 5, 2021 at 1:09 PM Slawek Kaplonski wrote:
> > Hi,
> >
> > On Wednesday, 5 May 2021 at 20:35:48 CEST, Előd Illés wrote:
> > > Hi,
> > >
> > > Ocata is unfortunately unmaintained for a long time as some general test
> > > jobs are broken there, so as a stable-maint-core member I support to tag
> > > neutron's stable/ocata as End of Life. After the branch is tagged,
> > > please ping me and I can arrange the deletion of the branch.
> > >
> > > For Pike, I volunteered at the PTG in 2020 to help with reviews there, I
> > > still keep that offer, however I am clearly not enough to keep it
> > > maintained, besides backports are not arriving for stable/pike in
> > > neutron. Anyway, if the gate is functional there, then I say we could
> > > keep it open (but as far as I see how gate situation is worsen now, as
> > > more and more things go wrong, I don't expect that will take long). If
> > > not, then I only ask that let's do the EOL'ing first with Ocata and when
> > > it is done, then continue with neutron's stable/pike.
> > > > > > For the process please follow the steps here: > > > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life > > > (with the only exception, that in the last step, instead of infra team, > > > please turn to me/release team - patch for the documentation change is > > > on the way: > > > https://review.opendev.org/c/openstack/project-team-guide/+/789932 ) > > > > > > Thanks, > > > > Thx Elod for volunteering for maintaining stable/pike. Neutron stable team > > will also definitely be able to help with that if needed but my proposal came > > mostly from the fact that there is no any opened patches for that branch > > proposed since pretty long time already. > > Of course I will for now do EOL of Ocata and we can get back to Pike in few > > weeks/months to check how things are going there. > > > > > Előd > > > > > > On 2021. 05. 05. 16:13, Slawek Kaplonski wrote: > > > > Hi, > > > > > > > > > > > > I checked today that stable/ocata and stable/pike branches in both > > > > Neutron and neutron stadium projects are pretty inactive since long time. > > > > > > > > * according to [1], last patch merged patch in Neutron for stable/pike > > > > was in July 2020 and in ocata October 2019, > > > > > > > > * for stadium projects, according to [2] it was September 2020. > > > > > > > > > > > > According to [3] and [4] there are no opened patches for any of those > > > > branches for Neutron and any stadium project except neutron-lbaas. > > > > > > > > > > > > So based on that info I want to propose that we will close both those > > > > branches are EOL now and before doing that, I would like to know if > > > > anyone would like to keep those branches to be open still. > > > > > > > > > > > > [1] > > > > https://review.opendev.org/q/project:%255Eopenstack/neutron+ > > > > (branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > > > > > > > (branch:stable/ocata+OR+branch:stable/pike)+status:merged> > > > > > > [2] > > > > https://review.opendev.org/q/(project:openstack/ > > > > ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ > > neutron-.*+OR+project:%255Eopenstack/networki > > > > > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > > > > > ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ > > neutron-.*+OR+project:%255Eopenstack/networ > > > > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged> > > > > > > > > [3] > > > > https://review.opendev.org/q/project:%255Eopenstack/neutron+ > > > > (branch:stable/ocata+OR+branch:stable/pike)+status:open > > > > > > > > > (branch:stable/ocata+OR+branch:stable/pike)+status:open> > > > > > > [4] > > > > https://review.opendev.org/q/(project:openstack/ > > > > ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ > > neutron-.*+OR+project:%255Eopenstack/networki > > > > > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open > > > > > > > ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ > > neutron-.*+OR+project:%255Eopenstack/networ > > > > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open> > > > > > > > > > > > > -- > > > > > > > > Slawek Kaplonski > > > > > > > > Principal Software Engineer > > > > > > > > Red Hat > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From gthiemonge at redhat.com Thu May 6 06:50:51 2021 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Thu, 6 May 2021 08:50:51 +0200 Subject: [Octavia] 2021-05-12 weekly meeting cancelled Message-ID: Hi, I'll be on PTO next Wednesday, I won't be able to run the meeting. We decided during yesterday's meeting that next week meeting is cancelled, Thanks, Greg -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu May 6 07:08:04 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 06 May 2021 09:08:04 +0200 Subject: [neutron] Drivers team meeting 07.05.2021 cancelled Message-ID: <13370596.x2BqlDsPJG@p1> Hi, Due to lack of new RFEs and any other topics, let's cancel tomorrow's meeting. Please use that time to review some of the already proposed specs: https:// review.opendev.org/q/project:openstack/neutron-specs+status:open[1] See You online :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://review.opendev.org/q/project:openstack/neutron-specs+status:open -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From amoralej at redhat.com Thu May 6 07:25:00 2021 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Thu, 6 May 2021 09:25:00 +0200 Subject: Rocky Linux for Openstack In-Reply-To: References: Message-ID: On Tue, May 4, 2021 at 3:42 PM Alex Schultz wrote: > > > On Tue, May 4, 2021 at 2:42 AM Tim Bell wrote: > >> >> >> On 3 May 2021, at 23:07, Emilien Macchi wrote: >> >> Rocky Linux claims to be 100% compatible with Red Hat's OS family [1], so >> I don't see any reason why you couldn't use RPMs from RDO: >> >> https://docs.openstack.org/install-guide/environment-packages-rdo.html#enable-the-openstack-repository >> >> [1] Source: https://rockylinux.org >> >> >> I wonder if there would be some compatibility problems with using RDO on >> a RHEL compatible OS. If RDO is built against CentOS Stream [1], could it >> potentially have some dependencies on python packages which are due to be >> released in the next RHEL minor update (since Stream is on the latest >> version) ? >> >> > Yes, that may happen and it's happening. i.e. we've had that situation with python-rtslib has been updated in CentOS Stream 8 to the minimal version required by cinder [1] but not yet in CentOS Linux 8. That means that, current cinder in master is installable in CentOS Stream 8 but will need to wait for CentOS Linux 8.4. Similar will happen with CentOS Stream vs Rocky Linux. RDO Victoria, Ussuri and Train currently support both CentOS Linux 8 and CentOS Stream 8, so those should run fine on Rocky Linux. [1] https://github.com/openstack/cinder/blob/master/requirements.txt#L49 Generally RDO isn't specifying a version when building packages so it's > likely to be compatible. CentOS Stream will just have things that'll show > up in the next version of Rocky Linux. For stable releases it's likely to > just be compatible, whereas master might run into issues if new base os > dependencies get added. 
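For anyone who wants to try this, a minimal sketch of the repository enablement that the install guide linked above describes for CentOS; whether the centos-release-openstack-* package resolves the same way on Rocky Linux is an assumption to verify on a test host first:

    # enable the RDO Victoria repository and the PowerTools repo it relies on
    sudo dnf install centos-release-openstack-victoria
    sudo dnf config-manager --set-enabled powertools   # "PowerTools" on older 8.x point releases
    # then install RDO-built packages, e.g. the client tools
    sudo dnf install python3-openstackclient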
> > Tim >> [1] https://lists.rdoproject.org/pipermail/users/2021-January/000967.html >> >> On Mon, May 3, 2021 at 12:11 PM Wada Akor wrote: >> >>> Good day , >>> Please I want to know when will openstack provide information on how to >>> installation of openstack on Rocky Linux be available. >>> >>> Thanks & Regards >>> >> >> >> -- >> Emilien Macchi >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoralej at redhat.com Thu May 6 07:31:36 2021 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Thu, 6 May 2021 09:31:36 +0200 Subject: Rocky Linux for Openstack In-Reply-To: References: Message-ID: On Tue, May 4, 2021 at 2:53 PM Wada Akor wrote: > I agree with you. That's what I was thinking too. > > On Tue, May 4, 2021, 10:53 Radosław Piliszek > wrote: > >> On Tue, May 4, 2021 at 10:39 AM Tim Bell wrote: >> > >> > >> > >> > On 3 May 2021, at 23:07, Emilien Macchi wrote: >> > >> > Rocky Linux claims to be 100% compatible with Red Hat's OS family [1], >> so I don't see any reason why you couldn't use RPMs from RDO: >> > >> https://docs.openstack.org/install-guide/environment-packages-rdo.html#enable-the-openstack-repository >> > >> > [1] Source: https://rockylinux.org >> > >> > >> > I wonder if there would be some compatibility problems with using RDO >> on a RHEL compatible OS. If RDO is built against CentOS Stream [1], could >> it potentially have some dependencies on python packages which are due to >> be released in the next RHEL minor update (since Stream is on the latest >> version) ? >> > >> > Tim >> >> I feel the same. >> I guess Rocky Linux would need to come up with their own OpenStack >> release process. >> Or, perhaps, collaborate with RDO so that it supports both distros. ;-) >> >> RDO is providing both packages and deliverables that can be used to rebuild packages with Rocky Linux OS as some other organizations have been doing in the past to support architectures not supported in CentOS or to customize their own OpenStack distros. We don't plan to support both distros but we are open to collaborate with anyone willing to work on it. > -yoctozepto >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elfosardo at gmail.com Thu May 6 08:09:21 2021 From: elfosardo at gmail.com (Riccardo Pittau) Date: Thu, 6 May 2021 10:09:21 +0200 Subject: [E] Re: [ironic] RFC: abandon sushy-cli? In-Reply-To: References: Message-ID: I sadly vote for abandoning it Riccardo On Wed, May 5, 2021 at 5:14 PM Jay Faulkner wrote: > Full disclosure; this is not a project I've ever used or even reviewed a > single patch for. That being said; I agree if there's evidence of disuse > and it's not being maintained, let's lighten the load. +1. > > - > Jay Faulkner > > > > On Wed, May 5, 2021 at 6:15 AM Julia Kreger > wrote: > >> +1 to abandoning it. >> >> On Wed, May 5, 2021 at 6:07 AM Dmitry Tantsur >> wrote: >> > >> > Hi ironicers, >> > >> > sushy-cli was an attempt to create a redfish CLI based on sushy. The >> effort stopped long ago, and the project hasn't had a single meaningful >> change since Ussuri. There is an official Redfish CLI from DMTF, I don't >> think we have cycles to maintain an alternative one. >> > >> > If you would like to maintain sushy-cli, please speak up! 
>> > >> > Dmitry >> > >> > -- >> > Red Hat GmbH, >> https://urldefense.proofpoint.com/v2/url?u=https-3A__de.redhat.com_&d=DwIFaQ&c=sWW_bEwW_mLyN3Kx2v57Q8e-CRbmiT9yOhqES_g_wVY&r=NKR1jXf8to59hDGraABDUb4djWcsAXM11_v4c7uz0Tg&m=C4qtZ0p0KYTvlSTojIRDvCOAO5BvSLl2wYj6SLB05aE&s=7-oVXa6tu6ORpm2dJT_u7WGg5PPbp688pc6eYuuW7y8&e= >> , Registered seat: Grasbrunn, >> > Commercial register: Amtsgericht Muenchen, HRB 153243, >> > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael >> O'Neill >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-francois.taltavull at elca.ch Thu May 6 08:17:03 2021 From: jean-francois.taltavull at elca.ch (Taltavull Jean-Francois) Date: Thu, 6 May 2021 08:17:03 +0000 Subject: [openstack-ansible] Keystone federation with OpenID needs shibboleth In-Reply-To: <6aafc35f-a33b-78a5-8682-0607d13c1f31@rd.bbc.co.uk> References: <23fe816aac2c4d32bfd21ee658ceb56e@elca.ch> <78e663e39e314d4087b1823b22cd3fa4@elca.ch> <6aafc35f-a33b-78a5-8682-0607d13c1f31@rd.bbc.co.uk> Message-ID: I forgot to mention: in Ubuntu 20.04, the apache shibboleth module is named "shib" and not "sib2". So, I had to supersede the variable " keystone_apache_modules". If you don't do this, os-keystone playbook fails with " "Failed to set module shib2 to disabled:\n\nMaybe the module identifier (mod_shib) was guessed incorrectly.Consider setting the \"identifier\" option.", "rc": 1, "stderr": "ERROR: Module shib2 does not exist!\n"". So, apache modules enabled are: - shib - auth_openidc - proxy_uwsgi - headers > -----Original Message----- > From: Jonathan Rosser > Sent: mercredi, 5 mai 2021 19:19 > To: openstack-discuss at lists.openstack.org > Subject: Re: [openstack-ansible] Keystone federation with OpenID needs > shibboleth > > Could you check which apache modules are enabled? > > The set is defined in the code here > https://github.com/openstack/openstack-ansible- > os_keystone/blob/master/vars/ubuntu-20.04.yml#L85-L95 > > On 05/05/2021 17:41, Taltavull Jean-Francois wrote: > > I've got keystone_sp.apache_mod = mod_auth_openidc > > > > > >> -----Original Message----- > >> From: Jonathan Rosser > >> Sent: mercredi, 5 mai 2021 17:57 > >> To: openstack-discuss at lists.openstack.org > >> Subject: Re: [openstack-ansible] Keystone federation with OpenID > >> needs shibboleth > >> > >> Hi Jean-Francois, > >> > >> I have a similar deployment of Victoria on Ubuntu 18.04 using OIDC . > >> > >> On Ubuntu 18.04 libapache2-mod-auth-openidc and libapache2-mod-shib2 > >> can't be co-installed as they require conflicting versions of libcurl > >> - see the workaround here > >> https://github.com/openstack/openstack-ansible- > >> os_keystone/blob/master/vars/debian.yml#L58-L61 > >> > >> For Ubuntu 20.04 these packages are co-installable so whenever > >> keystone is configured to be a SP both are installed, as here > >> https://github.com/openstack/openstack-ansible- > >> os_keystone/blob/master/vars/ubuntu-20.04.yml#L58-L60 > >> > >> A starting point would be checking what you've got > >> keystone_sp.apache_mod set to in your config, as this drives how the > >> apache config is constructed, here > >> https://github.com/openstack/openstack-ansible- > >> os_keystone/blob/master/tasks/main.yml#L51-L68 > >> > >> In particular, if keystone_sp.apache_mod is undefined in your config, > >> the defaults assume mod_shib is required. > >> > >> You can also join us in the IRC channel #openstack-ansible we can debug > further. > >> > >> Regards > >> Jonathan. 
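A couple of generic checks (not from the thread) that can confirm which of these Apache modules actually ended up enabled on the keystone host, useful when deciding whether mod_shib or mod_auth_openidc is answering the protected location:

    apachectl -M | grep -Ei 'shib|openidc'
    a2query -m | grep -Ei 'shib|openidc'           # Debian/Ubuntu helper, if installed
    ls /etc/apache2/mods-enabled/ | grep -Ei 'shib|openidc'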
From bcafarel at redhat.com Thu May 6 08:55:01 2021 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Thu, 6 May 2021 10:55:01 +0200 Subject: [neutron][stadium][stable] Proposal to make stable/ocata and stable/pike branches EOL In-Reply-To: <2358401.pfyFj45K73@p1> References: <15209060.0YdeOJI3E6@p1> <55ff12c8-1e9c-16b5-578b-834d1ccf2563@est.tech> <2358401.pfyFj45K73@p1> Message-ID: On Wed, 5 May 2021 at 22:08, Slawek Kaplonski wrote: > Hi, > > Dnia środa, 5 maja 2021 20:35:48 CEST Előd Illés pisze: > > Hi, > > > > Ocata is unfortunately unmaintained for a long time as some general test > > jobs are broken there, so as a stable-maint-core member I support to tag > > neutron's stable/ocata as End of Life. After the branch is tagged, > > please ping me and I can arrange the deletion of the branch. > > > > For Pike, I volunteered at the PTG in 2020 to help with reviews there, I > > still keep that offer, however I am clearly not enough to keep it > > maintained, besides backports are not arriving for stable/pike in > > neutron. Anyway, if the gate is functional there, then I say we could > > keep it open (but as far as I see how gate situation is worsen now, as > > more and more things go wrong, I don't expect that will take long). If > > not, then I only ask that let's do the EOL'ing first with Ocata and when > > it is done, then continue with neutron's stable/pike. > > > > For the process please follow the steps here: > > > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life > > (with the only exception, that in the last step, instead of infra team, > > please turn to me/release team - patch for the documentation change is > > on the way: > > https://review.opendev.org/c/openstack/project-team-guide/+/789932 ) > > > > Thanks, > > Thx Elod for volunteering for maintaining stable/pike. Neutron stable team > will also definitely be able to help with that if needed but my proposal > came > mostly from the fact that there is no any opened patches for that branch > proposed since pretty long time already. > Of course I will for now do EOL of Ocata and we can get back to Pike in > few > weeks/months to check how things are going there. > > +1 for Ocata, last merged backport was in October 2019 For neutron, Pike has been quiet since July, though remaining CI was OK that last time (we have dropped a few jobs and it does not have neutron-tempest-plugin) so hopefully it should not overwhelm you I think we can do Pike EOL for stadium projects, this will not break Pike overall and these did not see updates for a longer time > > > > Előd > > > > On 2021. 05. 05. 16:13, Slawek Kaplonski wrote: > > > Hi, > > > > > > > > > I checked today that stable/ocata and stable/pike branches in both > > > Neutron and neutron stadium projects are pretty inactive since long > time. > > > > > > * according to [1], last patch merged patch in Neutron for stable/pike > > > was in July 2020 and in ocata October 2019, > > > > > > * for stadium projects, according to [2] it was September 2020. > > > > > > > > > According to [3] and [4] there are no opened patches for any of those > > > branches for Neutron and any stadium project except neutron-lbaas. > > > > > > > > > So based on that info I want to propose that we will close both those > > > branches are EOL now and before doing that, I would like to know if > > > anyone would like to keep those branches to be open still. 
> > > > > > > > > [1] > > > https://review.opendev.org/q/project:%255Eopenstack/neutron+ > (branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > (branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > > > > > > > [2] > > > https://review.opendev.org/q/(project:openstack/ > ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ > neutron-.*+OR+project:%255Eopenstack/networki > > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ > neutron-.*+OR+project:%255Eopenstack/networ > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged> > > > > > > [3] > > > https://review.opendev.org/q/project:%255Eopenstack/neutron+ > (branch:stable/ocata+OR+branch:stable/pike)+status:open > > > (branch:stable/ocata+OR+branch:stable/pike)+status:open > > > > > > > > > [4] > > > https://review.opendev.org/q/(project:openstack/ > ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ > neutron-.*+OR+project:%255Eopenstack/networki > > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open > > > ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ > neutron-.*+OR+project:%255Eopenstack/networ > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open> > > > > > > > > > -- > > > > > > Slawek Kaplonski > > > > > > Principal Software Engineer > > > > > > Red Hat > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From bkslash at poczta.onet.pl Thu May 6 09:19:38 2021 From: bkslash at poczta.onet.pl (Adam Tomas) Date: Thu, 6 May 2021 11:19:38 +0200 Subject: [cinder] Multiattach volume mount/attach limit Message-ID: Hi, Is there any way to limit the number of possible mounts of multiattach volume? Best regards Adam Tomas From jonathan.rosser at rd.bbc.co.uk Thu May 6 09:20:58 2021 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Thu, 6 May 2021 10:20:58 +0100 Subject: [openstack-ansible] Keystone federation with OpenID needs shibboleth In-Reply-To: References: <23fe816aac2c4d32bfd21ee658ceb56e@elca.ch> <78e663e39e314d4087b1823b22cd3fa4@elca.ch> <6aafc35f-a33b-78a5-8682-0607d13c1f31@rd.bbc.co.uk> Message-ID: I've made a patch to correct this module name which it would be great if you could test and leave a comment if it's OK https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/790018 Are you able to debug any further why the shib module is being enabled, maybe through using -vv on the openstack-ansible command to show the task parameters, or adding some debug tasks in os_keystone to show the values of keystone_sp_apache_mod_shib and keystone_sp_apache_mod_auth_openidc? On 06/05/2021 09:17, Taltavull Jean-Francois wrote: > I forgot to mention: in Ubuntu 20.04, the apache shibboleth module is named "shib" and not "sib2". So, I had to supersede the variable > " keystone_apache_modules". If you don't do this, os-keystone playbook fails with " "Failed to set module shib2 to disabled:\n\nMaybe the module identifier (mod_shib) was guessed incorrectly.Consider setting the \"identifier\" option.", "rc": 1, "stderr": "ERROR: Module shib2 does not exist!\n"". 
> > So, apache modules enabled are: > - shib > - auth_openidc > - proxy_uwsgi > - headers > >> -----Original Message----- >> From: Jonathan Rosser >> Sent: mercredi, 5 mai 2021 19:19 >> To: openstack-discuss at lists.openstack.org >> Subject: Re: [openstack-ansible] Keystone federation with OpenID needs >> shibboleth >> >> Could you check which apache modules are enabled? >> >> The set is defined in the code here >> https://github.com/openstack/openstack-ansible- >> os_keystone/blob/master/vars/ubuntu-20.04.yml#L85-L95 >> >> On 05/05/2021 17:41, Taltavull Jean-Francois wrote: >>> I've got keystone_sp.apache_mod = mod_auth_openidc >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu May 6 12:05:40 2021 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 6 May 2021 14:05:40 +0200 Subject: [largescale-sig] Next meeting: May 5, 15utc In-Reply-To: References: Message-ID: <4b5c2a10-d2be-fb2e-d684-c1c3145de947@openstack.org> We held our meeting yesterday. We discussed our OpenInfra.Live show, the "Large Scale OpenStack" show, and its upcoming episode on upgrades on May 20. Meeting logs at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2021/large_scale_sig.2021-05-05-15.00.html We'll have a special meeting next week on IRC (May 12, 15utc) for the show guests to discuss content and common questions. Regards, -- Thierry Carrez (ttx) From lana.dinoia at gmail.com Thu May 6 12:32:39 2021 From: lana.dinoia at gmail.com (Lana DI NOIA) Date: Thu, 6 May 2021 14:32:39 +0200 Subject: [devstack][installation] Trouble with devstack installation Message-ID: Hello, I encounter the following error when I run stack.sh: ERROR: Editable requirements are not allowed as constraints ++./stack.sh:main:752 err_trap ++./stack.sh:err_trap:530 local r=1 stack.sh failed: full log in /opt/stack/logs/stack.sh.log.2021-05-06-115934 Error on exit And a little higher I have this warning Attempting uninstall: pip Found existing installation: pip 21.1.1 Uninstalling pip-21.1.1: Successfully uninstalled pip-21.1.1 Successfully installed pip-21.1.1 WARNING: Running pip as root will break packages and permissions. You should install packages reliably by using venv: https://pip.pypa.io/warnings/venv Any idea why? Thanks From gouthampravi at gmail.com Thu May 6 07:30:50 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 6 May 2021 00:30:50 -0700 Subject: [manila][ptg] Xena PTG Summary Message-ID: Hello Zorillas, and other interested Stackers! Sorry this is getting to you later than usual :) We concluded a productive Project Team Gathering on 23rd Apr '21. I'd like to thank everyone that participated as well as the friendly folks at the OpenInfra Foundation who worked hard to organize this event. The following is my summary of the proceedings. I've linked associated etherpads and resources to dig in further. Please feel free to follow up here or on freenode's #openstack-manila if you have any questions. == Day 1 - Apr 19, 2021, Mon == === Manila Retrospective === Topic Etherpad: https://etherpad.opendev.org/p/manila-wallaby-retrospective - interns and their mentors did a great job through Wallaby! Many accomplishments, thanks to the sponsoring organizations for their investment - the two Bugsquash events at M1 and M3 were very effective. 
- we added new core members (python-manilaclient, manila-tempest-plugin) - sub-teams (manila-ui, manilaclient and manila-tempest-plugin) increased focus on the individual projects and sped up reviews - good cross project collaboration with OpenStack QA, Horizon, NFS-Ganesha and Ceph communities - we discussed strategies to avoid reviewer burnout and performed health checks on the team growth initiatives we took up in the Wallaby cycle Actions: - need to get proactive about reviews so we can avoid burn-out at the end of the cycle - continue bug squash days in Xena - look for interested contributors to join the core maintainer team - bring up reviews in weekly meetings and assign review owners earlier than M-2 - contributors need to be reminded to help with reviews == Day 3 - Apr 21, 2021, Wed == === manila-tempest-plugin plans and update === - lkuchlan and vhari highlighted difficulties around "capabilities" testing that was exposed due to feature changes in the CephFS driver in the Wallaby cycle - optional feature testing requires rework, the configuration options and test strategy have gotten confusing over time - discussion continued on dropping testing for older API microversions where some optional features (snapshots, snapshot cloning) weren't really optional. It was agreed that this was overkill. The problem is specifically in share type extra-specs. If these are setup with API version >= 2.24, we could circumvent this: https://review.opendev.org/c/openstack/manila-tempest-plugin/+/785307. The only issue is that tests could theoretically be run against a cloud that doesn't support API version 2.24 (has to be Newton release or older). The team agreed that we could document this deficiency in the configuration file since Newton release has been EOL for a while. Actions: - review and merge the fix to share type extra-specs - rework existing "run_XYZZY_tests" and "capability_XYZZY_support" flags to "share-feature-enabled" section where these capabilities are always set when a feature is enabled - this is in-line with the way optional feature testing for other OpenStack services is done === Service Recovery === Topic Etherpad: https://etherpad.opendev.org/p/xena-ptg-manila-share-recovery-polling - gouthamr discussed the use cases and architectures needing to run manila-share in active-active HA - service recovery after failure needs to be coordinated to prevent metadata corruption - clustering services will provide coordination for cleanup - vendor driver authors will need to audit critical sections and replace oslo.concurrency locks with distributed locks. Actions: - there will be a specification submitted for review === Polling operations in manila-share === Topic Etherpad: https://etherpad.opendev.org/p/xena-ptg-manila-share-recovery-polling - haixin and gouthamr summarized the periodic tasks conducted by the share manager service - to update share service capabilities and real time capacity information, - to monitor the health of share replicas, - to "watch" the progress of long running clone, snapshot and migration operations. 
- these periodic tasks will be inefficiently duplicated if running across multiple nodes associated with the same backend storage system/driver - many of these periodic tasks are meant to be read-only, however, the health-check operations modify database state, currently these assume a single writer, and unless staggered, updates may be duplicated as well - proposed mitigation was to allow clustered hosts to ignore polling operations if one is in progress elsewhere, and for the service instance executing the tasks to distribute the workload to other hosts via the message queue - another issue with periodic tasks was the lack of ability to stop an unnecessary task from running - we could take advantage of oslo's threadgroup that exposes a start/stop API Actions: - replace loopingcall periodic tasks with threadgroups to selectively run periodic tasks - propose a specification to coordinate and distribute polling workloads === Interop Working Group === - arkady_kanevsky presented the mission of the InteropWG, and laid out plans for the upcoming Interop guideline - manila tests were added to the guideline as part of the 2020.11 guideline and have been available for the community to provide results - there was an optional capability (shrinking shares) that was flagged, and the team discussed if it was to remain flagged, or be moved to an advisory state - including reduction of the cutoff score - the team wants to stabilize the tests exposed through the next guideline, and work on a way to remove the necessity of an admin user to bootstrap the manila's tempest tests where unnecessary Actions: - improve manila tempest plugin to not request admin credentials for the tests exposed, and work with a configured "default" share type === Share server migration improvements === - share server migration was added as an experimental feature in the Victoria release - carloss presented use cases for nondisruptive share server migration - nondisruptive migration would mean that network changes are unnecessary, so the share manager needn't seek out any new network ports or relinquish existing ones - the share server migration is experimental, however, we'd still make nondisruptive migration possible only from a new API microversion - manual triggering of the second phase is still necessary to allow control of the cutover/switchover. 
All resources will continue to be tagged busy until the migration has been completed Actions: - no specification is necessary because the changes to the API and core manila are expected to be minimal === OpenStackSDK and Manila === - ashrod98, NicoleBU and markypharaoh, seniors at Boston University worked through the past cycle to expose manila resources to the OpenStackSDK - an outreachy internship proposal has been submitted to continue their work to expose more API functionality via the SDK - we'd prioritize common elements - quotas, limits and availability zones so we can parallely work on the OpenStackClient and Horizon implementation for these - need review attention on the patches Actions: - some domain reviewers for manila patches to be added to the OpenStackSDK core team to help with reviews == Day 4 - Apr 22, 2021, Thu == === Support soft delete/recycle bin functionality === - haixin told us that inspur cloud has a "soft-delete" functionality that they would like to contribute to upstream manila - the functionality allows users to recover deleted shares within a configurable time zone via Manila API - the proposal is to only allow soft-deletion and recovery of shares, but in the future this can be extended to other resources such as snapshots, replicas, share groups - soft deletions have the same validations as deletions currently do - quota implications and alternatives (unmanage, an system admin API) were discussed Actions: - a specification will be proposed for this work === New Driver Requirements and CI knowledge sharing === - there was a informative presentation from the Cinder PTG [3] about using Software Factory to deploy a Third Party CI system with Gerrit, Nodepool, Zuulv3 and other components - some helpful but unmaintained test hook code exists in the repos to aid with CI systems using devstack-gate - since devstack-gate has been deprecated for a long time, third party CIs are strongly recommended to move away from it Actions: - complete removal of devstack-gate related manila-tempest-plugin installation from manila's devstack plugin - create a Third Party CI and help wiki/doc that vendor driver maintainers can curate and contribute to === New Driver for Dell EMC PowerStore === - vb123 discussed Dell EMC powerstore storage, its capabilities and their work in cinder, and made a proposal for a driver in manila - this driver's being targeted for the Xena release: https://blueprints.launchpad.net/manila/+spec/powerstore-manila-driver - community shared updates and documentation from the Victoria and Wallaby cycles that may be helpful to the driver authors - driver submission deadlines have been published: https://releases.openstack.org/xena/schedule.html === Enabling access allow/deny for container driver with LDAP === - esantos and carloss enabled the container driver in the Wallaby release to support addition of and updates to LDAP security services - however, this work was just intended to expose a reference implementation to security services - the container driver does not validate access rules applied with the LDAP server today - the proposal is to enable validation in the driver when a security service is configured Actions: - publish container driver helper image to the manila-image-elements repository and explore container registries that community can use for the long term for CI and dev/test - file a bug and fix the missing LDAP validation in the container share driver === Keystone based user/cephx authentication === - "cephx" (CEPHFS) access types are not 
validated beyond their structure and syntax - with cephx access, an access key for the ceph client user can be retrieved via the access-list API call - access keys are privileged information, while we've always had a stance that Native CephFS is only suitable in trusted tenant environments, there's a desire to hide access keys from unrelated users in the project - gouthamr proposed that we allow deployers to choose if Keystone user identity validation must be performed - when validation is enabled, manila ensures that users may only allow share access to other users in their project - users may only retrieve access keys belonging to themselves, and no other project users - an alternative would be to keystone validation would be to allow controlling the access_key via a separate RBAC policy - with the alternative, we wouldn't be able to validate if the access to user is in the same project as the user requesting the access Actions: - a specification will be proposed, however, the work's not expected to be prioritized for Xena === Addressing Technical Debt === - we discussed tech debt that was piling up from the last couple of cycles and assigned owners to drive them to completion in the Xena cycle Actions: - python-manilaclient still relies on keystoneclient to perform auth, there's ongoing work from vkmc+gouthamr to replace this with keystoneauth [4] - code in manila relies on the retrying python library that is no longer maintained; kiwi36 will propose replacing the usage with tenacity with tbarron's help [5] - tbarron will be resurrecting a patch that removes old deprecated configuration options [6] - carloss will be picking up rootwrap-to-privsep migration === Unified Limits === Topic Etherpad: https://etherpad.opendev.org/p/unified-limits-ptg - kiwi36, our Outreachy intern, proposed the design for using keystone's unified limits in manila with the help of the oslo.limit library - he walked through the prototype using resource quotas and highlighted the differences with the current quota system - we discussed why nested quotas were preferable to user quotas Actions: - a release independent specification will be proposed for the community to review and continue this work - explore how share type quotas will be handled === Secure RBAC Follow up === - vhari presented the RBAC changes that were made in the Wallaby release, including support for the system scope and reader role - a tempest test strategy was discussed, along with improvements made to tempest itself to make test credential setup with the new default roles - there's no plan to enforce the new defaults in Xena, we'll take the release to stabilize the feature and turn on deprecation warnings indicating intent to switch to the new defaults in the Y release Actions: - vhari and lkuchlan will start working on the tempest tests - wrap up known issues with the new defaults and backport fixes to the Wallaby release - manila's admin model and user personas will be documented == Day 5 - Apr 23, 2021, Fri == === VirtIOFS plans and update === Topic Etherpad: https://etherpad.opendev.org/p/nova-manila-virtio-fs-support-xptg-april-2021 - tbarron and lyarwood presented an update on the research that was done in the wallaby release wrt live attaching virtiofs volumes to running compute instances - qemu supports live attaches, the feature is coming to libvirt soon. 
live migration of instances with virtiofs volumes isn't supported yet - this is ongoing work in the qemu/kvm/libvirt communities - we discussed the connection initiation and information exchange between manila and nova, and what parts will be coordinated via the os-share library - there are as yet no anticipated changes to the manila API Actions: - continue working with the libvirt community to have the live-attach APIs added - a specification will be proposed to nova (and manila if changes are necessary to the manila API) === CephFS driver update === Topic Etherpad: https://etherpad.opendev.org/p/cephfs-driver-update-xena-ptg - vkmc presented the changes and new features in the cephfs driver in the wallaby cycle - the driver was overhauled to interact with ceph via the ceph-mgr daemon instead of the deprecated ceph_volume_client library - there's an upgrade impact going from victoria to wallaby. Deployers must consult the release notes [7] and the ceph driver documentation [8] prior to upgrading. Data path operations are unaffected during an upgrade: - ceph clusters must be running the latest version of the release they're on to allow the wallaby manila driver to communicate with the ceph cluster - the ceph user configured with the driver needs mgr "caps", and mds/osd privileges can be dropped/reduced [8] - we discussed dropping the use of dbus in the nfs-ganesha module, and using the "watch-url" approach that ganesha's ceph-fsal module provides - we also discussed ceph-adm and ceph-orch based deployment which replaces ceph-ansible and the impact that would have on ganesha configuration - ceph quincy will support active/active nfs-ganesha clusters (even with ceph-adm based deployment), we discussed manila side changes to take advantage of this feature === manilaclient and OpenStackClient plans and update === Topic Etherpad: https://etherpad.opendev.org/p/manila-osc-xena-ptg - maaritamm provided an update on our multi-cycle effort to gain parity between manila's shell client and the osc plugin - we made great progress in the wallaby cycle and covered all "user" related functionality that was requested - we discussed what's left, and sought community priority of the missing commands - we discussed deprecation of the manila shell client and only supporting the osc plugin - albeit addressing the "standalone" use case where "manila" can be used in place of "openstack share" if users desire - we could use help to add new commands, and to review code in the osc plugin Actions: - achieve complete feature parity between the two shell client implementations, deprecate the manila shell client === manila-ui plans and update === Topic Etherpad: https://etherpad.opendev.org/p/manila-ui-update-xena-ptg - disap, our Outreachy intern and vkmc, highlighted the changes made in the wallaby cycle - we lauded the cross-project collaboration that was renewed in this cycle to triage issues, and proactively work on incoming changes in horizon and elsewhere - manila-ui's microversion catchup is progressing slowly, but surely - we have another outreachy project proposal that overlaps with the xena cycle Actions: - continue to catch up to API feature functionality in the UI Thanks for staying with me so far! :) There were a number of items that we couldn't cover, that you can see on the PTG Planning etherpad [1], if you own those topics, please add them to the weekly IRC meeting agenda [2] and we can go over them. The meeting minutes etherpad continues to be available [9]. 
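To illustrate the manilaclient/OSC parity item above, this is the kind of command mapping being tracked between the legacy shell client and the OSC plugin (illustrative examples only, not an exhaustive list):

    manila list                                       # legacy shell client
    openstack share list                              # OSC plugin equivalent
    manila create NFS 1 --name demo-share             # legacy shell client
    openstack share create NFS 1 --name demo-share    # OSC plugin equivalent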
As usual, the whole PTG was recorded and posted on the OpenStack Manila Youtube channel [10] We had a great turn out and we heard from a diverse set of contributors, operators, interns and users from several affiliations and time zones. On behalf of the OpenStack Manila team, I deeply appreciate your time, and help in keeping the momentum on the project! Best, gouthamr [1] https://etherpad.opendev.org/p/xena-ptg-manila-planning [2] https://wiki.openstack.org/wiki/Manila/Meetings [3] https://www.youtube.com/watch?v=hVLpPBldn7g [4] https://review.opendev.org/647538 [5] https://review.opendev.org/380552 [6] https://review.opendev.org/745206/ [7] https://docs.openstack.org/releasenotes/manila/wallaby.html [8] https://docs.openstack.org/manila/wallaby/admin/cephfs_driver.html [9] https://etherpad.opendev.org/p/xena-ptg-manila [10] https://www.youtube.com/playlist?list=PLnpzT0InFrqDmsKKsF0MQKtv9fP4Oik17 -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Thu May 6 15:26:23 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 6 May 2021 11:26:23 -0400 Subject: [cinder] save the date for xena midcycles Message-ID: <1cf42a1a-00a1-61c6-443e-37891cec37e8@gmail.com> At yesterday's cinder meeting, we selected the times for our virtual midcycles: midcycle-1: week R-18, Wednesday 2 June 2021, 1400-1600 UTC midcycle-2: week R-9, Wednesday 4 August 2021, 1400-1600 UTC Use this etherpad to propose topics: https://etherpad.opendev.org/p/cinder-xena-mid-cycles Connection info will be posted on the etherpad closer to the date of each midcycle. The midcycles will be recorded. cheers, brian From pierre at stackhpc.com Thu May 6 15:53:02 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 6 May 2021 17:53:02 +0200 Subject: [blazar] Proposing Jason Anderson as new core reviewer Message-ID: Hello, Jason has been involved with the Blazar project for several years now. He has submitted useful new features and fixed bugs, and even wrote a couple of specs. I am proposing to add him as a core reviewer. If there is no objection, I will grant him +2 rights. Thank you Jason for your contributions! Best wishes, Pierre Riteau (priteau) From rosmaita.fossdev at gmail.com Thu May 6 15:58:50 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 6 May 2021 11:58:50 -0400 Subject: [interop][cinder] review of next.json Message-ID: <0e1d8cce-ec5b-2fb1-d8fa-307192a5eed7@gmail.com> Hello Interop WG, We began a review of next.json [0] at yesterday's cinder meeting and a few issues came up. 1. The 'description' for the volumes capabilities should be updated; here's a patch: https://review.opendev.org/c/osf/interop/+/789940 2. Are capabilities that require an admin (or system-scoped) token still excluded? 3. Are capabilities that require multiple users for testing (for example, volume transfer) excluded? Thanks! 
[0] https://opendev.org/osf/interop/src/branch/master/next.json From marios at redhat.com Thu May 6 16:05:32 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 6 May 2021 19:05:32 +0300 Subject: [TripleO] Closed out wallaby milestone in Launchpad and moved bugs to xena-1 Message-ID: Hello, I closed out the wallaby-rc1 milestone in launchpad and moved ongoing bugs over to xena-1 https://launchpad.net/tripleo/+milestone/xena-1 - info on the moved bugs @ [1] Thanks to tosky for pinging on tripleo about this: there were problems a while back with gerrit <--> launchpad integration so there are likely many bugs that should be in fix-released but weren't moved automatically after the related patches merged. Please take a moment to check the bugs assigned to you in https://launchpad.net/tripleo/+milestone/xena-1 and make sure the status reflects reality? regards, marios [1] https://gist.github.com/marios/b3155fe3b1318cc26bfa4bc15c764a26#gistcomment-3733636 From christian.rohmann at inovex.de Thu May 6 16:08:47 2021 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Thu, 6 May 2021 18:08:47 +0200 Subject: Nova not updating to new size of an extended in-use / attached cinder volume (Ceph RBD) to guest In-Reply-To: <48679cb7-e7c1-4d19-2831-720bfa9ca5af@inovex.de> References: <37cf15be-48d6-fdbc-a003-7117bda10dbc@inovex.de> <20210215111834.7nw3bdqsccoik2ss@lyarwood-laptop.usersys.redhat.com> <23b84a12-91a7-b739-d88d-bbc630bd9d5f@inovex.de> <5f0d1404f3bb1774918288912a98195f1d48f361.camel@redhat.com> <66b67bb4d7494601a87436bdc1d7b00b@binero.com> <48679cb7-e7c1-4d19-2831-720bfa9ca5af@inovex.de> Message-ID: <529b5d86-fc37-dc2d-7dd2-aabffdb0d945@inovex.de> On 05/05/2021 17:34, Christian Rohmann wrote: > But the admin user with id "4b2abc14e511a7c0b10c" doing a resize > attempt caused an extend_volume and consequently did trigger a > notification of the VM, > just as expected and documented in regards to this feature. > > > > Does anybody have any idea what could cause this or where to look for > more details? Apparently this is a (long) known issue (i.e. https://bugzilla.redhat.com/show_bug.cgi?id=1640443) which is caused by Cinder talking to Nova to have it create the volume-extended event but does so with user credentials and this is denied by the default policy: --- cut --- 2021-05-06 15:13:12.214 4197 DEBUG nova.api.openstack.wsgi [req-4c291455-a21a-4314-8e57-173e66e6e60a f9c0b52ec43e423e9b5ea63d620f4e27 92a6c19e7482400385806266cdef149c - default default] Returning 403 to user: Po licy doesn't allow os_compute_api:os-server-external-events:create to be performed. __call__ /usr/lib/python3/dist-packages/nova/api/openstack/wsgi.py:941 --- cut --- Unfortunately cinder does not report or log anything. Is switching cinder to the "admin" interface the proper approach here or am I missing something else? 
Regards Christian From smooney at redhat.com Thu May 6 16:29:21 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 06 May 2021 17:29:21 +0100 Subject: Nova not updating to new size of an extended in-use / attached cinder volume (Ceph RBD) to guest In-Reply-To: <529b5d86-fc37-dc2d-7dd2-aabffdb0d945@inovex.de> References: <37cf15be-48d6-fdbc-a003-7117bda10dbc@inovex.de> <20210215111834.7nw3bdqsccoik2ss@lyarwood-laptop.usersys.redhat.com> <23b84a12-91a7-b739-d88d-bbc630bd9d5f@inovex.de> <5f0d1404f3bb1774918288912a98195f1d48f361.camel@redhat.com> <66b67bb4d7494601a87436bdc1d7b00b@binero.com> <48679cb7-e7c1-4d19-2831-720bfa9ca5af@inovex.de> <529b5d86-fc37-dc2d-7dd2-aabffdb0d945@inovex.de> Message-ID: <93533ed1417037539e651ba50e7b721afb1e4bdd.camel@redhat.com> On Thu, 2021-05-06 at 18:08 +0200, Christian Rohmann wrote: > On 05/05/2021 17:34, Christian Rohmann wrote: > > But the admin user with id "4b2abc14e511a7c0b10c" doing a resize > > attempt caused an extend_volume and consequently did trigger a > > notification of the VM, > > just as expected and documented in regards to this feature. > > > > > > > > Does anybody have any idea what could cause this or where to look for > > more details? > > > Apparently this is a (long) known issue (i.e. > https://bugzilla.redhat.com/show_bug.cgi?id=1640443) which is caused by > > Cinder talking to Nova to have it create the volume-extended event but > does so with user credentials and this is denied by the default policy: > > --- cut --- > 2021-05-06 15:13:12.214 4197 DEBUG nova.api.openstack.wsgi > [req-4c291455-a21a-4314-8e57-173e66e6e60a > f9c0b52ec43e423e9b5ea63d620f4e27 92a6c19e7482400385806266cdef149c - > default default] Returning 403 to user: Po > licy doesn't allow os_compute_api:os-server-external-events:create to be > performed. __call__ > /usr/lib/python3/dist-packages/nova/api/openstack/wsgi.py:941 > --- cut --- > > Unfortunately cinder does not report or log anything. that woudl make sense give the externa event api is admin only and only inteed to be use by services so the fix would be for cidner to use an admin credtial not the user one to send the event to nova. > > > Is switching cinder to the "admin" interface the proper approach here or > am I missing something else? > > > > Regards > > > Christian > > From dmeng at uvic.ca Thu May 6 16:29:30 2021 From: dmeng at uvic.ca (dmeng) Date: Thu, 06 May 2021 09:29:30 -0700 Subject: [sdk]: identity service if get_application_credential method could use user name Message-ID: <308b555bdd500119c9f17535a50c0649@uvic.ca> Hello, Hope this email finds you well. Shall I please ask a question about the openstacksdk identity service, the application credential? We would like to get the expiration date of an application credential using the Identity v3 method get_application_credential or find_appplication_credential. We found that they require both the user id and the application credential id in order to get it. The code I'm using is as the following, and this works for us. conn = connection.Connection( session=sess, region_name='Victoria, identity_api_version='3') keystone = conn.identity find = keystone.get_application_credential(user='my_user_id', application_credential = 'app_cred_id') expire_date = find['expires_at'] We are wondering if we could use the user name to get it instead of the user id? 
If I do get_application_credential(user='catherine', application_credential = 'app_cred_id'), then it will show me an error that "You are not authorized to perform the requested action: identity:get_application_credential". Is there any method that no need user info, can just use the application credential id to get the expiration date? We also didn't find any documentation about the application credential in openstacksdk identity service docs. Thanks and have a great day! Catherine -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-francois.taltavull at elca.ch Thu May 6 17:33:02 2021 From: jean-francois.taltavull at elca.ch (Taltavull Jean-Francois) Date: Thu, 6 May 2021 17:33:02 +0000 Subject: [openstack-ansible] Keystone federation with OpenID needs shibboleth In-Reply-To: References: <23fe816aac2c4d32bfd21ee658ceb56e@elca.ch> <78e663e39e314d4087b1823b22cd3fa4@elca.ch> <6aafc35f-a33b-78a5-8682-0607d13c1f31@rd.bbc.co.uk> Message-ID: <044c5671bfea485fae9e975fd020b36d@elca.ch> Your patch is ok, that’s what I did by superseding the variable “keystone_apache_modules”. Ansible -vvv trace shows that the task parameters are correct, but the apache shib module remains enabled. Anyway, authentication still fails and I get “valid-user: denied” in apache logs because of a weird interference with libapache2-mod-shib package. For now, the workaround I’ve found is not to install the libapache2-mod-shib package: “openstack-ansible os-keystone-install.yml --extra-vars '{"keystone_sp_distro_packages":["libapache2-mod-auth-openidc"]}'” And everything works fine (if you don’t need shibboleth), keystone deployment and openid auth. But this is just a workaround. From: Jonathan Rosser Sent: jeudi, 6 mai 2021 11:21 To: openstack-discuss at lists.openstack.org Subject: Re: [openstack-ansible] Keystone federation with OpenID needs shibboleth I've made a patch to correct this module name which it would be great if you could test and leave a comment if it's OK https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/790018 Are you able to debug any further why the shib module is being enabled, maybe through using -vv on the openstack-ansible command to show the task parameters, or adding some debug tasks in os_keystone to show the values of keystone_sp_apache_mod_shib and keystone_sp_apache_mod_auth_openidc? On 06/05/2021 09:17, Taltavull Jean-Francois wrote: I forgot to mention: in Ubuntu 20.04, the apache shibboleth module is named "shib" and not "sib2". So, I had to supersede the variable " keystone_apache_modules". If you don't do this, os-keystone playbook fails with " "Failed to set module shib2 to disabled:\n\nMaybe the module identifier (mod_shib) was guessed incorrectly.Consider setting the \"identifier\" option.", "rc": 1, "stderr": "ERROR: Module shib2 does not exist!\n"". So, apache modules enabled are: - shib - auth_openidc - proxy_uwsgi - headers -----Original Message----- From: Jonathan Rosser Sent: mercredi, 5 mai 2021 19:19 To: openstack-discuss at lists.openstack.org Subject: Re: [openstack-ansible] Keystone federation with OpenID needs shibboleth Could you check which apache modules are enabled? The set is defined in the code here https://github.com/openstack/openstack-ansible- os_keystone/blob/master/vars/ubuntu-20.04.yml#L85-L95 On 05/05/2021 17:41, Taltavull Jean-Francois wrote: I've got keystone_sp.apache_mod = mod_auth_openidc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Arkady.Kanevsky at dell.com Thu May 6 18:11:29 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 6 May 2021 18:11:29 +0000 Subject: [interop][cinder] review of next.json In-Reply-To: <0e1d8cce-ec5b-2fb1-d8fa-307192a5eed7@gmail.com> References: <0e1d8cce-ec5b-2fb1-d8fa-307192a5eed7@gmail.com> Message-ID: Thanks Brian. I reviewed the patch and will add other for review. Yes on #2. Interop is testing portability of applications. Thus, admin APIs are out of scope. We do use admin token in some cases to setup environment for user testing, as it is done by refstack. #3. We can add multiple users scenarios to test coverage. Volume sharing can be part of it. Thanks, Arkady -----Original Message----- From: Brian Rosmaita Sent: Thursday, May 6, 2021 10:59 AM To: openstack-discuss at lists.openstack.org Subject: [interop][cinder] review of next.json [EXTERNAL EMAIL] Hello Interop WG, We began a review of next.json [0] at yesterday's cinder meeting and a few issues came up. 1. The 'description' for the volumes capabilities should be updated; here's a patch: https://urldefense.com/v3/__https://review.opendev.org/c/osf/interop/*/789940__;Kw!!LpKI!2qbmsgFRsMwjN02DrU4T1EChrp_HEOngd55x-7TPdBqnN5Yl0sCmHwSx0bWKD3NlENau$ [review[.]opendev[.]org] 2. Are capabilities that require an admin (or system-scoped) token still excluded? 3. Are capabilities that require multiple users for testing (for example, volume transfer) excluded? Thanks! [0] https://urldefense.com/v3/__https://opendev.org/osf/interop/src/branch/master/next.json__;!!LpKI!2qbmsgFRsMwjN02DrU4T1EChrp_HEOngd55x-7TPdBqnN5Yl0sCmHwSx0bWKD0CCThmp$ [opendev[.]org] From manchandavishal143 at gmail.com Thu May 6 18:26:02 2021 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Thu, 6 May 2021 23:56:02 +0530 Subject: [horizon] Support for Angular 1.8.x in Horizon (fixing Debian Bullseye) In-Reply-To: <0dbeab98-a93e-efba-c71f-dbf22596f585@debian.org> References: <0dbeab98-a93e-efba-c71f-dbf22596f585@debian.org> Message-ID: Hi Thomas, Horizon team discussed this topic in yesterday's horizon weekly meeting. I will try to push a patch to Update XStatic-Angular to 1.8.2 from 1.5.8 by next week. As of now, we use 1.5.8 angularjs version [1]. I have also reported a new bug for that so it's easy to track [2]. It's going to take some time as we have a small team. It would be great if you can also review related patches. Thanks & Regards, Vishal Manchanda [1] https://opendev.org/openstack/horizon/src/branch/master/requirements.txt#L44 [2] https://bugs.launchpad.net/horizon/+bug/1927261 On Tue, May 4, 2021 at 8:16 PM Thomas Goirand wrote: > Hi, > > In Debian Bullseye, we've noticed that the ssh keypair and Glance image > panels are broken. We have python3-xstatic-angular that used to depends > on libjs-angularjs, and that libjs-angularjs moved to 1.8.2. Therefore, > Horizon in Bullseye appears broken. > > I have re-embedded Angula within the python3-xstatic-angular and ask the > Debian release team for an unblock, but due to the fact that the Debian > policy is to *not* allow twice the same library with different versions, > I have little hope for this unblock request to be approved. See the > discussion here: > https://bugs.debian.org/988054 > > So my question is: how hard would it be to fix Horizon so that it could > work with libjs-angularjs 1.8.2 ? Is there any patch already available > for this? > > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From luke.camilleri at zylacomputing.com Thu May 6 19:30:05 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Thu, 6 May 2021 21:30:05 +0200 Subject: [Octavia][Victoria] No service listening on port 9443 in the amphora instance In-Reply-To: References: <038c74c9-1365-0c08-3b5b-93b4d175dcb3@zylacomputing.com> Message-ID: <326471ef-287b-d937-a174-0b1ccbbd6273@zylacomputing.com> Hi Michael and thanks a lot for your help on this, after following your steps the agent got deployed successfully in the amphora-image. I have some other queries that I would like to ask mainly related to the health-manager/load-balancer network setup and IP assignment. First of all let me point out that I am using a manual installation process, and it might help others to understand the underlying infrastructure required to make this component work as expected. 1- The installation procedure contains this step: $ sudo cp octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia which is later on called to assign the IP to the o-hm0 interface which is connected to the lb-management network as shown below: $ sudo dhclient -v o-hm0 -cf /etc/dhcp/octavia Apart from having a dhcp config for a single IP seems a bit of an overkill, using these steps is injecting an additional routing table into the default namespace as shown below in my case: # route -n Kernel IP routing table Destination     Gateway         Genmask         Flags Metric Ref    Use Iface 0.0.0.0         172.16.0.1      0.0.0.0         UG    0 0        0 o-hm0 0.0.0.0         10.X.X.1        0.0.0.0         UG    100 0        0 ensX 10.X.X.0        0.0.0.0         255.255.255.0   U     100 0        0 ensX 169.254.169.254 172.16.0.100    255.255.255.255 UGH   0 0        0 o-hm0 172.16.0.0      0.0.0.0         255.240.0.0     U     0 0        0 o-hm0 Since the load-balancer management network does not need any external connectivity (but only communication between health-manager service and amphora-agent), why is a gateway required and why isn't the IP address allocated as part of the interface creation script which is called when the service is started or stopped (example below)? --- #!/bin/bash set -ex MAC=$MGMT_PORT_MAC BRNAME=$BRNAME if [ "$1" == "start" ]; then   ip link add o-hm0 type veth peer name o-bhm0   brctl addif $BRNAME o-bhm0   ip link set o-bhm0 up   ip link set dev o-hm0 address $MAC  *** ip addr add 172.16.0.2/12 dev o-hm0  ***ip link set o-hm0 mtu 1500   ip link set o-hm0 up   iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT elif [ "$1" == "stop" ]; then   ip link del o-hm0 else   brctl show $BRNAME   ip a s dev o-hm0 fi --- 2- Is there a possibility to specify a fixed vlan outside of tenant range for the load balancer management network? 3- Are the configuration changes required only in neutron.conf or also in additional config files like neutron_lbaas.conf and services_lbaas.conf, similar to the vpnaas configuration? Thanks in advance for any assistance, but its like putting together a puzzle of information :-) On 05/05/2021 20:25, Michael Johnson wrote: > Hi Luke. > > Yes, the amphora-agent will listen on 9443 in the amphorae instances. > It uses TLS mutual authentication, so you can get a TLS response, but > it will not let you into the API without a valid certificate. A simple > "openssl s_client" is usually enough to prove that it is listening and > requesting the client certificate. 
> > I can't talk to the "openstack-octavia-diskimage-create" package you > found in centos, but I can discuss how to build an amphora image using > the OpenStack tools. > > If you get Octavia from git or via a release tarball, we provide a > script to build the amphora image. This is how we build our images for > the testing gates, etc. and is the recommended way (at least from the > OpenStack Octavia community) to create amphora images. > > https://opendev.org/openstack/octavia/src/branch/master/diskimage-create > > For CentOS 8, the command would be: > > diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 (3 > is the minimum disk size for centos images, you may want more if you > are not offloading logs) > > I just did a run on a fresh centos 8 instance: > git clone https://opendev.org/openstack/octavia > python3 -m venv dib > source dib/bin/activate > pip3 install diskimage-builder PyYAML six > sudo dnf install yum-utils > ./diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 > > This built an image. > > Off and on we have had issues building CentOS images due to issues in > the tools we rely on. If you run into issues with this image, drop us > a note back. > > Michael > > On Wed, May 5, 2021 at 9:37 AM Luke Camilleri > wrote: >> Hi there, i am trying to get Octavia running on a Victoria deployment on >> CentOS 8. It was a bit rough getting to the point to launch an instance >> mainly due to the load-balancer management network and the lack of >> documentation >> (https://docs.openstack.org/octavia/victoria/install/install.html) to >> deploy this oN CentOS. I will try to fix this once I have my deployment >> up and running to help others on the way installing and configuring this :-) >> >> At this point a LB can be launched by the tenant and the instance is >> spawned in the Octavia project and I can ping and SSH into the amphora >> instance from the Octavia node where the octavia-health-manager service >> is running using the IP within the same subnet of the amphoras >> (172.16.0.0/12). >> >> Unfortunately I keep on getting these errors in the log file of the >> worker log (/var/log/octavia/worker.log): >> >> 2021-05-05 01:54:49.368 14521 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect >> to instance. Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.16.4.46', p >> ort=9443): Max retries exceeded with url: // (Caused by >> NewConnectionError('> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] >> Connection ref >> used',)) >> >> 2021-05-05 01:54:54.374 14521 ERROR >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries >> (currently set to 120) exhausted. The amphora is unavailable. Reason: >> HTTPSConnectionPool(host='172.16 >> .4.46', port=9443): Max retries exceeded with url: // (Caused by >> NewConnectionError('> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] Conne >> ction refused',)) >> >> 2021-05-05 01:54:54.374 14521 ERROR >> octavia.controller.worker.v1.tasks.amphora_driver_tasks [-] Amphora >> compute instance failed to become reachable. This either means the >> compute driver failed to fully boot the >> instance inside the timeout interval or the instance is not reachable >> via the lb-mgmt-net.: >> octavia.amphorae.driver_exceptions.exceptions.TimeOutException: >> contacting the amphora timed out >> >> obviously the instance is deleted then and the task fails from the >> tenant's perspective. 
>> >> The main issue here is that there is no service running on port 9443 on >> the amphora instance. I am assuming that this is in fact the >> amphora-agent service that is running on the instance which should be >> listening on this port 9443 but the service does not seem to be up or >> not installed at all. >> >> To create the image I have installed the CentOS package >> "openstack-octavia-diskimage-create" which provides the utility >> disk-image-create but from what I can conclude the amphora-agent is not >> being installed (thought this was done automatically by default :-( ) >> >> Can anyone let me know if the amphora-agent is what gets queried on port >> 9443 ? >> >> If the agent is not installed/injected by default when building the >> amphora image? >> >> The command to inject the amphora-agent into the amphora image when >> using the disk-image-create command? >> >> Thanks in advance for any assistance >> >> From johnsomor at gmail.com Thu May 6 20:46:18 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 6 May 2021 13:46:18 -0700 Subject: [Octavia][Victoria] No service listening on port 9443 in the amphora instance In-Reply-To: <326471ef-287b-d937-a174-0b1ccbbd6273@zylacomputing.com> References: <038c74c9-1365-0c08-3b5b-93b4d175dcb3@zylacomputing.com> <326471ef-287b-d937-a174-0b1ccbbd6273@zylacomputing.com> Message-ID: Hi Luke, 1. I agree that DHCP is technically unnecessary for the o-hm0 interface if you can manage your address allocation on the network you are using for the lb-mgmt-net. I don't have detailed information about the Ubuntu install instructions, but I suspect it was done to simplify the IPAM to be managed by whatever is providing DHCP on the lb-mgmt-net provided (be it neutron or some other resource on a provider network). The lb-mgmt-net is simply a neutron network that the amphora management address is on. It is routable and does not require external access. The only tricky part to it is the worker, health manager, and housekeeping processes need to be reachable from the amphora, and the controllers need to reach the amphora over the network(s). There are many ways to accomplish this. 2. See my above answer. Fundamentally the lb-mgmt-net is just a neutron network that nova can use to attach an interface to the amphora instances for command and control traffic. As long as the controllers can reach TCP 9433 on the amphora, and the amphora can send UDP 5555 back to the health manager endpoints, it will work fine. 3. Octavia, with the amphora driver, does not require any special configuration in Neutron (beyond the advanced services RBAC policy being available for the neutron service account used in your octavia configuration file). The neutron_lbaas.conf and services_lbaas.conf are legacy configuration files/settings that were used for neutron-lbaas which is now end of life. See the wiki page for information on the deprecation of neutron-lbaas: https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation. Michael On Thu, May 6, 2021 at 12:30 PM Luke Camilleri wrote: > > Hi Michael and thanks a lot for your help on this, after following your > steps the agent got deployed successfully in the amphora-image. > > I have some other queries that I would like to ask mainly related to the > health-manager/load-balancer network setup and IP assignment. First of > all let me point out that I am using a manual installation process, and > it might help others to understand the underlying infrastructure > required to make this component work as expected. 
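(A concrete illustration of the reachability described above -- controllers opening TCP 9443 towards the amphorae, and amphorae sending UDP 5555 heartbeats back -- is the [health_manager] section of octavia.conf. A minimal sketch with illustrative addresses, not a complete configuration:

  [health_manager]
  # address on the lb-mgmt-net this controller listens on for amphora heartbeats
  bind_ip = 172.16.0.2
  bind_port = 5555
  # health manager endpoint list handed to each amphora at boot time
  controller_ip_port_list = 172.16.0.2:5555

Any network layout works as long as these addresses are reachable in both directions.)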
> > 1- The installation procedure contains this step: > > $ sudo cp octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia > > which is later on called to assign the IP to the o-hm0 interface which > is connected to the lb-management network as shown below: > > $ sudo dhclient -v o-hm0 -cf /etc/dhcp/octavia > > Apart from having a dhcp config for a single IP seems a bit of an > overkill, using these steps is injecting an additional routing table > into the default namespace as shown below in my case: > > # route -n > Kernel IP routing table > Destination Gateway Genmask Flags Metric Ref Use > Iface > 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0 > 0.0.0.0 10.X.X.1 0.0.0.0 UG 100 0 0 ensX > 10.X.X.0 0.0.0.0 255.255.255.0 U 100 0 0 ensX > 169.254.169.254 172.16.0.100 255.255.255.255 UGH 0 0 0 o-hm0 > 172.16.0.0 0.0.0.0 255.240.0.0 U 0 0 0 o-hm0 > > Since the load-balancer management network does not need any external > connectivity (but only communication between health-manager service and > amphora-agent), why is a gateway required and why isn't the IP address > allocated as part of the interface creation script which is called when > the service is started or stopped (example below)? > > --- > > #!/bin/bash > > set -ex > > MAC=$MGMT_PORT_MAC > BRNAME=$BRNAME > > if [ "$1" == "start" ]; then > ip link add o-hm0 type veth peer name o-bhm0 > brctl addif $BRNAME o-bhm0 > ip link set o-bhm0 up > ip link set dev o-hm0 address $MAC > *** ip addr add 172.16.0.2/12 dev o-hm0 > ***ip link set o-hm0 mtu 1500 > ip link set o-hm0 up > iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT > elif [ "$1" == "stop" ]; then > ip link del o-hm0 > else > brctl show $BRNAME > ip a s dev o-hm0 > fi > > --- > > 2- Is there a possibility to specify a fixed vlan outside of tenant > range for the load balancer management network? > > 3- Are the configuration changes required only in neutron.conf or also > in additional config files like neutron_lbaas.conf and > services_lbaas.conf, similar to the vpnaas configuration? > > Thanks in advance for any assistance, but its like putting together a > puzzle of information :-) > > On 05/05/2021 20:25, Michael Johnson wrote: > > Hi Luke. > > > > Yes, the amphora-agent will listen on 9443 in the amphorae instances. > > It uses TLS mutual authentication, so you can get a TLS response, but > > it will not let you into the API without a valid certificate. A simple > > "openssl s_client" is usually enough to prove that it is listening and > > requesting the client certificate. > > > > I can't talk to the "openstack-octavia-diskimage-create" package you > > found in centos, but I can discuss how to build an amphora image using > > the OpenStack tools. > > > > If you get Octavia from git or via a release tarball, we provide a > > script to build the amphora image. This is how we build our images for > > the testing gates, etc. and is the recommended way (at least from the > > OpenStack Octavia community) to create amphora images. 
> > > > https://opendev.org/openstack/octavia/src/branch/master/diskimage-create > > > > For CentOS 8, the command would be: > > > > diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 (3 > > is the minimum disk size for centos images, you may want more if you > > are not offloading logs) > > > > I just did a run on a fresh centos 8 instance: > > git clone https://opendev.org/openstack/octavia > > python3 -m venv dib > > source dib/bin/activate > > pip3 install diskimage-builder PyYAML six > > sudo dnf install yum-utils > > ./diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 > > > > This built an image. > > > > Off and on we have had issues building CentOS images due to issues in > > the tools we rely on. If you run into issues with this image, drop us > > a note back. > > > > Michael > > > > On Wed, May 5, 2021 at 9:37 AM Luke Camilleri > > wrote: > >> Hi there, i am trying to get Octavia running on a Victoria deployment on > >> CentOS 8. It was a bit rough getting to the point to launch an instance > >> mainly due to the load-balancer management network and the lack of > >> documentation > >> (https://docs.openstack.org/octavia/victoria/install/install.html) to > >> deploy this oN CentOS. I will try to fix this once I have my deployment > >> up and running to help others on the way installing and configuring this :-) > >> > >> At this point a LB can be launched by the tenant and the instance is > >> spawned in the Octavia project and I can ping and SSH into the amphora > >> instance from the Octavia node where the octavia-health-manager service > >> is running using the IP within the same subnet of the amphoras > >> (172.16.0.0/12). > >> > >> Unfortunately I keep on getting these errors in the log file of the > >> worker log (/var/log/octavia/worker.log): > >> > >> 2021-05-05 01:54:49.368 14521 WARNING > >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > >> to instance. Retrying.: requests.exceptions.ConnectionError: > >> HTTPSConnectionPool(host='172.16.4.46', p > >> ort=9443): Max retries exceeded with url: // (Caused by > >> NewConnectionError(' >> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] > >> Connection ref > >> used',)) > >> > >> 2021-05-05 01:54:54.374 14521 ERROR > >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries > >> (currently set to 120) exhausted. The amphora is unavailable. Reason: > >> HTTPSConnectionPool(host='172.16 > >> .4.46', port=9443): Max retries exceeded with url: // (Caused by > >> NewConnectionError(' >> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] Conne > >> ction refused',)) > >> > >> 2021-05-05 01:54:54.374 14521 ERROR > >> octavia.controller.worker.v1.tasks.amphora_driver_tasks [-] Amphora > >> compute instance failed to become reachable. This either means the > >> compute driver failed to fully boot the > >> instance inside the timeout interval or the instance is not reachable > >> via the lb-mgmt-net.: > >> octavia.amphorae.driver_exceptions.exceptions.TimeOutException: > >> contacting the amphora timed out > >> > >> obviously the instance is deleted then and the task fails from the > >> tenant's perspective. > >> > >> The main issue here is that there is no service running on port 9443 on > >> the amphora instance. 
I am assuming that this is in fact the > >> amphora-agent service that is running on the instance which should be > >> listening on this port 9443 but the service does not seem to be up or > >> not installed at all. > >> > >> To create the image I have installed the CentOS package > >> "openstack-octavia-diskimage-create" which provides the utility > >> disk-image-create but from what I can conclude the amphora-agent is not > >> being installed (thought this was done automatically by default :-( ) > >> > >> Can anyone let me know if the amphora-agent is what gets queried on port > >> 9443 ? > >> > >> If the agent is not installed/injected by default when building the > >> amphora image? > >> > >> The command to inject the amphora-agent into the amphora image when > >> using the disk-image-create command? > >> > >> Thanks in advance for any assistance > >> > >> From zigo at debian.org Thu May 6 23:05:44 2021 From: zigo at debian.org (Thomas Goirand) Date: Fri, 7 May 2021 01:05:44 +0200 Subject: [horizon] Support for Angular 1.8.x in Horizon (fixing Debian Bullseye) In-Reply-To: References: <0dbeab98-a93e-efba-c71f-dbf22596f585@debian.org> Message-ID: Hi Vishal, Thanks a lot for the reactivity. I know almost nothing about AngularJS, though I will happily test patches (manually) and report problems. I can at least try and see if the Glance image panel and the Nova SSH keypair screens get repaired... :) Cheers, Thomas Goirand (zigo) On 5/6/21 8:26 PM, vishal manchanda wrote: > Hi Thomas, > > Horizon team discussed this topic in yesterday's horizon weekly meeting. > I will try to push a patch to Update XStatic-Angular to 1.8.2 from 1.5.8 > by next week. > As of now, we use 1.5.8 angularjs version [1]. > I have also reported a new bug for that so it's easy to track [2]. > It's going to take some time as we have a small team. > It would be great if you can also review related patches. > > Thanks & Regards, > Vishal Manchanda > > [1] > https://opendev.org/openstack/horizon/src/branch/master/requirements.txt#L44 > > [2] https://bugs.launchpad.net/horizon/+bug/1927261 > > > On Tue, May 4, 2021 at 8:16 PM Thomas Goirand > wrote: > > Hi, > > In Debian Bullseye, we've noticed that the ssh keypair and Glance image > panels are broken. We have python3-xstatic-angular that used to depends > on libjs-angularjs, and that libjs-angularjs moved to 1.8.2. Therefore, > Horizon in Bullseye appears broken. > > I have re-embedded Angula within the python3-xstatic-angular and ask the > Debian release team for an unblock, but due to the fact that the Debian > policy is to *not* allow twice the same library with different versions, > I have little hope for this unblock request to be approved. See the > discussion here: > https://bugs.debian.org/988054 > > So my question is: how hard would it be to fix Horizon so that it could > work with libjs-angularjs 1.8.2 ? Is there any patch already available > for this? > > Cheers, > > Thomas Goirand (zigo) > From arne.wiebalck at cern.ch Fri May 7 09:09:39 2021 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Fri, 7 May 2021 11:09:39 +0200 Subject: [baremetal-sig][ironic] Tue May 11, 2021, 2pm UTC: Why Ironic? Message-ID: <4e754784-6574-b173-74d2-d4758716c157@cern.ch> Dear all, The Bare Metal SIG will meet next week Tue May 11, 2021, at 2pm UTC on zoom. The meeting will feature a "topic-of-the-day" presentation by Arne Wiebalck (arne_wiebalck) on "Why Ironic? 
(The Case of Ironic in CERN IT)" As usual, all details on https://etherpad.opendev.org/p/bare-metal-sig Everyone is welcome! Cheers, Arne From mkopec at redhat.com Fri May 7 11:48:36 2021 From: mkopec at redhat.com (Martin Kopec) Date: Fri, 7 May 2021 13:48:36 +0200 Subject: [devstack][infra] POST_FAILURE on export-devstack-journal : Export journal In-Reply-To: <7626869f-dab3-41df-a40b-dafa20dcfaf4@www.fastmail.com> References: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> <7626869f-dab3-41df-a40b-dafa20dcfaf4@www.fastmail.com> Message-ID: hmm, seems like we have hit the issue again, however in a different job now: Latest logs: https://zuul.opendev.org/t/openstack/build/0565c3d252194f9ba67f4af20e8be65d Link to the review where it occurred: https://review.opendev.org/c/osf/refstack-client/+/788743 On Tue, 6 Apr 2021 at 18:47, Clark Boylan wrote: > On Tue, Apr 6, 2021, at 9:11 AM, Radosław Piliszek wrote: > > On Tue, Apr 6, 2021 at 6:02 PM Jeremy Stanley wrote: > > > Looking at the error, I strongly suspect memory exhaustion. We could > > > try tuning xz to use less memory when compressing. > > Worth noting that we continue to suspect memory pressure, and in > particular diving into swap, for random failures that appear timing or > performance related. I still think it would be a helpful exercise for > OpenStack to look at its memory consumption (remember end users will > experience this too) and see if there are any unexpected areas of memory > use. I think the last time i skimmed logs the privsep daemon was a large > consumer because we separate instance is run for each service and they all > add up. > > > > > That was my hunch as well, hence why I test using gzip. > > > > On Tue, Apr 6, 2021 at 5:51 PM Clark Boylan > wrote: > > > > > > On Tue, Apr 6, 2021, at 8:14 AM, Radosław Piliszek wrote: > > > > I am testing whether replacing xz with gzip would solve the problem > [1] [2]. > > > > > > The reason we used xz is that the files are very large and gz > compression is very poor compared to xz for these files and these files are > not really human readable as is (you need to load them into journald > first). Let's test it and see what the gz file sizes look like but if they > are still quite large then this is unlikely to be an appropriate fix. > > > > Let's see how bad the file sizes are. > > If they are acceptable, we can keep gzip and be happy. > > Otherwise we try to tune the params to make xz a better citizen as > > fungi suggested. > > > > -yoctozepto > > > > > > -- Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Fri May 7 11:59:09 2021 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Fri, 7 May 2021 13:59:09 +0200 Subject: =?UTF-8?Q?Re=3a_Proposing_C=c3=a9dric_Jeanneret_=28Tengu=29_for_tri?= =?UTF-8?Q?pleo-core?= In-Reply-To: References: Message-ID: <68c5bba7-da73-60d8-02d9-62f340fcfe87@redhat.com> :) Thank you James, and thank you all! I'll do my best to use my full "power" as wisely as possible. Have a great one all! C. On 4/29/21 5:53 PM, James Slagle wrote: > I'm proposing we formally promote Cédric to full tripleo-core duties. He > is already in the gerrit group with the understanding that his +2 is for > validations. His experience and contributions have grown a lot since > then, and I'd like to see that +2 expanded to all of TripleO. > > If there are no objections, we'll consider the change official at the > end of next week. 
> > -- > -- James Slagle > -- -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From fungi at yuggoth.org Fri May 7 13:34:56 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 7 May 2021 13:34:56 +0000 Subject: [devstack][infra] POST_FAILURE on export-devstack-journal : Export journal In-Reply-To: References: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> <7626869f-dab3-41df-a40b-dafa20dcfaf4@www.fastmail.com> Message-ID: <20210507133456.aaah2b52g35aaat2@yuggoth.org> On 2021-05-07 13:48:36 +0200 (+0200), Martin Kopec wrote: > hmm, seems like we have hit the issue again, however in a different job now: > Latest logs: > https://zuul.opendev.org/t/openstack/build/0565c3d252194f9ba67f4af20e8be65d > Link to the review where it occurred: > https://review.opendev.org/c/osf/refstack-client/+/788743 [...] It was addressed in the master branch a month ago with https://review.opendev.org/784964 wasn't backported to any older branches (or if it was then the backports haven't merged yet). Looking at the zuul._inheritance_path from the inventory for your build, it seems to have used stable/wallaby of devstack rather than master, which explains why you're still seeing xzip used. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From marios at redhat.com Fri May 7 13:42:17 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 7 May 2021 16:42:17 +0300 Subject: [TripleO] Closed out wallaby milestone in Launchpad and moved bugs to xena-1 In-Reply-To: References: Message-ID: On Thu, May 6, 2021 at 7:05 PM Marios Andreou wrote: > > Hello, > > I closed out the wallaby-rc1 milestone in launchpad and moved ongoing > bugs over to xena-1 https://launchpad.net/tripleo/+milestone/xena-1 - > info on the moved bugs @ [1] > > Thanks to tosky for pinging on tripleo about this: there were problems > a while back with gerrit <--> launchpad integration so there are > likely many bugs that should be in fix-released but weren't moved > automatically after the related patches merged. > > Please take a moment to check the bugs assigned to you in > https://launchpad.net/tripleo/+milestone/xena-1 and make sure the > status reflects reality? I just ran close_bugs.py [1] against xena-1 and got about ~ 40 of the really stale things those listed there https://gist.github.com/marios/b3155fe3b1318cc26bfa4bc15c764a26#gistcomment-3734805 [1] https://review.opendev.org/c/openstack/tripleo-ci/+/776246/ > > regards, marios > > [1] https://gist.github.com/marios/b3155fe3b1318cc26bfa4bc15c764a26#gistcomment-3733636 From radoslaw.piliszek at gmail.com Fri May 7 19:24:39 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 7 May 2021 21:24:39 +0200 Subject: [oslo] [tooz] Why does not tooz's tox env use upper-constraints? Message-ID: The question is the subject. Why does not tooz's tox env use upper-constraints? I asked this on IRC once some time ago but did not receive an answer so trying out ML as a more reliable medium. 
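(For context, "using upper-constraints" here means the deps pattern found in most OpenStack projects' tox.ini, roughly along these lines -- a generic sketch, not tooz's actual file:

  [testenv]
  deps =
    -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt

tooz's tox environments skip the -c line, so its tests run against whatever versions pip resolves at that moment.)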
-yoctozepto From cboylan at sapwetik.org Fri May 7 20:53:29 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 07 May 2021 13:53:29 -0700 Subject: [oslo] [tooz] Why does not tooz's tox env use upper-constraints? In-Reply-To: References: Message-ID: On Fri, May 7, 2021, at 12:24 PM, Radosław Piliszek wrote: > The question is the subject. > Why does not tooz's tox env use upper-constraints? > > I asked this on IRC once some time ago but did not receive an answer > so trying out ML as a more reliable medium. https://review.opendev.org/c/openstack/tooz/+/413365 is the change that added the comment about the exclusion to tox.ini. Reading the comments there it seems one reason people were skipping upper-constraints was to find problems with dependencies early (rather than avoiding them with constraints and updating when everyone was happy to work with it). Not sure if Tony is still following this list but may have more info too. > > -yoctozepto > > From zigo at debian.org Fri May 7 21:51:37 2021 From: zigo at debian.org (Thomas Goirand) Date: Fri, 7 May 2021 23:51:37 +0200 Subject: [oslo] [tooz] Why does not tooz's tox env use upper-constraints? In-Reply-To: References: Message-ID: On 5/7/21 10:53 PM, Clark Boylan wrote: > On Fri, May 7, 2021, at 12:24 PM, Radosław Piliszek wrote: >> The question is the subject. >> Why does not tooz's tox env use upper-constraints? >> >> I asked this on IRC once some time ago but did not receive an answer >> so trying out ML as a more reliable medium. > > https://review.opendev.org/c/openstack/tooz/+/413365 is the change that added the comment about the exclusion to tox.ini. Reading the comments there it seems one reason people were skipping upper-constraints was to find problems with dependencies early (rather than avoiding them with constraints and updating when everyone was happy to work with it). > > Not sure if Tony is still following this list but may have more info too. > >> >> -yoctozepto Hi, Another more social reason, is that the people behind Tooz (that are now not involved in the project) loved to do things their own way, and loved experimentation, even if this meant not always being fully aligned with the best practices of the rest of the OpenStack project. I'm sure old-timers will know what I'm talking about! :) Anyways, it's probably time to realign... Cheers, Thomas Goirand (zigo) P.S: This post is in no way a critic of the involved people or what they did (I actually enjoyed a lot interacting with them), I'm just exposing a fact... From artem.goncharov at gmail.com Sat May 8 10:01:12 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Sat, 8 May 2021 12:01:12 +0200 Subject: [sdk]: identity service if get_application_credential method could use user name In-Reply-To: <308b555bdd500119c9f17535a50c0649@uvic.ca> References: <308b555bdd500119c9f17535a50c0649@uvic.ca> Message-ID: Hi > We are wondering if we could use the user name to get it instead of the user id? If I do get_application_credential(user='catherine', application_credential = 'app_cred_id'), then it will show me an error that "You are not authorized to perform the requested action: identity:get_application_credential". Is there any method that no need user info, can just use the application credential id to get the expiration date? We also didn't find any documentation about the application credential in openstacksdk identity service docs. 
You can use: user = conn.identity.find_user(name_or_id = ‘my_user’) ac = conn.identity.find_application_credential(user=user, name_or_id=‘app_cred’) Regards, Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkopec at redhat.com Sun May 9 08:54:17 2021 From: mkopec at redhat.com (Martin Kopec) Date: Sun, 9 May 2021 10:54:17 +0200 Subject: [devstack][infra] POST_FAILURE on export-devstack-journal : Export journal In-Reply-To: <20210507133456.aaah2b52g35aaat2@yuggoth.org> References: <20210406160247.gevud2hlvodg7jzt@yuggoth.org> <7626869f-dab3-41df-a40b-dafa20dcfaf4@www.fastmail.com> <20210507133456.aaah2b52g35aaat2@yuggoth.org> Message-ID: right, thank you. I've proposed a backport to wallaby: https://review.opendev.org/c/openstack/devstack/+/790353 and verifying it solves the problem here: https://review.opendev.org/c/osf/refstack-client/+/788743 On Fri, 7 May 2021 at 15:35, Jeremy Stanley wrote: > On 2021-05-07 13:48:36 +0200 (+0200), Martin Kopec wrote: > > hmm, seems like we have hit the issue again, however in a different job > now: > > Latest logs: > > > https://zuul.opendev.org/t/openstack/build/0565c3d252194f9ba67f4af20e8be65d > > Link to the review where it occurred: > > https://review.opendev.org/c/osf/refstack-client/+/788743 > [...] > > It was addressed in the master branch a month ago with > https://review.opendev.org/784964 wasn't backported to any older > branches (or if it was then the backports haven't merged yet). > Looking at the zuul._inheritance_path from the inventory for your > build, it seems to have used stable/wallaby of devstack rather than > master, which explains why you're still seeing xzip used. > -- > Jeremy Stanley > -- Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sun May 9 09:11:15 2021 From: zigo at debian.org (Thomas Goirand) Date: Sun, 9 May 2021 11:11:15 +0200 Subject: [nova] Getting compute nodes disabled by default Message-ID: <30079c98-8087-d47f-b3f7-0bccc2bcf010@debian.org> Hi Nova team, I was wondering if there was a way so that new compute nodes appear as disabled when they pop up. Indeed, the default workflow is that they first appear in the "nova" availability zone, and enabled by default. This doesn't really fit a production environment. I'd prefer if they could appear disabled, and if I had to enable them manually, when I'm finished running validity tests, and when my scripts have finished moving the new compute in the correct availability zone. Your thoughts? Cheers, Thomas Goirand (zigo) From tkajinam at redhat.com Sun May 9 13:23:46 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Sun, 9 May 2021 22:23:46 +0900 Subject: [puppet][glare] Propose retiring puppet-glare In-Reply-To: References: Message-ID: Thank you all who shared your thoughts on this. Because we haven't heard any objections but only +1s, I'll move the retirement patches forward. https://review.opendev.org/q/topic:%22retire-puppet-glare%22+(status:open%20OR%20status:merged) On Tue, May 4, 2021 at 12:48 AM Takashi Kajinami wrote: > Adding [glare] tag just in case any folks from the project are still around > and have any feedback about this. 
> > On Sun, Apr 25, 2021 at 10:55 PM Takashi Kajinami > wrote: > >> Hello, >> >> >> I'd like to propose retiring puppet-galre project, because the Glare[1] >> project >> looks inactive for a while based on the following three points >> - No actual development is made for 2 years >> - No release was made since the last Rocky release >> - setup.cfg is not maintained and the python versions listed are very >> outdated >> >> [1] https://opendev.org/x/glare >> >> I'll wait for 1-2 weeks to hear opinions from others. >> If anybody is interested in keeping the puppet-glare project or has >> intention to >> maintain Glare itself then please let me know. >> >> Thank you, >> Takashi >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sun May 9 17:24:04 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 09 May 2021 12:24:04 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 7th May, 21: Reading: 5 min Message-ID: <17952284f5b.c9824bb5241321.6967307880371956113@ghanshyammann.com> Hello Everyone, Here is last week's summary of the Technical Committee activities. 1. What we completed this week: ========================= Project updates: ------------------- ** None for this week. Other updates: ------------------ ** Swift and Ironic added the 'assert:supports-standalone' tag. ** Update Nodejs Runtime to Nodejs14 From Nodejs10 for Xena Cycle[1] ** Reduced TC office hours to one per week[2] 2. TC Meetings: ============ * TC held this week meeting on Thursday; you can find the full meeting logs in the below link: - http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-05-06-15.00.log.html * We will have next week's meeting on May 13th, Thursday 15:00 UTC[3]. 3. Activities In progress: ================== TC Tracker for Xena cycle ------------------------------ TC is using the etherpad[4] for Xena cycle working item. We will be checking and updating the status biweekly in the same etherpad. Open Reviews ----------------- * Two open reviews for ongoing activities[5]. Starting the 'Y' release naming process --------------------------------------------- * Governance patch is up to finalize the dates for Y release naming process[6] * https://wiki.openstack.org/wiki/Release_Naming/Y_Proposals Others --------- * Replacing ATC terminology with AC[7]. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[8]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [9] 3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [10] 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. 
[1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022247.html [2] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours [3] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [4] https://etherpad.opendev.org/p/tc-xena-tracker [5] https://review.opendev.org/q/project:openstack/governance+status:open [6] https://review.opendev.org/c/openstack/governance/+/789385 [7] https://review.opendev.org/c/openstack/governance/+/790092 [8] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [9] http://eavesdrop.openstack.org/#Technical_Committee_Meeting [10] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours -gmann From tonyliu0592 at hotmail.com Mon May 10 02:04:29 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Mon, 10 May 2021 02:04:29 +0000 Subject: [tripleo] nic config sample for OVN Message-ID: Hi, Could anyone share a nic configuration template for OVN? In my case, the external interface is a VLAN interface. Thanks! Tony From tonyliu0592 at hotmail.com Mon May 10 02:17:35 2021 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Mon, 10 May 2021 02:17:35 +0000 Subject: [os-net-config] add VLAN interface to OVS bridge Message-ID: Hi, I want to have a VLAN interface on a OVS bridge without tagging. Can that be done by os-net-config? I tried couple things. 1) Define a VLAN interface as a member of OVS bridge. In this case, VLAN interface is created and added on OVS bridge, but with the tag of VLAN ID. ``` {"network_config": [ { "type": "ovs_bridge", "name": "br-test", "use_dhcp": false, "addresses": [{"ip_netmask": "10.250.1.5/24"}], "members": [ { "type": "vlan", "device": "eno4", "vlan_id": 100, "primary": true } ] } ]} ``` 2) Define a VLAN interface out of OVS bridge and reference that VLAN interface as the type of "interface" in the member. This doesn't work unless the VLAN interface exists before running os-net-config. ``` {"network_config": [ { "type": "vlan", "vlan_id": 100, "device": "eno4", "addresses": [{"ip_netmask": "10.250.1.5/24"}] }, { "type": "ovs_bridge", "name": "br-test", "use_dhcp": false, "members": [ { "type": "interface", "name": "vlan100", "primary": true } ] } ]} ``` Anything else I can try? Thanks! Tony From swogatpradhan22 at gmail.com Mon May 10 05:43:37 2021 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Mon, 10 May 2021 11:13:37 +0530 Subject: [heat] [aodh] [autoscaling] [openstack victoria] How to setup aodh and heat to auto scale in openstack victoria based on cpu usage? In-Reply-To: References: Message-ID: Previously we had transformers to calculate the cpu_util for us but now the whole transformers section has been completely removed so how to create a metric for cpu usage so that it can be used in heat template? On Wed, May 5, 2021 at 10:32 AM Swogat Pradhan wrote: > Hi, > > How to set up aodh and heat to auto scale in openstack victoria based on > cpu usage? > > As the metric cpu_util is now deprecated, how can someone use heat to auto > scale up and down using cpu usage of the instance? > > I checked the ceilometer package and I could see the transformers are > removed so i can't see any other way to perform arithmetic operations and > generate cpu usage in percentage. > > Any insight is appreciated. > > > with regards > > Swogat pradhan > -------------- next part -------------- An HTML attachment was scrubbed... 
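(A hedged sketch of the usual cpu_util replacement, in case it helps: the ceilometer "cpu" metric is cumulative CPU time in nanoseconds, so its rate of change plays the role cpu_util used to. Assuming your Gnocchi archive policy supports rate aggregation, an Aodh alarm along these lines can drive the scaling policy; all values below are illustrative:

  openstack alarm create \
    --name cpu-high \
    --type gnocchi_aggregation_by_resources_threshold \
    --metric cpu \
    --aggregation-method rate:mean \
    --granularity 300 \
    --evaluation-periods 1 \
    --comparison-operator gt \
    --threshold 240000000000.0 \
    --resource-type instance \
    --query '{"=": {"server_group": "<stack id of the scaling group>"}}' \
    --alarm-action '<scale-up signal URL from the Heat scaling policy>'

240000000000.0 ns of CPU time over a 300 s granularity is roughly 80% of one vCPU. In a Heat template the equivalent resource is OS::Aodh::GnocchiAggregationByResourcesAlarm with metric "cpu" and aggregation_method "rate:mean".)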
URL: From balazs.gibizer at est.tech Mon May 10 06:30:35 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Mon, 10 May 2021 08:30:35 +0200 Subject: [nova] Getting compute nodes disabled by default In-Reply-To: <30079c98-8087-d47f-b3f7-0bccc2bcf010@debian.org> References: <30079c98-8087-d47f-b3f7-0bccc2bcf010@debian.org> Message-ID: Hi Zigo! I think you can configure this behavior via https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.enable_new_services Cheers, gibi On Sun, May 9, 2021 at 11:11, Thomas Goirand wrote: > Hi Nova team, > > I was wondering if there was a way so that new compute nodes appear as > disabled when they pop up. Indeed, the default workflow is that they > first appear in the "nova" availability zone, and enabled by default. > This doesn't really fit a production environment. I'd prefer if they > could appear disabled, and if I had to enable them manually, when I'm > finished running validity tests, and when my scripts have finished > moving the new compute in the correct availability zone. > > Your thoughts? > > Cheers, > > Thomas Goirand (zigo) > From Istvan.Szabo at agoda.com Mon May 10 06:59:21 2021 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Mon, 10 May 2021 06:59:21 +0000 Subject: Nova db instance table node field where it comes from? Message-ID: Hello, Where this entry is coming from of an instance? I haven't found in the libvirt instance.xml and to be honest don't know where else to look. Thank you ________________________________ This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Mon May 10 07:33:33 2021 From: mrunge at matthias-runge.de (Matthias Runge) Date: Mon, 10 May 2021 09:33:33 +0200 Subject: [telemetry] Retire panko In-Reply-To: <035722a2-7860-6469-be83-240aa4a72ff3@matthias-runge.de> References: <035722a2-7860-6469-be83-240aa4a72ff3@matthias-runge.de> Message-ID: On Tue, Apr 27, 2021 at 11:29:37AM +0200, Matthias Runge wrote: > Hi there, > > over the past couple of cycles, we have seen decreasing interest on panko. > Also it has some debts, which were just carried over from the early days. > > We discussed over at the PTG and didn't really found a reason to keep it > alive or included under OpenStack. > > With that, it also makes sense to retire puppet-panko. > > I'll wait for 1-2 weeks and propose appropriate patches to get it retired > then, if I don't hear anything against it or if there are any takers. > 2 weeks are nearly over. I haven't heard any feedback on this. I'll wait until Wednesday and then pull the trigger otherwise. 
Matthias -- Matthias Runge From ricolin at ricolky.com Mon May 10 09:07:51 2021 From: ricolin at ricolky.com (Rico Lin) Date: Mon, 10 May 2021 17:07:51 +0800 Subject: [Multi-arch][SIG] Biweekly Meeting tomorrow at 0700 and 1500 UTC Message-ID: Dear Multi-arch forks We will host our meeting tomorrow, please fill in the agenda [1] if you got anything you would like us to discuss about Here's our meeting schedule and location [2]: Tuesday at 0700 UTC in #openstack-meeting-alt Tuesday at 1500 UTC in #openstack-meeting [1] https://etherpad.opendev.org/p/Multi-Arch-agenda [2] http://eavesdrop.openstack.org/#Multi-Arch_SIG_Meeting *Rico Lin* -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon May 10 09:14:43 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 10 May 2021 11:14:43 +0200 Subject: [oslo] [tooz] Why does not tooz's tox env use upper-constraints? In-Reply-To: References: Message-ID: Hello, I don't know the reason behind this choice, however, you should notice that tooz became an independent project a few weeks ago [1]. Usually independent projects (at least those under the Oslo scope) are not using U.C (e.g pbr by example). So, by taking account of the independent model, I wonder if we need a UC at this point. Any opinions or thoughts? [1] https://opendev.org/openstack/releases/commit/a0f4ef7053e00418ac4800e4d8428d05dadc7594 Le ven. 7 mai 2021 à 23:54, Thomas Goirand a écrit : > On 5/7/21 10:53 PM, Clark Boylan wrote: > > On Fri, May 7, 2021, at 12:24 PM, Radosław Piliszek wrote: > >> The question is the subject. > >> Why does not tooz's tox env use upper-constraints? > >> > >> I asked this on IRC once some time ago but did not receive an answer > >> so trying out ML as a more reliable medium. > > > > https://review.opendev.org/c/openstack/tooz/+/413365 is the change that > added the comment about the exclusion to tox.ini. Reading the comments > there it seems one reason people were skipping upper-constraints was to > find problems with dependencies early (rather than avoiding them with > constraints and updating when everyone was happy to work with it). > > > > Not sure if Tony is still following this list but may have more info too. > > > >> > >> -yoctozepto > > Hi, > > Another more social reason, is that the people behind Tooz (that are now > not involved in the project) loved to do things their own way, and loved > experimentation, even if this meant not always being fully aligned with > the best practices of the rest of the OpenStack project. I'm sure > old-timers will know what I'm talking about! :) > > Anyways, it's probably time to realign... > > Cheers, > > Thomas Goirand (zigo) > > P.S: This post is in no way a critic of the involved people or what they > did (I actually enjoyed a lot interacting with them), I'm just exposing > a fact... 
> > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdemaced at redhat.com Mon May 10 09:19:56 2021 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Mon, 10 May 2021 11:19:56 +0200 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <17939f6f8b6.f875ee3627306.6954932512782884355@ghanshyammann.com> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> <20210504001111.52o2fgjeyizhiwts@barron.net> <17937a935ab.c1df754d5757.2201956277196352904@ghanshyammann.com> <20210504143608.mcmov6clb6vgkrpl@barron.net> <17939f6f8b6.f875ee3627306.6954932512782884355@ghanshyammann.com> Message-ID: Hello, The kuryr-kubernetes-tempest-train and kuryr-kubernetes-tempest-ussuri jobs on that list are for older stable branches, so they pin the correct nodeset. For the job that is running on master I proposed a patch to not specify the nodeset as it's not needed anymore[1]. Thanks, Maysa. [1] https://review.opendev.org/c/openstack/kuryr-kubernetes/+/789595/ On Wed, May 5, 2021 at 2:43 AM Ghanshyam Mann wrote: > ---- On Tue, 04 May 2021 16:23:26 -0500 Goutham Pacha Ravi < > gouthampravi at gmail.com> wrote ---- > > > > > > On Tue, May 4, 2021 at 7:40 AM Tom Barron wrote: > > On 04/05/21 08:55 -0500, Ghanshyam Mann wrote: > > > ---- On Mon, 03 May 2021 19:11:11 -0500 Tom Barron > wrote ---- > > > > On 03/05/21 08:50 -0500, Ghanshyam Mann wrote: > > > > > ---- On Sun, 02 May 2021 05:09:17 -0500 Radosław Piliszek < > radoslaw.piliszek at gmail.com> wrote ---- > > > > > > Dears, > > > > > > > > > > > > I have scraped the Zuul API to get names of jobs that *could* > run on > > > > > > master branch and are still on bionic. [1] > > > > > > "Could" because I could not establish from the API whether they > are > > > > > > included in any pipelines or not really (e.g., there are lots of > > > > > > transitive jobs there that have their nodeset overridden in > children > > > > > > and children are likely used in pipelines, not them). > > > > > > > > > > > > [1] https://paste.ubuntu.com/p/N3JQ4dsfqR/ > > > > > > > > The manila-image-elements and manila-test-image jobs listed here are > > > > not pinned and are running with bionic but I made reviews with them > > > > pinned to focal [2] [3] and they run fine. So I think manila is OK > > > > w.r.t. dropping bionic support. 
> > > > > > > > [2] > https://review.opendev.org/c/openstack/manila-image-elements/+/789296 > > > > > > > > [3] > https://review.opendev.org/c/openstack/manila-test-image/+/789409 > > > > > >Thanks, Tom for testing. Please merge these patches before devstack > patch merge. > > > > > >-gmann > > > > Dumb question probably, but ... > > > > Do we need to pin the nodepool for these jobs, or will they just start > > picking up focal? > > > > The jobs that were using the bionic nodes inherited from the > "unittests" job and are agnostic to the platform for the most part. The > unittest job inherits from the base jobs that fungi's modifying here: > https://review.opendev.org/c/opendev/base->jobs/+/789097/ and here: > https://review.opendev.org/c/opendev/base-jobs/+/789098 ; so no need to > pin a nodeset - we'll get the changes transparently when the patches merge. > > Yeah, they will be running on focal via the parent job nodeset so all good > here. > > For devstack based job too, manila-tempest-plugin-base job does not set > any nodeset so it use the one devstack base job define which is Focal > - > https://opendev.org/openstack/manila-tempest-plugin/src/branch/master/zuul.d/manila-tempest-jobs.yaml#L2 > > -gmann > > > -- Tom > > > > > > > > > > > > > > > > >Thanks for the list. We need to only worried about jobs using > devstack master branch. Along with > > > > >non-devstack jobs. there are many stable testing jobs also on the > master gate which is all good to > > > > >pin the bionic nodeset, for example - > 'neutron-tempest-plugin-api-ussuri'. > > > > > > > > > >From the list, I see few more projects (other than listed in the > subject of this email) jobs, so tagging them > > > > >now: sahara, networking-sfc, manila, magnum, kuryr. > > > > > > > > > >-gmann > > > > > > > > > > > > > > > > > -yoctozepto > > > > > > > > > > > > On Fri, Apr 30, 2021 at 12:28 AM Ghanshyam Mann < > gmann at ghanshyammann.com> wrote: > > > > > > > > > > > > > > Hello Everyone, > > > > > > > > > > > > > > As per the testing runtime since Victoria [1], we need to > move our CI/CD to Ubuntu Focal 20.04 but > > > > > > > it seems there are few jobs still running on Bionic. As > devstack team is planning to drop the Bionic support > > > > > > > you need to move those to Focal otherwise they will start > failing. We are planning to merge the devstack patch > > > > > > > by 2nd week of May. > > > > > > > > > > > > > > - https://review.opendev.org/c/openstack/devstack/+/788754 > > > > > > > > > > > > > > I have not listed all the job but few of them which were > failing with ' rtslib-fb-targetctl error' are below: > > > > > > > > > > > > > > Cinder- cinder-plugin-ceph-tempest-mn-aa > > > > > > > - > https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 > > > > > > > > > > > > > > python-cinderclient - python-cinderclient-functional-py36 > > > > > > > - > https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > > > > > > > > > > > > > Octavia- > https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182 > > > > > > > > > > > > > > Murani- murano-dashboard-sanity-check > > > > > > > - > https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 > > > > > > > > > > > > > > Also if your 3rd party CI is still running on Bionic, you can > plan to migrate it to Focal before devstack patch merge. 
> > > > > > > > > > > > > > [1] > https://governance.openstack.org/tc/reference/runtimes/victoria.html > > > > > > > > > > > > > > -gmann > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon May 10 09:36:29 2021 From: zigo at debian.org (Thomas Goirand) Date: Mon, 10 May 2021 11:36:29 +0200 Subject: [nova] Getting compute nodes disabled by default In-Reply-To: References: <30079c98-8087-d47f-b3f7-0bccc2bcf010@debian.org> Message-ID: <8f71ed83-01a4-2b31-88c9-43f58402c80e@debian.org> On 5/10/21 8:30 AM, Balazs Gibizer wrote: > Hi Zigo! > > I think you can configure this behavior via > https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.enable_new_services > > > Cheers, > gibi Indeed, that's exactly what I needed. Thanks! Cheers, Thomas Goirand (zigo) > > On Sun, May 9, 2021 at 11:11, Thomas Goirand wrote: >> Hi Nova team, >> >> I was wondering if there was a way so that new compute nodes appear as >> disabled when they pop up. Indeed, the default workflow is that they >> first appear in the "nova" availability zone, and enabled by default. >> This doesn't really fit a production environment. I'd prefer if they >> could appear disabled, and if I had to enable them manually, when I'm >> finished running validity tests, and when my scripts have finished >> moving the new compute in the correct availability zone. >> >> Your thoughts? >> >> Cheers, >> >> Thomas Goirand (zigo) >> > > > From radoslaw.piliszek at gmail.com Mon May 10 10:02:40 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 10 May 2021 12:02:40 +0200 Subject: [oslo] [tooz] Why does not tooz's tox env use upper-constraints? In-Reply-To: References: Message-ID: On Mon, May 10, 2021 at 11:15 AM Herve Beraud wrote: > Hello, > > I don't know the reason behind this choice, however, you should notice > that tooz became an independent project a few weeks ago [1]. > > Usually independent projects (at least those under the Oslo scope) are not > using U.C (e.g pbr by example). So, by taking account of the independent > model, I wonder if we need a UC at this point. > > Any opinions or thoughts? > > [1] > https://opendev.org/openstack/releases/commit/a0f4ef7053e00418ac4800e4d8428d05dadc7594 > Oh, that is enough of a reason, thanks. :-) Also, thanks to others for providing the historical background. Cheers, -yoctozepto -------------- next part -------------- An HTML attachment was scrubbed... URL: From vuk.gojnic at gmail.com Mon May 10 12:28:01 2021 From: vuk.gojnic at gmail.com (Vuk Gojnic) Date: Mon, 10 May 2021 14:28:01 +0200 Subject: [ironic] IPA image does not want to boot with UEFI In-Reply-To: References: Message-ID: Hi Julia,hello everybody, I have finally got some time to test it further and found some interesting things (see below). I have also got some good tips and support in the IRC. I have changed the defaults and set both boot parameters to "uefi": [deploy] default_boot_mode = uefi [ilo] default_boot_mode = uefi It is detecting the mode correctly and properly configures server to boot. 
See some latest extract from logs related to "openstack baremetal node provide" operation: 2021-05-10 08:45:28.233 561784 INFO ironic.conductor.task_manager [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Node ca718c74-77a6-46df-8b44-6a83db6a0ebe moved to provision state "cleaning" from state "manageable"; target provision state is "available" 2021-05-10 08:45:32.235 561784 INFO ironic.drivers.modules.ilo.power [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] The node ca718c74-77a6-46df-8b44-6a83db6a0ebe operation of 'power off' is completed in 2 seconds. 2021-05-10 08:45:32.255 561784 INFO ironic.conductor.utils [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Successfully set node ca718c74-77a6-46df-8b44-6a83db6a0ebe power state to power off by power off. 2021-05-10 08:45:34.056 561784 INFO ironic.drivers.modules.ilo.common [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Node ca718c74-77a6-46df-8b44-6a83db6a0ebe pending boot mode is uefi. 2021-05-10 08:45:35.470 561784 INFO ironic.drivers.modules.ilo.common [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Set the node ca718c74-77a6-46df-8b44-6a83db6a0ebe to boot from URL https://10.23.137.234/tmp-images/ilo/boot-ca718c74-77a6-46df-8b44-6a83db6a0ebe.iso?filename=tmpi262v2zy.iso successfully. 2021-05-10 08:45:43.485 561784 WARNING oslo.service.loopingcall [-] Function 'ironic.drivers.modules.ilo.power._wait_for_state_change.._wait' run outlasted interval by 1.32 sec 2021-05-10 08:45:44.857 561784 INFO ironic.drivers.modules.ilo.power [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] The node ca718c74-77a6-46df-8b44-6a83db6a0ebe operation of 'power on' is completed in 4 seconds. 2021-05-10 08:45:44.872 561784 INFO ironic.conductor.utils [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Successfully set node ca718c74-77a6-46df-8b44-6a83db6a0ebe power state to power on by rebooting. 2021-05-10 08:45:44.884 561784 INFO ironic.conductor.task_manager [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Node ca718c74-77a6-46df-8b44-6a83db6a0ebe moved to provision state "clean wait" from state "cleaning"; target provision state is "available" Everyhing goes ok and I come to Grub2. I can load the kernel with: grub> linux /vmlinuz However when I try to load initrd with: grub> initrd /initrd It first waits and goes to black screen with red cursor which is frozen. I have tried same procedure with standard ubuntu kernel and initrd from http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/current/images/hd-media/ and it works correctly and starts the installer. I went to try the combinations: - kernel from Ubuntu server + initrd from custom IPA: this was failing the same way as described above - kernel from IPA + initrd from Ubuntu server: this was working and starting the Ubuntu installer. - kernel from Ubuntu servetr + initrd from IPA download server (ipa-centos8-stable-victoria.initramfs): failing same as above. I am pretty lost with what is going on :( Does anyone have more ideas? -Vuk On Thu, Apr 1, 2021 at 8:42 PM Julia Kreger wrote: > Adding the list back and trimming the message. Replies in-band. > > Well, that is good that the server is not signed, nor other esp images > are not working. > On Thu, Apr 1, 2021 at 11:20 AM Vuk Gojnic wrote: > > > > Hey Julia, > > > > Thanks for asking. I have tried with several ESP image options with same > effect (one taken from Ubuntu Live ISO that boots on that node, another > downloaded and third made with grub tools). None of them was signed. > > Interesting. 
At least it is consistent! Have you tried to pull down > the iso image and take it apart to verify it is UEFI bootable against > a VM or another physical machine? > > I'm wondering if you need both uefi parameters set. You definitely > don't have properties['capabilities']['boot_mode'] set which is used > or... maybe a better word to use is drawn in for asserting defaults, > but you do have the deploy_boot_mode setting set. > > I guess a quick manual sanity check of the actual resulting iso image > is going to be critical. Debug logging may also be useful, and I'm > only thinking that because there is no logging from the generation of > the image. > > > > > The server is not in UEFI secure boot mode. > > Interesting, sure sounds like it is based on your original message. :( > > > Btw. I will be on holidays for next week so I might not be able to > follow up on this discussion before Apr 12th. > > No worries, just ping us on irc.freenode.net in #openstack-ironic if a > reply on the mailing list doesn't grab our attention. > > > > > Bests, > > Vuk > > > > On Thu, Apr 1, 2021 at 4:20 PM Julia Kreger > wrote: > >> > >> Greetings, > >> > >> Two questions: > >> 1) Are the ESP image contents signed, or are they built using one of > >> the grub commands? > >> 2) Is the machine set to enforce secure boot at this time? > >> > >> > [trim] > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Mon May 10 12:29:43 2021 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Mon, 10 May 2021 12:29:43 +0000 Subject: [largescale-sig][neutron] What driver are you using? Message-ID: Hey large-scalers, We had a discusion in my company (OVH) about neutron drivers. We are using a custom driver based on BGP for public networking, and another custom driver for private networking (based on vlan). Benefits from this are obvious: - we maintain the code - we do what we want, not more, not less - it fits perfectly to the network layer our company is using - we have full control of the networking stack But it also have some downsides: - we have to maintain the code... (rebasing, etc.) - we introduce bugs that are not upstream (more code, more bugs) - a change in code is taking longer, we have few people working on this (compared to a community based) - this is not upstream (so not opensource) - we are not sharing (bad) So, we were wondering which drivers are used upstream in large scale environment (not sure a vlan driver can be used with more than 500 hypervisors / I dont know about vxlan or any other solution). Is there anyone willing to share this info? Thanks in advance! From whayutin at redhat.com Mon May 10 12:52:55 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 10 May 2021 06:52:55 -0600 Subject: [tripleo][ci] all c8 master / wallaby blocked Message-ID: https://bugs.launchpad.net/tripleo/+bug/1927952 https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-8-content-provider Issues atm w/ quay.io, we are making sure our failover is working properly atm. RDO was missing the latest ceph containers. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From DHilsbos at performair.com Mon May 10 14:57:11 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Mon, 10 May 2021 14:57:11 +0000 Subject: [ops][victoria][cinder] Import volume? 
In-Reply-To: <20210415170329.GA2777639@sm-workstation> References: <0670B960225633449A24709C291A52524FBE2F13@COM01.performair.local> <20210415163045.Horde.IKa9Iq6-satTI_sMmUk9Ahq@webmail.nde.ag> <20210415170329.GA2777639@sm-workstation> Message-ID: <0670B960225633449A24709C291A52525115AB9B@COM01.performair.local> All; I've been successful at the tasks listed below, but the resulting volumes don't boot. They stop early in the boot process indicating the root partition cannot be found. I suspect the issue is with either the disk UUID links, or with partition detection. When running the VMs under XenServer, we have /dev/disk/by-uuid/ --> /dev/xvda# Are these created dynamically by the kernel, or are they static soft-links? Can I get away with adjusting fstab to use what the partitions are likely to be after transition to OpenStac? Are these of this form: /dev/vda#? Thank you, Dominic L. Hilsbos, MBA Vice President – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: Sean McGinnis [mailto:sean.mcginnis at gmx.com] Sent: Thursday, April 15, 2021 10:03 AM To: Eugen Block Cc: Dominic Hilsbos; openstack-discuss at lists.openstack.org Subject: Re: [ops][victoria][cinder] Import volume? On Thu, Apr 15, 2021 at 04:30:45PM +0000, Eugen Block wrote: > Hi, > > there’s a ‚cinder manage‘ command to import an rbd image into openstack. > But be aware that if you delete it in openstack it will be removed from > ceph, too (like a regular cinder volume). > I don’t have the exact command syntax at hand right now, but try ‚cinder > help manage‘ > > Regards > Eugen > Here is the documentation for that command: https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-manage Also note, if you no longer need to manage the volume in Cinder, but you do not want it to be deleted from your storage backend, there is also the inverse command of `cinder unmanage`. Details for that command can be found here: https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-unmanage > > Zitat von DHilsbos at performair.com: > > > All; > > > > I'm looking to transfer several VMs from XenServer to an OpenStack > > Victoria cloud. Finding explanations for importing Glance images is > > easy, but I haven't been able to find a tutorial on importing Cinder > > volumes. > > > > Since they are currently independent servers / volumes it seems somewhat > > wasteful and messy to import each VMs disk as an image just to spawn a > > volume from it. > > > > We're using Ceph as the storage provider for Glance and Cinder. > > > > Thank you, > > > > Dominic L. Hilsbos, MBA > > Director - Information Technology > > Perform Air International Inc. > > DHilsbos at PerformAir.com > > www.PerformAir.com > > > > From ignaziocassano at gmail.com Mon May 10 15:16:28 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Mon, 10 May 2021 17:16:28 +0200 Subject: [stein][neutron] gratuitous arp In-Reply-To: <35985fecc7b7658d70446aa816d8ed612f942115.camel@redhat.com> References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <4fa3e29a7e654e74bc96ac67db0e755c@binero.com> <95ccfc366d4b497c8af232f38d07559f@binero.com> <35985fecc7b7658d70446aa816d8ed612f942115.camel@redhat.com> Message-ID: Hello Sean, I am testing the openstack migration on centos 7 train and live migration stops again: live migrated instances stop to responding to ping requests. 
I did not understand if I must apply patches you suggested in your last email to me and also the following: https://review.opendev.org/c/openstack/nova/+/741529 Il giorno ven 12 mar 2021 alle ore 23:44 Sean Mooney ha scritto: > On Fri, 2021-03-12 at 08:13 +0000, Tobias Urdin wrote: > > Hello, > > > > If it's the same as us, then yes, the issue occurs on Train and is not > completely solved yet. > there is a downstream bug trackker for this > > https://bugzilla.redhat.com/show_bug.cgi?id=1917675 > > its fixed by a combination of 3 enturon patches and i think 1 nova one > > https://review.opendev.org/c/openstack/neutron/+/766277/ > https://review.opendev.org/c/openstack/neutron/+/753314/ > https://review.opendev.org/c/openstack/neutron/+/640258/ > > and > https://review.opendev.org/c/openstack/nova/+/770745 > > the first tree neutron patches would fix the evauate case but break live > migration > the nova patch means live migration will work too although to fully fix > the related > live migration packet loss issues you need > > https://review.opendev.org/c/openstack/nova/+/747454/4 > https://review.opendev.org/c/openstack/nova/+/742180/12 > to fix live migration with network abckend that dont suppor tmultiple port > binding > and > https://review.opendev.org/c/openstack/nova/+/602432 (the only one not > merged yet.) > for live migrateon with ovs and hybridg plug=false (e.g. ovs firewall > driver, noop or ovn instead of ml2/ovs. > > multiple port binding was not actully the reason for this there was a race > in neutorn itslef that would have haapend > even without multiple port binding between the dhcp agent and l2 agent. > > some of those patches have been backported already and all shoudl > eventually make ti to train the could be brought to stine potentially > if peopel are open to backport/review them. > > > > > > > > Best regards > > > > ________________________________ > > From: Ignazio Cassano > > Sent: Friday, March 12, 2021 7:43:22 AM > > To: Tobias Urdin > > Cc: openstack-discuss > > Subject: Re: [stein][neutron] gratuitous arp > > > > Hello Tobias, the result is the same as your. > > I do not know what happens in depth to evaluate if the behavior is the > same. > > I solved on stein with patch suggested by Sean : force_legacy_port_bind > workaround. > > So I am asking if the problem exists also on train. > > Ignazio > > > > Il Gio 11 Mar 2021, 19:27 Tobias Urdin tobias.urdin at binero.com>> ha scritto: > > > > Hello, > > > > > > Not sure if you are having the same issue as us, but we are following > https://bugs.launchpad.net/neutron/+bug/1901707 but > > > > are patching it with something similar to > https://review.opendev.org/c/openstack/nova/+/741529 to workaround the > issue until it's completely solved. > > > > > > Best regards > > > > ________________________________ > > From: Ignazio Cassano ignaziocassano at gmail.com>> > > Sent: Wednesday, March 10, 2021 7:57:21 AM > > To: Sean Mooney > > Cc: openstack-discuss; Slawek Kaplonski > > Subject: Re: [stein][neutron] gratuitous arp > > > > Hello All, > > please, are there news about bug 1815989 ? > > On stein I modified code as suggested in the patches. > > I am worried when I will upgrade to train: wil this bug persist ? > > On which openstack version this bug is resolved ? > > Ignazio > > > > > > > > Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > > Hello, I tried to update to last stein packages on yum and seems this > bug still exists. 
> > Before the yum update I patched some files as suggested and and ping to > vm worked fine. > > After yum update the issue returns. > > Please, let me know If I must patch files by hand or some new parameters > in configuration can solve and/or the issue is solved in newer openstack > versions. > > Thanks > > Ignazio > > > > > > Il Mer 29 Apr 2020, 19:49 Sean Mooney smooney at redhat.com>> ha scritto: > > On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: > > > Many thanks. > > > Please keep in touch. > > here are the two patches. > > the first https://review.opendev.org/#/c/724386/ is the actual change > to add the new config opition > > this needs a release note and some tests but it shoudl be functional > hence the [WIP] > > i have not enable the workaround in any job in this patch so the ci run > will assert this does not break > > anything in the default case > > > > the second patch is https://review.opendev.org/#/c/724387/ which > enables the workaround in the multi node ci jobs > > and is testing that live migration exctra works when the workaround is > enabled. > > > > this should work as it is what we expect to happen if you are using a > moderne nova with an old neutron. > > its is marked [DNM] as i dont intend that patch to merge but if the > workaround is useful we migth consider enableing > > it for one of the jobs to get ci coverage but not all of the jobs. > > > > i have not had time to deploy a 2 node env today but ill try and test > this locally tomorow. > > > > > > > > > Ignazio > > > > > > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney < > smooney at redhat.com> > > > ha scritto: > > > > > > > so bing pragmatic i think the simplest path forward given my other > patches > > > > have not laned > > > > in almost 2 years is to quickly add a workaround config option to > disable > > > > mulitple port bindign > > > > which we can backport and then we can try and work on the actual fix > after. > > > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that > shoudl > > > > serve as a workaround > > > > for thos that hav this issue but its a regression in functionality. > > > > > > > > i can create a patch that will do that in an hour or so and submit a > > > > followup DNM patch to enabel the > > > > workaound in one of the gate jobs that tests live migration. > > > > i have a meeting in 10 mins and need to finish the pacht im > currently > > > > updating but ill submit a poc once that is done. > > > > > > > > im not sure if i will be able to spend time on the actul fix which i > > > > proposed last year but ill see what i can do. > > > > > > > > > > > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: > > > > > PS > > > > > I have testing environment on queens,rocky and stein and I can > make test > > > > > as you need. 
> > > > > Ignazio > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < > > > > > ignaziocassano at gmail.com> ha > scritto: > > > > > > > > > > > Hello Sean, > > > > > > the following is the configuration on my compute nodes: > > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt > > > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 > > > > > > libvirt-libs-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 > > > > > > libvirt-client-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 > > > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 > > > > > > libvirt-python-4.5.0-1.el7.x86_64 > > > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 > > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu > > > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch > > > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch > > > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > > > > > > > > > > > > > As far as firewall driver > > > > > > > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: > > > > > > > > > > > > firewall_driver = iptables_hybrid > > > > > > > > > > > > I have same libvirt/qemu version on queens, on rocky and on stein > > > > > > > > testing > > > > > > environment and the > > > > > > same firewall driver. > > > > > > Live migration on provider network on queens works fine. > > > > > > It does not work fine on rocky and stein (vm lost connection > after it > > > > > > > > is > > > > > > migrated and start to respond only when the vm send a network > packet , > > > > > > > > for > > > > > > example when chrony pools the time server). > > > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < > > > > > > > > smooney at redhat.com> > > > > > > ha scritto: > > > > > > > > > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: > > > > > > > > Hello, some updated about this issue. > > > > > > > > I read someone has got same issue as reported here: > > > > > > > > > > > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 > > > > > > > > > > > > > > > > If you read the discussion, someone tells that the garp must > be > > > > > > > > sent by > > > > > > > > qemu during live miration. > > > > > > > > If this is true, this means on rocky/stein the qemu/libvirt > are > > > > > > > > bugged. 
> > > > > > > > > > > > > > it is not correct. > > > > > > > qemu/libvir thas alsway used RARP which predates GARP to serve > as > > > > > > > > its mac > > > > > > > learning frames > > > > > > > instead > > > > > > > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol > > > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html > > > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 > > > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html > > > > > > > but was fixed by > > > > > > > > > > > > > > > > https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b > > > > > > > can you confirm you are not using the broken 2.6.0 release and > are > > > > > > > > using > > > > > > > 2.7 or newer or 2.4 and older. > > > > > > > > > > > > > > > > > > > > > > So I tried to use stein and rocky with the same version of > > > > > > > > libvirt/qemu > > > > > > > > packages I installed on queens (I updated compute and > controllers > > > > > > > > node > > > > > > > > > > > > > > on > > > > > > > > queens for obtaining same libvirt/qemu version deployed on > rocky > > > > > > > > and > > > > > > > > > > > > > > stein). > > > > > > > > > > > > > > > > On queens live migration on provider network continues to > work > > > > > > > > fine. > > > > > > > > On rocky and stein not, so I think the issue is related to > > > > > > > > openstack > > > > > > > > components . > > > > > > > > > > > > > > on queens we have only a singel prot binding and nova blindly > assumes > > > > > > > that the port binding details wont > > > > > > > change when it does a live migration and does not update the > xml for > > > > > > > > the > > > > > > > netwrok interfaces. > > > > > > > > > > > > > > the port binding is updated after the migration is complete in > > > > > > > post_livemigration > > > > > > > in rocky+ neutron optionally uses the multiple port bindings > flow to > > > > > > > prebind the port to the destiatnion > > > > > > > so it can update the xml if needed and if post copy live > migration is > > > > > > > enable it will asyconsly activate teh dest port > > > > > > > binding before post_livemigration shortenting the downtime. > > > > > > > > > > > > > > if you are using the iptables firewall os-vif will have > precreated > > > > > > > > the > > > > > > > ovs port and intermediate linux bridge before the > > > > > > > migration started which will allow neutron to wire it up (put > it on > > > > > > > > the > > > > > > > correct vlan and install security groups) before > > > > > > > the vm completes the migraton. > > > > > > > > > > > > > > if you are using the ovs firewall os-vif still precreates teh > ovs > > > > > > > > port > > > > > > > but libvirt deletes it and recreats it too. > > > > > > > as a result there is a race when using openvswitch firewall > that can > > > > > > > result in the RARP packets being lost. > > > > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > Ignazio Cassano > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < > > > > > > > > > > > > > > smooney at redhat.com> > > > > > > > > ha scritto: > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: > > > > > > > > > > Hello, I have this problem with rocky or newer with > > > > > > > > iptables_hybrid > > > > > > > > > > firewall. > > > > > > > > > > So, can I solve using post copy live migration ??? 
> > > > > > > > > > > > > > > > > > so this behavior has always been how nova worked but rocky > the > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html > > > > > > > > > spec intoduced teh ablity to shorten the outage by pre > biding the > > > > > > > > > > > > > > port and > > > > > > > > > activating it when > > > > > > > > > the vm is resumed on the destiation host before we get to > pos > > > > > > > > live > > > > > > > > > > > > > > migrate. > > > > > > > > > > > > > > > > > > this reduces the outage time although i cant be fully > elimiated > > > > > > > > as > > > > > > > > > > > > > > some > > > > > > > > > level of packet loss is > > > > > > > > > always expected when you live migrate. > > > > > > > > > > > > > > > > > > so yes enabliy post copy live migration should help but be > aware > > > > > > > > that > > > > > > > > > > > > > > if a > > > > > > > > > network partion happens > > > > > > > > > during a post copy live migration the vm will crash and > need to > > > > > > > > be > > > > > > > > > restarted. > > > > > > > > > it is generally safe to use and will imporve the migration > > > > > > > > performace > > > > > > > > > > > > > > but > > > > > > > > > unlike pre copy migration if > > > > > > > > > the guess resumes on the dest and the mempry page has not > been > > > > > > > > copied > > > > > > > > > > > > > > yet > > > > > > > > > then it must wait for it to be copied > > > > > > > > > and retrive it form the souce host. if the connection too > the > > > > > > > > souce > > > > > > > > > > > > > > host > > > > > > > > > is intrupted then the vm cant > > > > > > > > > do that and the migration will fail and the instance will > crash. > > > > > > > > if > > > > > > > > > > > > > > you > > > > > > > > > are using precopy migration > > > > > > > > > if there is a network partaion during the migration the > > > > > > > > migration will > > > > > > > > > fail but the instance will continue > > > > > > > > > to run on the source host. > > > > > > > > > > > > > > > > > > so while i would still recommend using it, i it just good > to be > > > > > > > > aware > > > > > > > > > > > > > > of > > > > > > > > > that behavior change. > > > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney < > smooney at redhat.com> ha > > > > > > > > > > > > > > scritto: > > > > > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano > wrote: > > > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm > migrate > > > > > > > > > > > > > > from one > > > > > > > > > > > > > > > > > > node > > > > > > > > > > > > to another I cannot ping it for several minutes. If > in the > > > > > > > > vm I > > > > > > > > > > > > > > put a > > > > > > > > > > > > script that ping the gateway continously, the live > > > > > > > > migration > > > > > > > > > > > > > > works > > > > > > > > > > > > > > > > > > fine > > > > > > > > > > > > > > > > > > > > > > and > > > > > > > > > > > > I can ping it. Why this happens ? I read something > about > > > > > > > > > > > > > > gratuitous > > > > > > > > > > > > > > > > > > arp. 
> > > > > > > > > > > > > > > > > > > > > > qemu does not use gratuitous arp but instead uses an > older > > > > > > > > > > > > > > protocal > > > > > > > > > > > > > > > > > > called > > > > > > > > > > > RARP > > > > > > > > > > > to do mac address learning. > > > > > > > > > > > > > > > > > > > > > > what release of openstack are you using. and are you > using > > > > > > > > > > > > > > iptables > > > > > > > > > > > firewall of openvswitch firewall. > > > > > > > > > > > > > > > > > > > > > > if you are using openvswtich there is is nothing we > can do > > > > > > > > until > > > > > > > > > > > > > > we > > > > > > > > > > > finally delegate vif pluging to os-vif. > > > > > > > > > > > currently libvirt handels interface plugging for > kernel ovs > > > > > > > > when > > > > > > > > > > > > > > using > > > > > > > > > > > > > > > > > > the > > > > > > > > > > > openvswitch firewall driver > > > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress > that > > > > > > > > but it > > > > > > > > > > > > > > and > > > > > > > > > > > > > > > > > > the > > > > > > > > > > > neutron patch are > > > > > > > > > > > https://review.opendev.org/#/c/640258 rather out > dated. > > > > > > > > while > > > > > > > > > > > > > > libvirt > > > > > > > > > > > > > > > > > > is > > > > > > > > > > > pluging the vif there will always be > > > > > > > > > > > a race condition where the RARP packets sent by qemu > and > > > > > > > > then mac > > > > > > > > > > > > > > > > > > learning > > > > > > > > > > > packets will be lost. > > > > > > > > > > > > > > > > > > > > > > if you are using the iptables firewall and you have > opnestack > > > > > > > > > > > > > > rock or > > > > > > > > > > > later then if you enable post copy live migration > > > > > > > > > > > it should reduce the downtime. in this conficution we > do not > > > > > > > > have > > > > > > > > > > > > > > the > > > > > > > > > > > > > > > > > > race > > > > > > > > > > > betwen neutron and libvirt so the rarp > > > > > > > > > > > packets should not be lost. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Please, help me ? > > > > > > > > > > > > Any workaround , please ? > > > > > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensrloo at gmail.com Mon May 10 15:41:53 2021 From: opensrloo at gmail.com (Ruby Loo) Date: Mon, 10 May 2021 11:41:53 -0400 Subject: [keystone] release notes for Victoria & wallaby? In-Reply-To: <20210505171508.hzo4guulzksk35zi@yuggoth.org> References: <20210505171508.hzo4guulzksk35zi@yuggoth.org> Message-ID: Thanks Jeremy! I got the info I wanted :) FWIW, I rebased https://review.opendev.org/783450 so if folks would review/approve before xena is released, there shouldn't be another conflict :) --ruby On Wed, May 5, 2021 at 1:18 PM Jeremy Stanley wrote: > On 2021-05-05 12:53:42 -0400 (-0400), Ruby Loo wrote: > > Where might I find out what changed in keystone in the wallaby release? I > > don't see any release notes for wallaby (or victoria) here: > > https://docs.openstack.org/releasenotes/keystone/. 
> > You can build them yourself after applying these, which the Keystone > reviewers seem to have missed approving until your message: > > https://review.opendev.org/754296 > https://review.opendev.org/783450 > > Or check again tomorrow after those merge and the publication jobs > are rerun for them. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Mon May 10 15:55:22 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 10 May 2021 18:55:22 +0300 Subject: [TripleO] next irc meeting Tuesday 11 May @ 1400 UTC in #tripleo Message-ID: Reminder that the next TripleO irc meeting is: ** Tuesday 11 May 1400 UTC in freenode irc channel: #tripleo ** ** https://wiki.openstack.org/wiki/Meetings/TripleO ** ** https://etherpad.opendev.org/p/tripleo-meeting-items ** Please add anything you want to highlight at https://etherpad.opendev.org/p/tripleo-meeting-items This can be recently completed things, ongoing review requests, blocking issues, or anything else tripleo you want to share. Our last meeting was on Apr 27 - you can find the logs there http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-04-27-14.00.html Hope you can make it, regards, marios From fungi at yuggoth.org Mon May 10 16:00:42 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 May 2021 16:00:42 +0000 Subject: [glance][security-sig] Please revisit your open vulnerability report Message-ID: <20210510160042.chmujoojqj5g3psr@yuggoth.org> Please help the OpenStack Vulnerability Management Team by taking a look at the following report: default paste_deploy.flavor is none, but config file text implies it is 'keystone' (was: non-admin users can see all tenants' images even when image is private) https://launchpad.net/bugs/1799588 Can it be exploited by a nefarious actor, and if so, how? Is it likely to be fixable in all our supported stable branches, respecting stable backport policy? What deployment configurations and options might determine whether a particular installation is susceptible? This is the sort of feedback we depend on to make determinations regarding whether and how to keep the public notified, so they can make informed decisions. Thanks for doing your part to keep our users safe! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon May 10 16:02:01 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 May 2021 16:02:01 +0000 Subject: [horizon][security-sig] Please revisit your open vulnerability reports Message-ID: <20210510160201.6yae6jnxij4zfnv3@yuggoth.org> Please help the OpenStack Vulnerability Management Team by taking a look at the following reports: using glance v2 api does not remove temporary files https://launchpad.net/bugs/1674846 DOS : API_RESULT_LIMIT does not work for swift objects https://launchpad.net/bugs/1724598 XSS in adding JavaScript into the ‘Subnet Name’ field https://launchpad.net/bugs/1892848 Can these be exploited by a nefarious actor, and if so, how? Are they likely to be fixable in all our supported stable branches, respecting stable backport policy? What deployment configurations and options might determine whether a particular installation is susceptible? This is the sort of feedback we depend on to make determinations regarding whether and how to keep the public notified, so they can make informed decisions. 
Thanks for doing your part to keep our users safe! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon May 10 16:02:50 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 May 2021 16:02:50 +0000 Subject: [keystone][security-sig] Please revisit your open vulnerability report Message-ID: <20210510160250.fmftoozu47mzzwek@yuggoth.org> Please help the OpenStack Vulnerability Management Team by taking a look at the following report: PCI-DSS account lock out DoS and account UUID lookup oracle https://launchpad.net/bugs/1688137 Can it be exploited by a nefarious actor, and if so, how? Is it likely to be fixable in all our supported stable branches, respecting stable backport policy? What deployment configurations and options might determine whether a particular installation is susceptible? This is the sort of feedback we depend on to make determinations regarding whether and how to keep the public notified, so they can make informed decisions. Thanks for doing your part to keep our users safe! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon May 10 16:04:25 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 May 2021 16:04:25 +0000 Subject: [neutron][security-sig] Please revisit your open vulnerability reports Message-ID: <20210510160425.4ac3473ecenfwqny@yuggoth.org> Please help the OpenStack Vulnerability Management Team by taking a look at the following reports: Anti-spoofing bypass using Open vSwitch (CVE-2021-20267) https://launchpad.net/bugs/1902917 Neutron RBAC not working for multiple extensions https://launchpad.net/bugs/1784259 tenant isolation is bypassed if port admin-state-up=false https://launchpad.net/bugs/1798904 non-IP ethertypes are permitted with iptables_hybrid firewall driver https://launchpad.net/bugs/1838473 RA Leak on tenant network https://launchpad.net/bugs/1844712 Anti-spoofing bypass https://launchpad.net/bugs/1884341 Can these be exploited by a nefarious actor, and if so, how? Are they likely to be fixable in all our supported stable branches, respecting stable backport policy? What deployment configurations and options might determine whether a particular installation is susceptible? This is the sort of feedback we depend on to make determinations regarding whether and how to keep the public notified, so they can make informed decisions. Thanks for doing your part to keep our users safe! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon May 10 16:05:31 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 May 2021 16:05:31 +0000 Subject: [nova][security-sig] Please revisit your open vulnerability report Message-ID: <20210510160531.sdufshwstx2st2e3@yuggoth.org> Please help the OpenStack Vulnerability Management Team by taking a look at the following report: tenant isolation is bypassed if port admin-state-up=false https://launchpad.net/bugs/1798904 Can it be exploited by a nefarious actor, and if so, how? Is it likely to be fixable in all our supported stable branches, respecting stable backport policy? 
What deployment configurations and options might determine whether a particular installation is susceptible? This is the sort of feedback we depend on to make determinations regarding whether and how to keep the public notified, so they can make informed decisions. Thanks for doing your part to keep our users safe! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon May 10 16:06:36 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 May 2021 16:06:36 +0000 Subject: [swift][security-sig] Please revisit your open vulnerability report Message-ID: <20210510160636.odtmnfaseqpyzku6@yuggoth.org> Please help the OpenStack Vulnerability Management Team by taking a look at the following report: Swift tempurl middleware reveals signatures in the logfiles (CVE-2017-8761) https://launchpad.net/bugs/1685798 Can it be exploited by a nefarious actor, and if so, how? Is it likely to be fixable in all our supported stable branches, respecting stable backport policy? What deployment configurations and options might determine whether a particular installation is susceptible? This is the sort of feedback we depend on to make determinations regarding whether and how to keep the public notified, so they can make informed decisions. Thanks for doing your part to keep our users safe! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From artem.goncharov at gmail.com Mon May 10 17:08:25 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Mon, 10 May 2021 19:08:25 +0200 Subject: [sdk]: identity service if get_application_credential method could use user name In-Reply-To: References: <308b555bdd500119c9f17535a50c0649@uvic.ca> Message-ID: <9EC48E81-B822-4232-8010-D14C91EE9C49@gmail.com> > On 10. May 2021, at 18:47, dmeng wrote: > > Good morning, > > Thanks for replying back to me. I tried to use the fine_user to get the user id by username, but it seems like not all the user can use the find_user method. > > If I do: find_user = conn.identity.find_user(name_or_id='catherine'), it will show me that "You are not authorized to perform the requested action: identity:get_user". > > If I do: find_user = conn.identity.find_user(name_or_id='my_user_Id'), then it works fine. > > But I would like to use the username to find the user and get the id, so I'm not sure why in this case find_user only work with id not name. > > > Depending on the configuration of your Keystone (what is already a default) and the account privileges you use (admin, domain_admin, token scope) you may be allowed or not allowed to search/list another users. Normally this is only possible in the domain scope, so maybe you would need to use account with more powers. > Thanks and have a great day! > > Catherine > > > > > > > > > On 2021-05-08 03:01, Artem Goncharov wrote: > >> Hi >> >> >>> We are wondering if we could use the user name to get it instead of the user id? If I do get_application_credential(user='catherine', application_credential = 'app_cred_id'), then it will show me an error that "You are not authorized to perform the requested action: identity:get_application_credential". Is there any method that no need user info, can just use the application credential id to get the expiration date? 
We also didn't find any documentation about the application credential in openstacksdk identity service docs. >> >> You can use: >> >> user = conn.identity.find_user(name_or_id = 'my_user') >> ac = conn.identity.find_application_credential(user=user, name_or_id='app_cred') >> >> Regards, >> Artem > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmeng at uvic.ca Mon May 10 16:47:41 2021 From: dmeng at uvic.ca (dmeng) Date: Mon, 10 May 2021 09:47:41 -0700 Subject: [sdk]: identity service if get_application_credential method could use user name In-Reply-To: References: <308b555bdd500119c9f17535a50c0649@uvic.ca> Message-ID: Good morning, Thanks for replying back to me. I tried to use the fine_user to get the user id by username, but it seems like not all the user can use the find_user method. If I do: find_user = conn.identity.find_user(name_or_id='catherine'), it will show me that "You are not authorized to perform the requested action: identity:get_user". If I do: find_user = conn.identity.find_user(name_or_id='my_user_Id'), then it works fine. But I would like to use the username to find the user and get the id, so I'm not sure why in this case find_user only work with id not name. Thanks and have a great day! Catherine On 2021-05-08 03:01, Artem Goncharov wrote: > Hi > >> We are wondering if we could use the user name to get it instead of the user id? If I do get_application_credential(user='catherine', application_credential = 'app_cred_id'), then it will show me an error that "You are not authorized to perform the requested action: identity:get_application_credential". Is there any method that no need user info, can just use the application credential id to get the expiration date? We also didn't find any documentation about the application credential in openstacksdk identity service docs. > > You can use: > > user = conn.identity.find_user(name_or_id = 'my_user') > ac = conn.identity.find_application_credential(user=user, name_or_id='app_cred') > > Regards, > Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmeng at uvic.ca Mon May 10 17:27:55 2021 From: dmeng at uvic.ca (dmeng) Date: Mon, 10 May 2021 10:27:55 -0700 Subject: [sdk]: identity service if get_application_credential method could use user name In-Reply-To: <9EC48E81-B822-4232-8010-D14C91EE9C49@gmail.com> References: <308b555bdd500119c9f17535a50c0649@uvic.ca> <9EC48E81-B822-4232-8010-D14C91EE9C49@gmail.com> Message-ID: Thanks Artem, just wondering how about if I use my own identity to get connected, and try to find the user id of myself? Like: auth = v3.ApplicationCredential( auth_url="my_auth_url", application_credential_secret="my_cred_secret", application_credential_id="my_cred_id", username='catherine', ) sess = session.Session(auth=auth) conn = connection.Connection( session=sess, region_name='Victoria', identity_api_version='3', ) # tested the above connection works well find_user = conn.identity.find_user(name_or_id='catherine') This returns me that "You are not authorized to perform the requested action: identity:get_user"; but conn.identity.find_user(name_or_id='my_user_Id') works fine. Think in the openstack cli tools, I couldn't show other users, but I could use my own username to list the info of myself, "/usr/local/bin/openstack user show catherine", this works. Thanks for your help, Catherine On 2021-05-10 10:08, Artem Goncharov wrote: >> On 10. May 2021, at 18:47, dmeng wrote: >> >> Good morning, >> >> Thanks for replying back to me. 
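Just for context, I am connecting to the cloud with an application credential before calling the identity service, roughly like below (the auth_url and credential values are placeholders for the real ones from my environment, and the imports are the usual keystoneauth1/openstacksdk ones):

from keystoneauth1.identity import v3
from keystoneauth1 import session
from openstack import connection

auth = v3.ApplicationCredential(
    auth_url="my_auth_url",
    application_credential_secret="my_cred_secret",
    application_credential_id="my_cred_id",
    username='catherine',
)
sess = session.Session(auth=auth)
conn = connection.Connection(
    session=sess,
    region_name='Victoria',
    identity_api_version='3',
)

The connection itself works fine for other calls.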
I tried to use the fine_user to get the user id by username, but it seems like not all the user can use the find_user method. >> >> If I do: find_user = conn.identity.find_user(name_or_id='catherine'), it will show me that "You are not authorized to perform the requested action: identity:get_user". >> >> If I do: find_user = conn.identity.find_user(name_or_id='my_user_Id'), then it works fine. >> >> But I would like to use the username to find the user and get the id, so I'm not sure why in this case find_user only work with id not name. > > Depending on the configuration of your Keystone (what is already a default) and the account privileges you use (admin, domain_admin, token scope) you may be allowed or not allowed to search/list another users. Normally this is only possible in the domain scope, so maybe you would need to use account with more powers. > > Thanks and have a great day! > > Catherine > > On 2021-05-08 03:01, Artem Goncharov wrote: Hi > > We are wondering if we could use the user name to get it instead of the user id? If I do get_application_credential(user='catherine', application_credential = 'app_cred_id'), then it will show me an error that "You are not authorized to perform the requested action: identity:get_application_credential". Is there any method that no need user info, can just use the application credential id to get the expiration date? We also didn't find any documentation about the application credential in openstacksdk identity service docs. > > You can use: > > user = conn.identity.find_user(name_or_id = 'my_user') > ac = conn.identity.find_application_credential(user=user, name_or_id='app_cred') > > Regards, > Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon May 10 17:34:40 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 10 May 2021 10:34:40 -0700 Subject: =?UTF-8?Q?[all][infra][qa]_Retiring_Logstash, _Elasticsearch, _subunit2sql?= =?UTF-8?Q?,_and_Health?= Message-ID: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> Hello everyone, Xenial has recently reached the end of its life. Our logstash+kibana+elasticsearch and subunit2sql+health data crunching services all run on Xenial. Even without the distro platform EOL concerns these services are growing old and haven't received the care they need to keep running reliably. Additionally these services represent a large portion of our resource consumption: * 6 x 16 vcpu + 60GB RAM + 1TB disk Elasticsearch servers * 20 x 4 vcpu + 4GB RAM logstash-worker servers * 1 x 2 vcpu + 2GB RAM logstash/kibana central server * 2 x 8 vcpu + 8GB RAM subunit-worker servers * 64GB RAM + 500GB disk subunit2sql trove db server * 1 x 4 vcpu + 4GB RAM health server To put things in perspective, they account for more than a quarter of our control plane servers, occupying over a third of our block storage and in excess of half the total memory footprint. The OpenDev/OpenStack Infra team(s) don't seem to have the time available currently to do the major lifting required to bring these services up to date. I would like to propose that we simply turn them off. All of these services operate off of public data that will not be going away (specifically job log content). If others are interested in taking this on they can hook into this data and run their own processing pipelines. I am sure not everyone will be happy with this proposal. I get it. I came up with the idea for the elasticsearch job log processing way back at the San Diego summit. 
I spent many many many hours since working to get it up and running and to keep it running. But pragmatism means that my efforts and the team's efforts are better spent elsewhere. I am happy to hear feedback on this. Thank you for your time. Clark From whayutin at redhat.com Mon May 10 18:15:00 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 10 May 2021 12:15:00 -0600 Subject: [tripleo][ci] all c8 master / wallaby blocked In-Reply-To: References: Message-ID: On Mon, May 10, 2021 at 6:52 AM Wesley Hayutin wrote: > https://bugs.launchpad.net/tripleo/+bug/1927952 > > https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-8-content-provider > > Issues atm w/ quay.io, we are making sure our failover is working > properly atm. RDO was missing the latest ceph containers. > > Thanks > Looks like things are coming back online for ceph. We're also reviewing our failover mechanisms. https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/782362/ Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From vuk.gojnic at gmail.com Mon May 10 18:43:23 2021 From: vuk.gojnic at gmail.com (Vuk Gojnic) Date: Mon, 10 May 2021 20:43:23 +0200 Subject: [ironic] IPA image does not want to boot with UEFI In-Reply-To: References: Message-ID: I finally got some error messages. Mars Toktonaliev suggested following: "You could also change virtual serial port in BIOS to be COM1, and then try to SSH to iLO, run VSP command and monitor serial port output: chances are high that any errors will be dumped there." I did that and following happens: 1. I get to Grub 2. I load the kernel with "linux /vmlinuz nofb nomodeset vga=normal console=tty0 console=ttyS0,115200n8" 3. I try to load initrd. It blocks in the loop that throws following error in the loop (all in red letters - I suspect it is where red cursor is comming from :D ): X64 Exception Type 0x03 - Breakpoint Exception RCX=00000000454F9110 DX=0000000000000020 R8=000000006FA6E168 R9=0000000000000000 RSP=00000000454F92D8 BP=000000003D2E4B40 AX=00000000D176B0E2 BX=000000003F3855C0 R10=FFFFFFFFFFFFFFFF 11=000000003D2E5080 12=000000003D2E4FC0 13=000000003D2E4B00 R14=0000000000020004 15=000000006FA677FE SI=00000000000000FF DI=000000003D2E4BE0 CR2=0000000000000000 CR3=00000000454FA000 CR0=80000013 CR4=00000668 CR8=00000000 CS=00000038 DS=00000030 SS=00000030 ES=00000030 RFLAGS=00200297 MSR: 0x1D9 = 00004801, 0x345=000033C5, 0x1C9=00000001 LBRs From To From To 01h 0000000000000006->0000000075B2406F 000000006FA6BB6B->000000006FA6BC88 03h 000000006FA6BF8D->000000006FA6BB42 000000006FA6BC8F->000000006FA6BF89 05h 000000006FA6BB6B->000000006FA6BC88 000000006FA6BF8D->000000006FA6BB42 07h 000000006FA6BC8F->000000006FA6BF89 000000006FA6BB6B->000000006FA6BC88 09h 000000006FA6BF8D->000000006FA6BB42 000000006FA6BC8F->000000006FA6BF89 0Bh 000000006FA6BB6B->000000006FA6BC88 000000006FA6BF8D->000000006FA6BB42 0Dh 000000006FA6BC8F->000000006FA6BF89 000000006FA6BB6B->000000006FA6BC88 0Fh 000000006FA6BF8D->000000006FA6BB42 0000000075B2407A->0000000078AB5660 CALL ImageBase ImageName+Offset 00h 0000000000000000 No Image Information CALL ImageBase ImageName+Offset STACK 00h 04h 08h 0Ch 10h 14h 18h 1Ch RSP+00h 3F478E6D 3D2E4BE0 3D2E4FC0 00000001 3F37F208 6FA6A617 3F37F208 00000000 RSP+20h 3F37F2C0 3F37F208 3F466A96 6FA6B52E 6FA67876 3F37F208 3F37F208 3F37F180 RSP+40h 3F469F11 00000000 00000000 00000000 6FA6A2B7 3F44120B 6FA67851 3F441B6C RSP+60h 00000002 00000000 00000000 3F4354B2 6FA67851 3F43FAD4 2F89A000 3F440ACF RSP+80h 00000000 
3F43FAD4 6FA67851 3F43EE6E 3F4337B7 3F37EDE0 6FA6AB8A 3F433814 RSP+A0h 3F4338A6 6FA6CF13 3F380240 6FA6A0FD 6FA6C8F0 3F4FFFDA 3F3802C0 6FA6B149 RSP+C0h 3F380540 74002D18 00000000 74012640 6FA63017 6FBEDDE8 453C4D9E 453C3380 RSP+E0h 74006A18 73A51018 00000000 6F8FED98 6FA62000 00001000 798FE018 7400C398 On Mon, May 10, 2021 at 2:28 PM Vuk Gojnic wrote: > > Hi Julia,hello everybody, > > I have finally got some time to test it further and found some interesting things (see below). I have also got some good tips and support in the IRC. > > I have changed the defaults and set both boot parameters to "uefi": > [deploy] > default_boot_mode = uefi > > [ilo] > default_boot_mode = uefi > > It is detecting the mode correctly and properly configures server to boot. See some latest extract from logs related to "openstack baremetal node provide" operation: > > 2021-05-10 08:45:28.233 561784 INFO ironic.conductor.task_manager [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Node ca718c74-77a6-46df-8b44-6a83db6a0ebe moved to provision state "cleaning" from state "manageable"; target provision state is "available" > 2021-05-10 08:45:32.235 561784 INFO ironic.drivers.modules.ilo.power [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] The node ca718c74-77a6-46df-8b44-6a83db6a0ebe operation of 'power off' is completed in 2 seconds. > 2021-05-10 08:45:32.255 561784 INFO ironic.conductor.utils [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Successfully set node ca718c74-77a6-46df-8b44-6a83db6a0ebe power state to power off by power off. > 2021-05-10 08:45:34.056 561784 INFO ironic.drivers.modules.ilo.common [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Node ca718c74-77a6-46df-8b44-6a83db6a0ebe pending boot mode is uefi. > 2021-05-10 08:45:35.470 561784 INFO ironic.drivers.modules.ilo.common [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Set the node ca718c74-77a6-46df-8b44-6a83db6a0ebe to boot from URL https://10.23.137.234/tmp-images/ilo/boot-ca718c74-77a6-46df-8b44-6a83db6a0ebe.iso?filename=tmpi262v2zy.iso successfully. > 2021-05-10 08:45:43.485 561784 WARNING oslo.service.loopingcall [-] Function 'ironic.drivers.modules.ilo.power._wait_for_state_change.._wait' run outlasted interval by 1.32 sec > 2021-05-10 08:45:44.857 561784 INFO ironic.drivers.modules.ilo.power [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] The node ca718c74-77a6-46df-8b44-6a83db6a0ebe operation of 'power on' is completed in 4 seconds. > 2021-05-10 08:45:44.872 561784 INFO ironic.conductor.utils [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Successfully set node ca718c74-77a6-46df-8b44-6a83db6a0ebe power state to power on by rebooting. > 2021-05-10 08:45:44.884 561784 INFO ironic.conductor.task_manager [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Node ca718c74-77a6-46df-8b44-6a83db6a0ebe moved to provision state "clean wait" from state "cleaning"; target provision state is "available" > > Everyhing goes ok and I come to Grub2. > > I can load the kernel with: > grub> linux /vmlinuz > > However when I try to load initrd with: > grub> initrd /initrd > > It first waits and goes to black screen with red cursor which is frozen. > > I have tried same procedure with standard ubuntu kernel and initrd from http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/current/images/hd-media/ and it works correctly and starts the installer. 
> > I went to try the combinations: > - kernel from Ubuntu server + initrd from custom IPA: this was failing the same way as described above > - kernel from IPA + initrd from Ubuntu server: this was working and starting the Ubuntu installer. > - kernel from Ubuntu servetr + initrd from IPA download server (ipa-centos8-stable-victoria.initramfs): failing same as above. > > I am pretty lost with what is going on :( Does anyone have more ideas? > > -Vuk > > On Thu, Apr 1, 2021 at 8:42 PM Julia Kreger wrote: >> >> Adding the list back and trimming the message. Replies in-band. >> >> Well, that is good that the server is not signed, nor other esp images >> are not working. >> On Thu, Apr 1, 2021 at 11:20 AM Vuk Gojnic wrote: >> > >> > Hey Julia, >> > >> > Thanks for asking. I have tried with several ESP image options with same effect (one taken from Ubuntu Live ISO that boots on that node, another downloaded and third made with grub tools). None of them was signed. >> >> Interesting. At least it is consistent! Have you tried to pull down >> the iso image and take it apart to verify it is UEFI bootable against >> a VM or another physical machine? >> >> I'm wondering if you need both uefi parameters set. You definitely >> don't have properties['capabilities']['boot_mode'] set which is used >> or... maybe a better word to use is drawn in for asserting defaults, >> but you do have the deploy_boot_mode setting set. >> >> I guess a quick manual sanity check of the actual resulting iso image >> is going to be critical. Debug logging may also be useful, and I'm >> only thinking that because there is no logging from the generation of >> the image. >> >> > >> > The server is not in UEFI secure boot mode. >> >> Interesting, sure sounds like it is based on your original message. :( >> >> > Btw. I will be on holidays for next week so I might not be able to follow up on this discussion before Apr 12th. >> >> No worries, just ping us on irc.freenode.net in #openstack-ironic if a >> reply on the mailing list doesn't grab our attention. >> >> > >> > Bests, >> > Vuk >> > >> > On Thu, Apr 1, 2021 at 4:20 PM Julia Kreger wrote: >> >> >> >> Greetings, >> >> >> >> Two questions: >> >> 1) Are the ESP image contents signed, or are they built using one of >> >> the grub commands? >> >> 2) Is the machine set to enforce secure boot at this time? >> >> >> >> >> [trim] From geguileo at redhat.com Mon May 10 19:41:16 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 10 May 2021 21:41:16 +0200 Subject: [cinder] Multiattach volume mount/attach limit In-Reply-To: References: Message-ID: <20210510194116.qqtfeigsnft5lrn5@localhost> On 06/05, Adam Tomas wrote: > Hi, > Is there any way to limit the number of possible mounts of multiattach volume? > > Best regards > Adam Tomas > Hi, This is currently not possible, and I believe there is no ongoing effort to add this feature either. Cheers, Gorka. From juliaashleykreger at gmail.com Mon May 10 20:54:21 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 10 May 2021 16:54:21 -0400 Subject: [ironic] IPA image does not want to boot with UEFI In-Reply-To: References: Message-ID: My *guess* is that Secure Boot is being enforced and the asset being loaded may not be signed, but I've never seen such issue a stacktrace. Just a hard halt with a red screen on HPE gear. Anyway, check the bios settings and try disabling secure boot enforcement. -Julia On Mon, May 10, 2021, 14:43 Vuk Gojnic wrote: > I finally got some error messages. 
Mars Toktonaliev suggested following: > > "You could also change virtual serial port in BIOS to be COM1, and > then try to SSH to iLO, run VSP command and monitor serial port > output: chances are high that any errors will be dumped there." > > I did that and following happens: > 1. I get to Grub > 2. I load the kernel with "linux /vmlinuz nofb nomodeset vga=normal > console=tty0 console=ttyS0,115200n8" > 3. I try to load initrd. It blocks in the loop that throws following > error in the loop (all in red letters - I suspect it is where red > cursor is comming from :D ): > > X64 Exception Type 0x03 - Breakpoint Exception > > RCX=00000000454F9110 DX=0000000000000020 R8=000000006FA6E168 > R9=0000000000000000 > RSP=00000000454F92D8 BP=000000003D2E4B40 AX=00000000D176B0E2 > BX=000000003F3855C0 > R10=FFFFFFFFFFFFFFFF 11=000000003D2E5080 12=000000003D2E4FC0 > 13=000000003D2E4B00 > R14=0000000000020004 15=000000006FA677FE SI=00000000000000FF > DI=000000003D2E4BE0 > CR2=0000000000000000 CR3=00000000454FA000 CR0=80000013 CR4=00000668 > CR8=00000000 > CS=00000038 DS=00000030 SS=00000030 ES=00000030 RFLAGS=00200297 > MSR: 0x1D9 = 00004801, 0x345=000033C5, 0x1C9=00000001 > > LBRs From To From To > 01h 0000000000000006->0000000075B2406F 000000006FA6BB6B->000000006FA6BC88 > 03h 000000006FA6BF8D->000000006FA6BB42 000000006FA6BC8F->000000006FA6BF89 > 05h 000000006FA6BB6B->000000006FA6BC88 000000006FA6BF8D->000000006FA6BB42 > 07h 000000006FA6BC8F->000000006FA6BF89 000000006FA6BB6B->000000006FA6BC88 > 09h 000000006FA6BF8D->000000006FA6BB42 000000006FA6BC8F->000000006FA6BF89 > 0Bh 000000006FA6BB6B->000000006FA6BC88 000000006FA6BF8D->000000006FA6BB42 > 0Dh 000000006FA6BC8F->000000006FA6BF89 000000006FA6BB6B->000000006FA6BC88 > 0Fh 000000006FA6BF8D->000000006FA6BB42 0000000075B2407A->0000000078AB5660 > > CALL ImageBase ImageName+Offset > 00h 0000000000000000 No Image Information > > CALL ImageBase ImageName+Offset > > > > > STACK 00h 04h 08h 0Ch 10h 14h 18h 1Ch > RSP+00h 3F478E6D 3D2E4BE0 3D2E4FC0 00000001 3F37F208 6FA6A617 3F37F208 > 00000000 > RSP+20h 3F37F2C0 3F37F208 3F466A96 6FA6B52E 6FA67876 3F37F208 3F37F208 > 3F37F180 > RSP+40h 3F469F11 00000000 00000000 00000000 6FA6A2B7 3F44120B 6FA67851 > 3F441B6C > RSP+60h 00000002 00000000 00000000 3F4354B2 6FA67851 3F43FAD4 2F89A000 > 3F440ACF > RSP+80h 00000000 3F43FAD4 6FA67851 3F43EE6E 3F4337B7 3F37EDE0 6FA6AB8A > 3F433814 > RSP+A0h 3F4338A6 6FA6CF13 3F380240 6FA6A0FD 6FA6C8F0 3F4FFFDA 3F3802C0 > 6FA6B149 > RSP+C0h 3F380540 74002D18 00000000 74012640 6FA63017 6FBEDDE8 453C4D9E > 453C3380 > RSP+E0h 74006A18 73A51018 00000000 6F8FED98 6FA62000 00001000 798FE018 > 7400C398 > > > On Mon, May 10, 2021 at 2:28 PM Vuk Gojnic wrote: > > > > Hi Julia,hello everybody, > > > > I have finally got some time to test it further and found some > interesting things (see below). I have also got some good tips and support > in the IRC. > > > > I have changed the defaults and set both boot parameters to "uefi": > > [deploy] > > default_boot_mode = uefi > > > > [ilo] > > default_boot_mode = uefi > > > > It is detecting the mode correctly and properly configures server to > boot. 
See some latest extract from logs related to "openstack baremetal > node provide" operation: > > > > 2021-05-10 08:45:28.233 561784 INFO ironic.conductor.task_manager > [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Node > ca718c74-77a6-46df-8b44-6a83db6a0ebe moved to provision state "cleaning" > from state "manageable"; target provision state is "available" > > 2021-05-10 08:45:32.235 561784 INFO ironic.drivers.modules.ilo.power > [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] The node > ca718c74-77a6-46df-8b44-6a83db6a0ebe operation of 'power off' is completed > in 2 seconds. > > 2021-05-10 08:45:32.255 561784 INFO ironic.conductor.utils > [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Successfully set node > ca718c74-77a6-46df-8b44-6a83db6a0ebe power state to power off by power off. > > 2021-05-10 08:45:34.056 561784 INFO ironic.drivers.modules.ilo.common > [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Node > ca718c74-77a6-46df-8b44-6a83db6a0ebe pending boot mode is uefi. > > 2021-05-10 08:45:35.470 561784 INFO ironic.drivers.modules.ilo.common > [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Set the node > ca718c74-77a6-46df-8b44-6a83db6a0ebe to boot from URL > https://10.23.137.234/tmp-images/ilo/boot-ca718c74-77a6-46df-8b44-6a83db6a0ebe.iso?filename=tmpi262v2zy.iso > successfully. > > 2021-05-10 08:45:43.485 561784 WARNING oslo.service.loopingcall [-] > Function > 'ironic.drivers.modules.ilo.power._wait_for_state_change.._wait' > run outlasted interval by 1.32 sec > > 2021-05-10 08:45:44.857 561784 INFO ironic.drivers.modules.ilo.power > [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] The node > ca718c74-77a6-46df-8b44-6a83db6a0ebe operation of 'power on' is completed > in 4 seconds. > > 2021-05-10 08:45:44.872 561784 INFO ironic.conductor.utils > [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Successfully set node > ca718c74-77a6-46df-8b44-6a83db6a0ebe power state to power on by rebooting. > > 2021-05-10 08:45:44.884 561784 INFO ironic.conductor.task_manager > [req-24fe55db-c252-471c-9639-9ad43a15137b - - - - -] Node > ca718c74-77a6-46df-8b44-6a83db6a0ebe moved to provision state "clean wait" > from state "cleaning"; target provision state is "available" > > > > Everyhing goes ok and I come to Grub2. > > > > I can load the kernel with: > > grub> linux /vmlinuz > > > > However when I try to load initrd with: > > grub> initrd /initrd > > > > It first waits and goes to black screen with red cursor which is frozen. > > > > I have tried same procedure with standard ubuntu kernel and initrd from > http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/current/images/hd-media/ > and it works correctly and starts the installer. > > > > I went to try the combinations: > > - kernel from Ubuntu server + initrd from custom IPA: this was failing > the same way as described above > > - kernel from IPA + initrd from Ubuntu server: this was working and > starting the Ubuntu installer. > > - kernel from Ubuntu servetr + initrd from IPA download server > (ipa-centos8-stable-victoria.initramfs): failing same as above. > > > > I am pretty lost with what is going on :( Does anyone have more ideas? > > > > -Vuk > > > > On Thu, Apr 1, 2021 at 8:42 PM Julia Kreger > wrote: > >> > >> Adding the list back and trimming the message. Replies in-band. > >> > >> Well, that is good that the server is not signed, nor other esp images > >> are not working. 
> >> On Thu, Apr 1, 2021 at 11:20 AM Vuk Gojnic > wrote: > >> > > >> > Hey Julia, > >> > > >> > Thanks for asking. I have tried with several ESP image options with > same effect (one taken from Ubuntu Live ISO that boots on that node, > another downloaded and third made with grub tools). None of them was signed. > >> > >> Interesting. At least it is consistent! Have you tried to pull down > >> the iso image and take it apart to verify it is UEFI bootable against > >> a VM or another physical machine? > >> > >> I'm wondering if you need both uefi parameters set. You definitely > >> don't have properties['capabilities']['boot_mode'] set which is used > >> or... maybe a better word to use is drawn in for asserting defaults, > >> but you do have the deploy_boot_mode setting set. > >> > >> I guess a quick manual sanity check of the actual resulting iso image > >> is going to be critical. Debug logging may also be useful, and I'm > >> only thinking that because there is no logging from the generation of > >> the image. > >> > >> > > >> > The server is not in UEFI secure boot mode. > >> > >> Interesting, sure sounds like it is based on your original message. :( > >> > >> > Btw. I will be on holidays for next week so I might not be able to > follow up on this discussion before Apr 12th. > >> > >> No worries, just ping us on irc.freenode.net in #openstack-ironic if a > >> reply on the mailing list doesn't grab our attention. > >> > >> > > >> > Bests, > >> > Vuk > >> > > >> > On Thu, Apr 1, 2021 at 4:20 PM Julia Kreger < > juliaashleykreger at gmail.com> wrote: > >> >> > >> >> Greetings, > >> >> > >> >> Two questions: > >> >> 1) Are the ESP image contents signed, or are they built using one of > >> >> the grub commands? > >> >> 2) Is the machine set to enforce secure boot at this time? > >> >> > >> >> > >> [trim] > -------------- next part -------------- An HTML attachment was scrubbed... URL: From song.bao.hua at hisilicon.com Mon May 10 22:15:14 2021 From: song.bao.hua at hisilicon.com (Song Bao Hua (Barry Song)) Date: Mon, 10 May 2021 22:15:14 +0000 Subject: [dev][tc]nova-scheduler: cpu-topology: sub-numa(numa nodes within one CPU socket) aware scheduler Message-ID: <3d38f66a48d44426adeffa91a72c0c79@hisilicon.com> Dears, Historically, both AMD EPYC and Intel Xeon had NUMA nodes within one CPU socket. For Intel haswell/broadwell, they could have two rings to connect multiple CPUs: +----------------------------+ +-------------------------+ +---+ +-----+ +----+ +---+ +---+ +-----+ +----+ +---+ | | | | +---+ +-----+ +----+ ring +---+ +---+ ring +-----+ +----+ +---+ | | | | +---+ +-----+ +----+ +---+ +---+ +-----+ +----+ +---+ | | | | | | | | +----------------------------+ +-------------------------+ Due to the latency to access the memory located in different rings, cluster-on-die(COD) NUMA nodes did exist in haswell/ broadwell. For AMD EPYC, they used to have 4 DIEs in one CPU socket: +-------------------------------------------------+ | AMD EPYC | | +-------------+ +-------------+ | | | | | | | | | die | | die | | | | |-------- | | | | |\ /| | | | +-------------+ -\ /- +-------------+ | | | /-\ | | | +-------|------+/- -+-------|------+ | | | - | | | | | die | | die | | | | |------| | | | | | | | | | +--------------+ +--------------+ | | | +-------------------------------------------------+ These 4 different DIEs could be configured as 4 different NUMA. However, with the development of hardware, Intel and AMD no longer need to be aware of NUMA nodes within single one CPU. 
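(As a side note, the NUMA layout that a host actually exposes to the OS, including the SLIT distance table, can be checked quickly with for example "numactl --hardware" or "lscpu | grep -i numa", assuming the numactl package is installed.)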
Intel has moved to mesh since skylake(server): +-------|-----------------------+ | | | | | | | | | | |-------|-------|-------|-------| | | | | | | | | | | |-------|-------|-------|-------| | | | | | | | | | | |-------|-------|-------|-------| | | | | | | | | | | +-------------------------------+ AMD moved memory and I/O to a separate DIE, thus made the whole CPU become UMA in EPYC II: +---------+ +---------+ +---------+ +---------+ | cpu die | |cpu die | |cpu die | |cpu die | | | | | | | | | +---------+ +---------+ +---------+ +---------+ +------------------------------------------------------+ | | | memory and I/O DIE | | | +------------------------------------------------------+ +---------+ +---------+ +---------+ +---------+ |cpu die | |cpu die | |cpu die | |cpu die | | | | | | | | | +---------+ +---------+ +---------+ +---------+ Skylake, while still having SNC(sub-numa-cluster) within a DIE for a huge mesh network, doesn't really suffer from this kind of internal sub-numa since the latency difference is really minor to enable SNC or not. According to "64-ia-32-architectures-optimization-manual"[1], for a typical 2P system, disabling and enabling SNC will bring minor latency difference for memory access within one CPU: SNC off: Using buffer size of 2000.000MB Measuring idle latencies (in ns)... Numa node Numa node 0 1 0 81.9 153.1 1 153.7 82.0 SNC on: Using buffer size of 2000.000MB Measuring idle latencies (in ns)... Numa node Numa node 0 1 2 3 0 81.6 89.4 140.4 153.6 1 86.5 78.5 144.3 162.8 2 142.3 153.0 81.6 89.3 3 144.5 162.8 85.5 77.4 As above, if we disable sub-numa in one CPU, the CPU has memory latency as 81.9, but if sub-numa is enabled, the advantage is really minor, SNC can access its own memory with the latency of 81.6 with 0.3 gap only. So SNC doesn't really matter on Xeon. However, the development of kunpeng920's topology is different from Intel and AMD. Similar with AMD EPYC, kunpeng920 has two DIEs in one CPU. Unlike EPYC which has only 8cores in each DIE, each DIE of kunpeng920 has 24 or 32 cores. For a typical 2P system, we can set it to 2NUMA or 4NUMA. +------------------------------+ +------------------------------+ | CPU | | CPU | | +----------+ +----------+ | | +----------+ +----------+ | | | | | | | | | | | | | | | DIE | | DIE | | | | DIE | | DIE | | | | | | | | | | | | | | | | | | | | | | | | | | | +----------+ +----------+ | | +----------+ +----------+ | | | | | +------------------------------+ +------------------------------+ * 2NUMA - DIE interleave In this case, two DIEs become one NUMA. The advantage is that we are getting more CPUs in one NUMA, so this decreases the fragment of CPUs and help deploy more virtual machines on the same host when we apply the rule of pinning VM within one NUMA, compared with disabling DIE interleave. But, this has some obvious disadvantage. Since we need to run DIE interleave, the average memory access latency could increase much. Enabling DIE interleave: Numa node Numa node 0 1 0 95.68 199.21 1 199.21 95.68 Disabling DIE interleave Numa node Numa node 0 1 2 3 0 85.79 104.33 189.95 209.00 1 104.33 85.79 209.00 229.60 2 189.95 209.00 85.79 104.33 3 209.00 229.60 104.33 85.79 As above, one DIE can access its local memory with latency of 85.79, but when die-interleave is enabled, the latency becomes 95.68. The gap 95.68-85.79 isn't minor. * 4NUMA - each DIE becomes one NUMA In this way, NUMA-aware software can access its local memory with much lower latency. 
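(For reference, the flavor/extra-spec way of keeping a guest inside one of those sub-NUMA nodes - the flavor name here is only an example:)

# confine the guest to a single host NUMA node (here: one DIE) and pin its vCPUs
openstack flavor set m1.large --property hw:numa_nodes=1 --property hw:cpu_policy=dedicated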
Thus, we can gain some performance improvement in openstack if one VM is deployed in one DIE and only access its local memory. Testing has shown 4%+ performance improvement. However, under the rule that VM does not cross NUMA, less VMs could be deployed on the same host. In order to achieve both goals of performance improvement and anti-fragmentation, we are looking for some way of sub-numa awareness in openstack scheduler. That means, 1. we will disable DIE interleave, thus, for 2P system, we are going to have 4 NUMA; 2. we use the rule VM shouldn't cross CPU, thinking one CPU as a big NUMA and each DIE as a smaller sub-numa. 3. For guests, we can define different priorities, for high priority VMs whose performance is a great concern, openstack will try to deploy them in same sub-numa which is a DIE for kunpeng920; for low priority VMs who can endure across-die latency, they could be placed on different sub-numa, but still within same CPU. So basically, we'd like to make openstack aware of the topology of sub-numa within one cpu and support a rule which prevents cross-CPU but allows cross-DIE. At the same time, this scheduler should try its best to deploy important VMs in same sub-numa and allow relatively unimportant VMs to cross two sub-numa. This might be done by customized openstack flavor according to [2], but we are really looking for some flexible and automatic way. If openstack scheduler can recognize this kind of CPU topology and make smart decision accordingly, it would be nice. Please let me know how you think about it. And alternative ways also welcome. [1] https://software.intel.com/content/www/us/en/develop/download/intel-64-and-ia-32-architectures-optimization-reference-manual.html [2] https://docs.openstack.org/nova/pike/admin/cpu-topologies.html Thanks Barry From luke.camilleri at zylacomputing.com Tue May 11 00:15:18 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Tue, 11 May 2021 02:15:18 +0200 Subject: [Octavia][Victoria] No service listening on port 9443 in the amphora instance In-Reply-To: References: <038c74c9-1365-0c08-3b5b-93b4d175dcb3@zylacomputing.com> <326471ef-287b-d937-a174-0b1ccbbd6273@zylacomputing.com> Message-ID: Hi Michael and thanks a lot for the detailed answer below. I believe I have got most of this sorted out apart from some small issues below: 1. If the o-hm0 interface gets the IP information from the DHCP server setup by neutron for the lb-mgmt-net, then the management node will always have 2 default gateways and this will bring along issues, the same DHCP settings when deployed to the amphora do not have the same issue since the amphora only has 1 IP assigned on the lb-mgmt-net. Can you please confirm this? 2. How does the amphora know where to locate the worker and housekeeping processes or does the traffic originate from the services instead? Maybe the addresses are "injected" from the config file? 3. Can you please confirm if the same floating IP concept runs from public (external) IP to the private (tenant) and from private to lb-mgmt-net please? Thanks in advance for any feedback On 06/05/2021 22:46, Michael Johnson wrote: > Hi Luke, > > 1. I agree that DHCP is technically unnecessary for the o-hm0 > interface if you can manage your address allocation on the network you > are using for the lb-mgmt-net. 
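(In that case one workable approach seems to be reserving a fixed port on the lb-mgmt-net for the controller and configuring o-hm0 statically, which also avoids the second default gateway - names and addresses below are only illustrative:)

openstack port create --network lb-mgmt-net --fixed-ip subnet=lb-mgmt-subnet,ip-address=172.16.0.2 octavia-health-manager-port
# then assign that address (and the port's MAC, as in the start script further down) on o-hm0
ip addr add 172.16.0.2/12 dev o-hm0
ip link set o-hm0 up

The amphorae themselves do not need to discover anything on this network: they send their heartbeats to the endpoints listed in the [health_manager] controller_ip_port_list option of octavia.conf, which the controller bakes into the amphora agent configuration, so no gateway is needed there either.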
> I don't have detailed information about the Ubuntu install > instructions, but I suspect it was done to simplify the IPAM to be > managed by whatever is providing DHCP on the lb-mgmt-net provided (be > it neutron or some other resource on a provider network). > The lb-mgmt-net is simply a neutron network that the amphora > management address is on. It is routable and does not require external > access. The only tricky part to it is the worker, health manager, and > housekeeping processes need to be reachable from the amphora, and the > controllers need to reach the amphora over the network(s). There are > many ways to accomplish this. > > 2. See my above answer. Fundamentally the lb-mgmt-net is just a > neutron network that nova can use to attach an interface to the > amphora instances for command and control traffic. As long as the > controllers can reach TCP 9433 on the amphora, and the amphora can > send UDP 5555 back to the health manager endpoints, it will work fine. > > 3. Octavia, with the amphora driver, does not require any special > configuration in Neutron (beyond the advanced services RBAC policy > being available for the neutron service account used in your octavia > configuration file). The neutron_lbaas.conf and services_lbaas.conf > are legacy configuration files/settings that were used for > neutron-lbaas which is now end of life. See the wiki page for > information on the deprecation of neutron-lbaas: > https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation. > > Michael > > On Thu, May 6, 2021 at 12:30 PM Luke Camilleri > wrote: >> Hi Michael and thanks a lot for your help on this, after following your >> steps the agent got deployed successfully in the amphora-image. >> >> I have some other queries that I would like to ask mainly related to the >> health-manager/load-balancer network setup and IP assignment. First of >> all let me point out that I am using a manual installation process, and >> it might help others to understand the underlying infrastructure >> required to make this component work as expected. >> >> 1- The installation procedure contains this step: >> >> $ sudo cp octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia >> >> which is later on called to assign the IP to the o-hm0 interface which >> is connected to the lb-management network as shown below: >> >> $ sudo dhclient -v o-hm0 -cf /etc/dhcp/octavia >> >> Apart from having a dhcp config for a single IP seems a bit of an >> overkill, using these steps is injecting an additional routing table >> into the default namespace as shown below in my case: >> >> # route -n >> Kernel IP routing table >> Destination Gateway Genmask Flags Metric Ref Use >> Iface >> 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0 >> 0.0.0.0 10.X.X.1 0.0.0.0 UG 100 0 0 ensX >> 10.X.X.0 0.0.0.0 255.255.255.0 U 100 0 0 ensX >> 169.254.169.254 172.16.0.100 255.255.255.255 UGH 0 0 0 o-hm0 >> 172.16.0.0 0.0.0.0 255.240.0.0 U 0 0 0 o-hm0 >> >> Since the load-balancer management network does not need any external >> connectivity (but only communication between health-manager service and >> amphora-agent), why is a gateway required and why isn't the IP address >> allocated as part of the interface creation script which is called when >> the service is started or stopped (example below)? 
>> >> --- >> >> #!/bin/bash >> >> set -ex >> >> MAC=$MGMT_PORT_MAC >> BRNAME=$BRNAME >> >> if [ "$1" == "start" ]; then >> ip link add o-hm0 type veth peer name o-bhm0 >> brctl addif $BRNAME o-bhm0 >> ip link set o-bhm0 up >> ip link set dev o-hm0 address $MAC >> *** ip addr add 172.16.0.2/12 dev o-hm0 >> ***ip link set o-hm0 mtu 1500 >> ip link set o-hm0 up >> iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT >> elif [ "$1" == "stop" ]; then >> ip link del o-hm0 >> else >> brctl show $BRNAME >> ip a s dev o-hm0 >> fi >> >> --- >> >> 2- Is there a possibility to specify a fixed vlan outside of tenant >> range for the load balancer management network? >> >> 3- Are the configuration changes required only in neutron.conf or also >> in additional config files like neutron_lbaas.conf and >> services_lbaas.conf, similar to the vpnaas configuration? >> >> Thanks in advance for any assistance, but its like putting together a >> puzzle of information :-) >> >> On 05/05/2021 20:25, Michael Johnson wrote: >>> Hi Luke. >>> >>> Yes, the amphora-agent will listen on 9443 in the amphorae instances. >>> It uses TLS mutual authentication, so you can get a TLS response, but >>> it will not let you into the API without a valid certificate. A simple >>> "openssl s_client" is usually enough to prove that it is listening and >>> requesting the client certificate. >>> >>> I can't talk to the "openstack-octavia-diskimage-create" package you >>> found in centos, but I can discuss how to build an amphora image using >>> the OpenStack tools. >>> >>> If you get Octavia from git or via a release tarball, we provide a >>> script to build the amphora image. This is how we build our images for >>> the testing gates, etc. and is the recommended way (at least from the >>> OpenStack Octavia community) to create amphora images. >>> >>> https://opendev.org/openstack/octavia/src/branch/master/diskimage-create >>> >>> For CentOS 8, the command would be: >>> >>> diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 (3 >>> is the minimum disk size for centos images, you may want more if you >>> are not offloading logs) >>> >>> I just did a run on a fresh centos 8 instance: >>> git clone https://opendev.org/openstack/octavia >>> python3 -m venv dib >>> source dib/bin/activate >>> pip3 install diskimage-builder PyYAML six >>> sudo dnf install yum-utils >>> ./diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 >>> >>> This built an image. >>> >>> Off and on we have had issues building CentOS images due to issues in >>> the tools we rely on. If you run into issues with this image, drop us >>> a note back. >>> >>> Michael >>> >>> On Wed, May 5, 2021 at 9:37 AM Luke Camilleri >>> wrote: >>>> Hi there, i am trying to get Octavia running on a Victoria deployment on >>>> CentOS 8. It was a bit rough getting to the point to launch an instance >>>> mainly due to the load-balancer management network and the lack of >>>> documentation >>>> (https://docs.openstack.org/octavia/victoria/install/install.html) to >>>> deploy this oN CentOS. I will try to fix this once I have my deployment >>>> up and running to help others on the way installing and configuring this :-) >>>> >>>> At this point a LB can be launched by the tenant and the instance is >>>> spawned in the Octavia project and I can ping and SSH into the amphora >>>> instance from the Octavia node where the octavia-health-manager service >>>> is running using the IP within the same subnet of the amphoras >>>> (172.16.0.0/12). 
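(A quick way to prove from the controller whether the agent is really listening and asking for a client certificate - 172.16.4.46 is the amphora address from the worker log below:)

openssl s_client -connect 172.16.4.46:9443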
>>>> >>>> Unfortunately I keep on getting these errors in the log file of the >>>> worker log (/var/log/octavia/worker.log): >>>> >>>> 2021-05-05 01:54:49.368 14521 WARNING >>>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect >>>> to instance. Retrying.: requests.exceptions.ConnectionError: >>>> HTTPSConnectionPool(host='172.16.4.46', p >>>> ort=9443): Max retries exceeded with url: // (Caused by >>>> NewConnectionError('>>> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] >>>> Connection ref >>>> used',)) >>>> >>>> 2021-05-05 01:54:54.374 14521 ERROR >>>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries >>>> (currently set to 120) exhausted. The amphora is unavailable. Reason: >>>> HTTPSConnectionPool(host='172.16 >>>> .4.46', port=9443): Max retries exceeded with url: // (Caused by >>>> NewConnectionError('>>> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] Conne >>>> ction refused',)) >>>> >>>> 2021-05-05 01:54:54.374 14521 ERROR >>>> octavia.controller.worker.v1.tasks.amphora_driver_tasks [-] Amphora >>>> compute instance failed to become reachable. This either means the >>>> compute driver failed to fully boot the >>>> instance inside the timeout interval or the instance is not reachable >>>> via the lb-mgmt-net.: >>>> octavia.amphorae.driver_exceptions.exceptions.TimeOutException: >>>> contacting the amphora timed out >>>> >>>> obviously the instance is deleted then and the task fails from the >>>> tenant's perspective. >>>> >>>> The main issue here is that there is no service running on port 9443 on >>>> the amphora instance. I am assuming that this is in fact the >>>> amphora-agent service that is running on the instance which should be >>>> listening on this port 9443 but the service does not seem to be up or >>>> not installed at all. >>>> >>>> To create the image I have installed the CentOS package >>>> "openstack-octavia-diskimage-create" which provides the utility >>>> disk-image-create but from what I can conclude the amphora-agent is not >>>> being installed (thought this was done automatically by default :-( ) >>>> >>>> Can anyone let me know if the amphora-agent is what gets queried on port >>>> 9443 ? >>>> >>>> If the agent is not installed/injected by default when building the >>>> amphora image? >>>> >>>> The command to inject the amphora-agent into the amphora image when >>>> using the disk-image-create command? >>>> >>>> Thanks in advance for any assistance >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue May 11 04:26:19 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 10 May 2021 23:26:19 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 13th at 1500 UTC Message-ID: <17959acfbe4.c770ef05323333.5684594462011495541@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for May 13th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, May 12th, at 2100 UTC. 
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From jayadityagupta11 at gmail.com Tue May 11 04:52:08 2021 From: jayadityagupta11 at gmail.com (jayaditya gupta) Date: Tue, 11 May 2021 06:52:08 +0200 Subject: [placement] clouds.yaml does not use placement API version variable Message-ID: Even after specifying the variable in clouds.yaml placement still uses version 1.0 Story : https://storyboard.openstack.org/#!/story/2008553 Hello can someone help me on this one. I'm not sure where exactly is the issue in placement or in osc-placement . I would really appreciate the insight on this one. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Tue May 11 06:53:01 2021 From: eblock at nde.ag (Eugen Block) Date: Tue, 11 May 2021 06:53:01 +0000 Subject: [ops][victoria][cinder] Import volume? In-Reply-To: <0670B960225633449A24709C291A52525115AB9B@COM01.performair.local> References: <0670B960225633449A24709C291A52524FBE2F13@COM01.performair.local> <20210415163045.Horde.IKa9Iq6-satTI_sMmUk9Ahq@webmail.nde.ag> <20210415170329.GA2777639@sm-workstation> <0670B960225633449A24709C291A52525115AB9B@COM01.performair.local> Message-ID: <20210511065301.Horde.ktHL2_x-DuouLKkWrMyXFC2@webmail.nde.ag> Hi, it's been a while since I had to deal with that. I wrote a blog post [1] a couple of years ago when we were migrating from xen to kvm. In our case we had to install virtio modules in order to be able to boot those images, they would run into a dracut timeout. > Can I get away with adjusting fstab to use what the partitions are > likely to be after transition to OpenStac? Are these of this form: > /dev/vda#? I'm not really sure, I guess you'll have to try it out. And yes, you should replace all references of /dev/xvd* and remove the x. Regards, Eugen [1] http://heiterbiswolkig.blogs.nde.ag/2017/08/10/migrate-from-xen-to-kvm/ Zitat von DHilsbos at performair.com: > All; > > I've been successful at the tasks listed below, but the resulting > volumes don't boot. They stop early in the boot process indicating > the root partition cannot be found. > > I suspect the issue is with either the disk UUID links, or with > partition detection. > > When running the VMs under XenServer, we have > /dev/disk/by-uuid/ --> /dev/xvda# > > Are these created dynamically by the kernel, or are they static soft-links? > > Can I get away with adjusting fstab to use what the partitions are > likely to be after transition to OpenStac? Are these of this form: > /dev/vda#? > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > -----Original Message----- > From: Sean McGinnis [mailto:sean.mcginnis at gmx.com] > Sent: Thursday, April 15, 2021 10:03 AM > To: Eugen Block > Cc: Dominic Hilsbos; openstack-discuss at lists.openstack.org > Subject: Re: [ops][victoria][cinder] Import volume? > > On Thu, Apr 15, 2021 at 04:30:45PM +0000, Eugen Block wrote: >> Hi, >> >> there’s a ‚cinder manage‘ command to import an rbd image into openstack. >> But be aware that if you delete it in openstack it will be removed from >> ceph, too (like a regular cinder volume). 
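(For the archives, the syntax is roughly the following - the host#pool string depends on your backend configuration, so treat this as a sketch:)

# <host> can be taken from `cinder get-pools`; the identifier is the existing RBD image name
cinder manage --id-type source-name --name imported-vol controller@ceph-rbd#ceph volume-xyz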
>> I don’t have the exact command syntax at hand right now, but try ‚cinder >> help manage‘ >> >> Regards >> Eugen >> > > Here is the documentation for that command: > > https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-manage > > Also note, if you no longer need to manage the volume in Cinder, but > you do not > want it to be deleted from your storage backend, there is also the inverse > command of `cinder unmanage`. Details for that command can be found here: > > https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-unmanage > > >> >> Zitat von DHilsbos at performair.com: >> >> > All; >> > >> > I'm looking to transfer several VMs from XenServer to an OpenStack >> > Victoria cloud. Finding explanations for importing Glance images is >> > easy, but I haven't been able to find a tutorial on importing Cinder >> > volumes. >> > >> > Since they are currently independent servers / volumes it seems somewhat >> > wasteful and messy to import each VMs disk as an image just to spawn a >> > volume from it. >> > >> > We're using Ceph as the storage provider for Glance and Cinder. >> > >> > Thank you, >> > >> > Dominic L. Hilsbos, MBA >> > Director - Information Technology >> > Perform Air International Inc. >> > DHilsbos at PerformAir.com >> > www.PerformAir.com >> >> >> >> From balazs.gibizer at est.tech Tue May 11 07:29:13 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 11 May 2021 09:29:13 +0200 Subject: [all][infra][qa] Retiring Logstash, Elasticsearch, subunit2sql, and Health In-Reply-To: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> References: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> Message-ID: On Mon, May 10, 2021 at 10:34, Clark Boylan wrote: > Hello everyone, Hi, > > Xenial has recently reached the end of its life. Our > logstash+kibana+elasticsearch and subunit2sql+health data crunching > services all run on Xenial. Even without the distro platform EOL > concerns these services are growing old and haven't received the care > they need to keep running reliably. > > Additionally these services represent a large portion of our resource > consumption: > > * 6 x 16 vcpu + 60GB RAM + 1TB disk Elasticsearch servers > * 20 x 4 vcpu + 4GB RAM logstash-worker servers > * 1 x 2 vcpu + 2GB RAM logstash/kibana central server > * 2 x 8 vcpu + 8GB RAM subunit-worker servers > * 64GB RAM + 500GB disk subunit2sql trove db server > * 1 x 4 vcpu + 4GB RAM health server > > To put things in perspective, they account for more than a quarter of > our control plane servers, occupying over a third of our block > storage and in excess of half the total memory footprint. > > The OpenDev/OpenStack Infra team(s) don't seem to have the time > available currently to do the major lifting required to bring these > services up to date. I would like to propose that we simply turn them > off. All of these services operate off of public data that will not > be going away (specifically job log content). If others are > interested in taking this on they can hook into this data and run > their own processing pipelines. > > I am sure not everyone will be happy with this proposal. I get it. I > came up with the idea for the elasticsearch job log processing way > back at the San Diego summit. I spent many many many hours since > working to get it up and running and to keep it running. But > pragmatism means that my efforts and the team's efforts are better > spent elsewhere. > > I am happy to hear feedback on this. 
Thank you for your time. Thank you and the whole infra team(s) for the effort of keeping the infrastructure alive.
I'm an active user of the ELK stack in OpenStack. > I use it to figure out if a particular gate failure I see is just a one > time event or it is a real failure we need to fix. So I'm sad that this > tooling will be shut down as I think I loose one of the tools that > helped me keeping our Gate healthy. But I understood how busy is > everybody these days. I'm not an infra person but if I can help somehow > from Nova perspective then let me know. (E.g. I can review elastic > recheck signatures if that helps) > > Worth said, gibi. I understand the reasoning behind the ELK sunset but I'm a bit afraid of not having a way to know the number of changes that were failing with the same exception than one I saw. Could we be discussing how we could try to find a workaround for this ? Maybe no longer using ELK, but at least still continuing to have the logs for, say, 2 weeks ? -Sylvain Cheers, > gibi > > > > > Clark > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue May 11 08:21:02 2021 From: smooney at redhat.com (Sean Mooney) Date: Tue, 11 May 2021 09:21:02 +0100 Subject: [all][infra][qa] Retiring Logstash, Elasticsearch, subunit2sql, and Health In-Reply-To: References: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> Message-ID: On Tue, 2021-05-11 at 09:47 +0200, Sylvain Bauza wrote: > Le mar. 11 mai 2021 à 09:35, Balazs Gibizer a > écrit : > > > > > > > On Mon, May 10, 2021 at 10:34, Clark Boylan > > wrote: > > > Hello everyone, > > > > Hi, > > > > > > > > Xenial has recently reached the end of its life. Our > > > logstash+kibana+elasticsearch and subunit2sql+health data crunching > > > services all run on Xenial. Even without the distro platform EOL > > > concerns these services are growing old and haven't received the care > > > they need to keep running reliably. > > > > > > Additionally these services represent a large portion of our resource > > > consumption: > > > > > > * 6 x 16 vcpu + 60GB RAM + 1TB disk Elasticsearch servers > > > * 20 x 4 vcpu + 4GB RAM logstash-worker servers > > > * 1 x 2 vcpu + 2GB RAM logstash/kibana central server > > > * 2 x 8 vcpu + 8GB RAM subunit-worker servers > > > * 64GB RAM + 500GB disk subunit2sql trove db server > > > * 1 x 4 vcpu + 4GB RAM health server > > > > > > To put things in perspective, they account for more than a quarter of > > > our control plane servers, occupying over a third of our block > > > storage and in excess of half the total memory footprint. > > > > > > The OpenDev/OpenStack Infra team(s) don't seem to have the time > > > available currently to do the major lifting required to bring these > > > services up to date. I would like to propose that we simply turn them > > > off. All of these services operate off of public data that will not > > > be going away (specifically job log content). If others are > > > interested in taking this on they can hook into this data and run > > > their own processing pipelines. > > > > > > I am sure not everyone will be happy with this proposal. I get it. I > > > came up with the idea for the elasticsearch job log processing way > > > back at the San Diego summit. I spent many many many hours since > > > working to get it up and running and to keep it running. But > > > pragmatism means that my efforts and the team's efforts are better > > > spent elsewhere. > > > > > > I am happy to hear feedback on this. Thank you for your time. 
> > > > Thank you and the whole infra team(s) for the effort to keeping the > > infrastructure alive. I'm an active user of the ELK stack in OpenStack. > > I use it to figure out if a particular gate failure I see is just a one > > time event or it is a real failure we need to fix. So I'm sad that this > > tooling will be shut down as I think I loose one of the tools that > > helped me keeping our Gate healthy. But I understood how busy is > > everybody these days. I'm not an infra person but if I can help somehow > > from Nova perspective then let me know. (E.g. I can review elastic > > recheck signatures if that helps) > > > > > Worth said, gibi. > I understand the reasoning behind the ELK sunset but I'm a bit afraid of > not having a way to know the number of changes that were failing with the > same exception than one I saw. > > Could we be discussing how we could try to find a workaround for this ? > Maybe no longer using ELK, but at least still continuing to have the logs > for, say, 2 weeks ? well we will continue to have all logs for at least 30 days in the ci results. currently all logs get uploaded to swift on the ci providers and that is what we see when we look at the zuul results. seperatly they are also streamed to logstash which is ingested and process so we can query them with kibana. its only the elk portion that is going away not the ci logs. the indexing of those and easy quering is what would be lost by this change. > > -Sylvain > > Cheers, > > gibi > > > > > > > > Clark > > > > > > > > > > > > > > From artem.goncharov at gmail.com Tue May 11 08:21:52 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Tue, 11 May 2021 10:21:52 +0200 Subject: [sdk]: identity service if get_application_credential method could use user name In-Reply-To: References: <308b555bdd500119c9f17535a50c0649@uvic.ca> <9EC48E81-B822-4232-8010-D14C91EE9C49@gmail.com> Message-ID: <25DDF925-9411-45C3-9F5D-B4722B4B17FE@gmail.com> > On 10. May 2021, at 19:27, dmeng wrote: > > > > Thanks Artem, just wondering how about if I use my own identity to get connected, and try to find the user id of myself? Like: > > auth = v3.ApplicationCredential( > auth_url="my_auth_url", > application_credential_secret="my_cred_secret", > application_credential_id="my_cred_id", > username='catherine', > ) > > sess = session.Session(auth=auth) > > conn = connection.Connection( > session=sess, > region_name='Victoria', > identity_api_version='3', > ) > > # tested the above connection works well > > find_user = conn.identity.find_user(name_or_id='catherine') > > This returns me that "You are not authorized to perform the requested action: identity:get_user"; but conn.identity.find_user(name_or_id='my_user_Id') works fine. > Knowing object ID you can do nearly everything. Name is something like an alias and we always need to find ID of the resource by its name. That’s why knowing ID you can proceed, but knowing name you need to invoke additional methods (listing all users in domain), which are depending on the privileges not allowed. > Think in the openstack cli tools, I couldn't show other users, but I could use my own username to list the info of myself, "/usr/local/bin/openstack user show catherine", this works. > > > > Thanks for your help, > > Catherine > > > > On 2021-05-10 10:08, Artem Goncharov wrote: > >> >> >>> On 10. May 2021, at 18:47, dmeng > wrote: >>> Good morning, >>> >>> Thanks for replying back to me. 
I tried to use the fine_user to get the user id by username, but it seems like not all the user can use the find_user method. >>> >>> If I do: find_user = conn.identity.find_user(name_or_id='catherine'), it will show me that "You are not authorized to perform the requested action: identity:get_user". >>> >>> If I do: find_user = conn.identity.find_user(name_or_id='my_user_Id'), then it works fine. >>> >>> But I would like to use the username to find the user and get the id, so I'm not sure why in this case find_user only work with id not name. >>> >>> >>> >> >> Depending on the configuration of your Keystone (what is already a default) and the account privileges you use (admin, domain_admin, token scope) you may be allowed or not allowed to search/list another users. Normally this is only possible in the domain scope, so maybe you would need to use account with more powers. >> >>> Thanks and have a great day! >>> >>> Catherine >>> >>> >>> >>> >>> >>> >>> >>> >>> On 2021-05-08 03:01, Artem Goncharov wrote: >>> >>> Hi >>> >>> >>> We are wondering if we could use the user name to get it instead of the user id? If I do get_application_credential(user='catherine', application_credential = 'app_cred_id'), then it will show me an error that "You are not authorized to perform the requested action: identity:get_application_credential". Is there any method that no need user info, can just use the application credential id to get the expiration date? We also didn't find any documentation about the application credential in openstacksdk identity service docs. >>> >>> You can use: >>> >>> user = conn.identity.find_user(name_or_id = 'my_user') >>> ac = conn.identity.find_application_credential(user=user, name_or_id='app_cred') >>> >>> Regards, >>> Artem >>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From destienne.maxime at gmail.com Tue May 11 08:40:49 2021 From: destienne.maxime at gmail.com (Maxime d'Estienne) Date: Tue, 11 May 2021 10:40:49 +0200 Subject: [zun][horizon] How to display containers among VMs on network topology ? In-Reply-To: <5df04329.6e03.1790eddac72.Coremail.kira034@163.com> References: <5df04329.6e03.1790eddac72.Coremail.kira034@163.com> Message-ID: Thank you for your answer, unfortunately as I'm just an user I can't answer your question. In my mind it would be a huge benefit for small infrastructures to display zun containers and VMs on the same level on network topology. For now, I'm considering using Heat. Best regards, Maxime Le lun. 26 avr. 2021 à 17:48, Hongbin Lu a écrit : > From Zun's perspective, I would like to know if it is possible for a > Horizon plugin (like Zun-UI) to add resources (like containers) to the > network topology. If it is possible, I will consider to add support for > that. > > > The clostest solution is Heat. In the Horizon Heat's dashboard, it can > display containers in the topology defined by a Heat template (suppose you > create Zun containers in Heat). > > Best regards, > Hongbin > > > > > At 2021-04-26 23:13:19, "Maxime d'Estienne" > wrote: > > Hello, > > I installed Zun on my stack and it works very well, I have a few > containers up and running. > > I also added the zun-ui plugin for Horizon wich allows me to see the list > of containers and to create them. > > But I often use the network topology view, wich I found very clean > sometimes. I wondered if there is a solution to display the containers as > are Vm's ? > > Thank you ! 
> > Maxime > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue May 11 09:17:29 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 11 May 2021 18:17:29 +0900 Subject: [puppet] Lint job fails in some modules because of the latest lint packages Message-ID: Hello, I'd like to inform you that we are experiencing lint job failures in some modules since we removed pin of lint packages recently[1]. I started submitting fixes for these failures[2], so please be careful before you recheck the job. [1] https://review.opendev.org/c/openstack/puppet-openstack_spec_helper/+/761925 [2] https://review.opendev.org/q/topic:%22puppet-lint-latest%22+(status:open%20OR%20status:merged) Thank you, Takashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Tue May 11 12:22:40 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 11 May 2021 14:22:40 +0200 Subject: [train][nova][neutron] gratiutous arp on centos 7 Message-ID: Hello All, We've just updated openstack centos7 fron stein to train. Unfortunately live migrations works but vm lost a lot of pings after migration. We also applied the following patches but the do not solve: https://review.opendev.org/c/openstack/neutron/+/640258/ https://review.opendev.org/c/openstack/neutron/+/753314/ https://review.opendev.org/c/openstack/neutron/+/766277/ https://review.opendev.org/c/openstack/nova/+/742180/12 https://review.opendev.org/c/openstack/nova/+/747454/4 After that we also tried to apply https://review.opendev.org/c/openstack/nova/+/602432 but the issue remains the same. Please, anyone can help us ? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue May 11 12:54:52 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 11 May 2021 14:54:52 +0200 Subject: [all][infra][qa] Retiring Logstash, Elasticsearch, subunit2sql, and Health In-Reply-To: References: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> Message-ID: <4946322.7AqvCAgJaL@p1> Hi, Dnia wtorek, 11 maja 2021 09:29:13 CEST Balazs Gibizer pisze: > On Mon, May 10, 2021 at 10:34, Clark Boylan > > wrote: > > Hello everyone, > > Hi, > > > Xenial has recently reached the end of its life. Our > > logstash+kibana+elasticsearch and subunit2sql+health data crunching > > services all run on Xenial. Even without the distro platform EOL > > concerns these services are growing old and haven't received the care > > they need to keep running reliably. > > > > Additionally these services represent a large portion of our resource > > consumption: > > > > * 6 x 16 vcpu + 60GB RAM + 1TB disk Elasticsearch servers > > * 20 x 4 vcpu + 4GB RAM logstash-worker servers > > * 1 x 2 vcpu + 2GB RAM logstash/kibana central server > > * 2 x 8 vcpu + 8GB RAM subunit-worker servers > > * 64GB RAM + 500GB disk subunit2sql trove db server > > * 1 x 4 vcpu + 4GB RAM health server > > > > To put things in perspective, they account for more than a quarter of > > our control plane servers, occupying over a third of our block > > storage and in excess of half the total memory footprint. > > > > The OpenDev/OpenStack Infra team(s) don't seem to have the time > > available currently to do the major lifting required to bring these > > services up to date. I would like to propose that we simply turn them > > off. 
All of these services operate off of public data that will not > > be going away (specifically job log content). If others are > > interested in taking this on they can hook into this data and run > > their own processing pipelines. > > > > I am sure not everyone will be happy with this proposal. I get it. I > > came up with the idea for the elasticsearch job log processing way > > back at the San Diego summit. I spent many many many hours since > > working to get it up and running and to keep it running. But > > pragmatism means that my efforts and the team's efforts are better > > spent elsewhere. > > > > I am happy to hear feedback on this. Thank you for your time. > > Thank you and the whole infra team(s) for the effort to keeping the > infrastructure alive. I'm an active user of the ELK stack in OpenStack. > I use it to figure out if a particular gate failure I see is just a one > time event or it is a real failure we need to fix. So I'm sad that this > tooling will be shut down as I think I loose one of the tools that > helped me keeping our Gate healthy. But I understood how busy is > everybody these days. I'm not an infra person but if I can help somehow > from Nova perspective then let me know. (E.g. I can review elastic > recheck signatures if that helps) I somehow missed that original email from Clark. But it's similar for Neutron team. I use logstash pretty often to check how ofter some issues happens in the CI. > > Cheers, > gibi > > > Clark -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From fungi at yuggoth.org Tue May 11 13:56:40 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 11 May 2021 13:56:40 +0000 Subject: [all][tact-sig][infra][qa] Retiring Logstash, Elasticsearch, subunit2sql, and Health In-Reply-To: References: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> Message-ID: <20210511135640.2clgaykoaczx57ng@yuggoth.org> On 2021-05-11 09:47:45 +0200 (+0200), Sylvain Bauza wrote: [...] > Could we be discussing how we could try to find a workaround for > this? [...] That's absolutely worth discussing, but it needs people committed to building and maintaining something. The current implementation was never efficient, and we realized that when we started trying to operate it at scale. It relies on massive quantities of donated infrastructure for which we're trying to be responsible stewards (just the Elasticsearch cluster alone consumes 6x the resources of of our Gerrit deployment). We get that it's a useful service, but we need to weigh the relative utility against the cost, not just in server quota but ongoing maintenance. For a while now we've not had enough people involved in running our infrastructure as we need to maintain the services we built over the years. We've been shouting it from the rooftops, but that doesn't seem to change anything, so all we can do at this point is aggressively sunset noncritical systems in order to hopefully have a small enough remainder that the people we do have can still keep it in good shape. Some of the systems we operate are tightly-coupled and taking them down would have massive ripple effects in other systems which would, counterintuitively, require more people to help untangle. 
The logstash service, on the other hand, is sufficiently decoupled from our more crucial systems that we can make a large dent in our upgrade and configuration management overhaul backlog by just turning it off. The workaround to which you allude is actually fairly straightforward. Someone can look at what we had as a proof of concept and build an equivalent system using newer and possibly more appropriate technologies. Watch the Gerrit events, fetch logs from swift for anything which gets reported, postprocess and index those, providing a query interface folks can use to find patterns. None of that requires privileged access to our systems; it's all built on public data. That "someone" needs to come from "somewhere" though. Upgrading the existing systems at this point is probably at least the same amount of work, given all the moving parts, the need to completely redo the current configuration management for it, the recent license strangeness with Elasticsearch, the fact that Logstash and Kibana are increasingly open-core fighting to keep useful features exclusively for their paying users... the whole stack needs to be reevaluated, and new components and architecture considered. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From peter.matulis at canonical.com Tue May 11 15:14:11 2021 From: peter.matulis at canonical.com (Peter Matulis) Date: Tue, 11 May 2021 11:14:11 -0400 Subject: [docs] Double headings on every page Message-ID: Hi, I'm hitting an oddity in one of my projects where the titles of all pages show up twice. Example: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/wallaby/app-nova-cells.html Source file is here: https://opendev.org/openstack/charm-deployment-guide/src/branch/master/deploy-guide/source/app-nova-cells.rst Does anyone see what can be causing this? It appears to happen only for the current stable release ('wallaby') and 'latest'. Thanks, Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Tue May 11 15:21:30 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 11 May 2021 08:21:30 -0700 Subject: =?UTF-8?Q?Re:_[all][tact-sig][infra][qa]_Retiring_Logstash, _Elasticsearc?= =?UTF-8?Q?h,_subunit2sql,_and_Health?= In-Reply-To: <20210511135640.2clgaykoaczx57ng@yuggoth.org> References: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> <20210511135640.2clgaykoaczx57ng@yuggoth.org> Message-ID: <9f371339-3386-433c-8699-3f2208805c0c@www.fastmail.com> On Tue, May 11, 2021, at 6:56 AM, Jeremy Stanley wrote: > On 2021-05-11 09:47:45 +0200 (+0200), Sylvain Bauza wrote: > [...] > > Could we be discussing how we could try to find a workaround for > > this? > [...] > snip. What Fungi said is great. I just wanted to add a bit of detail below. > Upgrading the existing systems at this point is probably at least > the same amount of work, given all the moving parts, the need to > completely redo the current configuration management for it, the > recent license strangeness with Elasticsearch, the fact that > Logstash and Kibana are increasingly open-core fighting to keep > useful features exclusively for their paying users... the whole > stack needs to be reevaluated, and new components and architecture > considered. To add a bit more concrete info to this the current config management for all of this is Puppet. 
We no longer have the ability to run Puppet in our infrastructure on systems beyond Ubuntu Xenial. What we have been doing for newer systems is using Ansible (often coupled with docker + docker-compose) to deploy services. This means that all of the config management needs to be redone. The next problem you'll face is that Elasticsearch itself needs to be upgraded. Historically when we have done this, it has required also upgrading Kibana and Logstash due to compatibility problems. When you upgrade Kibana you have to sort out all of the data access and authorizations problems that Elasticsearch presents because it doesn't provide authentication and authorization (we cannot allow arbitrary writes into the ES cluster, Kibana assumes it can do this). With Logstash you end up rewriting all of your rules. Finally, I don't think we have enough room to do rolling replacements of Elasticsearch cluster members as they are so large. We have to delete servers to add servers. Typically we would add server, rotate in, then delete the old one. In this case the idea is probably to spin up an entirely new cluster along side the old one, check that it is functional, then shift the data streaming over to point at it. Unfortunately, that won't be possible. > -- > Jeremy Stanley From DHilsbos at performair.com Tue May 11 15:54:01 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Tue, 11 May 2021 15:54:01 +0000 Subject: FW: [ops][victoria][cinder] Import volume? References: <0670B960225633449A24709C291A52524FBE2F13@COM01.performair.local> <20210415163045.Horde.IKa9Iq6-satTI_sMmUk9Ahq@webmail.nde.ag> <20210415170329.GA2777639@sm-workstation> Message-ID: <0670B960225633449A24709C291A52525115C79C@COM01.performair.local> All; I apologize if this is hitting the list again; we were having email issues, and I didn’t see it come back. I've been successful at the tasks listed below, but the resulting volumes don't boot. They stop early in the boot process indicating the root partition cannot be found. I suspect the issue is with either the disk UUID links, or with partition detection. When running the VMs under XenServer, we have /dev/disk/by-uuid/ --> /dev/xvda# Are these created dynamically by the kernel, or are they static soft-links? Can I get away with adjusting fstab to use what the partitions are likely to be after transition to OpenStac? Are these of this form: /dev/vda#? Thank you, Dominic L. Hilsbos, MBA Vice President – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: Sean McGinnis [mailto:sean.mcginnis at gmx.com] Sent: Thursday, April 15, 2021 10:03 AM To: Eugen Block Cc: Dominic Hilsbos; openstack-discuss at lists.openstack.org Subject: Re: [ops][victoria][cinder] Import volume? On Thu, Apr 15, 2021 at 04:30:45PM +0000, Eugen Block wrote: > Hi, > > there’s a ‚cinder manage‘ command to import an rbd image into openstack. > But be aware that if you delete it in openstack it will be removed from > ceph, too (like a regular cinder volume). > I don’t have the exact command syntax at hand right now, but try ‚cinder > help manage‘ > > Regards > Eugen > Here is the documentation for that command: https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-manage Also note, if you no longer need to manage the volume in Cinder, but you do not want it to be deleted from your storage backend, there is also the inverse command of `cinder unmanage`. 
Details for that command can be found here: https://docs.openstack.org/python-cinderclient/latest/cli/details.html#cinder-unmanage > > Zitat von DHilsbos at performair.com: > > > All; > > > > I'm looking to transfer several VMs from XenServer to an OpenStack > > Victoria cloud. Finding explanations for importing Glance images is > > easy, but I haven't been able to find a tutorial on importing Cinder > > volumes. > > > > Since they are currently independent servers / volumes it seems somewhat > > wasteful and messy to import each VMs disk as an image just to spawn a > > volume from it. > > > > We're using Ceph as the storage provider for Glance and Cinder. > > > > Thank you, > > > > Dominic L. Hilsbos, MBA > > Director - Information Technology > > Perform Air International Inc. > > DHilsbos at PerformAir.com > > www.PerformAir.com > > > > From sbauza at redhat.com Tue May 11 16:02:52 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 11 May 2021 18:02:52 +0200 Subject: [all][tact-sig][infra][qa] Retiring Logstash, Elasticsearch, subunit2sql, and Health In-Reply-To: <9f371339-3386-433c-8699-3f2208805c0c@www.fastmail.com> References: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> <20210511135640.2clgaykoaczx57ng@yuggoth.org> <9f371339-3386-433c-8699-3f2208805c0c@www.fastmail.com> Message-ID: On Tue, May 11, 2021 at 5:29 PM Clark Boylan wrote: > On Tue, May 11, 2021, at 6:56 AM, Jeremy Stanley wrote: > > On 2021-05-11 09:47:45 +0200 (+0200), Sylvain Bauza wrote: > > [...] > > > Could we be discussing how we could try to find a workaround for > > > this? > > [...] > > > > snip. What Fungi said is great. I just wanted to add a bit of detail below. > > > Upgrading the existing systems at this point is probably at least > > the same amount of work, given all the moving parts, the need to > > completely redo the current configuration management for it, the > > recent license strangeness with Elasticsearch, the fact that > > Logstash and Kibana are increasingly open-core fighting to keep > > useful features exclusively for their paying users... the whole > > stack needs to be reevaluated, and new components and architecture > > considered. > > To add a bit more concrete info to this the current config management for > all of this is Puppet. We no longer have the ability to run Puppet in our > infrastructure on systems beyond Ubuntu Xenial. What we have been doing for > newer systems is using Ansible (often coupled with docker + docker-compose) > to deploy services. This means that all of the config management needs to > be redone. > > The next problem you'll face is that Elasticsearch itself needs to be > upgraded. Historically when we have done this, it has required also > upgrading Kibana and Logstash due to compatibility problems. When you > upgrade Kibana you have to sort out all of the data access and > authorizations problems that Elasticsearch presents because it doesn't > provide authentication and authorization (we cannot allow arbitrary writes > into the ES cluster, Kibana assumes it can do this). With Logstash you end > up rewriting all of your rules. > > Finally, I don't think we have enough room to do rolling replacements of > Elasticsearch cluster members as they are so large. We have to delete > servers to add servers. Typically we would add server, rotate in, then > delete the old one. In this case the idea is probably to spin up an > entirely new cluster along side the old one, check that it is functional, > then shift the data streaming over to point at it. 
Unfortunately, that > won't be possible. > > > -- > > Jeremy Stanley > > > First, thanks both Jeremy and fungi for explaining why we need to stop to provide a ELK environment for our logs. I now understand it better and honestly I can't really find a way to fix it just by me. I'm just sad we can't for the moment find a way to have a way to continue looking at this unless finding "someone" who would help us :-) Just a note, I then also guess that http://status.openstack.org/elastic-recheck/ will stop to work as well, right? Operators, if you read me and want to make sure that our upstream CI continues to work as we could see gate issues, please help us ! :-) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue May 11 16:57:57 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 11 May 2021 11:57:57 -0500 Subject: [all] Nomination open for OpenStack "Y" Release Naming until 10th June, 21 Message-ID: <1795c5d212b.1066d4d9f384359.4563971993311866521@ghanshyammann.com> Hello Everyone, We are now starting the process for the OpenStack 'Y' release name. Below are the details on dates and criteria: Dates[1]: ======= - Nomination Open: 2021-05-13 to 2021-06-10 - TC Poll on proposed names: 2021-06-10 to 2021-06-17 Criteria: ====== - Refer to the below governance page for the naming criteria: https://governance.openstack.org/tc/reference/release-naming.html#release-name-criteria - Any community members can propose the name to the below wiki page: https://wiki.openstack.org/wiki/Release_Naming/Y_Proposals We encourage all community members to participate in this process. [1]https://governance.openstack.org/tc/reference/release-naming.html#polls -gmann From mparra at iaa.es Tue May 11 16:57:59 2021 From: mparra at iaa.es (ManuParra) Date: Tue, 11 May 2021 18:57:59 +0200 Subject: Restart cinder-volume with Ceph rdb Message-ID: <40522F3F-CDF7-4C28-A36A-9777BEACD031@iaa.es> Dear OpenStack community, I have encountered a problem a few days ago and that is that when creating new volumes with: "openstack volume create --size 20 testmv" the volume creation status shows an error. If I go to the error log detail it indicates: "Schedule allocate volume: Could not find any available weighted backend". Indeed then I go to the cinder log and it indicates: "volume service is down - host: rbd:volumes at ceph-rbd”. I check with: "openstack volume service list” in which state are the services and I see that indeed this happens: | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 | And stopped since 2021-04-29 ! I have checked Ceph (monitors,managers, osds. etc) and there are no problems with the Ceph BackEnd, everything is apparently working. This happened after an uncontrolled outage.So my question is how do I restart only cinder-volumes (I also have cinder-backup, cinder-scheduler but they are ok). Thank you very much in advance. Regards. From vuk.gojnic at gmail.com Tue May 11 18:37:10 2021 From: vuk.gojnic at gmail.com (Vuk Gojnic) Date: Tue, 11 May 2021 20:37:10 +0200 Subject: Fwd: [ironic] IPA image does not want to boot with UEFI In-Reply-To: References: Message-ID: Hi Julia, thanks for the response. I have now verified that the server has Secure Boot set to Disabled. It happens on all our HPE Gen10 and Gen10+ servers that are booting with UEFI in the same configuration (Secure Boot Disabled). 
Unsigned initrd from ubuntu for example works with Kernel from IPA: I have also updated firmware of the server to the latest version, but it is still the same. I am going to try to drill deeper in IPA builder code and IPA initrd which seems not to go well with UEFI boot mode. I am grateful for any hint. Bests, Vuk From rigault.francois at gmail.com Tue May 11 19:28:42 2021 From: rigault.francois at gmail.com (Francois) Date: Tue, 11 May 2021 21:28:42 +0200 Subject: [ironic] IPA image does not want to boot with UEFI In-Reply-To: References: Message-ID: Hi! Out of curiosity, how did you generate the esp? Normally Ironic will build a grub.cfg file and include it into the iso, then it is loaded automatically (providing the grub_config_path is properly set) so you don't have anything to type by hand... That grub.cfg default template uses "linuxefi" and "initrdefi" (instead of linux/initrd you are using) which seem correct for most distributions (but depending on the way you built the esp, if you used grub from sources for example, linux/initrd would still be the right commands). Maybe you can try the "efi" variants. Best of luck! Francois (frigo) On Tue, 11 May 2021 at 20:45, Vuk Gojnic wrote: > > Hi Julia, thanks for the response. > > I have now verified that the server has Secure Boot set to Disabled. > It happens on all our HPE Gen10 and Gen10+ servers that are booting > with UEFI in the same configuration (Secure Boot Disabled). > > Unsigned initrd from ubuntu for example works with Kernel from IPA: > > I have also updated firmware of the server to the latest version, but > it is still the same. > > I am going to try to drill deeper in IPA builder code and IPA initrd > which seems not to go well with UEFI boot mode. > > I am grateful for any hint. > > Bests, > Vuk > From eblock at nde.ag Tue May 11 20:30:53 2021 From: eblock at nde.ag (Eugen Block) Date: Tue, 11 May 2021 20:30:53 +0000 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: <40522F3F-CDF7-4C28-A36A-9777BEACD031@iaa.es> Message-ID: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> Hi, so restart the volume service;-) systemctl restart openstack-cinder-volume.service Zitat von ManuParra : > Dear OpenStack community, > > I have encountered a problem a few days ago and that is that when > creating new volumes with: > > "openstack volume create --size 20 testmv" > > the volume creation status shows an error. If I go to the error log > detail it indicates: > > "Schedule allocate volume: Could not find any available weighted backend". > > Indeed then I go to the cinder log and it indicates: > > "volume service is down - host: rbd:volumes at ceph-rbd”. > > I check with: > > "openstack volume service list” in which state are the services > and I see that indeed this happens: > > > | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | > 2021-04-29T09:48:42.000000 | > > And stopped since 2021-04-29 ! > > I have checked Ceph (monitors,managers, osds. etc) and there are no > problems with the Ceph BackEnd, everything is apparently working. > > This happened after an uncontrolled outage.So my question is how do > I restart only cinder-volumes (I also have cinder-backup, > cinder-scheduler but they are ok). > > Thank you very much in advance. Regards. From skaplons at redhat.com Tue May 11 21:04:39 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 11 May 2021 23:04:39 +0200 Subject: [neutron] Specs to review Message-ID: <6256726.1pupk66BIt@p1> Hi, As we spoke on today's team meeting. 
I set review-priority = +1 for some of the specs which I think would be good to review. List is available at https:// tinyurl.com/22ct4dwx[1] Please take a look at those specs if You will have some time. Thx in advance. -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://tinyurl.com/22ct4dwx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From vuk.gojnic at gmail.com Tue May 11 21:08:13 2021 From: vuk.gojnic at gmail.com (Vuk Gojnic) Date: Tue, 11 May 2021 23:08:13 +0200 Subject: [ironic] IPA image does not want to boot with UEFI In-Reply-To: References: Message-ID: Hi Francois, Thanks for the reply. I am using esp from Ubuntu install iso. Intentionally go to Grub prompt to play with kernel parameters in attempt to debug. linuxefi/initrdefi had seemingly same behaviour like linux/initrd. What I actually just detected is that if I try to do “ls” of the device where kernel and initrd are residing it lists it correctly, but if I do “ls -la” it lists couple of files, but when it comes to initrd it freezes and I get again “Red Screen of Death” with same errors. Initrd that works just fine is 52MB big and uncompressed, while the one That blocks is 406MB big compressed. I started suspecting that the problem us is filesize limit. -Vuk Sent from my Telekom.de iPhone 14 prototype > On 11. May 2021, at 21:28, Francois wrote: > > Hi! Out of curiosity, how did you generate the esp? > Normally Ironic will build a grub.cfg file and include it into the > iso, then it is loaded automatically (providing the grub_config_path > is properly set) so you don't have anything to type by hand... That > grub.cfg default template uses "linuxefi" and "initrdefi" (instead of > linux/initrd you are using) which seem correct for most distributions > (but depending on the way you built the esp, if you used grub from > sources for example, linux/initrd would still be the right commands). > Maybe you can try the "efi" variants. > > Best of luck! > Francois (frigo) > > >> On Tue, 11 May 2021 at 20:45, Vuk Gojnic wrote: >> >> Hi Julia, thanks for the response. >> >> I have now verified that the server has Secure Boot set to Disabled. >> It happens on all our HPE Gen10 and Gen10+ servers that are booting >> with UEFI in the same configuration (Secure Boot Disabled). >> >> Unsigned initrd from ubuntu for example works with Kernel from IPA: >> >> I have also updated firmware of the server to the latest version, but >> it is still the same. >> >> I am going to try to drill deeper in IPA builder code and IPA initrd >> which seems not to go well with UEFI boot mode. >> >> I am grateful for any hint. >> >> Bests, >> Vuk >> From mparra at iaa.es Tue May 11 22:00:19 2021 From: mparra at iaa.es (ManuParra) Date: Wed, 12 May 2021 00:00:19 +0200 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> Message-ID: <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> Thanks, I have restarted the service and I see that after a few minutes then cinder-volume service goes down again when I check it with the command openstack volume service list. 
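(For reference, this is the check and the row I am looking at, trimmed to the relevant columns - the values are exactly what my deployment reports:

$ openstack volume service list
| cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | ... |

The service comes back up right after the restart and flips back to down a few minutes later.)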
The host/service that contains the cinder-volumes is rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the volumes. I have checked Ceph and the status of everything is correct, no errors or warnings. The error I have is that cinder can’t connect to rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. Kind regards. > On 11 May 2021, at 22:30, Eugen Block wrote: > > Hi, > > so restart the volume service;-) > > systemctl restart openstack-cinder-volume.service > > > Zitat von ManuParra : > >> Dear OpenStack community, >> >> I have encountered a problem a few days ago and that is that when creating new volumes with: >> >> "openstack volume create --size 20 testmv" >> >> the volume creation status shows an error. If I go to the error log detail it indicates: >> >> "Schedule allocate volume: Could not find any available weighted backend". >> >> Indeed then I go to the cinder log and it indicates: >> >> "volume service is down - host: rbd:volumes at ceph-rbd”. >> >> I check with: >> >> "openstack volume service list” in which state are the services and I see that indeed this happens: >> >> >> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 | >> >> And stopped since 2021-04-29 ! >> >> I have checked Ceph (monitors,managers, osds. etc) and there are no problems with the Ceph BackEnd, everything is apparently working. >> >> This happened after an uncontrolled outage.So my question is how do I restart only cinder-volumes (I also have cinder-backup, cinder-scheduler but they are ok). >> >> Thank you very much in advance. Regards. > > > > From laurentfdumont at gmail.com Tue May 11 23:18:24 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Tue, 11 May 2021 19:18:24 -0400 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> Message-ID: The default error messages for cinder-volume can be pretty vague. I would suggest enabling Debug for Cinder + service restart and seeing the error logs when the service goes up --> down. That should be in the cinder-volumes logs. On Tue, May 11, 2021 at 6:05 PM ManuParra wrote: > Thanks, I have restarted the service and I see that after a few minutes > then cinder-volume service goes down again when I check it with the command > openstack volume service list. > The host/service that contains the cinder-volumes is rbd:volumes at ceph-rbd > that is RDB in Ceph, so the problem does not come from Cinder, rather from > Ceph or from the RDB (Ceph) pools that stores the volumes. I have checked > Ceph and the status of everything is correct, no errors or warnings. > The error I have is that cinder can’t connect to rbd:volumes at ceph-rbd. > Any further suggestions? Thanks in advance. > Kind regards. > > > On 11 May 2021, at 22:30, Eugen Block wrote: > > > > Hi, > > > > so restart the volume service;-) > > > > systemctl restart openstack-cinder-volume.service > > > > > > Zitat von ManuParra : > > > >> Dear OpenStack community, > >> > >> I have encountered a problem a few days ago and that is that when > creating new volumes with: > >> > >> "openstack volume create --size 20 testmv" > >> > >> the volume creation status shows an error. 
If I go to the error log > detail it indicates: > >> > >> "Schedule allocate volume: Could not find any available weighted > backend". > >> > >> Indeed then I go to the cinder log and it indicates: > >> > >> "volume service is down - host: rbd:volumes at ceph-rbd”. > >> > >> I check with: > >> > >> "openstack volume service list” in which state are the services and I > see that indeed this happens: > >> > >> > >> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | > 2021-04-29T09:48:42.000000 | > >> > >> And stopped since 2021-04-29 ! > >> > >> I have checked Ceph (monitors,managers, osds. etc) and there are no > problems with the Ceph BackEnd, everything is apparently working. > >> > >> This happened after an uncontrolled outage.So my question is how do I > restart only cinder-volumes (I also have cinder-backup, cinder-scheduler > but they are ok). > >> > >> Thank you very much in advance. Regards. > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Tue May 11 23:21:09 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Tue, 11 May 2021 19:21:09 -0400 Subject: [largescale-sig][neutron] What driver are you using? In-Reply-To: References: Message-ID: I feel like it depends a lot on the scale/target usage (public vs private cloud). But at $dayjob, we are leveraging - vlans for external networking (linux-bridge + OVS) - vxlans for internal Openstack networks. We like the simplicity of vxlan with minimal overlay configuration. There are some scaling/performance issues with stuff like l2 population. VLANs are okay but it's hard to predict the next 5 years of growth. On Mon, May 10, 2021 at 8:34 AM Arnaud Morin wrote: > Hey large-scalers, > > We had a discusion in my company (OVH) about neutron drivers. > We are using a custom driver based on BGP for public networking, and > another custom driver for private networking (based on vlan). > > Benefits from this are obvious: > - we maintain the code > - we do what we want, not more, not less > - it fits perfectly to the network layer our company is using > - we have full control of the networking stack > > But it also have some downsides: > - we have to maintain the code... (rebasing, etc.) > - we introduce bugs that are not upstream (more code, more bugs) > - a change in code is taking longer, we have few people working on this > (compared to a community based) > - this is not upstream (so not opensource) > - we are not sharing (bad) > > So, we were wondering which drivers are used upstream in large scale > environment (not sure a vlan driver can be used with more than 500 > hypervisors / I dont know about vxlan or any other solution). > > Is there anyone willing to share this info? > > Thanks in advance! > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From DHilsbos at performair.com Wed May 12 01:43:06 2021 From: DHilsbos at performair.com (DHilsbos at performair.com) Date: Wed, 12 May 2021 01:43:06 +0000 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> Message-ID: <0670B960225633449A24709C291A52525115D369@COM01.performair.local> Is this a new cluster, or one that has been running for a while? Did you just setup integration with Ceph? This part: "rbd:volumes at ceph-rbd" doesn't look right to me. 
For me (Victoria / Nautilus) this looks like: :. name is configured in the cinder.conf with a [] section, and enabled_backends= in the [DEFAULT] section. cinder-volume-host is something that resolves to the host running openstack-cinder-volume.service. What version of OpenStack, and what version of Ceph are you running? Thank you, Dominic L. Hilsbos, MBA Vice President – Information Technology Perform Air International Inc. DHilsbos at PerformAir.com www.PerformAir.com -----Original Message----- From: ManuParra [mailto:mparra at iaa.es] Sent: Tuesday, May 11, 2021 3:00 PM To: Eugen Block Cc: openstack-discuss at lists.openstack.org Subject: Re: Restart cinder-volume with Ceph rdb Thanks, I have restarted the service and I see that after a few minutes then cinder-volume service goes down again when I check it with the command openstack volume service list. The host/service that contains the cinder-volumes is rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the volumes. I have checked Ceph and the status of everything is correct, no errors or warnings. The error I have is that cinder can’t connect to rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. Kind regards. > On 11 May 2021, at 22:30, Eugen Block wrote: > > Hi, > > so restart the volume service;-) > > systemctl restart openstack-cinder-volume.service > > > Zitat von ManuParra : > >> Dear OpenStack community, >> >> I have encountered a problem a few days ago and that is that when creating new volumes with: >> >> "openstack volume create --size 20 testmv" >> >> the volume creation status shows an error. If I go to the error log detail it indicates: >> >> "Schedule allocate volume: Could not find any available weighted backend". >> >> Indeed then I go to the cinder log and it indicates: >> >> "volume service is down - host: rbd:volumes at ceph-rbd”. >> >> I check with: >> >> "openstack volume service list” in which state are the services and I see that indeed this happens: >> >> >> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 | >> >> And stopped since 2021-04-29 ! >> >> I have checked Ceph (monitors,managers, osds. etc) and there are no problems with the Ceph BackEnd, everything is apparently working. >> >> This happened after an uncontrolled outage.So my question is how do I restart only cinder-volumes (I also have cinder-backup, cinder-scheduler but they are ok). >> >> Thank you very much in advance. Regards. > > > > From tkajinam at redhat.com Wed May 12 02:11:26 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Wed, 12 May 2021 11:11:26 +0900 Subject: [puppet][tripleo] Lint job fails in some modules because of the latest lint packages In-Reply-To: References: Message-ID: Adding TripleO because puppet-triple is also affected. I hope I've submitted lint fixes for all modules about which the lint job fails, but please let me know if you still see anything missing. https://review.opendev.org/q/topic:%22puppet-lint-latest%22+(status:open%20OR%20status:merged) I noticed that the lint job for puppet-tripleo is also failing against the latest lint package. I created a bug [1] and submitted a fix[2]. 
[1] https://launchpad.net/bugs/1928079 [2] https://review.opendev.org/c/openstack/puppet-tripleo/+/790676 On Tue, May 11, 2021 at 6:17 PM Takashi Kajinami wrote: > Hello, > > > I'd like to inform you that we are experiencing lint job failures in some > modules > since we removed pin of lint packages recently[1]. > I started submitting fixes for these failures[2], so please be careful > before you recheck the job. > > [1] > https://review.opendev.org/c/openstack/puppet-openstack_spec_helper/+/761925 > [2] > https://review.opendev.org/q/topic:%22puppet-lint-latest%22+(status:open%20OR%20status:merged) > > Thank you, > Takashi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From coolsvap at gmail.com Wed May 12 03:21:02 2021 From: coolsvap at gmail.com (=?UTF-8?B?yoLKjcmSz4HGnsSvxYIg0p7GsMi0xLfJksqByonJqA==?=) Date: Wed, 12 May 2021 08:51:02 +0530 Subject: [all] Pycharm Project of the Decade - OpenStack Message-ID: Hello All, OpenStack recently voted the PyCharm project of the decade. Here's a latest blog article published on Jetbrains [1] [1] https://www.jetbrains.com/company/customers/experience/openstack/ Best Regards, Swapnil Kulkarni coolsvap at gmail dot com From johnsomor at gmail.com Wed May 12 06:37:42 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 11 May 2021 23:37:42 -0700 Subject: [Octavia][Victoria] No service listening on port 9443 in the amphora instance In-Reply-To: References: <038c74c9-1365-0c08-3b5b-93b4d175dcb3@zylacomputing.com> <326471ef-287b-d937-a174-0b1ccbbd6273@zylacomputing.com> Message-ID: Answers inline below. Michael On Mon, May 10, 2021 at 5:15 PM Luke Camilleri wrote: > > Hi Michael and thanks a lot for the detailed answer below. > > I believe I have got most of this sorted out apart from some small issues below: > > If the o-hm0 interface gets the IP information from the DHCP server setup by neutron for the lb-mgmt-net, then the management node will always have 2 default gateways and this will bring along issues, the same DHCP settings when deployed to the amphora do not have the same issue since the amphora only has 1 IP assigned on the lb-mgmt-net. Can you please confirm this? The amphorae do not have issues with DHCP and gateways as we control the DHCP client configuration inside the amphora. It does only have one IP on the lb-mgmt-net, it will honor gateways provided by neutron for the lb-mgmt-net traffic, but a gateway is not required on the lb-mgmt-network unless you are routing the lb-mgmt-net traffic across subnets. > How does the amphora know where to locate the worker and housekeeping processes or does the traffic originate from the services instead? Maybe the addresses are "injected" from the config file? The worker and housekeeping processes only create connections to the amphora, they do not receive connections from them. The amphora send a heartbeat packet to the health manager endpoints every ten seconds by default. The list of valid health manager endpoints is included in the amphora agent configuration file that is injected into the service VM at boot time. It can be updated using the Octavia admin API for refreshing the amphora agent configuration. > Can you please confirm if the same floating IP concept runs from public (external) IP to the private (tenant) and from private to lb-mgmt-net please? Octavia does not use floating IPs. Users can create and assign floating IPs via neutron if they would like, but they are not necessary. 
Octavia VIPs can be created directly on neutron "external" networks, avoiding the NAT overhead of floating IPs. There is no practical reason to assign a floating IP to a port on the lb-mgmt-net as tenant traffic is never on or accessible from that network. > Thanks in advance for any feedback > > On 06/05/2021 22:46, Michael Johnson wrote: > > Hi Luke, > > 1. I agree that DHCP is technically unnecessary for the o-hm0 > interface if you can manage your address allocation on the network you > are using for the lb-mgmt-net. > I don't have detailed information about the Ubuntu install > instructions, but I suspect it was done to simplify the IPAM to be > managed by whatever is providing DHCP on the lb-mgmt-net provided (be > it neutron or some other resource on a provider network). > The lb-mgmt-net is simply a neutron network that the amphora > management address is on. It is routable and does not require external > access. The only tricky part to it is the worker, health manager, and > housekeeping processes need to be reachable from the amphora, and the > controllers need to reach the amphora over the network(s). There are > many ways to accomplish this. > > 2. See my above answer. Fundamentally the lb-mgmt-net is just a > neutron network that nova can use to attach an interface to the > amphora instances for command and control traffic. As long as the > controllers can reach TCP 9433 on the amphora, and the amphora can > send UDP 5555 back to the health manager endpoints, it will work fine. > > 3. Octavia, with the amphora driver, does not require any special > configuration in Neutron (beyond the advanced services RBAC policy > being available for the neutron service account used in your octavia > configuration file). The neutron_lbaas.conf and services_lbaas.conf > are legacy configuration files/settings that were used for > neutron-lbaas which is now end of life. See the wiki page for > information on the deprecation of neutron-lbaas: > https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation. > > Michael > > On Thu, May 6, 2021 at 12:30 PM Luke Camilleri > wrote: > > Hi Michael and thanks a lot for your help on this, after following your > steps the agent got deployed successfully in the amphora-image. > > I have some other queries that I would like to ask mainly related to the > health-manager/load-balancer network setup and IP assignment. First of > all let me point out that I am using a manual installation process, and > it might help others to understand the underlying infrastructure > required to make this component work as expected. 
> > 1- The installation procedure contains this step: > > $ sudo cp octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia > > which is later on called to assign the IP to the o-hm0 interface which > is connected to the lb-management network as shown below: > > $ sudo dhclient -v o-hm0 -cf /etc/dhcp/octavia > > Apart from having a dhcp config for a single IP seems a bit of an > overkill, using these steps is injecting an additional routing table > into the default namespace as shown below in my case: > > # route -n > Kernel IP routing table > Destination Gateway Genmask Flags Metric Ref Use > Iface > 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0 > 0.0.0.0 10.X.X.1 0.0.0.0 UG 100 0 0 ensX > 10.X.X.0 0.0.0.0 255.255.255.0 U 100 0 0 ensX > 169.254.169.254 172.16.0.100 255.255.255.255 UGH 0 0 0 o-hm0 > 172.16.0.0 0.0.0.0 255.240.0.0 U 0 0 0 o-hm0 > > Since the load-balancer management network does not need any external > connectivity (but only communication between health-manager service and > amphora-agent), why is a gateway required and why isn't the IP address > allocated as part of the interface creation script which is called when > the service is started or stopped (example below)? > > --- > > #!/bin/bash > > set -ex > > MAC=$MGMT_PORT_MAC > BRNAME=$BRNAME > > if [ "$1" == "start" ]; then > ip link add o-hm0 type veth peer name o-bhm0 > brctl addif $BRNAME o-bhm0 > ip link set o-bhm0 up > ip link set dev o-hm0 address $MAC > *** ip addr add 172.16.0.2/12 dev o-hm0 > ***ip link set o-hm0 mtu 1500 > ip link set o-hm0 up > iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT > elif [ "$1" == "stop" ]; then > ip link del o-hm0 > else > brctl show $BRNAME > ip a s dev o-hm0 > fi > > --- > > 2- Is there a possibility to specify a fixed vlan outside of tenant > range for the load balancer management network? > > 3- Are the configuration changes required only in neutron.conf or also > in additional config files like neutron_lbaas.conf and > services_lbaas.conf, similar to the vpnaas configuration? > > Thanks in advance for any assistance, but its like putting together a > puzzle of information :-) > > On 05/05/2021 20:25, Michael Johnson wrote: > > Hi Luke. > > Yes, the amphora-agent will listen on 9443 in the amphorae instances. > It uses TLS mutual authentication, so you can get a TLS response, but > it will not let you into the API without a valid certificate. A simple > "openssl s_client" is usually enough to prove that it is listening and > requesting the client certificate. > > I can't talk to the "openstack-octavia-diskimage-create" package you > found in centos, but I can discuss how to build an amphora image using > the OpenStack tools. > > If you get Octavia from git or via a release tarball, we provide a > script to build the amphora image. This is how we build our images for > the testing gates, etc. and is the recommended way (at least from the > OpenStack Octavia community) to create amphora images. 
> > https://opendev.org/openstack/octavia/src/branch/master/diskimage-create > > For CentOS 8, the command would be: > > diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 (3 > is the minimum disk size for centos images, you may want more if you > are not offloading logs) > > I just did a run on a fresh centos 8 instance: > git clone https://opendev.org/openstack/octavia > python3 -m venv dib > source dib/bin/activate > pip3 install diskimage-builder PyYAML six > sudo dnf install yum-utils > ./diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 > > This built an image. > > Off and on we have had issues building CentOS images due to issues in > the tools we rely on. If you run into issues with this image, drop us > a note back. > > Michael > > On Wed, May 5, 2021 at 9:37 AM Luke Camilleri > wrote: > > Hi there, i am trying to get Octavia running on a Victoria deployment on > CentOS 8. It was a bit rough getting to the point to launch an instance > mainly due to the load-balancer management network and the lack of > documentation > (https://docs.openstack.org/octavia/victoria/install/install.html) to > deploy this oN CentOS. I will try to fix this once I have my deployment > up and running to help others on the way installing and configuring this :-) > > At this point a LB can be launched by the tenant and the instance is > spawned in the Octavia project and I can ping and SSH into the amphora > instance from the Octavia node where the octavia-health-manager service > is running using the IP within the same subnet of the amphoras > (172.16.0.0/12). > > Unfortunately I keep on getting these errors in the log file of the > worker log (/var/log/octavia/worker.log): > > 2021-05-05 01:54:49.368 14521 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > to instance. Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.16.4.46', p > ort=9443): Max retries exceeded with url: // (Caused by > NewConnectionError(' at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] > Connection ref > used',)) > > 2021-05-05 01:54:54.374 14521 ERROR > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries > (currently set to 120) exhausted. The amphora is unavailable. Reason: > HTTPSConnectionPool(host='172.16 > .4.46', port=9443): Max retries exceeded with url: // (Caused by > NewConnectionError(' at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] Conne > ction refused',)) > > 2021-05-05 01:54:54.374 14521 ERROR > octavia.controller.worker.v1.tasks.amphora_driver_tasks [-] Amphora > compute instance failed to become reachable. This either means the > compute driver failed to fully boot the > instance inside the timeout interval or the instance is not reachable > via the lb-mgmt-net.: > octavia.amphorae.driver_exceptions.exceptions.TimeOutException: > contacting the amphora timed out > > obviously the instance is deleted then and the task fails from the > tenant's perspective. > > The main issue here is that there is no service running on port 9443 on > the amphora instance. I am assuming that this is in fact the > amphora-agent service that is running on the instance which should be > listening on this port 9443 but the service does not seem to be up or > not installed at all. 
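> (For reference, this is roughly how I am checking the port from the
> controller, using the amphora IP from the worker log above - just a
> plain TLS probe, nothing Octavia-specific:
>
> $ openssl s_client -connect 172.16.4.46:9443
>
> and it fails immediately with connection refused, which matches the
> worker log.)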
> > To create the image I have installed the CentOS package > "openstack-octavia-diskimage-create" which provides the utility > disk-image-create but from what I can conclude the amphora-agent is not > being installed (thought this was done automatically by default :-( ) > > Can anyone let me know if the amphora-agent is what gets queried on port > 9443 ? > > If the agent is not installed/injected by default when building the > amphora image? > > The command to inject the amphora-agent into the amphora image when > using the disk-image-create command? > > Thanks in advance for any assistance > > From skaplons at redhat.com Wed May 12 07:22:31 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 12 May 2021 09:22:31 +0200 Subject: [neutron][stadium][stable] Proposal to make stable/ocata and stable/pike branches EOL In-Reply-To: <55ff12c8-1e9c-16b5-578b-834d1ccf2563@est.tech> References: <15209060.0YdeOJI3E6@p1> <55ff12c8-1e9c-16b5-578b-834d1ccf2563@est.tech> Message-ID: <6767170.WaEBY35tY3@p1> Hi, Dnia środa, 5 maja 2021 20:35:48 CEST Előd Illés pisze: > Hi, > > Ocata is unfortunately unmaintained for a long time as some general test > jobs are broken there, so as a stable-maint-core member I support to tag > neutron's stable/ocata as End of Life. After the branch is tagged, > please ping me and I can arrange the deletion of the branch. > > For Pike, I volunteered at the PTG in 2020 to help with reviews there, I > still keep that offer, however I am clearly not enough to keep it > maintained, besides backports are not arriving for stable/pike in > neutron. Anyway, if the gate is functional there, then I say we could > keep it open (but as far as I see how gate situation is worsen now, as > more and more things go wrong, I don't expect that will take long). If > not, then I only ask that let's do the EOL'ing first with Ocata and when > it is done, then continue with neutron's stable/pike. > > For the process please follow the steps here: > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life > (with the only exception, that in the last step, instead of infra team, > please turn to me/release team - patch for the documentation change is > on the way: > https://review.opendev.org/c/openstack/project-team-guide/+/789932 ) Thx. I just proposed patch https://review.opendev.org/c/openstack/releases/+/790904[1] to make ocata-eol in all neutron projects. > > Thanks, > > Előd > > On 2021. 05. 05. 16:13, Slawek Kaplonski wrote: > > Hi, > > > > > > I checked today that stable/ocata and stable/pike branches in both > > Neutron and neutron stadium projects are pretty inactive since long time. > > > > * according to [1], last patch merged patch in Neutron for stable/pike > > was in July 2020 and in ocata October 2019, > > > > * for stadium projects, according to [2] it was September 2020. > > > > > > According to [3] and [4] there are no opened patches for any of those > > branches for Neutron and any stadium project except neutron-lbaas. > > > > > > So based on that info I want to propose that we will close both those > > branches are EOL now and before doing that, I would like to know if > > anyone would like to keep those branches to be open still. 
> > > > > > [1] > > https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ ocata+OR+branch:stable/pike)+status:merged > > > > > > [2] > > https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networki > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged> > > > > [3] > > https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ ocata+OR+branch:stable/pike)+status:open > > > > > > [4] > > https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networki > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open> > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://review.opendev.org/c/openstack/releases/+/790904 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From pierre at stackhpc.com Wed May 12 08:00:17 2021 From: pierre at stackhpc.com (Pierre Riteau) Date: Wed, 12 May 2021 10:00:17 +0200 Subject: [blazar] Proposing Jason Anderson as new core reviewer In-Reply-To: References: Message-ID: No objections from anyone so I have added Jason as a member of blazar-core and blazar-release. Welcome to the team Jason! On Thu, 6 May 2021 at 17:53, Pierre Riteau wrote: > > Hello, > > Jason has been involved with the Blazar project for several years now. > He has submitted useful new features and fixed bugs, and even wrote a > couple of specs. I am proposing to add him as a core reviewer. If > there is no objection, I will grant him +2 rights. > > Thank you Jason for your contributions! > > Best wishes, > Pierre Riteau (priteau) From ssbarnea at redhat.com Wed May 12 09:05:57 2021 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Wed, 12 May 2021 02:05:57 -0700 Subject: [all][infra][qa] Retiring Logstash, Elasticsearch, subunit2sql, and Health In-Reply-To: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> References: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> Message-ID: I just came back from vacation and reading this does worry me a lot. The ES server is a crucial piece used to identify zuul job failures on openstack. LOTS of them have causes external to the job itself, either infra, packaging (os or pip) or unavailability of some services used during build. Without being able to query specific error messages across multiple jobs we will be in a vary bad spot as we loose the ability to look outside a single project. TripleO health check project relies on being able to query ER from both opendev and rdo in order to easy identification of problems. Maybe instead of dropping we should rethink what it is supposed to index and not, set some hard limits per job and scale down the deployment. IMHO, one of the major issues with it is that it does try to index maybe too much w/o filtering noisy output before indexing. If we can delay making a decision a little bit so we can investigate all available options it would really be great. 
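As a rough illustration of the kind of pre-filtering I mean - dropping debug noise from a job log before anything is submitted for indexing, something along the lines of:

$ grep -v ' DEBUG ' job-output.txt > job-output.to-index.txt

Keeping only warning/error level lines would already cut the indexed volume considerably without losing the error signatures we actually query across projects.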
I worth noting that I personally do not have a special love for ES but I do value a lot what it does. I am also pragmatic and I would not be very upset to make use of a SaaS service as an alternative, especially as I recognize how costly is to run and maintain an instance. Maybe we can find a SaaS log processing vendor willing to sponsor OpenStack? In the past I used DataDog for monitoring but they also offer log processing and they have a program for open-source but I am not sure they would be willing to process that amount of data for us. Cheers Sorin Sbarnea Red Hat On 10 May 2021 at 18:34:40, Clark Boylan wrote: > Hello everyone, > > Xenial has recently reached the end of its life. Our > logstash+kibana+elasticsearch and subunit2sql+health data crunching > services all run on Xenial. Even without the distro platform EOL concerns > these services are growing old and haven't received the care they need to > keep running reliably. > > Additionally these services represent a large portion of our resource > consumption: > > * 6 x 16 vcpu + 60GB RAM + 1TB disk Elasticsearch servers > * 20 x 4 vcpu + 4GB RAM logstash-worker servers > * 1 x 2 vcpu + 2GB RAM logstash/kibana central server > * 2 x 8 vcpu + 8GB RAM subunit-worker servers > * 64GB RAM + 500GB disk subunit2sql trove db server > * 1 x 4 vcpu + 4GB RAM health server > > To put things in perspective, they account for more than a quarter of our > control plane servers, occupying over a third of our block storage and in > excess of half the total memory footprint. > > The OpenDev/OpenStack Infra team(s) don't seem to have the time available > currently to do the major lifting required to bring these services up to > date. I would like to propose that we simply turn them off. All of these > services operate off of public data that will not be going away > (specifically job log content). If others are interested in taking this on > they can hook into this data and run their own processing pipelines. > > I am sure not everyone will be happy with this proposal. I get it. I came > up with the idea for the elasticsearch job log processing way back at the > San Diego summit. I spent many many many hours since working to get it up > and running and to keep it running. But pragmatism means that my efforts > and the team's efforts are better spent elsewhere. > > I am happy to hear feedback on this. Thank you for your time. > > Clark > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Wed May 12 09:49:43 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 12 May 2021 11:49:43 +0200 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> Message-ID: <20210512094943.nfttmyxoss3zut2n@localhost> On 12/05, ManuParra wrote: > Thanks, I have restarted the service and I see that after a few minutes then cinder-volume service goes down again when I check it with the command openstack volume service list. > The host/service that contains the cinder-volumes is rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the volumes. I have checked Ceph and the status of everything is correct, no errors or warnings. > The error I have is that cinder can’t connect to rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. 
> Kind regards. > Hi, You are most likely using an older release, have a high number of cinder RBD volumes, and have not changed configuration option "rbd_exclusive_cinder_pool" from its default "false" value. Please add to your driver's section in cinder.conf the following: rbd_exclusive_cinder_pool = true And restart the service. Cheers, Gorka. > > On 11 May 2021, at 22:30, Eugen Block wrote: > > > > Hi, > > > > so restart the volume service;-) > > > > systemctl restart openstack-cinder-volume.service > > > > > > Zitat von ManuParra : > > > >> Dear OpenStack community, > >> > >> I have encountered a problem a few days ago and that is that when creating new volumes with: > >> > >> "openstack volume create --size 20 testmv" > >> > >> the volume creation status shows an error. If I go to the error log detail it indicates: > >> > >> "Schedule allocate volume: Could not find any available weighted backend". > >> > >> Indeed then I go to the cinder log and it indicates: > >> > >> "volume service is down - host: rbd:volumes at ceph-rbd”. > >> > >> I check with: > >> > >> "openstack volume service list” in which state are the services and I see that indeed this happens: > >> > >> > >> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 | > >> > >> And stopped since 2021-04-29 ! > >> > >> I have checked Ceph (monitors,managers, osds. etc) and there are no problems with the Ceph BackEnd, everything is apparently working. > >> > >> This happened after an uncontrolled outage.So my question is how do I restart only cinder-volumes (I also have cinder-backup, cinder-scheduler but they are ok). > >> > >> Thank you very much in advance. Regards. > > > > > > > > > > From srelf at ukcloud.com Wed May 12 11:30:24 2021 From: srelf at ukcloud.com (Steven Relf) Date: Wed, 12 May 2021 11:30:24 +0000 Subject: [CINDER] - RBD backend reporting In-Reply-To: References: Message-ID: Hey list.   Currently when using an RBD backend using the default settings, total_capacity_gb is reported as MAX_AVAIL + USED bytes, converted into GiB. This to mean seems a little odd, as I would expect that total_capacity_gb should report the total size in GiB of the backend cluster.   This of course can be fixed by adding "report_dynamic_total_capacity = false" into the cinder.conf section for the rbd backend. This works fine for ceph clusters where all pools consume from a single disk type/root. But in clusters where you have multiple root's or device types it does not work correctly.   Is this proving to be a pain point for anyone else, or is it just me, and if it is proving a problem for others, im happy to write a patch.   Im thinking something that gets the pools crushrule and works out the total_capacity_gb, based off the total available capacity of a pool based on its root/crushrules.   Rgds Steve. The future has already arrived. It's just not evenly distributed yet - William Gibson From vuk.gojnic at gmail.com Wed May 12 12:30:47 2021 From: vuk.gojnic at gmail.com (Vuk Gojnic) Date: Wed, 12 May 2021 14:30:47 +0200 Subject: [ironic] IPA image does not want to boot with UEFI In-Reply-To: References: Message-ID: Hello everybody, I have finally found the root cause of the problem. It is indeed in the size of "initramfs" that ironic-python-agent-builder produces. They are all normally 350-420 MB large (compressed). 
However, since boot protocol 2.03 kernel explicitly limits highest initrd address available to the bootloader (https://www.kernel.org/doc/Documentation/x86/boot.txt), thus limits the size of initrd that could be used. Therefore none of production IPA initramfs images from official location: https://tarballs.openstack.org/ironic-python-agent/dib/files/, nor custom made by ironic-python-agent-builder can not boot. The bootloader is just crashing on them. When I turned to TinyIPA from here: https://tarballs.openstack.org/ironic-python-agent/tinyipa/files/ (which are explicitly marked not for production), it worked well and booted the IPA. So I will proceed with Tiny, but it might be useful to give hint of to others. Thanks anyway for support from everybody here. See you! -Vuk They are in the range of 400MB On Tue, May 11, 2021 at 11:08 PM Vuk Gojnic wrote: > > Hi Francois, > > Thanks for the reply. > > I am using esp from Ubuntu install iso. Intentionally go to Grub prompt to play with kernel parameters in attempt to debug. > > linuxefi/initrdefi had seemingly same behaviour like linux/initrd. > > What I actually just detected is that if I try to do “ls” of the device where kernel and initrd are residing it lists it correctly, but if I do “ls -la” it lists couple of files, but when it comes to initrd it freezes and I get again “Red Screen of Death” with same errors. > > Initrd that works just fine is 52MB big and uncompressed, while the one That blocks is 406MB big compressed. I started suspecting that the problem us is filesize limit. > > -Vuk > > Sent from my Telekom.de iPhone 14 prototype > > > On 11. May 2021, at 21:28, Francois wrote: > > > > Hi! Out of curiosity, how did you generate the esp? > > Normally Ironic will build a grub.cfg file and include it into the > > iso, then it is loaded automatically (providing the grub_config_path > > is properly set) so you don't have anything to type by hand... That > > grub.cfg default template uses "linuxefi" and "initrdefi" (instead of > > linux/initrd you are using) which seem correct for most distributions > > (but depending on the way you built the esp, if you used grub from > > sources for example, linux/initrd would still be the right commands). > > Maybe you can try the "efi" variants. > > > > Best of luck! > > Francois (frigo) > > From fungi at yuggoth.org Wed May 12 13:17:13 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 12 May 2021 13:17:13 +0000 Subject: [all][infra][qa] Retiring Logstash, Elasticsearch, subunit2sql, and Health In-Reply-To: References: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> Message-ID: <20210512131713.pmdr7zhgsaz52ryk@yuggoth.org> On 2021-05-12 02:05:57 -0700 (-0700), Sorin Sbarnea wrote: [...] > TripleO health check project relies on being able to query ER from > both opendev and rdo in order to easy identification of problems. Since you say RDO has a similar setup, could they just expand to start indexing our logs? As previously stated, doing that doesn't require any special access to our infrastructure. > Maybe instead of dropping we should rethink what it is supposed to > index and not, set some hard limits per job and scale down the > deployment. IMHO, one of the major issues with it is that it does > try to index maybe too much w/o filtering noisy output before > indexing. 
Reducing how much we index doesn't solve the most pressing problem, which is that we need to upgrade the underlying operating system, therefore replace the current current configuration management which won't work on newer platforms, and also almost certainly upgrade versions of the major components in use for it. Nobody has time to do that, at least nobody who has heard our previous cries for help. > If we can delay making a decision a little bit so we can > investigate all available options it would really be great. This thread hasn't set any timeline for stopping the service, not yet anyway. > I worth noting that I personally do not have a special love for ES > but I do value a lot what it does. I am also pragmatic and I would > not be very upset to make use of a SaaS service as an alternative, > especially as I recognize how costly is to run and maintain an > instance. [...] It's been pointed out that OVH has a similar-sounding service, if someone is interested in experimenting with it: https://www.ovhcloud.com/en-ca/data-platforms/logs/ The case with this, and I think with any SaaS solution, is that there would still need to be a separate ingestion mechanism to identify when new logs are available, postprocess them to remove debug lines, and then feed them to the indexing service at the provider... something our current team doesn't have time to design and manage. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From senrique at redhat.com Wed May 12 13:38:14 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 12 May 2021 10:38:14 -0300 Subject: [cinder] Bug deputy report for week of 2021-05-12 Message-ID: Hello, This is a bug report from 2021-05-05 to 2021-05-12. You're welcome to join the next Cinder Bug Meeting later today. Weekly on Wednesday at 1500 UTC on #openstack-cinder Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ------------------------------------------------------------------------------------------------------------ Critical: - https://bugs.launchpad.net/cinder/+bug/1928083 : "conditional_update broken in sqlalchemy 1.4". Assigned to Gorka Eguileor. High: Medium: - https://bugs.launchpad.net/cinder/+bug/1927784 : "NetApp ONTAP Failing Create FlexVol Pool Replica". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1927783 : "NetApp ONTAP Failing Create FlexVol Replica". Unassigned. - https://bugs.launchpad.net/os-brick/+bug/1928065: "Fails to create vm with nvme native multipath". Unassigned. Low:- Incomplete:- Cheers, Sofi -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From elfosardo at gmail.com Wed May 12 14:54:11 2021 From: elfosardo at gmail.com (Riccardo Pittau) Date: Wed, 12 May 2021 16:54:11 +0200 Subject: [ironic] Announcing retirement of sushy-cli Message-ID: Hello ironicers! After a brief exchange in the ML and a discussion during the latest ironic meeting on monday, the community has agreed to retire the sushy-cli project. We'll start the procedure in the next days aiming to conclude the retirement during the xena cycle. Thank you, Cheers, Riccardo -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jean-francois.taltavull at elca.ch Wed May 12 15:15:35 2021 From: jean-francois.taltavull at elca.ch (Taltavull Jean-Francois) Date: Wed, 12 May 2021 15:15:35 +0000 Subject: [Octavia dashboard] Impossible to fill the subnet field when creating a load balancer Message-ID: <37fc26ac7ba043418540c022c6d30e1b@elca.ch> Hi All, On my OSA Victoria deployment (on Ubuntu 20.04), the "Create Load Balancer" form does not allow me to fill the mandatory "subnet" field and therefore I can't create a load balancer with Horizon. On the other hand, Octavia works fine and I can create load balancers with the CLI, Terraform, etc. Has one of you already faced such a situation ? Best regards, Jean-François From dtantsur at redhat.com Wed May 12 15:36:24 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 12 May 2021 17:36:24 +0200 Subject: [ironic] IPA image does not want to boot with UEFI In-Reply-To: References: Message-ID: I'm glad that it worked for you! Before others follow your advice: the difference in size in DIB builds and tinyIPA is mostly because of firmware and kernel modules. If tinyIPA does not work for you or behaves in a weird way (no disks detected, some NICs not detected), then you're stuck with DIB builds. Vuk, there is one more option you could exercise. IPA-builder supports an --lzma flag to pack the initramfs with a more efficient algorithm: https://opendev.org/openstack/ironic-python-agent-builder/src/branch/master/ironic_python_agent_builder/__init__.py#L56 . Dmitry On Wed, May 12, 2021 at 2:33 PM Vuk Gojnic wrote: > Hello everybody, > > I have finally found the root cause of the problem. It is indeed in > the size of "initramfs" that ironic-python-agent-builder produces. > They are all normally 350-420 MB large (compressed). However, since > boot protocol 2.03 kernel explicitly limits highest initrd address > available to the bootloader > (https://www.kernel.org/doc/Documentation/x86/boot.txt), thus limits > the size of initrd that could be used. Therefore none of production > IPA initramfs images from official location: > https://tarballs.openstack.org/ironic-python-agent/dib/files/, nor > custom made by ironic-python-agent-builder can not boot. The > bootloader is just crashing on them. > > When I turned to TinyIPA from here: > https://tarballs.openstack.org/ironic-python-agent/tinyipa/files/ > (which are explicitly marked not for production), it worked well and > booted the IPA. So I will proceed with Tiny, but it might be useful to > give hint of to others. > > Thanks anyway for support from everybody here. > > See you! > > -Vuk > > They are in the range of 400MB > > On Tue, May 11, 2021 at 11:08 PM Vuk Gojnic wrote: > > > > Hi Francois, > > > > Thanks for the reply. > > > > I am using esp from Ubuntu install iso. Intentionally go to Grub prompt > to play with kernel parameters in attempt to debug. > > > > linuxefi/initrdefi had seemingly same behaviour like linux/initrd. > > > > What I actually just detected is that if I try to do “ls” of the device > where kernel and initrd are residing it lists it correctly, but if I do “ls > -la” it lists couple of files, but when it comes to initrd it freezes and I > get again “Red Screen of Death” with same errors. > > > > Initrd that works just fine is 52MB big and uncompressed, while the one > That blocks is 406MB big compressed. I started suspecting that the problem > us is filesize limit. > > > > -Vuk > > > > Sent from my Telekom.de iPhone 14 prototype > > > > > On 11. 
May 2021, at 21:28, Francois > wrote: > > > > > > Hi! Out of curiosity, how did you generate the esp? > > > Normally Ironic will build a grub.cfg file and include it into the > > > iso, then it is loaded automatically (providing the grub_config_path > > > is properly set) so you don't have anything to type by hand... That > > > grub.cfg default template uses "linuxefi" and "initrdefi" (instead of > > > linux/initrd you are using) which seem correct for most distributions > > > (but depending on the way you built the esp, if you used grub from > > > sources for example, linux/initrd would still be the right commands). > > > Maybe you can try the "efi" variants. > > > > > > Best of luck! > > > Francois (frigo) > > > > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From antonio at getnexar.com Wed May 12 17:55:21 2021 From: antonio at getnexar.com (Antonio Gomez) Date: Wed, 12 May 2021 12:55:21 -0500 Subject: SNS like in Swift Message-ID: Hi all, I'm trying to achieve an event based action based on uploading an Object to Swift, similar to SNS on AWS - where a file is uploaded and then some action happens. I've looked into https://github.com/openstack-archive/qinling due to https://www.youtube.com/watch?v=K2SiMZllN_A&t=30s&ab_channel=LingxianKong which shows how a function is triggered when a file is uploaded to Swift. However I see the project has been archived and not maintained and was wondering if there is something that maybe I can use? Thanks in advance, -- *Antonio Gomez * -------------- next part -------------- An HTML attachment was scrubbed... URL: From vuk.gojnic at gmail.com Wed May 12 18:09:56 2021 From: vuk.gojnic at gmail.com (Vuk Gojnic) Date: Wed, 12 May 2021 20:09:56 +0200 Subject: [ironic] IPA image does not want to boot with UEFI In-Reply-To: References: Message-ID: Hi Dmitry, Thanks for additional tipps. When investigating the initrd I have noticed that the most of the space goes on firmware and modules/drivers. If we notice something not working with TinyIPA we can probably cherry-pick the modules that we need and leave everything else out and that way get smaller image. I have another question though - do you know how could we make Kernel/Grub accept to boot large initrd? How are other folks doing it? I assume not everybody is just using TinyIPA for production... Tnx! Vuk On Wed, May 12, 2021 at 5:36 PM Dmitry Tantsur wrote: > > I'm glad that it worked for you! > > Before others follow your advice: the difference in size in DIB builds and tinyIPA is mostly because of firmware and kernel modules. If tinyIPA does not work for you or behaves in a weird way (no disks detected, some NICs not detected), then you're stuck with DIB builds. > > Vuk, there is one more option you could exercise. IPA-builder supports an --lzma flag to pack the initramfs with a more efficient algorithm: https://opendev.org/openstack/ironic-python-agent-builder/src/branch/master/ironic_python_agent_builder/__init__.py#L56. > > Dmitry > From lucasagomes at gmail.com Wed May 12 18:26:24 2021 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Wed, 12 May 2021 19:26:24 +0100 Subject: [all][neutron][nova][ironic][cinder][keystone][glance][swift] OVN is now default in DevStack Message-ID: Hi, As of today, OVN is the default network backend driver in DevStack [0]. 
This effort has been discussed in previous PTGs as well as emails to this mail list for a few cycles and now we are happy to announce that it's complete! We are aware that OpenStack is big and it has many projects included in it so, we tried to make this transition as smooth as possible for each other but, in case this change breaks your project CI in any way know that there's an easy fix for that which is explicitly enabling the ML2/OVS driver back until we can better investigate the problem. Here are some examples of patches doing it: [1][2][3]. Also, I would like to reinforce that this DOES NOT mean that the ML2/OVS driver will be deprecated, both drivers are and will be actively maintained by the Neutron community. Big thanks to everyone that participated in this effort! [0] https://review.opendev.org/c/openstack/devstack/+/735097/ [1] https://review.opendev.org/c/openstack/ironic/+/739945 [2] https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/790928 [3] https://review.opendev.org/c/openstack/metalsmith/+/749346 Cheers, Lucas From ignaziocassano at gmail.com Wed May 12 18:46:20 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 12 May 2021 20:46:20 +0200 Subject: [Openstack][train][neutron] centos 7 after live migration instances stop to respond to ping Message-ID: Hello everyone, After migrating from stein to train, I am facing again with live migration issues because live migrated vm stop to respond to ping. On Stein I solved following a workaround for enabling legacy port binding. On centos 7 train I have not found any workaround for solving this issue yet. I think it is very important issue because train is the last release for centos 7 and it could be a bridge for moving from centos 7 to centos 8. I tried to talk on irc but seems some patches are not backported to train. So the question is: which is the strategy for moving from stein to a newer release ? On which version of openstack the multiport binding works fine in live migration case ? How can upgrade from stein to an openstack release where I can forget the above issue? Many thanks and Regards. Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Wed May 12 19:43:16 2021 From: amy at demarco.com (Amy Marrich) Date: Wed, 12 May 2021 14:43:16 -0500 Subject: Wallaby RDO Release Announcement Message-ID: If you're having trouble with the formatting, this release announcement isavailable online at https://blogs.rdoproject.org/2021/05/rdo-wallaby-released/ --- *RDO Wallaby Released* The RDO community is pleased to announce the general availability of the RDO build for OpenStack Wallaby for RPM-based distributions, CentOS Stream and Red Hat Enterprise Linux. RDO is suitable for building private, public, and hybrid clouds. Wallaby is the 23rd release from the OpenStack project, which is the work of more than 1,000 contributors from around the world. The release is already available on the CentOS mirror network at http://mirror.centos.org/centos/8 -stream /cloud/x86_64/openstack- wallaby / . The RDO community project curates, packages, builds, tests and maintains a complete OpenStack component set for RHEL and CentOS Stream and is a member of the CentOS Cloud Infrastructure SIG. The Cloud Infrastructure SIG focuses on delivering a great user experience for CentOS users looking to build and maintain their own on-premise, public or hybrid clouds. 
All work on RDO and on the downstream release, Red Hat OpenStack Platform, is 100% open source, with all code changes going upstream first. PLEASE NOTE: RDO Wallaby provides packages for CentOS Stream 8 and Python 3 only. Please use the Victoria release for CentOS8. For CentOS7 and python 2.7, please use the Train release. *Interesting things in the Wallaby release include:* - RBAC supported added in multiple projects including Designate, Glance, Horizon, Ironic, and Octavia - Glance added support for distributed image import - Ironic added deployment and cleaning enhancements including UEFI Partition Image handling, NVMe Secure Erase, per-instance deployment driver interface overrides, deploy time “deploy_steps”, and file injection. - Kuryr added nested mode with node VMs running in multiple subnets is now available. To use that functionality a new option [pod_vif_nested]worker_nodes_subnets is introduced accepting multiple Subnet IDs. - Manila added the ability for Operators to now set maximum and minimum share sizes as extra specifications on share types. - Neutron added a new subnet type network:routed is now available. IPs of this subnet type can be advertised with BGP over a provider network. - TripleO moved network and network port creation out of the Heat stack and into the baremetal provisioning workflow. Other highlights of the broader upstream OpenStack project may be read via https://releases.openstack.org/wallaby/highlights.html *Contributors* - During the Wallaby cycle, we saw the following new RDO contributors: - Adriano Petrich - Ananya Banerjee - Artom Lifshitz - Attila Fazekas - Brian Haley - David J Peacock - Jason Joyce - Jeremy Freudberg - Jiri Podivin - Martin Kopec - Waleed Mousa Welcome to all of you and Thank You So Much for participating! But we wouldn’t want to overlook anyone. A super massive Thank You to all 58 contributors who participated in producing this release. This list includes commits to rdo-packages, rdo-infra, and redhat-website repositories: - Adriano Petrich - Alex Schultz - Alfredo Moralejo - Amol Kahat - Amy Marrich - Ananya Banerjee - Artom Lifshitz - Arx Cruz - Attila Fazekas - Bhagyashri Shewale - Brian Haley - Cédric Jeanneret - Chandan Kumar - Daniel Pawlik - David J Peacock - Dmitry Tantsur - Emilien Macchi - Eric Harney - Fabien Boucher - frenzyfriday - Gabriele Cerami - Gael Chamoulaud - Grzegorz Grasza - Harald Jensas - Jason Joyce - Javier Pena - Jeremy Freudberg - Jiri Podivin - Joel Capitao - Kevin Carter - Luigi Toscano - Marc Dequenes - Marios Andreou - Martin Kopec - Mathieu Bultel - Matthias Runge - Mike Turek - Nicolas Hicher - Pete Zaitcev - Pooja Jadhav - Rabi Mishra - Riccardo Pittau - Roman Gorshunov - Ronelle Landy - Sagi Shnaidman - Sandeep Yadav - Slawek Kaplonski - Sorin Sbarnea - Steve Baker - Takashi Kajinami - Tristan Cacqueray - Waleed Mousa - Wes Hayutin - Yatin Karel *The Next Release Cycle* At the end of one release, focus shifts immediately to the next release i.e Xena. *Get Started* There are three ways to get started with RDO. To spin up a proof of concept cloud, quickly, and on limited hardware, try an All-In-One Packstack installation. You can run RDO on a single node to get a feel for how it works. For a production deployment of RDO, use TripleO and you’ll be running a production cloud in short order. Finally, for those that don’t have any hardware or physical resources, there’s the OpenStack Global Passport Program. 
This is a collaborative effort between OpenStack public cloud providers to let you experience the freedom, performance and interoperability of open source infrastructure. You can quickly and easily gain access to OpenStack infrastructure via trial programs from participating OpenStack public cloud providers around the world. *Get Help* The RDO Project has our users at lists.rdoproject.org for RDO-specific users and operators. For more developer-oriented content we recommend joining the dev at lists.rdoproject.org mailing list. Remember to post a brief introduction about yourself and your RDO story. The mailing lists archives are all available at https://mail.rdoproject.org. You can also find extensive documentation on RDOproject.org. The #rdo channel on Freenode IRC is also an excellent place to find and give help. We also welcome comments and requests on the CentOS devel mailing list and the CentOS and TripleO IRC channels (#centos, #centos-devel, and #tripleo on irc.freenode.net), however we have a more focused audience within the RDO venues. *Get Involved* To get involved in the OpenStack RPM packaging effort, check out the RDO contribute pages, peruse the CentOS Cloud SIG page, and inhale the RDO packaging documentation. Join us in #rdo and #tripleo on the Freenode IRC network and follow us on Twitter @RDOCommunity. You can also find us on Facebook and YouTube. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbaker at redhat.com Wed May 12 20:56:37 2021 From: sbaker at redhat.com (Steve Baker) Date: Thu, 13 May 2021 08:56:37 +1200 Subject: [baremetal-sig][ironic] Tue May 11, 2021, 2pm UTC: Why Ironic? In-Reply-To: <4e754784-6574-b173-74d2-d4758716c157@cern.ch> References: <4e754784-6574-b173-74d2-d4758716c157@cern.ch> Message-ID: <03b84a38-64bf-bb19-92d3-f7e3c5f3b6af@redhat.com> On 7/05/21 9:09 pm, Arne Wiebalck wrote: > Dear all, > > The Bare Metal SIG will meet next week Tue May 11, 2021, > at 2pm UTC on zoom. > > The meeting will feature a "topic-of-the-day" presentation > by Arne Wiebalck (arne_wiebalck) on > >   "Why Ironic? (The Case of Ironic in CERN IT)" > > As usual, all details on https://etherpad.opendev.org/p/bare-metal-sig > The recording for this presentation is now available at: https://youtu.be/SskdCxLvjiw All presentations are available in the Baremetal SIG Series playlist https://www.youtube.com/playlist?list=PLKqaoAnDyfgoBFAjUvZGjKXQjogWZBLL_ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mparra at iaa.es Wed May 12 21:12:29 2021 From: mparra at iaa.es (ManuParra) Date: Wed, 12 May 2021 23:12:29 +0200 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: <0670B960225633449A24709C291A52525115D369@COM01.performair.local> References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> <0670B960225633449A24709C291A52525115D369@COM01.performair.local> Message-ID: <7792BC84-057C-4EF6-AABF-800B5D65C9DB@iaa.es> Hi Laurent, I included the Debug=True mode for Cinder-Volumes and Cinder-Scheduler, and the result is that I now have the following in the Debug: DEBUG cinder.volume.drivers.rbd [req-a0cb90b6-ca5d-496c-9a0b-e2296f1946ca - - - - -] connecting to cinder at ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431 DEBUG cinder.volume.drivers.rbd [req-a0cb90b6-ca5d-496c-9a0b-e2296f1946ca - - - - -] connecting to cinder at ceph (conf=/etc/ceph/ceph.conf, timeout=-1). 
_do_conn /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431 DEBUG cinder.volume.drivers.rbd [req-a0cb90b6-ca5d-496c-9a0b-e2296f1946ca - - - - -] connecting to cinder at ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431 Every time a new volume is requested cinder-volumes is called which is a ceph-rbd pool. I have restarted all cinder services on the three controller/monitor nodes I have and also restarted all ceph daemons, but I still see that when doing openstack volume service list +------------------+----------------------+------+---------+-------+----------------------------+ | Binary | Host | Zone | Status | State | Updated At | +------------------+----------------------+------+---------+-------+----------------------------+ | cinder-scheduler | spsrc-contr-1 | nova | enabled | up | 2021-05-11T10:06:39.000000 | | cinder-scheduler | spsrc-contr-2 | nova | enabled | up | 2021-05-11T10:06:47.000000 | | cinder-scheduler | spsrc-contr-3 | nova | enabled | up | 2021-05-11T10:06:39.000000 | | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-05-11T10:48:42.000000 | | cinder-backup | spsrc-mon-2 | nova | enabled | up | 2021-05-11T10:06:47.000000 | | cinder-backup | spsrc-mon-1 | nova | enabled | up | 2021-05-11T10:06:44.000000 | | cinder-backup | spsrc-mon-3 | nova | enabled | up | 2021-05-11T10:06:47.000000 | +------------------+----------------------+------+---------+-------+----------------------------+ cinder-volume is down and cannot create new volumes to associate to a VM. Kind regards. > On 12 May 2021, at 03:43, DHilsbos at performair.com wrote: > > Is this a new cluster, or one that has been running for a while? > > Did you just setup integration with Ceph? > > This part: "rbd:volumes at ceph-rbd" doesn't look right to me. For me (Victoria / Nautilus) this looks like: :. > > name is configured in the cinder.conf with a [] section, and enabled_backends= in the [DEFAULT] section. > cinder-volume-host is something that resolves to the host running openstack-cinder-volume.service. > > What version of OpenStack, and what version of Ceph are you running? > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > -----Original Message----- > From: ManuParra [mailto:mparra at iaa.es ] > Sent: Tuesday, May 11, 2021 3:00 PM > To: Eugen Block > Cc: openstack-discuss at lists.openstack.org > Subject: Re: Restart cinder-volume with Ceph rdb > > Thanks, I have restarted the service and I see that after a few minutes then cinder-volume service goes down again when I check it with the command openstack volume service list. > The host/service that contains the cinder-volumes is rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the volumes. I have checked Ceph and the status of everything is correct, no errors or warnings. > The error I have is that cinder can’t connect to rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. > Kind regards. 
> >> On 11 May 2021, at 22:30, Eugen Block > wrote: >> >> Hi, >> >> so restart the volume service;-) >> >> systemctl restart openstack-cinder-volume.service >> >> >> Zitat von ManuParra >: >> >>> Dear OpenStack community, >>> >>> I have encountered a problem a few days ago and that is that when creating new volumes with: >>> >>> "openstack volume create --size 20 testmv" >>> >>> the volume creation status shows an error. If I go to the error log detail it indicates: >>> >>> "Schedule allocate volume: Could not find any available weighted backend". >>> >>> Indeed then I go to the cinder log and it indicates: >>> >>> "volume service is down - host: rbd:volumes at ceph-rbd”. >>> >>> I check with: >>> >>> "openstack volume service list” in which state are the services and I see that indeed this happens: >>> >>> >>> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 | >>> >>> And stopped since 2021-04-29 ! >>> >>> I have checked Ceph (monitors,managers, osds. etc) and there are no problems with the Ceph BackEnd, everything is apparently working. >>> >>> This happened after an uncontrolled outage.So my question is how do I restart only cinder-volumes (I also have cinder-backup, cinder-scheduler but they are ok). >>> >>> Thank you very much in advance. Regards. >> >> >> >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mparra at iaa.es Wed May 12 21:15:42 2021 From: mparra at iaa.es (ManuParra) Date: Wed, 12 May 2021 23:15:42 +0200 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: <0670B960225633449A24709C291A52525115D369@COM01.performair.local> References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> <0670B960225633449A24709C291A52525115D369@COM01.performair.local> Message-ID: Hello Dominique, the integration with CEPH was already done and apparently everything works, I can create with CephFS manila but not Block Storage for volumes for Cinder. The OpenStack version is Train and Ceph is Octopus. If I check the Ceph Pools, I see that there is indeed a pool called cinder-volumes which is the one that connects cinder with ceph. Regards. > On 12 May 2021, at 03:43, DHilsbos at performair.com wrote: > > Is this a new cluster, or one that has been running for a while? > > Did you just setup integration with Ceph? > > This part: "rbd:volumes at ceph-rbd" doesn't look right to me. For me (Victoria / Nautilus) this looks like: :. > > name is configured in the cinder.conf with a [] section, and enabled_backends= in the [DEFAULT] section. > cinder-volume-host is something that resolves to the host running openstack-cinder-volume.service. > > What version of OpenStack, and what version of Ceph are you running? > > Thank you, > > Dominic L. Hilsbos, MBA > Vice President – Information Technology > Perform Air International Inc. > DHilsbos at PerformAir.com > www.PerformAir.com > > > -----Original Message----- > From: ManuParra [mailto:mparra at iaa.es] > Sent: Tuesday, May 11, 2021 3:00 PM > To: Eugen Block > Cc: openstack-discuss at lists.openstack.org > Subject: Re: Restart cinder-volume with Ceph rdb > > Thanks, I have restarted the service and I see that after a few minutes then cinder-volume service goes down again when I check it with the command openstack volume service list. 
> The host/service that contains the cinder-volumes is rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the volumes. I have checked Ceph and the status of everything is correct, no errors or warnings. > The error I have is that cinder can’t connect to rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. > Kind regards. > >> On 11 May 2021, at 22:30, Eugen Block wrote: >> >> Hi, >> >> so restart the volume service;-) >> >> systemctl restart openstack-cinder-volume.service >> >> >> Zitat von ManuParra : >> >>> Dear OpenStack community, >>> >>> I have encountered a problem a few days ago and that is that when creating new volumes with: >>> >>> "openstack volume create --size 20 testmv" >>> >>> the volume creation status shows an error. If I go to the error log detail it indicates: >>> >>> "Schedule allocate volume: Could not find any available weighted backend". >>> >>> Indeed then I go to the cinder log and it indicates: >>> >>> "volume service is down - host: rbd:volumes at ceph-rbd”. >>> >>> I check with: >>> >>> "openstack volume service list” in which state are the services and I see that indeed this happens: >>> >>> >>> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 | >>> >>> And stopped since 2021-04-29 ! >>> >>> I have checked Ceph (monitors,managers, osds. etc) and there are no problems with the Ceph BackEnd, everything is apparently working. >>> >>> This happened after an uncontrolled outage.So my question is how do I restart only cinder-volumes (I also have cinder-backup, cinder-scheduler but they are ok). >>> >>> Thank you very much in advance. Regards. >> >> >> >> > > > From mparra at iaa.es Wed May 12 21:23:18 2021 From: mparra at iaa.es (ManuParra) Date: Wed, 12 May 2021 23:23:18 +0200 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: <20210512094943.nfttmyxoss3zut2n@localhost> References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> <20210512094943.nfttmyxoss3zut2n@localhost> Message-ID: <90542A09-3A7D-4FE2-83FD-10D46CCEF5A2@iaa.es> Hi Gorka, let me show the cinder config: [ceph-rbd] rbd_ceph_conf = /etc/ceph/ceph.conf rbd_user = cinder backend_host = rbd:volumes rbd_pool = cinder.volumes volume_backend_name = ceph-rbd volume_driver = cinder.volume.drivers.rbd.RBDDriver … So, using rbd_exclusive_cinder_pool=True it will be used just for volumes? but the log is saying no connection to the backend_host. Regards. > On 12 May 2021, at 11:49, Gorka Eguileor wrote: > > On 12/05, ManuParra wrote: >> Thanks, I have restarted the service and I see that after a few minutes then cinder-volume service goes down again when I check it with the command openstack volume service list. >> The host/service that contains the cinder-volumes is rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the volumes. I have checked Ceph and the status of everything is correct, no errors or warnings. >> The error I have is that cinder can’t connect to rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. >> Kind regards. >> > > Hi, > > You are most likely using an older release, have a high number of cinder > RBD volumes, and have not changed configuration option > "rbd_exclusive_cinder_pool" from its default "false" value. 
> > Please add to your driver's section in cinder.conf the following: > > rbd_exclusive_cinder_pool = true > > > And restart the service. > > Cheers, > Gorka. > >>> On 11 May 2021, at 22:30, Eugen Block wrote: >>> >>> Hi, >>> >>> so restart the volume service;-) >>> >>> systemctl restart openstack-cinder-volume.service >>> >>> >>> Zitat von ManuParra : >>> >>>> Dear OpenStack community, >>>> >>>> I have encountered a problem a few days ago and that is that when creating new volumes with: >>>> >>>> "openstack volume create --size 20 testmv" >>>> >>>> the volume creation status shows an error. If I go to the error log detail it indicates: >>>> >>>> "Schedule allocate volume: Could not find any available weighted backend". >>>> >>>> Indeed then I go to the cinder log and it indicates: >>>> >>>> "volume service is down - host: rbd:volumes at ceph-rbd”. >>>> >>>> I check with: >>>> >>>> "openstack volume service list” in which state are the services and I see that indeed this happens: >>>> >>>> >>>> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 | >>>> >>>> And stopped since 2021-04-29 ! >>>> >>>> I have checked Ceph (monitors,managers, osds. etc) and there are no problems with the Ceph BackEnd, everything is apparently working. >>>> >>>> This happened after an uncontrolled outage.So my question is how do I restart only cinder-volumes (I also have cinder-backup, cinder-scheduler but they are ok). >>>> >>>> Thank you very much in advance. Regards. >>> >>> >>> >>> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Wed May 12 21:34:46 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 12 May 2021 23:34:46 +0200 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: <90542A09-3A7D-4FE2-83FD-10D46CCEF5A2@iaa.es> References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> <20210512094943.nfttmyxoss3zut2n@localhost> <90542A09-3A7D-4FE2-83FD-10D46CCEF5A2@iaa.es> Message-ID: <20210512213446.7c222mlcdwxiosly@localhost> On 12/05, ManuParra wrote: > Hi Gorka, let me show the cinder config: > > [ceph-rbd] > rbd_ceph_conf = /etc/ceph/ceph.conf > rbd_user = cinder > backend_host = rbd:volumes > rbd_pool = cinder.volumes > volume_backend_name = ceph-rbd > volume_driver = cinder.volume.drivers.rbd.RBDDriver > … > > So, using rbd_exclusive_cinder_pool=True it will be used just for volumes? but the log is saying no connection to the backend_host. Hi, Your backend_host doesn't have a valid hostname, please set a proper hostname in that configuration option. Then the next thing you need to have is the cinder-volume service running correctly before making any requests. I would try adding rbd_exclusive_cinder_pool=true then tailing the volume logs, and restarting the service. See if the logs show any ERROR level entries. I would also check the service-list output right after the service is restarted, if it's up then I would check it again after 2 minutes. Cheers, Gorka. > > Regards. > > > > On 12 May 2021, at 11:49, Gorka Eguileor wrote: > > > > On 12/05, ManuParra wrote: > >> Thanks, I have restarted the service and I see that after a few minutes then cinder-volume service goes down again when I check it with the command openstack volume service list. 
> >> The host/service that contains the cinder-volumes is rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the volumes. I have checked Ceph and the status of everything is correct, no errors or warnings. > >> The error I have is that cinder can’t connect to rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. > >> Kind regards. > >> > > > > Hi, > > > > You are most likely using an older release, have a high number of cinder > > RBD volumes, and have not changed configuration option > > "rbd_exclusive_cinder_pool" from its default "false" value. > > > > Please add to your driver's section in cinder.conf the following: > > > > rbd_exclusive_cinder_pool = true > > > > > > And restart the service. > > > > Cheers, > > Gorka. > > > >>> On 11 May 2021, at 22:30, Eugen Block wrote: > >>> > >>> Hi, > >>> > >>> so restart the volume service;-) > >>> > >>> systemctl restart openstack-cinder-volume.service > >>> > >>> > >>> Zitat von ManuParra : > >>> > >>>> Dear OpenStack community, > >>>> > >>>> I have encountered a problem a few days ago and that is that when creating new volumes with: > >>>> > >>>> "openstack volume create --size 20 testmv" > >>>> > >>>> the volume creation status shows an error. If I go to the error log detail it indicates: > >>>> > >>>> "Schedule allocate volume: Could not find any available weighted backend". > >>>> > >>>> Indeed then I go to the cinder log and it indicates: > >>>> > >>>> "volume service is down - host: rbd:volumes at ceph-rbd”. > >>>> > >>>> I check with: > >>>> > >>>> "openstack volume service list” in which state are the services and I see that indeed this happens: > >>>> > >>>> > >>>> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 | > >>>> > >>>> And stopped since 2021-04-29 ! > >>>> > >>>> I have checked Ceph (monitors,managers, osds. etc) and there are no problems with the Ceph BackEnd, everything is apparently working. > >>>> > >>>> This happened after an uncontrolled outage.So my question is how do I restart only cinder-volumes (I also have cinder-backup, cinder-scheduler but they are ok). > >>>> > >>>> Thank you very much in advance. Regards. > >>> > >>> > >>> > >>> > >> > >> > > > > > From mparra at iaa.es Wed May 12 21:39:14 2021 From: mparra at iaa.es (ManuParra) Date: Wed, 12 May 2021 23:39:14 +0200 Subject: cinder.volume.drivers.rbd connecting Message-ID: Hi, we have faced some problems when creating volumes to add to VMs, to see what was happening I activated the Debug=True mode of Cinder in the cinder.conf file. I see that when I try to create a new volume I get the following in the log: "DEBUG cinder.volume.drivers.rbd connecting to (conf=/etc/ceph/ceph.conf, timeout=-1) _do_conn /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431” I’m using OpenStack Train and Ceph Octopus. 
When I check with openstack volume service list +------------------+----------------------+------+---------+-------+----------------------------+ | Binary | Host | Zone | Status | State | Updated At | +------------------+----------------------+------+---------+-------+----------------------------+ | cinder-scheduler | spsrc-controller-1 | nova | enabled | up | 2021-05-11T10:06:39.000000 | | cinder-scheduler | spsrc-controller-2 | nova | enabled | up | 2021-05-11T10:06:47.000000 | | cinder-scheduler | spsrc-controller-3 | nova | enabled | up | 2021-05-11T10:06:39.000000 | | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-11T10:48:42.000000 | | cinder-backup | spsrc-mon-2 | nova | enabled | up | 2021-05-11T10:06:47.000000 | | cinder-backup | spsrc-mon-1 | nova | enabled | up | 2021-05-11T10:06:44.000000 | | cinder-backup | spsrc-mon-3 | nova | enabled | up | 2021-05-11T10:06:47.000000 | +------------------+----------------------+------+---------+-------+——————————————+ So cinder-volume is Down, I compare "cinder-backup" Ceph config with "cinder-volume", and they are equal! so why only one of them works? diff /etc/kolla/cinder-backup/ceph.conf /etc/kolla/cinder-volume/ceph.conf I go inside the "cinder_volume" container docker exec -it cinder_volume /bin/bash Try listing cinder volumes, works! rbd -p cinder.volumes --id cinder -k /etc/ceph/ceph.client.cinder.keyring ls Any Ideas. Kind regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Wed May 12 23:52:11 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 12 May 2021 16:52:11 -0700 Subject: [Octavia dashboard] Impossible to fill the subnet field when creating a load balancer In-Reply-To: <37fc26ac7ba043418540c022c6d30e1b@elca.ch> References: <37fc26ac7ba043418540c022c6d30e1b@elca.ch> Message-ID: Hi Jean-François, The Subnet drop-down on the load balancer create screen in horizon should be populated with the neutron subnets your project has access to. Can you check that you can see at least one subnet (or the specific subnet you are looking for) in the networking section of the horizon UI? The field should be a drop-down listing the available neutron subnets. You can see a demo of that in our Boston presentation here: https://youtu.be/BBgP3_qhJ00?t=935 Michael On Wed, May 12, 2021 at 8:18 AM Taltavull Jean-Francois wrote: > > Hi All, > > On my OSA Victoria deployment (on Ubuntu 20.04), the "Create Load Balancer" form does not allow me to fill the mandatory "subnet" field and therefore I can't create a load balancer with Horizon. > > On the other hand, Octavia works fine and I can create load balancers with the CLI, Terraform, etc. > > Has one of you already faced such a situation ? > > > Best regards, > > Jean-François > > > From gmann at ghanshyammann.com Thu May 13 00:46:30 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 12 May 2021 19:46:30 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 13th at 1500 UTC In-Reply-To: <17959acfbe4.c770ef05323333.5684594462011495541@ghanshyammann.com> References: <17959acfbe4.c770ef05323333.5684594462011495541@ghanshyammann.com> Message-ID: <17963307646.11c9c0678464956.4234349735716476639@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting schedule on May 13th at 1500 UTC in #openstack-tc IRC channel. 
== Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check (dansmith/yoctozepto) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Planning for TC + PTL interaction (gmann) ** https://etherpad.opendev.org/p/tc-ptl-interaction * Xena cycle tracker status check ** https://etherpad.opendev.org/p/tc-xena-tracker * Open Reviews ** https://review.opendev.org/q/project:openstack/governance+is:open -gmann ---- On Mon, 10 May 2021 23:26:19 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for May 13th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, May 12th, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From mparra at iaa.es Thu May 13 06:30:38 2021 From: mparra at iaa.es (ManuParra) Date: Thu, 13 May 2021 08:30:38 +0200 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: <20210512213446.7c222mlcdwxiosly@localhost> References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> <20210512094943.nfttmyxoss3zut2n@localhost> <90542A09-3A7D-4FE2-83FD-10D46CCEF5A2@iaa.es> <20210512213446.7c222mlcdwxiosly@localhost> Message-ID: <0292176B-BE6D-448E-8948-EE10300E2520@iaa.es> Hi Gorka again, yes, the first thing is to know why you can't connect to that host (Ceph is actually set up for HA) so that's the way to do it. I tell you this because previously from the beginning of the setup of our setup it has always been like that, with that hostname and there has been no problem. As for the errors, the strangest thing is that in Monasca I have not found any error log, only warning on “volume service is down. (host: rbd:volumes at ceph-rbd)" and info, which is even stranger. Regards. > On 12 May 2021, at 23:34, Gorka Eguileor wrote: > > On 12/05, ManuParra wrote: >> Hi Gorka, let me show the cinder config: >> >> [ceph-rbd] >> rbd_ceph_conf = /etc/ceph/ceph.conf >> rbd_user = cinder >> backend_host = rbd:volumes >> rbd_pool = cinder.volumes >> volume_backend_name = ceph-rbd >> volume_driver = cinder.volume.drivers.rbd.RBDDriver >> … >> >> So, using rbd_exclusive_cinder_pool=True it will be used just for volumes? but the log is saying no connection to the backend_host. > > Hi, > > Your backend_host doesn't have a valid hostname, please set a proper > hostname in that configuration option. > > Then the next thing you need to have is the cinder-volume service > running correctly before making any requests. > > I would try adding rbd_exclusive_cinder_pool=true then tailing the > volume logs, and restarting the service. > > See if the logs show any ERROR level entries. > > I would also check the service-list output right after the service is > restarted, if it's up then I would check it again after 2 minutes. > > Cheers, > Gorka. > > >> >> Regards. >> >> >>> On 12 May 2021, at 11:49, Gorka Eguileor wrote: >>> >>> On 12/05, ManuParra wrote: >>>> Thanks, I have restarted the service and I see that after a few minutes then cinder-volume service goes down again when I check it with the command openstack volume service list. >>>> The host/service that contains the cinder-volumes is rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the volumes. I have checked Ceph and the status of everything is correct, no errors or warnings. 
>>>> The error I have is that cinder can’t connect to rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. >>>> Kind regards. >>>> >>> >>> Hi, >>> >>> You are most likely using an older release, have a high number of cinder >>> RBD volumes, and have not changed configuration option >>> "rbd_exclusive_cinder_pool" from its default "false" value. >>> >>> Please add to your driver's section in cinder.conf the following: >>> >>> rbd_exclusive_cinder_pool = true >>> >>> >>> And restart the service. >>> >>> Cheers, >>> Gorka. >>> >>>>> On 11 May 2021, at 22:30, Eugen Block wrote: >>>>> >>>>> Hi, >>>>> >>>>> so restart the volume service;-) >>>>> >>>>> systemctl restart openstack-cinder-volume.service >>>>> >>>>> >>>>> Zitat von ManuParra : >>>>> >>>>>> Dear OpenStack community, >>>>>> >>>>>> I have encountered a problem a few days ago and that is that when creating new volumes with: >>>>>> >>>>>> "openstack volume create --size 20 testmv" >>>>>> >>>>>> the volume creation status shows an error. If I go to the error log detail it indicates: >>>>>> >>>>>> "Schedule allocate volume: Could not find any available weighted backend". >>>>>> >>>>>> Indeed then I go to the cinder log and it indicates: >>>>>> >>>>>> "volume service is down - host: rbd:volumes at ceph-rbd”. >>>>>> >>>>>> I check with: >>>>>> >>>>>> "openstack volume service list” in which state are the services and I see that indeed this happens: >>>>>> >>>>>> >>>>>> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 | >>>>>> >>>>>> And stopped since 2021-04-29 ! >>>>>> >>>>>> I have checked Ceph (monitors,managers, osds. etc) and there are no problems with the Ceph BackEnd, everything is apparently working. >>>>>> >>>>>> This happened after an uncontrolled outage.So my question is how do I restart only cinder-volumes (I also have cinder-backup, cinder-scheduler but they are ok). >>>>>> >>>>>> Thank you very much in advance. Regards. >>>>> >>>>> >>>>> >>>>> >>>> >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu May 13 07:06:34 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 13 May 2021 09:06:34 +0200 Subject: [neutron] Drivers meeting 14.05.2021 cancelled Message-ID: <8187593.9fkBf64Uhv@p1> Hi, Due to lack of the agenda let's cancel tomorrow's drivers meeting. In the meantime, please review already accepted specs, especially those with review priority which can be found on: https://review.opendev.org/q/label:Review-priority%253D%252B1+project:openstack/neutron-specs[1] -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://review.opendev.org/q/label:Review-priority%253D%252B1+project:openstack/neutron-specs -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 484 bytes Desc: This is a digitally signed message part. 
URL: From geguileo at redhat.com Thu May 13 07:37:22 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 13 May 2021 09:37:22 +0200 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: <0292176B-BE6D-448E-8948-EE10300E2520@iaa.es> References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> <20210512094943.nfttmyxoss3zut2n@localhost> <90542A09-3A7D-4FE2-83FD-10D46CCEF5A2@iaa.es> <20210512213446.7c222mlcdwxiosly@localhost> <0292176B-BE6D-448E-8948-EE10300E2520@iaa.es> Message-ID: <20210513073722.x3z3qkcpvg5am6ia@localhost> On 13/05, ManuParra wrote: > Hi Gorka again, yes, the first thing is to know why you can't connect to that host (Ceph is actually set up for HA) so that's the way to do it. I tell you this because previously from the beginning of the setup of our setup it has always been like that, with that hostname and there has been no problem. > > As for the errors, the strangest thing is that in Monasca I have not found any error log, only warning on “volume service is down. (host: rbd:volumes at ceph-rbd)" and info, which is even stranger. Have you tried the configuration change I recommended? > > Regards. > > > On 12 May 2021, at 23:34, Gorka Eguileor wrote: > > > > On 12/05, ManuParra wrote: > >> Hi Gorka, let me show the cinder config: > >> > >> [ceph-rbd] > >> rbd_ceph_conf = /etc/ceph/ceph.conf > >> rbd_user = cinder > >> backend_host = rbd:volumes > >> rbd_pool = cinder.volumes > >> volume_backend_name = ceph-rbd > >> volume_driver = cinder.volume.drivers.rbd.RBDDriver > >> … > >> > >> So, using rbd_exclusive_cinder_pool=True it will be used just for volumes? but the log is saying no connection to the backend_host. > > > > Hi, > > > > Your backend_host doesn't have a valid hostname, please set a proper > > hostname in that configuration option. > > > > Then the next thing you need to have is the cinder-volume service > > running correctly before making any requests. > > > > I would try adding rbd_exclusive_cinder_pool=true then tailing the > > volume logs, and restarting the service. > > > > See if the logs show any ERROR level entries. > > > > I would also check the service-list output right after the service is > > restarted, if it's up then I would check it again after 2 minutes. > > > > Cheers, > > Gorka. > > > > > >> > >> Regards. > >> > >> > >>> On 12 May 2021, at 11:49, Gorka Eguileor wrote: > >>> > >>> On 12/05, ManuParra wrote: > >>>> Thanks, I have restarted the service and I see that after a few minutes then cinder-volume service goes down again when I check it with the command openstack volume service list. > >>>> The host/service that contains the cinder-volumes is rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the volumes. I have checked Ceph and the status of everything is correct, no errors or warnings. > >>>> The error I have is that cinder can’t connect to rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. > >>>> Kind regards. > >>>> > >>> > >>> Hi, > >>> > >>> You are most likely using an older release, have a high number of cinder > >>> RBD volumes, and have not changed configuration option > >>> "rbd_exclusive_cinder_pool" from its default "false" value. > >>> > >>> Please add to your driver's section in cinder.conf the following: > >>> > >>> rbd_exclusive_cinder_pool = true > >>> > >>> > >>> And restart the service. > >>> > >>> Cheers, > >>> Gorka. 
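For reference, putting those two suggestions together, the [ceph-rbd] section would end up looking roughly like the sketch below. The backend_host value is only an illustrative placeholder (it just needs to be a plain, stable name), the other options are the ones already shown earlier in the thread:

    [ceph-rbd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph-rbd
    rbd_pool = cinder.volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf
    # a plain, stable name instead of "rbd:volumes"
    backend_host = cinder-ceph-rbd
    # the pool is used only by Cinder, so usage can be computed
    # without iterating over every RBD image
    rbd_exclusive_cinder_pool = true

After the change, restart the volume service (on this deployment that is probably "docker restart cinder_volume", on a package-based install "systemctl restart openstack-cinder-volume") and run "openstack volume service list" right away and again a couple of minutes later to confirm the service stays up. Note that if backend_host changes, volumes created under the old host string may need to be moved over with "cinder-manage volume update_host --currenthost <old> --newhost <new>".
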
> >>> > >>>>> On 11 May 2021, at 22:30, Eugen Block wrote: > >>>>> > >>>>> Hi, > >>>>> > >>>>> so restart the volume service;-) > >>>>> > >>>>> systemctl restart openstack-cinder-volume.service > >>>>> > >>>>> > >>>>> Zitat von ManuParra : > >>>>> > >>>>>> Dear OpenStack community, > >>>>>> > >>>>>> I have encountered a problem a few days ago and that is that when creating new volumes with: > >>>>>> > >>>>>> "openstack volume create --size 20 testmv" > >>>>>> > >>>>>> the volume creation status shows an error. If I go to the error log detail it indicates: > >>>>>> > >>>>>> "Schedule allocate volume: Could not find any available weighted backend". > >>>>>> > >>>>>> Indeed then I go to the cinder log and it indicates: > >>>>>> > >>>>>> "volume service is down - host: rbd:volumes at ceph-rbd”. > >>>>>> > >>>>>> I check with: > >>>>>> > >>>>>> "openstack volume service list” in which state are the services and I see that indeed this happens: > >>>>>> > >>>>>> > >>>>>> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 | > >>>>>> > >>>>>> And stopped since 2021-04-29 ! > >>>>>> > >>>>>> I have checked Ceph (monitors,managers, osds. etc) and there are no problems with the Ceph BackEnd, everything is apparently working. > >>>>>> > >>>>>> This happened after an uncontrolled outage.So my question is how do I restart only cinder-volumes (I also have cinder-backup, cinder-scheduler but they are ok). > >>>>>> > >>>>>> Thank you very much in advance. Regards. > >>>>> > >>>>> > >>>>> > >>>>> > >>>> > >>>> > >>> > >>> > >> > > > From geguileo at redhat.com Thu May 13 08:31:58 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 13 May 2021 10:31:58 +0200 Subject: [CINDER] - RBD backend reporting In-Reply-To: References: Message-ID: <20210513083158.3n656shnkpey5kfg@localhost> On 12/05, Steven Relf wrote: > Hey list. >   > Currently when using an RBD backend using the default settings, total_capacity_gb is reported as MAX_AVAIL + USED bytes, converted into GiB. This to mean seems a little odd, as I would expect that total_capacity_gb should report the total size in GiB of the backend cluster. >   > This of course can be fixed by adding "report_dynamic_total_capacity = false" into the cinder.conf section for the rbd backend. Hi Steve, That's the problem of having to keep backward compatibility, that even if the driver is doing something non-standard and it's inconveniencing some users [1][2], we cannot just change the behavior, as it could make trouble for another group of users who are currently relying on the current behavior. That's why I had to set the default to true (keep old behavior) in the fix. [1]: https://bugs.launchpad.net/cinder/+bug/1712549 [2]: https://bugs.launchpad.net/cinder/+bug/1706057 > This works fine for ceph clusters where all pools consume from a single disk type/root. But in clusters where you have multiple root's or device types it does not work correctly. >   I don't currently have a system like that to check it, but I would assume current code works as intended: It gets the stats and quota for the pool and uses the most limiting value of the two. As far as I know the stats should be returning the aggregate of the different disks that form the pool. I would like to better understand the difference between what is being reported and what is expected in your environment. 
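(Just to make the reported numbers concrete, with the default report_dynamic_total_capacity=true the totals are derived from the pool stats, so, using made-up figures, a pool whose stats show

    MAX_AVAIL = 10 TiB (10240 GiB) and USED = 2 TiB (2048 GiB)

would be published as total_capacity_gb = 10240 + 2048 = 12288, i.e. what the pool can still take plus what it already holds, not the raw size of the cluster.)
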
Could you share the output of the following commands?: $ ceph -f json-pretty df $ ceph -f json-pretty osd pool get-quota Also what values is Cinder reporting, and what values it should be reporting? Thanks. > Is this proving to be a pain point for anyone else, or is it just me, and if it is proving a problem for others, im happy to write a patch. >   I haven't heard anyone having that problem before, though that doesn't mean there are not people suffering it as well. Cheers, Gorka. > Im thinking something that gets the pools crushrule and works out the total_capacity_gb, based off the total available capacity of a pool based on its root/crushrules. >   > Rgds > Steve. > > The future has already arrived. It's just not evenly distributed yet - William Gibson > > From mparra at iaa.es Thu May 13 10:50:11 2021 From: mparra at iaa.es (ManuParra) Date: Thu, 13 May 2021 12:50:11 +0200 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: <20210513073722.x3z3qkcpvg5am6ia@localhost> References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> <20210512094943.nfttmyxoss3zut2n@localhost> <90542A09-3A7D-4FE2-83FD-10D46CCEF5A2@iaa.es> <20210512213446.7c222mlcdwxiosly@localhost> <0292176B-BE6D-448E-8948-EE10300E2520@iaa.es> <20210513073722.x3z3qkcpvg5am6ia@localhost> Message-ID: Hello Gorka, not yet, let me update cinder configuration, add the option, restart cinder and I’ll update the status. Do you recommend other things to try for this cycle? Regards. > On 13 May 2021, at 09:37, Gorka Eguileor wrote: > > On 13/05, ManuParra wrote: >> Hi Gorka again, yes, the first thing is to know why you can't connect to that host (Ceph is actually set up for HA) so that's the way to do it. I tell you this because previously from the beginning of the setup of our setup it has always been like that, with that hostname and there has been no problem. >> >> As for the errors, the strangest thing is that in Monasca I have not found any error log, only warning on “volume service is down. (host: rbd:volumes at ceph-rbd)" and info, which is even stranger. > > Have you tried the configuration change I recommended? > > >> >> Regards. >> >>> On 12 May 2021, at 23:34, Gorka Eguileor wrote: >>> >>> On 12/05, ManuParra wrote: >>>> Hi Gorka, let me show the cinder config: >>>> >>>> [ceph-rbd] >>>> rbd_ceph_conf = /etc/ceph/ceph.conf >>>> rbd_user = cinder >>>> backend_host = rbd:volumes >>>> rbd_pool = cinder.volumes >>>> volume_backend_name = ceph-rbd >>>> volume_driver = cinder.volume.drivers.rbd.RBDDriver >>>> … >>>> >>>> So, using rbd_exclusive_cinder_pool=True it will be used just for volumes? but the log is saying no connection to the backend_host. >>> >>> Hi, >>> >>> Your backend_host doesn't have a valid hostname, please set a proper >>> hostname in that configuration option. >>> >>> Then the next thing you need to have is the cinder-volume service >>> running correctly before making any requests. >>> >>> I would try adding rbd_exclusive_cinder_pool=true then tailing the >>> volume logs, and restarting the service. >>> >>> See if the logs show any ERROR level entries. >>> >>> I would also check the service-list output right after the service is >>> restarted, if it's up then I would check it again after 2 minutes. >>> >>> Cheers, >>> Gorka. >>> >>> >>>> >>>> Regards. 
>>>> >>>> >>>>> On 12 May 2021, at 11:49, Gorka Eguileor wrote: >>>>> >>>>> On 12/05, ManuParra wrote: >>>>>> Thanks, I have restarted the service and I see that after a few minutes then cinder-volume service goes down again when I check it with the command openstack volume service list. >>>>>> The host/service that contains the cinder-volumes is rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the volumes. I have checked Ceph and the status of everything is correct, no errors or warnings. >>>>>> The error I have is that cinder can’t connect to rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. >>>>>> Kind regards. >>>>>> >>>>> >>>>> Hi, >>>>> >>>>> You are most likely using an older release, have a high number of cinder >>>>> RBD volumes, and have not changed configuration option >>>>> "rbd_exclusive_cinder_pool" from its default "false" value. >>>>> >>>>> Please add to your driver's section in cinder.conf the following: >>>>> >>>>> rbd_exclusive_cinder_pool = true >>>>> >>>>> >>>>> And restart the service. >>>>> >>>>> Cheers, >>>>> Gorka. >>>>> >>>>>>> On 11 May 2021, at 22:30, Eugen Block wrote: >>>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> so restart the volume service;-) >>>>>>> >>>>>>> systemctl restart openstack-cinder-volume.service >>>>>>> >>>>>>> >>>>>>> Zitat von ManuParra : >>>>>>> >>>>>>>> Dear OpenStack community, >>>>>>>> >>>>>>>> I have encountered a problem a few days ago and that is that when creating new volumes with: >>>>>>>> >>>>>>>> "openstack volume create --size 20 testmv" >>>>>>>> >>>>>>>> the volume creation status shows an error. If I go to the error log detail it indicates: >>>>>>>> >>>>>>>> "Schedule allocate volume: Could not find any available weighted backend". >>>>>>>> >>>>>>>> Indeed then I go to the cinder log and it indicates: >>>>>>>> >>>>>>>> "volume service is down - host: rbd:volumes at ceph-rbd”. >>>>>>>> >>>>>>>> I check with: >>>>>>>> >>>>>>>> "openstack volume service list” in which state are the services and I see that indeed this happens: >>>>>>>> >>>>>>>> >>>>>>>> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 | >>>>>>>> >>>>>>>> And stopped since 2021-04-29 ! >>>>>>>> >>>>>>>> I have checked Ceph (monitors,managers, osds. etc) and there are no problems with the Ceph BackEnd, everything is apparently working. >>>>>>>> >>>>>>>> This happened after an uncontrolled outage.So my question is how do I restart only cinder-volumes (I also have cinder-backup, cinder-scheduler but they are ok). >>>>>>>> >>>>>>>> Thank you very much in advance. Regards. 
>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>> >>> >> > > From tobias.urdin at binero.com Thu May 13 11:07:13 2021 From: tobias.urdin at binero.com (Tobias Urdin) Date: Thu, 13 May 2021 11:07:13 +0000 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> <20210512094943.nfttmyxoss3zut2n@localhost> <90542A09-3A7D-4FE2-83FD-10D46CCEF5A2@iaa.es> <20210512213446.7c222mlcdwxiosly@localhost> <0292176B-BE6D-448E-8948-EE10300E2520@iaa.es> <20210513073722.x3z3qkcpvg5am6ia@localhost>, Message-ID: <771F27B8-6C13-4F04-85D3-331E2AF7D89F@binero.com> Hello, I just saw that you are running Ceph Octopus with Train release and wanted to let you know that we saw issues with the os-brick version shipped with Train not supporting client version of Ceph Octopus. So for our Ceph cluster running Octopus we had to keep the client version on Nautilus until upgrading to Victoria which included a newer version of os-brick. Maybe this is unrelated to your issue but just wanted to put it out there. Best regards Tobias > On 13 May 2021, at 12:55, ManuParra wrote: > > Hello Gorka, not yet, let me update cinder configuration, add the option, restart cinder and I’ll update the status. > Do you recommend other things to try for this cycle? > Regards. > >> On 13 May 2021, at 09:37, Gorka Eguileor wrote: >> >>> On 13/05, ManuParra wrote: >>> Hi Gorka again, yes, the first thing is to know why you can't connect to that host (Ceph is actually set up for HA) so that's the way to do it. I tell you this because previously from the beginning of the setup of our setup it has always been like that, with that hostname and there has been no problem. >>> >>> As for the errors, the strangest thing is that in Monasca I have not found any error log, only warning on “volume service is down. (host: rbd:volumes at ceph-rbd)" and info, which is even stranger. >> >> Have you tried the configuration change I recommended? >> >> >>> >>> Regards. >>> >>>> On 12 May 2021, at 23:34, Gorka Eguileor wrote: >>>> >>>> On 12/05, ManuParra wrote: >>>>> Hi Gorka, let me show the cinder config: >>>>> >>>>> [ceph-rbd] >>>>> rbd_ceph_conf = /etc/ceph/ceph.conf >>>>> rbd_user = cinder >>>>> backend_host = rbd:volumes >>>>> rbd_pool = cinder.volumes >>>>> volume_backend_name = ceph-rbd >>>>> volume_driver = cinder.volume.drivers.rbd.RBDDriver >>>>> … >>>>> >>>>> So, using rbd_exclusive_cinder_pool=True it will be used just for volumes? but the log is saying no connection to the backend_host. >>>> >>>> Hi, >>>> >>>> Your backend_host doesn't have a valid hostname, please set a proper >>>> hostname in that configuration option. >>>> >>>> Then the next thing you need to have is the cinder-volume service >>>> running correctly before making any requests. >>>> >>>> I would try adding rbd_exclusive_cinder_pool=true then tailing the >>>> volume logs, and restarting the service. >>>> >>>> See if the logs show any ERROR level entries. >>>> >>>> I would also check the service-list output right after the service is >>>> restarted, if it's up then I would check it again after 2 minutes. >>>> >>>> Cheers, >>>> Gorka. >>>> >>>> >>>>> >>>>> Regards. >>>>> >>>>> >>>>>> On 12 May 2021, at 11:49, Gorka Eguileor wrote: >>>>>> >>>>>> On 12/05, ManuParra wrote: >>>>>>> Thanks, I have restarted the service and I see that after a few minutes then cinder-volume service goes down again when I check it with the command openstack volume service list. 
>>>>>>> The host/service that contains the cinder-volumes is rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the volumes. I have checked Ceph and the status of everything is correct, no errors or warnings. >>>>>>> The error I have is that cinder can’t connect to rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. >>>>>>> Kind regards. >>>>>>> >>>>>> >>>>>> Hi, >>>>>> >>>>>> You are most likely using an older release, have a high number of cinder >>>>>> RBD volumes, and have not changed configuration option >>>>>> "rbd_exclusive_cinder_pool" from its default "false" value. >>>>>> >>>>>> Please add to your driver's section in cinder.conf the following: >>>>>> >>>>>> rbd_exclusive_cinder_pool = true >>>>>> >>>>>> >>>>>> And restart the service. >>>>>> >>>>>> Cheers, >>>>>> Gorka. >>>>>> >>>>>>>> On 11 May 2021, at 22:30, Eugen Block wrote: >>>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> so restart the volume service;-) >>>>>>>> >>>>>>>> systemctl restart openstack-cinder-volume.service >>>>>>>> >>>>>>>> >>>>>>>> Zitat von ManuParra : >>>>>>>> >>>>>>>>> Dear OpenStack community, >>>>>>>>> >>>>>>>>> I have encountered a problem a few days ago and that is that when creating new volumes with: >>>>>>>>> >>>>>>>>> "openstack volume create --size 20 testmv" >>>>>>>>> >>>>>>>>> the volume creation status shows an error. If I go to the error log detail it indicates: >>>>>>>>> >>>>>>>>> "Schedule allocate volume: Could not find any available weighted backend". >>>>>>>>> >>>>>>>>> Indeed then I go to the cinder log and it indicates: >>>>>>>>> >>>>>>>>> "volume service is down - host: rbd:volumes at ceph-rbd”. >>>>>>>>> >>>>>>>>> I check with: >>>>>>>>> >>>>>>>>> "openstack volume service list” in which state are the services and I see that indeed this happens: >>>>>>>>> >>>>>>>>> >>>>>>>>> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-29T09:48:42.000000 | >>>>>>>>> >>>>>>>>> And stopped since 2021-04-29 ! >>>>>>>>> >>>>>>>>> I have checked Ceph (monitors,managers, osds. etc) and there are no problems with the Ceph BackEnd, everything is apparently working. >>>>>>>>> >>>>>>>>> This happened after an uncontrolled outage.So my question is how do I restart only cinder-volumes (I also have cinder-backup, cinder-scheduler but they are ok). >>>>>>>>> >>>>>>>>> Thank you very much in advance. Regards. >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >> >> > > From luke.camilleri at zylacomputing.com Thu May 13 11:23:13 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Thu, 13 May 2021 13:23:13 +0200 Subject: [Octavia][Victoria] No service listening on port 9443 in the amphora instance In-Reply-To: References: <038c74c9-1365-0c08-3b5b-93b4d175dcb3@zylacomputing.com> <326471ef-287b-d937-a174-0b1ccbbd6273@zylacomputing.com> Message-ID: <8ce76cbd-ca2c-033a-5406-5a1557d84302@zylacomputing.com> HI Michael, thanks a lot for the below information it is very helpful. I ended up setting the o-hm0 interface statically in the octavia-interface.sh script which is called by the service and also added a delay to make sure that the bridges are up before trying to create a veth pair and connect the endpoints. 
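In case it helps anyone hitting the same thing, the "start" branch ended up along these lines (a sketch only, not the exact script: the wait loop is just one way of adding the delay, and the address/MTU are the example values from the script quoted further down in this thread, adjust them to your management subnet):

    if [ "$1" == "start" ]; then
        # give the bridge time to appear before creating the veth pair
        for i in $(seq 30); do
            ip link show "$BRNAME" >/dev/null 2>&1 && break
            sleep 1
        done
        ip link add o-hm0 type veth peer name o-bhm0
        brctl addif "$BRNAME" o-bhm0
        ip link set o-bhm0 up
        ip link set dev o-hm0 address "$MAC"
        # static address instead of running dhclient against the port
        ip addr add 172.16.0.2/12 dev o-hm0
        ip link set o-hm0 mtu 1500
        ip link set o-hm0 up
        iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT
    fi
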
Also I edited the unit section of the health-manager service and at the after option I added octavia-interface.service or else on startup the health manager will not bind to the lb-mgmt-net since it would not be up yet The floating IPs part was a bit tricky until I understood what was really going on with the VIP concept and how better and more flexible it is to set the VIP on the tenant network and then associate with public ip to the VIP. With this being said I noticed that 2 IPs are being assigned to the amphora instance and that the actual port assigned to the instance has an allowed pair with the VIP port. I checked online and it seems that there is an active/standby project going on with VRRP/keepalived and in fact the keepalived daemon is running in the amphora instance. Am I on the right track with the active/standby feature and if so do you have any installation/project links to share please so that I can test it? Regards On 12/05/2021 08:37, Michael Johnson wrote: > Answers inline below. > > Michael > > On Mon, May 10, 2021 at 5:15 PM Luke Camilleri > wrote: >> Hi Michael and thanks a lot for the detailed answer below. >> >> I believe I have got most of this sorted out apart from some small issues below: >> >> If the o-hm0 interface gets the IP information from the DHCP server setup by neutron for the lb-mgmt-net, then the management node will always have 2 default gateways and this will bring along issues, the same DHCP settings when deployed to the amphora do not have the same issue since the amphora only has 1 IP assigned on the lb-mgmt-net. Can you please confirm this? > The amphorae do not have issues with DHCP and gateways as we control > the DHCP client configuration inside the amphora. It does only have > one IP on the lb-mgmt-net, it will honor gateways provided by neutron > for the lb-mgmt-net traffic, but a gateway is not required on the > lb-mgmt-network unless you are routing the lb-mgmt-net traffic across > subnets. > >> How does the amphora know where to locate the worker and housekeeping processes or does the traffic originate from the services instead? Maybe the addresses are "injected" from the config file? > The worker and housekeeping processes only create connections to the > amphora, they do not receive connections from them. The amphora send a > heartbeat packet to the health manager endpoints every ten seconds by > default. The list of valid health manager endpoints is included in the > amphora agent configuration file that is injected into the service VM > at boot time. It can be updated using the Octavia admin API for > refreshing the amphora agent configuration. > >> Can you please confirm if the same floating IP concept runs from public (external) IP to the private (tenant) and from private to lb-mgmt-net please? > Octavia does not use floating IPs. Users can create and assign > floating IPs via neutron if they would like, but they are not > necessary. Octavia VIPs can be created directly on neutron "external" > networks, avoiding the NAT overhead of floating IPs. > There is no practical reason to assign a floating IP to a port on the > lb-mgmt-net as tenant traffic is never on or accessible from that > network. > >> Thanks in advance for any feedback >> >> On 06/05/2021 22:46, Michael Johnson wrote: >> >> Hi Luke, >> >> 1. I agree that DHCP is technically unnecessary for the o-hm0 >> interface if you can manage your address allocation on the network you >> are using for the lb-mgmt-net. 
>> I don't have detailed information about the Ubuntu install >> instructions, but I suspect it was done to simplify the IPAM to be >> managed by whatever is providing DHCP on the lb-mgmt-net provided (be >> it neutron or some other resource on a provider network). >> The lb-mgmt-net is simply a neutron network that the amphora >> management address is on. It is routable and does not require external >> access. The only tricky part to it is the worker, health manager, and >> housekeeping processes need to be reachable from the amphora, and the >> controllers need to reach the amphora over the network(s). There are >> many ways to accomplish this. >> >> 2. See my above answer. Fundamentally the lb-mgmt-net is just a >> neutron network that nova can use to attach an interface to the >> amphora instances for command and control traffic. As long as the >> controllers can reach TCP 9433 on the amphora, and the amphora can >> send UDP 5555 back to the health manager endpoints, it will work fine. >> >> 3. Octavia, with the amphora driver, does not require any special >> configuration in Neutron (beyond the advanced services RBAC policy >> being available for the neutron service account used in your octavia >> configuration file). The neutron_lbaas.conf and services_lbaas.conf >> are legacy configuration files/settings that were used for >> neutron-lbaas which is now end of life. See the wiki page for >> information on the deprecation of neutron-lbaas: >> https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation. >> >> Michael >> >> On Thu, May 6, 2021 at 12:30 PM Luke Camilleri >> wrote: >> >> Hi Michael and thanks a lot for your help on this, after following your >> steps the agent got deployed successfully in the amphora-image. >> >> I have some other queries that I would like to ask mainly related to the >> health-manager/load-balancer network setup and IP assignment. First of >> all let me point out that I am using a manual installation process, and >> it might help others to understand the underlying infrastructure >> required to make this component work as expected. >> >> 1- The installation procedure contains this step: >> >> $ sudo cp octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia >> >> which is later on called to assign the IP to the o-hm0 interface which >> is connected to the lb-management network as shown below: >> >> $ sudo dhclient -v o-hm0 -cf /etc/dhcp/octavia >> >> Apart from having a dhcp config for a single IP seems a bit of an >> overkill, using these steps is injecting an additional routing table >> into the default namespace as shown below in my case: >> >> # route -n >> Kernel IP routing table >> Destination Gateway Genmask Flags Metric Ref Use >> Iface >> 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0 >> 0.0.0.0 10.X.X.1 0.0.0.0 UG 100 0 0 ensX >> 10.X.X.0 0.0.0.0 255.255.255.0 U 100 0 0 ensX >> 169.254.169.254 172.16.0.100 255.255.255.255 UGH 0 0 0 o-hm0 >> 172.16.0.0 0.0.0.0 255.240.0.0 U 0 0 0 o-hm0 >> >> Since the load-balancer management network does not need any external >> connectivity (but only communication between health-manager service and >> amphora-agent), why is a gateway required and why isn't the IP address >> allocated as part of the interface creation script which is called when >> the service is started or stopped (example below)? 
>> >> --- >> >> #!/bin/bash >> >> set -ex >> >> MAC=$MGMT_PORT_MAC >> BRNAME=$BRNAME >> >> if [ "$1" == "start" ]; then >> ip link add o-hm0 type veth peer name o-bhm0 >> brctl addif $BRNAME o-bhm0 >> ip link set o-bhm0 up >> ip link set dev o-hm0 address $MAC >> *** ip addr add 172.16.0.2/12 dev o-hm0 >> ***ip link set o-hm0 mtu 1500 >> ip link set o-hm0 up >> iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT >> elif [ "$1" == "stop" ]; then >> ip link del o-hm0 >> else >> brctl show $BRNAME >> ip a s dev o-hm0 >> fi >> >> --- >> >> 2- Is there a possibility to specify a fixed vlan outside of tenant >> range for the load balancer management network? >> >> 3- Are the configuration changes required only in neutron.conf or also >> in additional config files like neutron_lbaas.conf and >> services_lbaas.conf, similar to the vpnaas configuration? >> >> Thanks in advance for any assistance, but its like putting together a >> puzzle of information :-) >> >> On 05/05/2021 20:25, Michael Johnson wrote: >> >> Hi Luke. >> >> Yes, the amphora-agent will listen on 9443 in the amphorae instances. >> It uses TLS mutual authentication, so you can get a TLS response, but >> it will not let you into the API without a valid certificate. A simple >> "openssl s_client" is usually enough to prove that it is listening and >> requesting the client certificate. >> >> I can't talk to the "openstack-octavia-diskimage-create" package you >> found in centos, but I can discuss how to build an amphora image using >> the OpenStack tools. >> >> If you get Octavia from git or via a release tarball, we provide a >> script to build the amphora image. This is how we build our images for >> the testing gates, etc. and is the recommended way (at least from the >> OpenStack Octavia community) to create amphora images. >> >> https://opendev.org/openstack/octavia/src/branch/master/diskimage-create >> >> For CentOS 8, the command would be: >> >> diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 (3 >> is the minimum disk size for centos images, you may want more if you >> are not offloading logs) >> >> I just did a run on a fresh centos 8 instance: >> git clone https://opendev.org/openstack/octavia >> python3 -m venv dib >> source dib/bin/activate >> pip3 install diskimage-builder PyYAML six >> sudo dnf install yum-utils >> ./diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 >> >> This built an image. >> >> Off and on we have had issues building CentOS images due to issues in >> the tools we rely on. If you run into issues with this image, drop us >> a note back. >> >> Michael >> >> On Wed, May 5, 2021 at 9:37 AM Luke Camilleri >> wrote: >> >> Hi there, i am trying to get Octavia running on a Victoria deployment on >> CentOS 8. It was a bit rough getting to the point to launch an instance >> mainly due to the load-balancer management network and the lack of >> documentation >> (https://docs.openstack.org/octavia/victoria/install/install.html) to >> deploy this oN CentOS. I will try to fix this once I have my deployment >> up and running to help others on the way installing and configuring this :-) >> >> At this point a LB can be launched by the tenant and the instance is >> spawned in the Octavia project and I can ping and SSH into the amphora >> instance from the Octavia node where the octavia-health-manager service >> is running using the IP within the same subnet of the amphoras >> (172.16.0.0/12). 
>> >> Unfortunately I keep on getting these errors in the log file of the >> worker log (/var/log/octavia/worker.log): >> >> 2021-05-05 01:54:49.368 14521 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect >> to instance. Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.16.4.46', p >> ort=9443): Max retries exceeded with url: // (Caused by >> NewConnectionError('> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] >> Connection ref >> used',)) >> >> 2021-05-05 01:54:54.374 14521 ERROR >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries >> (currently set to 120) exhausted. The amphora is unavailable. Reason: >> HTTPSConnectionPool(host='172.16 >> .4.46', port=9443): Max retries exceeded with url: // (Caused by >> NewConnectionError('> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] Conne >> ction refused',)) >> >> 2021-05-05 01:54:54.374 14521 ERROR >> octavia.controller.worker.v1.tasks.amphora_driver_tasks [-] Amphora >> compute instance failed to become reachable. This either means the >> compute driver failed to fully boot the >> instance inside the timeout interval or the instance is not reachable >> via the lb-mgmt-net.: >> octavia.amphorae.driver_exceptions.exceptions.TimeOutException: >> contacting the amphora timed out >> >> obviously the instance is deleted then and the task fails from the >> tenant's perspective. >> >> The main issue here is that there is no service running on port 9443 on >> the amphora instance. I am assuming that this is in fact the >> amphora-agent service that is running on the instance which should be >> listening on this port 9443 but the service does not seem to be up or >> not installed at all. >> >> To create the image I have installed the CentOS package >> "openstack-octavia-diskimage-create" which provides the utility >> disk-image-create but from what I can conclude the amphora-agent is not >> being installed (thought this was done automatically by default :-( ) >> >> Can anyone let me know if the amphora-agent is what gets queried on port >> 9443 ? >> >> If the agent is not installed/injected by default when building the >> amphora image? >> >> The command to inject the amphora-agent into the amphora image when >> using the disk-image-create command? >> >> Thanks in advance for any assistance >> >> From marios at redhat.com Thu May 13 12:47:04 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 13 May 2021 15:47:04 +0300 Subject: [TripleO] tripleo repos going Extended Maintenance stable/train OK? (not yet IMO) Message-ID: Hello TripleO o/ per [1] and the proposal at [2] the stable/train branch for all tripleo repos [3] is going to transition to extended maintenance [4]. Once [2] merges, we can still merge things to stable/train but it means we can no longer make official openstack tagged releases for stable/train. TripleO is a trailing project so if we want to hold on this for a while longer I think that is OK and that would also be my personal preference. >From a quick check just now e.g. tripleo-heat-templates @ [5] and at current time there are 87 commits since last September which isn't a tiny amount. So I don't think TripleO is ready to declare stable/train as extended maintenance, but perhaps I am wrong, what do you think? 
Please comment here or directly at [2] if you prefer regards, marios [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022287.html [2] https://review.opendev.org/c/openstack/releases/+/790778/2#message-e981f749aeca64ea971f4e697dd16ba5100ca4a4 [3] https://releases.openstack.org/teams/tripleo.html#train [4] https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases [5] https://github.com/openstack/tripleo-heat-templates/compare/11.5.0...stable/train From rosmaita.fossdev at gmail.com Thu May 13 13:28:59 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 13 May 2021 09:28:59 -0400 Subject: [cinder] two priority reviews Message-ID: <1f8291ed-fcd3-d82a-ff5b-1c7a08a96a38@gmail.com> Hello Cinder team, Please direct your attention to these patches, which address issues that are preventing OpenStack from upgrading to SQLAlchemy 1.4 [0]: - https://review.opendev.org/c/openstack/cinder/+/790796 - https://review.opendev.org/c/openstack/cinder/+/790797 Thanks! [0] https://review.opendev.org/c/openstack/requirements/+/788339/ From luke.camilleri at zylacomputing.com Thu May 13 13:46:04 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Thu, 13 May 2021 15:46:04 +0200 Subject: [octavia] victoria - loadbalancer works but its operational status is offline In-Reply-To: References: Message-ID: <96ed78a8-48e0-793b-59d6-4128bdf83987@zylacomputing.com> HI Gregory, I have the same issue as described here by Piotr and on checking the config I also noticed that the key is missing from the config file. What is the heartbeat_key string though, following the docs to install and configure, where is this key located? On 28/03/2021 20:32, apps at mossakowski.ch wrote: > Yes, that was it: missing [health_manager].heartbeat_key in octavia.conf > It is not present in openstack victoria octavia docs, I'll push it > together with my installation guide for centos8. > Thanks for your accurate hint Gregory. > It is always crucial to ask the right guy:) > > Regards, > Piotr Mossakowski > Sent from ProtonMail mobile > > > > -------- Original Message -------- > On 22 Mar 2021, 09:10, Gregory Thiemonge < gthiemonge at redhat.com> wrote: > > > Hi, > > Most of the OFFLINE operational status issues are caused by > communication problems between the amphorae and the Octavia > health-manager. > > In your case, the "Ignoring this packet. Exception: 'NoneType' > object has no attribute 'encode'" log message shows that the > health-manager receives the heartbeat packets from the amphorae > but it is unable to decode them. Those packets are encrypted JSON > messages and it seems that the key ([health_manager].heartbeat_key > see > https://docs.openstack.org/octavia/latest/configuration/configref.html#health-manager > ) > used to encrypt those messages is not defined in your > configuration file. So I would suggest configuring it and > restarting the Octavia services, then you can re-create or > failover the load balancers (you cannot change this parameter in a > running load balancer). > > Gregory > > On Sun, Mar 21, 2021 at 6:17 PM > wrote: > > Hello, > I have stable/victoria baremetal openstack with octavia > installed on centos8 using openvswitch mechanism driver: > octavia api on controller, health-manager,housekeeping,worker > on 3 compute/network nodes. > Official docs include only ubuntu with linuxbridge mechanism > but I used https://github.com/prastamaha/openstack-octavia > as a > reference to get it working on centos8 with ovs. 
> I will push those docs instructions for centos8 soon: > https://github.com/openstack/octavia/tree/master/doc/source/install > . > > I created basic http scenario using > https://docs.openstack.org/octavia/victoria/user/guides/basic-cookbook.html#deploy-a-basic-http-load-balancer > . > Loadbalancer works but its operational status is offline > (openstack_loadbalancer_outputs.txt). > On all octavia workers I see the same warning message in > health_manager.log: > Health Manager experienced an exception processing a heartbeat > message from ('172.31.255.233', 1907). Ignoring this packet. > Exception: 'NoneType' object has no attribute 'encode' > I've searched for related active bug but all I found is this > not related in my opinion: > https://storyboard.openstack.org/#!/story/2008615 > > I'm attaching all info I've gathered: > > * octavia.conf and health_manager debug logs > (octavia_config_and_health_manager_logs.txt) > * tcpdump from amphora VM (tcpdump_from_amphora_vm.txt) > * tcpdump from octavia worker (tcpdump_from_octavia_worker.txt) > * debug amphora-agent.log from amphora VM (amphora-agent.log) > > Can you point me to the right direction what I have missed? > Thanks! > Piotr Mossakowski > https://github.com/moss2k13 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpawlik at redhat.com Thu May 13 14:23:51 2021 From: dpawlik at redhat.com (Daniel Pawlik) Date: Thu, 13 May 2021 16:23:51 +0200 Subject: [all][infra][qa] Retiring Logstash, Elasticsearch, subunit2sql, and Health In-Reply-To: <20210512131713.pmdr7zhgsaz52ryk@yuggoth.org> References: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> <20210512131713.pmdr7zhgsaz52ryk@yuggoth.org> Message-ID: Hello Folks, Thank you Jeremy and Clark for sharing the issue that you have. I understand that the main issue is related to a lack of time. ELK stack requires a lot of resources, but the values that you share probably can be optimized. Is it possible to share the architecture, how many servers are using which Elasticsearch server role (master, data servers, etc.) ? My team is managing RDO infra, which contains an ELK stack based on Opendistro for Elasticsearch. We have ansible playbooks to setup Elasticsearch base on Opendistro just on one node. Almost all of ELK stack services are located on one server that does not utilize a lot of resources (the retention time is set to 10 days, 90GB of HDD is used, 2GB of RAM for Elasticsearch, 512MB for Logstash). Could you share, what is the retention time set currently in the cluster that it requires 1 TB disk? Also other statistics like how many queries are done in kibana and how much of HDD disk space is used by the Openstack project and compare it to other projects that are available in Opendev? In the end, I would like to ask, if you can share what is the Elasticsearch version currently running on your servers and if you can share the -Xmx and -Xms parameters that are set in Logstash, Elasticsearch and Kibana. Thank you for your time and effort in keeping things running smoothly for OpenDev. We find the OpenDev ELK stack valuable enough to the OpenDev community to take a much larger role in keeping it running. If you can think of any additional links or information that may be helpful to us taking a larger role here, please do not hesitate to share it. Dan On Wed, May 12, 2021 at 3:20 PM Jeremy Stanley wrote: > On 2021-05-12 02:05:57 -0700 (-0700), Sorin Sbarnea wrote: > [...] 
> > TripleO health check project relies on being able to query ER from > > both opendev and rdo in order to easy identification of problems. > > Since you say RDO has a similar setup, could they just expand to > start indexing our logs? As previously stated, doing that doesn't > require any special access to our infrastructure. > > > Maybe instead of dropping we should rethink what it is supposed to > > index and not, set some hard limits per job and scale down the > > deployment. IMHO, one of the major issues with it is that it does > > try to index maybe too much w/o filtering noisy output before > > indexing. > > Reducing how much we index doesn't solve the most pressing problem, > which is that we need to upgrade the underlying operating system, > therefore replace the current current configuration management which > won't work on newer platforms, and also almost certainly upgrade > versions of the major components in use for it. Nobody has time to > do that, at least nobody who has heard our previous cries for help. > > > If we can delay making a decision a little bit so we can > > investigate all available options it would really be great. > > This thread hasn't set any timeline for stopping the service, not > yet anyway. > > > I worth noting that I personally do not have a special love for ES > > but I do value a lot what it does. I am also pragmatic and I would > > not be very upset to make use of a SaaS service as an alternative, > > especially as I recognize how costly is to run and maintain an > > instance. > [...] > > It's been pointed out that OVH has a similar-sounding service, if > someone is interested in experimenting with it: > > https://www.ovhcloud.com/en-ca/data-platforms/logs/ > > The case with this, and I think with any SaaS solution, is that > there would still need to be a separate ingestion mechanism to > identify when new logs are available, postprocess them to remove > debug lines, and then feed them to the indexing service at the > provider... something our current team doesn't have time to design > and manage. > -- > Jeremy Stanley > -- Regards, Daniel Pawlik -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Thu May 13 14:24:41 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 13 May 2021 10:24:41 -0400 Subject: [interop][cinder] review of next.json (part 2) Message-ID: <07c47d7a-0ca1-aad4-4cd8-369049bd6b63@gmail.com> Hello Interop WG, The Cinder team completed its discussion of what capabilities should be added as advisory in next.json at yesterday's weekly cinder meeting [0]. The tldr; is that we don't have anything to propose for inclusion in next.json at the present time. The team does have some general questions that would help us determine the suitability of some proposed capabilities. We'd like to invite someone from the Interop WG to the cinder xena R-9 virtual midcycle (Wednesday 4 August 2021, 1400-1600 UTC) so we can discuss this "live". So if someone could put 1400 UTC 4 August on their schedule for a 20 minute discussion, that would be very helpful. 
cheers, brian [0] http://eavesdrop.openstack.org/meetings/cinder/2021/cinder.2021-05-12-14.00.log.html#l-56 From Arkady.Kanevsky at dell.com Thu May 13 14:38:47 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 13 May 2021 14:38:47 +0000 Subject: [interop][cinder] review of next.json (part 2) In-Reply-To: <07c47d7a-0ca1-aad4-4cd8-369049bd6b63@gmail.com> References: <07c47d7a-0ca1-aad4-4cd8-369049bd6b63@gmail.com> Message-ID: Thanks Brian. I will discuss with the team and we will have somebody from the Interop team attending it. Thanks, -----Original Message----- From: Brian Rosmaita Sent: Thursday, May 13, 2021 9:25 AM To: openstack-discuss at lists.openstack.org Subject: [interop][cinder] review of next.json (part 2) [EXTERNAL EMAIL] Hello Interop WG, The Cinder team completed its discussion of what capabilities should be added as advisory in next.json at yesterday's weekly cinder meeting [0]. The tldr; is that we don't have anything to propose for inclusion in next.json at the present time. The team does have some general questions that would help us determine the suitability of some proposed capabilities. We'd like to invite someone from the Interop WG to the cinder xena R-9 virtual midcycle (Wednesday 4 August 2021, 1400-1600 UTC) so we can discuss this "live". So if someone could put 1400 UTC 4 August on their schedule for a 20 minute discussion, that would be very helpful. cheers, brian [0] https://urldefense.com/v3/__http://eavesdrop.openstack.org/meetings/cinder/2021/cinder.2021-05-12-14.00.log.html*l-56__;Iw!!LpKI!3oVP5SSe-ToYSHOfsA9WAOBJ8-FR3lhcJTUfDJ1qgvxOJCd-ynT4xdI1xLxBt4nkgneC$ [eavesdrop[.]openstack[.]org] From cboylan at sapwetik.org Thu May 13 14:56:53 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 13 May 2021 07:56:53 -0700 Subject: =?UTF-8?Q?Re:_[all][infra][qa]_Retiring_Logstash, _Elasticsearch, _subunit?= =?UTF-8?Q?2sql,_and_Health?= In-Reply-To: References: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> <20210512131713.pmdr7zhgsaz52ryk@yuggoth.org> Message-ID: <394634e7-d144-4284-bd82-b68e30e2ba3e@www.fastmail.com> On Thu, May 13, 2021, at 7:23 AM, Daniel Pawlik wrote: > Hello Folks, > > Thank you Jeremy and Clark for sharing the issue that you have. I > understand that the main issue is related to a lack of time. > ELK stack requires a lot of resources, but the values that you share > probably can be optimized. Is it possible to share > the architecture, how many servers are using which Elasticsearch server > role (master, data servers, etc.) ? All of this information is public. We host high level docs [0] and you can always check the configuration management [1][2][3]. > > My team is managing RDO infra, which contains an ELK stack based on > Opendistro for Elasticsearch. > We have ansible playbooks to setup Elasticsearch base on Opendistro > just on one node. Almost all of ELK > stack services are located on one server that does not utilize a lot of > resources (the retention time is set to > 10 days, 90GB of HDD is used, 2GB of RAM for Elasticsearch, 512MB for > Logstash). > Could you share, what is the retention time set currently in the > cluster that it requires 1 TB disk? Also other statistics like > how many queries are done in kibana and how much of HDD disk space is > used by the Openstack project and compare > it to other projects that are available in Opendev? We currently have retention time set to 7 days. 
At peak we were indexing over a billion documents per day (this is after removing DEBUG logs too) and we run with a single replica. Cacti records [4] disk use by elasticsearch over time. Note that due to our use of a single replica we always want to have some free space to accommodate rebalancing if a cluster member is down. We don't break this down as openstack vs not openstack at an elasticsearch level but typical numbers for Zuul test node CPU time show us we are about 95% openstack and 5% not openstack. I don't know what the total number of queries made against kibana is, but the bulk of querying is likely done by elastic-recheck which also has a public set of queries [5]. These are run multiple times an hour to keep dashboards up to date. > > In the end, I would like to ask, if you can share what is the > Elasticsearch version currently running on your servers and if > you can share the -Xmx and -Xms parameters that are set in Logstash, > Elasticsearch and Kibana. This info (at least for elasticsearch) is available in [1]. > > Thank you for your time and effort in keeping things running smoothly > for OpenDev. We find the OpenDev ELK stack > valuable enough to the OpenDev community to take a much larger role in > keeping it running. > If you can think of any additional links or information that may be > helpful to us taking a larger role here, please do not > hesitate to share it. > > Dan > [0] https://docs.opendev.org/opendev/system-config/latest/logstash.html [1] https://opendev.org/opendev/system-config/src/branch/master/modules/openstack_project/manifests/elasticsearch_node.pp [2] https://opendev.org/opendev/system-config/src/branch/master/modules/openstack_project/manifests/logstash_worker.pp [3] https://opendev.org/opendev/system-config/src/branch/master/modules/openstack_project/manifests/logstash.pp [4] http://cacti.openstack.org/cacti/graph.php?action=zoom&local_graph_id=66519&rra_id=3&view_type=&graph_start=1618239228&graph_end=1620917628 [5] https://opendev.org/opendev/elastic-recheck/src/branch/master/queries From gagehugo at gmail.com Thu May 13 15:46:46 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 13 May 2021 10:46:46 -0500 Subject: [openstack-helm] Announcing new core reviewers Message-ID: This week we announced the addition of Sangeet Gupta and Jinyuan Liu to the openstack-helm core reviewer team. Thanks to you both for your hard work! -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Thu May 13 16:32:20 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Thu, 13 May 2021 18:32:20 +0200 Subject: [nova] spec review day Message-ID: Hi, During today's meeting it came up that we should have a spec review day around M1 (and later one more before M2). As M1 is 27th of May, I propose to do a spec review day on 25th of May, which is a Tuesday. Let me know if the timing does not work for you. The rules are the usual. Let's use this day to focus on open specs, trying to reach agreement on as many things as possible with close cooperation during the day.
Cheers, gibi From johnsomor at gmail.com Thu May 13 16:33:27 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 13 May 2021 09:33:27 -0700 Subject: [Octavia][Victoria] No service listening on port 9443 in the amphora instance In-Reply-To: <8ce76cbd-ca2c-033a-5406-5a1557d84302@zylacomputing.com> References: <038c74c9-1365-0c08-3b5b-93b4d175dcb3@zylacomputing.com> <326471ef-287b-d937-a174-0b1ccbbd6273@zylacomputing.com> <8ce76cbd-ca2c-033a-5406-5a1557d84302@zylacomputing.com> Message-ID: You are correct that two IPs are being allocated for the VIP, one is a secondary IP which neutron implements as an "allowed address pairs" port. We do this to allow failover of the amphora instance should nova fail the service VM. We hold the VIP IP in a special port so the IP is not lost while we rebuild the service VM. If you are using the active/standby topology (or an Octavia flavor with active/standby enabled), this failover is accelerated with nearly no visible impact to the flows through the load balancer. Active/Standby has been an Octavia feature since the Mitaka release. I gave a demo of it at the Tokoyo summit here: https://youtu.be/8n7FGhtOiXk?t=1420 You can enable active/standby as the default by setting the "loadbalancer_topology" setting in the configuration file (https://docs.openstack.org/octavia/latest/configuration/configref.html#controller_worker.loadbalancer_topology) or by creating an Octavia flavor that creates the load balancer with an active/standby topology (https://docs.openstack.org/octavia/latest/admin/flavors.html). Michael On Thu, May 13, 2021 at 4:23 AM Luke Camilleri wrote: > > HI Michael, thanks a lot for the below information it is very helpful. I > ended up setting the o-hm0 interface statically in the > octavia-interface.sh script which is called by the service and also > added a delay to make sure that the bridges are up before trying to > create a veth pair and connect the endpoints. > > Also I edited the unit section of the health-manager service and at the > after option I added octavia-interface.service or else on startup the > health manager will not bind to the lb-mgmt-net since it would not be up yet > > The floating IPs part was a bit tricky until I understood what was > really going on with the VIP concept and how better and more flexible it > is to set the VIP on the tenant network and then associate with public > ip to the VIP. > > With this being said I noticed that 2 IPs are being assigned to the > amphora instance and that the actual port assigned to the instance has > an allowed pair with the VIP port. I checked online and it seems that > there is an active/standby project going on with VRRP/keepalived and in > fact the keepalived daemon is running in the amphora instance. > > Am I on the right track with the active/standby feature and if so do you > have any installation/project links to share please so that I can test it? > > Regards > > On 12/05/2021 08:37, Michael Johnson wrote: > > Answers inline below. > > > > Michael > > > > On Mon, May 10, 2021 at 5:15 PM Luke Camilleri > > wrote: > >> Hi Michael and thanks a lot for the detailed answer below. 
> >> > >> I believe I have got most of this sorted out apart from some small issues below: > >> > >> If the o-hm0 interface gets the IP information from the DHCP server setup by neutron for the lb-mgmt-net, then the management node will always have 2 default gateways and this will bring along issues, the same DHCP settings when deployed to the amphora do not have the same issue since the amphora only has 1 IP assigned on the lb-mgmt-net. Can you please confirm this? > > The amphorae do not have issues with DHCP and gateways as we control > > the DHCP client configuration inside the amphora. It does only have > > one IP on the lb-mgmt-net, it will honor gateways provided by neutron > > for the lb-mgmt-net traffic, but a gateway is not required on the > > lb-mgmt-network unless you are routing the lb-mgmt-net traffic across > > subnets. > > > >> How does the amphora know where to locate the worker and housekeeping processes or does the traffic originate from the services instead? Maybe the addresses are "injected" from the config file? > > The worker and housekeeping processes only create connections to the > > amphora, they do not receive connections from them. The amphora send a > > heartbeat packet to the health manager endpoints every ten seconds by > > default. The list of valid health manager endpoints is included in the > > amphora agent configuration file that is injected into the service VM > > at boot time. It can be updated using the Octavia admin API for > > refreshing the amphora agent configuration. > > > >> Can you please confirm if the same floating IP concept runs from public (external) IP to the private (tenant) and from private to lb-mgmt-net please? > > Octavia does not use floating IPs. Users can create and assign > > floating IPs via neutron if they would like, but they are not > > necessary. Octavia VIPs can be created directly on neutron "external" > > networks, avoiding the NAT overhead of floating IPs. > > There is no practical reason to assign a floating IP to a port on the > > lb-mgmt-net as tenant traffic is never on or accessible from that > > network. > > > >> Thanks in advance for any feedback > >> > >> On 06/05/2021 22:46, Michael Johnson wrote: > >> > >> Hi Luke, > >> > >> 1. I agree that DHCP is technically unnecessary for the o-hm0 > >> interface if you can manage your address allocation on the network you > >> are using for the lb-mgmt-net. > >> I don't have detailed information about the Ubuntu install > >> instructions, but I suspect it was done to simplify the IPAM to be > >> managed by whatever is providing DHCP on the lb-mgmt-net provided (be > >> it neutron or some other resource on a provider network). > >> The lb-mgmt-net is simply a neutron network that the amphora > >> management address is on. It is routable and does not require external > >> access. The only tricky part to it is the worker, health manager, and > >> housekeeping processes need to be reachable from the amphora, and the > >> controllers need to reach the amphora over the network(s). There are > >> many ways to accomplish this. > >> > >> 2. See my above answer. Fundamentally the lb-mgmt-net is just a > >> neutron network that nova can use to attach an interface to the > >> amphora instances for command and control traffic. As long as the > >> controllers can reach TCP 9433 on the amphora, and the amphora can > >> send UDP 5555 back to the health manager endpoints, it will work fine. > >> > >> 3. 
Octavia, with the amphora driver, does not require any special > >> configuration in Neutron (beyond the advanced services RBAC policy > >> being available for the neutron service account used in your octavia > >> configuration file). The neutron_lbaas.conf and services_lbaas.conf > >> are legacy configuration files/settings that were used for > >> neutron-lbaas which is now end of life. See the wiki page for > >> information on the deprecation of neutron-lbaas: > >> https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation. > >> > >> Michael > >> > >> On Thu, May 6, 2021 at 12:30 PM Luke Camilleri > >> wrote: > >> > >> Hi Michael and thanks a lot for your help on this, after following your > >> steps the agent got deployed successfully in the amphora-image. > >> > >> I have some other queries that I would like to ask mainly related to the > >> health-manager/load-balancer network setup and IP assignment. First of > >> all let me point out that I am using a manual installation process, and > >> it might help others to understand the underlying infrastructure > >> required to make this component work as expected. > >> > >> 1- The installation procedure contains this step: > >> > >> $ sudo cp octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia > >> > >> which is later on called to assign the IP to the o-hm0 interface which > >> is connected to the lb-management network as shown below: > >> > >> $ sudo dhclient -v o-hm0 -cf /etc/dhcp/octavia > >> > >> Apart from having a dhcp config for a single IP seems a bit of an > >> overkill, using these steps is injecting an additional routing table > >> into the default namespace as shown below in my case: > >> > >> # route -n > >> Kernel IP routing table > >> Destination Gateway Genmask Flags Metric Ref Use > >> Iface > >> 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0 > >> 0.0.0.0 10.X.X.1 0.0.0.0 UG 100 0 0 ensX > >> 10.X.X.0 0.0.0.0 255.255.255.0 U 100 0 0 ensX > >> 169.254.169.254 172.16.0.100 255.255.255.255 UGH 0 0 0 o-hm0 > >> 172.16.0.0 0.0.0.0 255.240.0.0 U 0 0 0 o-hm0 > >> > >> Since the load-balancer management network does not need any external > >> connectivity (but only communication between health-manager service and > >> amphora-agent), why is a gateway required and why isn't the IP address > >> allocated as part of the interface creation script which is called when > >> the service is started or stopped (example below)? > >> > >> --- > >> > >> #!/bin/bash > >> > >> set -ex > >> > >> MAC=$MGMT_PORT_MAC > >> BRNAME=$BRNAME > >> > >> if [ "$1" == "start" ]; then > >> ip link add o-hm0 type veth peer name o-bhm0 > >> brctl addif $BRNAME o-bhm0 > >> ip link set o-bhm0 up > >> ip link set dev o-hm0 address $MAC > >> *** ip addr add 172.16.0.2/12 dev o-hm0 > >> ***ip link set o-hm0 mtu 1500 > >> ip link set o-hm0 up > >> iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT > >> elif [ "$1" == "stop" ]; then > >> ip link del o-hm0 > >> else > >> brctl show $BRNAME > >> ip a s dev o-hm0 > >> fi > >> > >> --- > >> > >> 2- Is there a possibility to specify a fixed vlan outside of tenant > >> range for the load balancer management network? > >> > >> 3- Are the configuration changes required only in neutron.conf or also > >> in additional config files like neutron_lbaas.conf and > >> services_lbaas.conf, similar to the vpnaas configuration? > >> > >> Thanks in advance for any assistance, but its like putting together a > >> puzzle of information :-) > >> > >> On 05/05/2021 20:25, Michael Johnson wrote: > >> > >> Hi Luke. 
> >> > >> Yes, the amphora-agent will listen on 9443 in the amphorae instances. > >> It uses TLS mutual authentication, so you can get a TLS response, but > >> it will not let you into the API without a valid certificate. A simple > >> "openssl s_client" is usually enough to prove that it is listening and > >> requesting the client certificate. > >> > >> I can't talk to the "openstack-octavia-diskimage-create" package you > >> found in centos, but I can discuss how to build an amphora image using > >> the OpenStack tools. > >> > >> If you get Octavia from git or via a release tarball, we provide a > >> script to build the amphora image. This is how we build our images for > >> the testing gates, etc. and is the recommended way (at least from the > >> OpenStack Octavia community) to create amphora images. > >> > >> https://opendev.org/openstack/octavia/src/branch/master/diskimage-create > >> > >> For CentOS 8, the command would be: > >> > >> diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 (3 > >> is the minimum disk size for centos images, you may want more if you > >> are not offloading logs) > >> > >> I just did a run on a fresh centos 8 instance: > >> git clone https://opendev.org/openstack/octavia > >> python3 -m venv dib > >> source dib/bin/activate > >> pip3 install diskimage-builder PyYAML six > >> sudo dnf install yum-utils > >> ./diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 > >> > >> This built an image. > >> > >> Off and on we have had issues building CentOS images due to issues in > >> the tools we rely on. If you run into issues with this image, drop us > >> a note back. > >> > >> Michael > >> > >> On Wed, May 5, 2021 at 9:37 AM Luke Camilleri > >> wrote: > >> > >> Hi there, i am trying to get Octavia running on a Victoria deployment on > >> CentOS 8. It was a bit rough getting to the point to launch an instance > >> mainly due to the load-balancer management network and the lack of > >> documentation > >> (https://docs.openstack.org/octavia/victoria/install/install.html) to > >> deploy this oN CentOS. I will try to fix this once I have my deployment > >> up and running to help others on the way installing and configuring this :-) > >> > >> At this point a LB can be launched by the tenant and the instance is > >> spawned in the Octavia project and I can ping and SSH into the amphora > >> instance from the Octavia node where the octavia-health-manager service > >> is running using the IP within the same subnet of the amphoras > >> (172.16.0.0/12). > >> > >> Unfortunately I keep on getting these errors in the log file of the > >> worker log (/var/log/octavia/worker.log): > >> > >> 2021-05-05 01:54:49.368 14521 WARNING > >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > >> to instance. Retrying.: requests.exceptions.ConnectionError: > >> HTTPSConnectionPool(host='172.16.4.46', p > >> ort=9443): Max retries exceeded with url: // (Caused by > >> NewConnectionError(' >> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] > >> Connection ref > >> used',)) > >> > >> 2021-05-05 01:54:54.374 14521 ERROR > >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries > >> (currently set to 120) exhausted. The amphora is unavailable. 
Reason: > >> HTTPSConnectionPool(host='172.16 > >> .4.46', port=9443): Max retries exceeded with url: // (Caused by > >> NewConnectionError(' >> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] Conne > >> ction refused',)) > >> > >> 2021-05-05 01:54:54.374 14521 ERROR > >> octavia.controller.worker.v1.tasks.amphora_driver_tasks [-] Amphora > >> compute instance failed to become reachable. This either means the > >> compute driver failed to fully boot the > >> instance inside the timeout interval or the instance is not reachable > >> via the lb-mgmt-net.: > >> octavia.amphorae.driver_exceptions.exceptions.TimeOutException: > >> contacting the amphora timed out > >> > >> obviously the instance is deleted then and the task fails from the > >> tenant's perspective. > >> > >> The main issue here is that there is no service running on port 9443 on > >> the amphora instance. I am assuming that this is in fact the > >> amphora-agent service that is running on the instance which should be > >> listening on this port 9443 but the service does not seem to be up or > >> not installed at all. > >> > >> To create the image I have installed the CentOS package > >> "openstack-octavia-diskimage-create" which provides the utility > >> disk-image-create but from what I can conclude the amphora-agent is not > >> being installed (thought this was done automatically by default :-( ) > >> > >> Can anyone let me know if the amphora-agent is what gets queried on port > >> 9443 ? > >> > >> If the agent is not installed/injected by default when building the > >> amphora image? > >> > >> The command to inject the amphora-agent into the amphora image when > >> using the disk-image-create command? > >> > >> Thanks in advance for any assistance > >> > >> From johnsomor at gmail.com Thu May 13 16:37:36 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 13 May 2021 09:37:36 -0700 Subject: [octavia] victoria - loadbalancer works but its operational status is offline In-Reply-To: <96ed78a8-48e0-793b-59d6-4128bdf83987@zylacomputing.com> References: <96ed78a8-48e0-793b-59d6-4128bdf83987@zylacomputing.com> Message-ID: It is documented here: https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.heartbeat_key It is also listed in the quick start guide here: https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html#configuring-octavia Your deployment tool should be configuring that for you. Michael On Thu, May 13, 2021 at 6:50 AM Luke Camilleri wrote: > > HI Gregory, I have the same issue as described here by Piotr and on checking the config I also noticed that the key is missing from the config file. > > What is the heartbeat_key string though, following the docs to install and configure, where is this key located? > > On 28/03/2021 20:32, apps at mossakowski.ch wrote: > > Yes, that was it: missing [health_manager].heartbeat_key in octavia.conf > It is not present in openstack victoria octavia docs, I'll push it together with my installation guide for centos8. > Thanks for your accurate hint Gregory. > It is always crucial to ask the right guy:) > > Regards, > Piotr Mossakowski > Sent from ProtonMail mobile > > > > -------- Original Message -------- > On 22 Mar 2021, 09:10, Gregory Thiemonge < gthiemonge at redhat.com> wrote: > > > Hi, > > Most of the OFFLINE operational status issues are caused by communication problems between the amphorae and the Octavia health-manager. > > In your case, the "Ignoring this packet. 
Exception: 'NoneType' object has no attribute 'encode'" log message shows that the health-manager receives the heartbeat packets from the amphorae but it is unable to decode them. Those packets are encrypted JSON messages and it seems that the key ([health_manager].heartbeat_key see https://docs.openstack.org/octavia/latest/configuration/configref.html#health-manager) used to encrypt those messages is not defined in your configuration file. So I would suggest configuring it and restarting the Octavia services, then you can re-create or failover the load balancers (you cannot change this parameter in a running load balancer). > > Gregory > > On Sun, Mar 21, 2021 at 6:17 PM wrote: >> >> Hello, >> I have stable/victoria baremetal openstack with octavia installed on centos8 using openvswitch mechanism driver: octavia api on controller, health-manager,housekeeping,worker on 3 compute/network nodes. >> Official docs include only ubuntu with linuxbridge mechanism but I used https://github.com/prastamaha/openstack-octavia as a reference to get it working on centos8 with ovs. >> I will push those docs instructions for centos8 soon: https://github.com/openstack/octavia/tree/master/doc/source/install. >> I created basic http scenario using https://docs.openstack.org/octavia/victoria/user/guides/basic-cookbook.html#deploy-a-basic-http-load-balancer. >> Loadbalancer works but its operational status is offline (openstack_loadbalancer_outputs.txt). >> On all octavia workers I see the same warning message in health_manager.log: >> Health Manager experienced an exception processing a heartbeat message from ('172.31.255.233', 1907). Ignoring this packet. Exception: 'NoneType' object has no attribute 'encode' >> I've searched for related active bug but all I found is this not related in my opinion: https://storyboard.openstack.org/#!/story/2008615 >> I'm attaching all info I've gathered: >> >> octavia.conf and health_manager debug logs (octavia_config_and_health_manager_logs.txt) >> tcpdump from amphora VM (tcpdump_from_amphora_vm.txt) >> tcpdump from octavia worker (tcpdump_from_octavia_worker.txt) >> debug amphora-agent.log from amphora VM (amphora-agent.log) >> >> Can you point me to the right direction what I have missed? >> Thanks! >> Piotr Mossakowski >> https://github.com/moss2k13 >> >> >> From dms at danplanet.com Thu May 13 17:01:56 2021 From: dms at danplanet.com (Dan Smith) Date: Thu, 13 May 2021 10:01:56 -0700 Subject: [all][infra][qa] Critical call for help: Retiring Logstash, Elasticsearch, Elastic-recheck References: <39d813ed-4e26-49a9-a371-591b07d51a89@www.fastmail.com> Message-ID: I'm creating a sub-thread of this discussion, specifically to highlight the impact of retiring Logstash and Elasticsearch, the functionality we will lose as a result, and to put out the call for resources to help. I will trim Clark's original email to just the critical bits of infrastructure related to these services. > Xenial has recently reached the end of its life. Our > logstash+kibana+elasticsearch and subunit2sql+health data crunching > services all run on Xenial. Even without the distro platform EOL > concerns these services are growing old and haven't received the care > they need to keep running reliably. 
> > Additionally these services represent a large portion of our resource consumption: > > * 6 x 16 vcpu + 60GB RAM + 1TB disk Elasticsearch servers > * 20 x 4 vcpu + 4GB RAM logstash-worker servers > * 1 x 2 vcpu + 2GB RAM logstash/kibana central server > * 2 x 8 vcpu + 8GB RAM subunit-worker servers > * 64GB RAM + 500GB disk subunit2sql trove db server > To put things in perspective, they account for more than a quarter of > our control plane servers, occupying over a third of our block storage > and in excess of half the total memory footprint. > > The OpenDev/OpenStack Infra team(s) don't seem to have the time > available currently to do the major lifting required to bring these > services up to date. I would like to propose that we simply turn them > off. All of these services operate off of public data that will not be > going away (specifically job log content). If others are interested in > taking this on they can hook into this data and run their own > processing pipelines. Just to clarify for people that aren't familiar with what these services do for us, I want to explain their importance and the impact of not having them in a future where we have to decommission them. We run a lot of jobs in CI, across a lot of projects and varying configurations. Ideally these would all work all of the time, and never have spurious and non-deterministic failures. However, that's not how the real world works, and in reality many jobs are not consistently well-behaved. Since many of our jobs run tests against many projects to ensure that the whole stack works at any given point, spurious failures in one project's tests can impact developers' ability to land patches in a large number of projects. Indeed, it takes a surprisingly low failure rate to significantly impact the amount of work that can be done across the ecosystem. Because of this, collecting information from "the firehose" about job failures is critical. It helps us figure out how much impact a given spurious failure is having, and across how wide of a swath of projects. Further, fixing the problem becomes one of determining the actual bug (of course) which can be vastly improved by gathering lots of examples of failures and looking for commonalities. These services (collectively called ELK) digest the logs and data from these test runs and provide a way to mine details when chasing down a failure. There is even a service, built by openstack people, which uses ELK to automate the identification of common failures to help determine which are having the most serious impact in order to focus human debugging attention. It's called elastic-recheck, which you've probably heard of, and is visible here: http://status.openstack.org/elastic-recheck/ Unfortunately, a select few developers actually work on these problems. They're difficult to tackle and often require multiple people across projects to nail down a cause and solution. If you've ever just run "recheck" on your patch a bunch of times until the tests are green, you have felt the pain that spurious job failures bring. Actually fixing those are the only way to make things better, and ignoring them causes them to collect over time. At some point, enough of these types of failures will keep anything from merging. Because a small number of heroes generally work on these problems, it's possible that they are the only ones that understand the value of these services. I think it's important for everyone to understand how critical ELK and associated services are to chasing these down. 
Without it, debugging the spurious failures (which are often real bugs, by the way!) will become even more laborious and likely happen less and less. I'm summarizing this situation in hopes that some of the entities that depend on OpenStack, who are looking for a way to help, and which may have resources (carbon- and silicon-based) that apply here can step up to help make an impact. Thanks! --Dan From alifshit at redhat.com Thu May 13 17:07:16 2021 From: alifshit at redhat.com (Artom Lifshitz) Date: Thu, 13 May 2021 13:07:16 -0400 Subject: [Nova] Meeting time poll Message-ID: Hey all, As discussed during the IRC meeting today, the Red Hat Nova team would like to know if it's possible to shift the IRC meeting to a different day and/or time. This would facilitate our own internal calls, but I want to be very clear that we'll structure our internal calls around upstream, not the other way around. So please do not perceive any pressure to change, this is just a question :) To help us figure this out, I've created a Doodle poll [1]. I believe the regular attendees of the IRC meeting are spread between Central Europe and NA West Coast, so I've tried to list times that kinda make sense in both of those places, with a couple of hours on each side as a safety margin. Please vote on when you'd like the Nova IRC meeting to take place. Ignore the actual dates (like May 10), the important bits are the days of the week (Monday, Tuesday, etc). This is obviously a recurring meeting, something that Doodle doesn't seem to understand. I've not included Mondays and Wednesdays in the list of possibilities, as they would not work for Red Hat Nova. You can also vote to keep the status quo :) The times are listed in UTC, like the current meeting time, so unfortunately you have to be mindful of the effects of daylight savings time :( Thanks in advance! [1] https://doodle.com/poll/45ptnyn85iuw7pxz From alifshit at redhat.com Thu May 13 17:10:35 2021 From: alifshit at redhat.com (Artom Lifshitz) Date: Thu, 13 May 2021 13:10:35 -0400 Subject: [Nova] Meeting time poll In-Reply-To: References: Message-ID: On Thu, May 13, 2021 at 1:07 PM Artom Lifshitz wrote: > > Hey all, > > As discussed during the IRC meeting today, the Red Hat Nova team would > like to know if it's possible to shift the IRC meeting to a different > day and/or time. This would facilitate our own internal calls, but I > want to be very clear that we'll structure our internal calls around > upstream, not the other way around. So please do not perceive any > pressure to change, this is just a question :) > > To help us figure this out, I've created a Doodle poll [1]. I believe > the regular attendees of the IRC meeting are spread between Central > Europe and NA West Coast, so I've tried to list times that kinda make > sense in both of those places, with a couple of hours on each side as > a safety margin. > > Please vote on when you'd like the Nova IRC meeting to take place. > Ignore the actual dates (like May 10), the important bits are the days > of the week (Monday, Tuesday, etc). This is obviously a recurring > meeting, something that Doodle doesn't seem to understand. > > I've not included Mondays and Wednesdays in the list of possibilities, > as they would not work for Red Hat Nova. You can also vote to keep the > status quo :) > > The times are listed in UTC, like the current meeting time, so > unfortunately you have to be mindful of the effects of daylight > savings time :( > > Thanks in advance! 
> > [1] https://doodle.com/poll/45ptnyn85iuw7pxz And I just noticed that there's a calendar view [2] that you can use to convert to your own time zone. Nifty! (You'll still have to be mindful of daylight saving time though). [2] https://doodle.com/poll/45ptnyn85iuw7pxz#calendar From james.slagle at gmail.com Thu May 13 18:38:36 2021 From: james.slagle at gmail.com (James Slagle) Date: Thu, 13 May 2021 14:38:36 -0400 Subject: [TripleO] Opting out of global-requirements.txt Message-ID: I'd like to propose that TripleO opt out of dependency management by removing tripleo-common from global-requirements.txt. I do not feel that the global dependency management brings any advantages or anything needed for TripleO. I can't think of any reason to enforce the ability to be globally pip installable with the rest of OpenStack. Two of our most critical projects, tripleoclient and tripleo-common do not even put many of their data files in the right place where our code expects them when they are pip installed. So, I feel fairly confident that no one is pip installing TripleO and relying on global requirements enforcement. One potential advantage of not being in global-requirements.txt is that our unit tests and functional tests could actually test the same code. As things stand today, our unit tests in projects that depend on tripleo-common are pinned to the version in global-requirements.txt, while our functional tests currently run with tripleo-common from master (or included depends-on). The changes needed would be (aiui): - Remove tripleo repos from projects.txt - Remove check-requirements jobs from those same repos - Remove tripleo-common from global-requirements.txt I think we should also plan to backport these changes to Wallaby. Let me know any concerns or feedback, or anything I might be overlooking. Thanks. -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu May 13 18:47:12 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 13 May 2021 12:47:12 -0600 Subject: [TripleO] Opting out of global-requirements.txt In-Reply-To: References: Message-ID: On Thu, May 13, 2021 at 12:41 PM James Slagle wrote: > I'd like to propose that TripleO opt out of dependency management by > removing tripleo-common from global-requirements.txt. I do not feel that > the global dependency management brings any advantages or anything needed > for TripleO. I can't think of any reason to enforce the ability to be > globally pip installable with the rest of OpenStack. > > Two of our most critical projects, tripleoclient and tripleo-common do not > even put many of their data files in the right place where our code expects > them when they are pip installed. So, I feel fairly confident that no one > is pip installing TripleO and relying on global requirements enforcement. > > One potential advantage of not being in global-requirements.txt is that > our unit tests and functional tests could actually test the same code. As > things stand today, our unit tests in projects that depend on > tripleo-common are pinned to the version in global-requirements.txt, while > our functional tests currently run with tripleo-common from master (or > included depends-on). > > The changes needed would be (aiui): > - Remove tripleo repos from projects.txt > - Remove check-requirements jobs from those same repos > - Remove tripleo-common from global-requirements.txt > > I think we should also plan to backport these changes to Wallaby. 
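To make that concrete, the first and third items would look roughly like the sketch below, taking tripleo-common as the example. This is only illustrative -- the exact entries (and any version markers or license comments) in the requirements repo may differ slightly:

    # openstack/requirements: projects.txt
    -openstack/tripleo-common

    # openstack/requirements: global-requirements.txt
    -tripleo-common  # Apache-2.0

The second item is just dropping the check-requirements template from the zuul project config of each affected tripleo repo.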
> > Let me know any concerns or feedback, or anything I might be overlooking. > Thanks. > +1 thanks for sending this out James! > > -- > -- James Slagle > -- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luke.camilleri at zylacomputing.com Thu May 13 19:06:50 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Thu, 13 May 2021 21:06:50 +0200 Subject: [octavia] victoria - loadbalancer works but its operational status is offline In-Reply-To: References: <96ed78a8-48e0-793b-59d6-4128bdf83987@zylacomputing.com> Message-ID: <79a44d9c-5f63-5cfd-e30d-e6bea1f9fa42@zylacomputing.com> Actually I figured it out a few minutes after I had sent the message :-) Seems like it is just a string of characters that need to be the same between the [health-manager] section in the amphora-agent on the amphora instance and the corresponding section in octavia.conf on the controller node. This should be sorted now, thanks On 13/05/2021 18:37, Michael Johnson wrote: > It is documented here: > https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.heartbeat_key > It is also listed in the quick start guide here: > https://docs.openstack.org/octavia/latest/contributor/guides/dev-quick-start.html#configuring-octavia > > Your deployment tool should be configuring that for you. > > Michael > > On Thu, May 13, 2021 at 6:50 AM Luke Camilleri > wrote: >> HI Gregory, I have the same issue as described here by Piotr and on checking the config I also noticed that the key is missing from the config file. >> >> What is the heartbeat_key string though, following the docs to install and configure, where is this key located? >> >> On 28/03/2021 20:32, apps at mossakowski.ch wrote: >> >> Yes, that was it: missing [health_manager].heartbeat_key in octavia.conf >> It is not present in openstack victoria octavia docs, I'll push it together with my installation guide for centos8. >> Thanks for your accurate hint Gregory. >> It is always crucial to ask the right guy:) >> >> Regards, >> Piotr Mossakowski >> Sent from ProtonMail mobile >> >> >> >> -------- Original Message -------- >> On 22 Mar 2021, 09:10, Gregory Thiemonge < gthiemonge at redhat.com> wrote: >> >> >> Hi, >> >> Most of the OFFLINE operational status issues are caused by communication problems between the amphorae and the Octavia health-manager. >> >> In your case, the "Ignoring this packet. Exception: 'NoneType' object has no attribute 'encode'" log message shows that the health-manager receives the heartbeat packets from the amphorae but it is unable to decode them. Those packets are encrypted JSON messages and it seems that the key ([health_manager].heartbeat_key see https://docs.openstack.org/octavia/latest/configuration/configref.html#health-manager) used to encrypt those messages is not defined in your configuration file. So I would suggest configuring it and restarting the Octavia services, then you can re-create or failover the load balancers (you cannot change this parameter in a running load balancer). >> >> Gregory >> >> On Sun, Mar 21, 2021 at 6:17 PM wrote: >>> Hello, >>> I have stable/victoria baremetal openstack with octavia installed on centos8 using openvswitch mechanism driver: octavia api on controller, health-manager,housekeeping,worker on 3 compute/network nodes. >>> Official docs include only ubuntu with linuxbridge mechanism but I used https://github.com/prastamaha/openstack-octavia as a reference to get it working on centos8 with ovs. 
>>> I will push those docs instructions for centos8 soon: https://github.com/openstack/octavia/tree/master/doc/source/install. >>> I created basic http scenario using https://docs.openstack.org/octavia/victoria/user/guides/basic-cookbook.html#deploy-a-basic-http-load-balancer. >>> Loadbalancer works but its operational status is offline (openstack_loadbalancer_outputs.txt). >>> On all octavia workers I see the same warning message in health_manager.log: >>> Health Manager experienced an exception processing a heartbeat message from ('172.31.255.233', 1907). Ignoring this packet. Exception: 'NoneType' object has no attribute 'encode' >>> I've searched for related active bug but all I found is this not related in my opinion: https://storyboard.openstack.org/#!/story/2008615 >>> I'm attaching all info I've gathered: >>> >>> octavia.conf and health_manager debug logs (octavia_config_and_health_manager_logs.txt) >>> tcpdump from amphora VM (tcpdump_from_amphora_vm.txt) >>> tcpdump from octavia worker (tcpdump_from_octavia_worker.txt) >>> debug amphora-agent.log from amphora VM (amphora-agent.log) >>> >>> Can you point me to the right direction what I have missed? >>> Thanks! >>> Piotr Mossakowski >>> https://github.com/moss2k13 >>> >>> >>> From ignaziocassano at gmail.com Thu May 13 19:26:54 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 13 May 2021 21:26:54 +0200 Subject: Train upgrade from centos 7 to centos 8 Message-ID: Hello Guys, anyone have tried to upgrade a train based on centos 7 to centos 8 without reinstalling ? Thks Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsneddon at redhat.com Thu May 13 19:48:23 2021 From: dsneddon at redhat.com (Dan Sneddon) Date: Thu, 13 May 2021 12:48:23 -0700 Subject: [xena][neutron][ovn] Follow up to BGP with OVN PTG discussions Message-ID: <27a90b1b-19b4-286b-0e9b-9bb04a44a7a4@redhat.com> Thank you all who attended the discussions at the Xena PTG regarding BGP dynamic routing with Neutron using OVN. Here's a brief summary of the important points covered, and some background information. Red Hat has begun gathering a team of engineers to add OpenStack support for BGP dynamic routing using the Free Range Routing (FRR) set of daemons. Acting as a technical lead for the project, I led one session in the TripleO room to discuss the installer components and two sessions in the Neutron room to discuss BGP routing with OVN, and BGP EVPN with OVN. There was some feedback during the Neutron sessions that little has been done to engage the greater OpenStack/Neutron community thus far, or to utilize the existing RFE process for Neutron. This feedback was correct and was received. Initial work was done with a design that didn't require changes on Neutron/core OVN to start with. The intention is to create Neutron RFEs and to work with others working along similar lines now that we have some more clarity as to the direction of further efforts. There will likely be opportunities to leverage APIs and contribute to existing work being done with Neutron Dynamic Routing, BGPVPN, and other work being done to implement BGP EVPN. We would like to collaborate with Ericsson and others and come up with a solution that fits us all! 
The first steps involved greater changes to the deployment tooling, and this was proposed and reviewed upstream in the TripleO project: _ https://specs.openstack.org/openstack/tripleo-specs/specs/wallaby/triplo-bgp-frrouter.html There are several use cases for using BGP, and in fact there are separate efforts underway to utilize BGP for the control plane and data plane. BGP may be used for equal-cost multipath (ECMP) load balancing of outbound links, and bi-directional forwarding detection (BFD) for resiliency to ensure that a path provides connectivity. For outbound connectivity BGP will learn routes from BGP peers. There is separate work being done to add BFD support for Neutron and Neutron Dynamic Routing, but this does not provide BFD support for host communication at the hypervisor layer. Using FRR at the host level provides this BFD support. BGP may be used for advertising routes to API endpoints. In this model HAProxy will listen on an IP address and FRR will advertise routes to that IP to BGP peers. High availability for HAProxy is provided via other means such as Pacemaker, and FRR will simply advertise the virtual IP address when it is active on an API controller. BGP may also be used for routing inbound traffic to provider network IPs or floating IPs for instance connectivity. The Compute nodes will run FRR to advertise routes to the local VM IPs or floating IPs hosted on the node. FRR has a daemon named Zebra that is responsible for exchanging routes between routing daemons such as BGP and the kernel. The redistribute connected statement in the FRR configuration will cause local IP addresses on the host to be advertised via BGP. Floating IP addresses are attached to a loopback interface in a namespace, so they will be redistributed using this method. Changes to OVN will be required to ensure provider network IPs assigned to VMs will be assigned to a loopback interface in a namespace in a similar fashion. FRR was selected for integration into TripleO and OVN for several reasons: using FRR leverages a proven production-grade routing solution that gives us BGP, BFD (bi-directional forwarding detection), VRFs for different namespaces for multitenancy, integration with kernel routing, and potentially other features such as OSPF, RPKI, route monitoring/mirroring, and more. FRR has a very complete feature set and is very robust, although there are other BGP speakers available such as ExaBGP or BIRD, and os-ken, with varying feature sets. OVN will need to be modified to enable the Compute node to assign VM provider network IPs to a loopback interface inside a namespace. These IP address will not be used for sending or receiving traffic, only for redistributing routes to the IPs to BGP peers. Traffic which is sent to those IP addresses will be forwarded to the VM using OVS flows on the hypervisor. An example agent for OVN has been written to demonstrate how to monitor the southbound OVN DB and create loopback IP addresses when a VM is started on a Compute node. The OVN changes will be detailed in a separate OVN spec. Demonstration code is available on Github: _ https://github.com/luis5tb/bgp-agent BGP EVPN with multitenancy will require separate VRFs per tenant. This will allow separate routing tables to be maintained, and allow for overlapping IP addresses for different Neutron tenant networks. FRR may have the capability to utilize a single BGP peering session to combine advertisements for all these VRFs, but there is still work to be done to prototype this design. 
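To make the "redistribute connected" pattern described earlier in this message a little more concrete, the FRR configuration on a Compute node could contain something along the following lines. This is only an illustrative sketch: the AS number and peer address are placeholders, and in a TripleO deployment this would be templated by the installer rather than written by hand.

router bgp 64999
 neighbor 192.0.2.1 remote-as 64999
 ! use BFD to detect a dead uplink quickly
 neighbor 192.0.2.1 bfd
 address-family ipv4 unicast
  ! advertise every locally configured address, including the loopback
  ! IPs created for VM provider-network IPs and floating IPs
  redistribute connected
 exit-address-family

For the EVPN case, the open question noted above is whether one such peering session can carry the advertisements for all of the per-tenant VRFs.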
This may result in more efficient BGP dynamic updates, and could potentially make troubleshooting more straightforward. As suggested in the PTG discussions, we are investigating the BGPVPN API. It appears that this API will work well for this use case. Hopefully we can make significant progress during the Xena development cycle, and we will be able to define what needs to be done in subsequent cycles. Any thoughts, suggestions, and contributions are appreciated. If anyone would like to review the work that we've already published, there is a series of blog posts that Luis Tomas Bolivar made related to how to use it on OpenStack and how it works: - OVN-BGP agent introduction: https://ltomasbo.wordpress.com/2021/02/04/openstack-networking-with-bgp/ - How to set ip up on DevStack Environment: https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/ - In-depth traffic flow inspection: https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-in-depth-traffic-flow-inspection/ Here are some relevant links to posts written by Luis Tomas Bolivar on the ovs-discuss mailing list: https://mail.openvswitch.org/pipermail/ovs-discuss/2021-March/051029.html https://mail.openvswitch.org/pipermail/ovs-discuss/2021-March/051031.html https://mail.openvswitch.org/pipermail/ovs-discuss/2021-March/051033.html -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter From kevin at cloudnull.com Thu May 13 21:06:16 2021 From: kevin at cloudnull.com (Carter, Kevin) Date: Thu, 13 May 2021 16:06:16 -0500 Subject: [TripleO] Opting out of global-requirements.txt In-Reply-To: References: Message-ID: +1 I 100% agree that we should remove tripleo from global requirements and backport these changes to W. -- Kevin Carter IRC: Cloudnull On Thu, May 13, 2021 at 1:54 PM Wesley Hayutin wrote: > > > On Thu, May 13, 2021 at 12:41 PM James Slagle > wrote: > >> I'd like to propose that TripleO opt out of dependency management by >> removing tripleo-common from global-requirements.txt. I do not feel that >> the global dependency management brings any advantages or anything needed >> for TripleO. I can't think of any reason to enforce the ability to be >> globally pip installable with the rest of OpenStack. >> >> Two of our most critical projects, tripleoclient and tripleo-common do >> not even put many of their data files in the right place where our code >> expects them when they are pip installed. So, I feel fairly confident that >> no one is pip installing TripleO and relying on global requirements >> enforcement. >> >> One potential advantage of not being in global-requirements.txt is that >> our unit tests and functional tests could actually test the same code. As >> things stand today, our unit tests in projects that depend on >> tripleo-common are pinned to the version in global-requirements.txt, while >> our functional tests currently run with tripleo-common from master (or >> included depends-on). >> >> The changes needed would be (aiui): >> - Remove tripleo repos from projects.txt >> - Remove check-requirements jobs from those same repos >> - Remove tripleo-common from global-requirements.txt >> >> I think we should also plan to backport these changes to Wallaby. >> >> Let me know any concerns or feedback, or anything I might be overlooking. >> Thanks. >> > > +1 thanks for sending this out James! > >> >> -- >> -- James Slagle >> -- >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From suzhengwei at inspur.com Fri May 14 01:23:28 2021 From: suzhengwei at inspur.com (=?gb2312?B?U2FtIFN1ICjL1dX9zrAp?=) Date: Fri, 14 May 2021 01:23:28 +0000 Subject: [Nova] Meeting time poll Message-ID: A non-text attachment was scrubbed... Name: smime.p7m Type: application/pkcs7-mime Size: 5728 bytes Desc: not available URL: From tkajinam at redhat.com Fri May 14 05:38:22 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Fri, 14 May 2021 14:38:22 +0900 Subject: [puppet][tripleo] Inviting tripleo CI cores to maintain tripleo jobs ? Message-ID: Hi team, As you know, we currently have TripleO jobs in some of the puppet repos to ensure a change in puppet side doesn't break TripleO which consumes some of the modules. Because these jobs hugely depend on the job definitions in TripleO repos, I'm wondering whether we can invite a few cores from the TripleO CI team to the puppet-openstack core group to maintain these jobs. I expect the scope here is very limited to tripleo job definitions and doesn't expect any +2 for other parts. I'd be nice if I can hear any thoughts on this topic. Thank you, Takashi -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri May 14 06:02:06 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 14 May 2021 08:02:06 +0200 Subject: [stein][neutron] gratuitous arp In-Reply-To: <35985fecc7b7658d70446aa816d8ed612f942115.camel@redhat.com> References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <4fa3e29a7e654e74bc96ac67db0e755c@binero.com> <95ccfc366d4b497c8af232f38d07559f@binero.com> <35985fecc7b7658d70446aa816d8ed612f942115.camel@redhat.com> Message-ID: Hello, I am trying to apply suggested patches but after applying some python error codes do not allow neutron services to start. Since my python skill is poor, I wonder if that patches are for python3. I am on centos 7 and (probably???) patches are for centos 8. I also wonder if it possible to upgrade a centos 7 train to centos 8 train without reinstalling all nodes. This could be important for next release upgrade (ussuri, victoria and so on). Ignazio Il giorno ven 12 mar 2021 alle ore 23:44 Sean Mooney ha scritto: > On Fri, 2021-03-12 at 08:13 +0000, Tobias Urdin wrote: > > Hello, > > > > If it's the same as us, then yes, the issue occurs on Train and is not > completely solved yet. > there is a downstream bug trackker for this > > https://bugzilla.redhat.com/show_bug.cgi?id=1917675 > > its fixed by a combination of 3 enturon patches and i think 1 nova one > > https://review.opendev.org/c/openstack/neutron/+/766277/ > https://review.opendev.org/c/openstack/neutron/+/753314/ > https://review.opendev.org/c/openstack/neutron/+/640258/ > > and > https://review.opendev.org/c/openstack/nova/+/770745 > > the first tree neutron patches would fix the evauate case but break live > migration > the nova patch means live migration will work too although to fully fix > the related > live migration packet loss issues you need > > https://review.opendev.org/c/openstack/nova/+/747454/4 > https://review.opendev.org/c/openstack/nova/+/742180/12 > to fix live migration with network abckend that dont suppor tmultiple port > binding > and > https://review.opendev.org/c/openstack/nova/+/602432 (the only one not > merged yet.) > for live migrateon with ovs and hybridg plug=false (e.g. ovs firewall > driver, noop or ovn instead of ml2/ovs. 
> > multiple port binding was not actully the reason for this there was a race > in neutorn itslef that would have haapend > even without multiple port binding between the dhcp agent and l2 agent. > > some of those patches have been backported already and all shoudl > eventually make ti to train the could be brought to stine potentially > if peopel are open to backport/review them. > > > > > > > > Best regards > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Fri May 14 06:11:52 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Fri, 14 May 2021 08:11:52 +0200 Subject: [Nova] Meeting time poll In-Reply-To: References: Message-ID: On Fri, May 14, 2021 at 01:23, Sam Su (苏正伟) wrote: > From: Sam Su (苏正伟) > Sent: Friday, May 14, 2021 03:23 > To: alifshit at redhat.com > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [Nova] Meeting time poll > > Hi, Nova team: Hi Sam! > There are many asian developers for Openstack community. I > found the current > IRC time of Nova is not friendly to them, especially to East Asian. > If they > can take part in the IRC meeting, the Nova may have more developers. > Of > cource, Central Europe and NA West Coast is firstly considerable. If > the team > could schedule the meeting once per month, time suitable for asians, > more > people would participate in the meeting discussion. You have a point. In the past Nova had alternating meeting time slots one for EU+NA and one for the NA+Asia timezones. Our experience was that the NA+Asia meeting time slot was mostly lacking participants. So we merged the two slots. But I can imagine that the situation has changed since and there might be need for an alternating meeting again. We can try what you suggest and do an Asia friendly meeting once a month. The next question is what time you would like to have that meeting. Or more specifically which part of the nova team you would like to meet more? * Do a meeting around 8:00 UTC to meet Nova devs from the EU * Do a meeting around 0:00 UTC to meet Nova devs from North America If we go for the 0:00 UTC time slot then I need somebody to chair that meeting as I'm from the EU. Alternatively to having a formal meeting I can offer to hold a free style office hour each Thursday 8:00 UTC in #openstack-nova. I made the same offer when we moved the nova meeting to be a non alternating one. But honestly I don't remember ever having discussion happening specifically due to that office hour in #openstack-nova. Cheers, gibi p.s.: the smime in your mail is not really mailing list friendly. Your mail does not appear properly in the archive. From jesse at odyssey4.me Fri May 14 07:49:37 2021 From: jesse at odyssey4.me (Jesse Pretorius) Date: Fri, 14 May 2021 07:49:37 +0000 Subject: [TripleO] Opting out of global-requirements.txt In-Reply-To: References: Message-ID: <70F86497-351C-43EC-89A4-D93D30DD5C37@odyssey4.me> > On 13 May 2021, at 19:38, James Slagle wrote: > > I'd like to propose that TripleO opt out of dependency management by removing tripleo-common from global-requirements.txt. I do not feel that the global dependency management brings any advantages or anything needed for TripleO. I can't think of any reason to enforce the ability to be globally pip installable with the rest of OpenStack. How would this affect the RDO build process? Doesn’t it use a common set of deps defined by global requirements to build everything on a stable branch? 
Would this mean that TripleO would be built using a different set of deps from the OpenStack services it deploys? I’m all for doing this, but there may be a hidden cost here which needs to be considered. From ramishra at redhat.com Fri May 14 08:30:56 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Fri, 14 May 2021 14:00:56 +0530 Subject: [TripleO] Opting out of global-requirements.txt In-Reply-To: References: Message-ID: On Fri, May 14, 2021 at 12:13 AM James Slagle wrote: > I'd like to propose that TripleO opt out of dependency management by > removing tripleo-common from global-requirements.txt. I do not feel that > the global dependency management brings any advantages or anything needed > for TripleO. I can't think of any reason to enforce the ability to be > globally pip installable with the rest of OpenStack. > > Two of our most critical projects, tripleoclient and tripleo-common do not > even put many of their data files in the right place where our code expects > them when they are pip installed. So, I feel fairly confident that no one > is pip installing TripleO and relying on global requirements enforcement. > > One potential advantage of not being in global-requirements.txt is that > our unit tests and functional tests could actually test the same code. As > things stand today, our unit tests in projects that depend on > tripleo-common are pinned to the version in global-requirements.txt, while > our functional tests currently run with tripleo-common from master (or > included depends-on). > > The changes needed would be (aiui): > - Remove tripleo repos from projects.txt > - Remove check-requirements jobs from those same repos > - Remove tripleo-common from global-requirements.txt > > I think it's fine as long as we manage the project requirements properly and ensure that we don't bump to some broken library versions (Tripleo CI may or may not catch those but other projects CIs possibly can), as we sync those requirements to rdo spec files regularly. > I think we should also plan to backport these changes to Wallaby. > > Let me know any concerns or feedback, or anything I might be overlooking. > Thanks. > > -- > -- James Slagle > -- > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Fri May 14 09:13:09 2021 From: zigo at debian.org (Thomas Goirand) Date: Fri, 14 May 2021 11:13:09 +0200 Subject: [release-announce] oslo.log 4.5.0 In-Reply-To: References: Message-ID: On 5/14/21 11:01 AM, no-reply at openstack.org wrote: > We enthusiastically announce the release of: > > oslo.log 4.5.0: oslo.log library > > The source is available from: > > https://opendev.org/openstack/oslo.log Hi, Something broke the subject line, as it's missing the "(xena)" thingy at the end... Can someone repair it, so my filters can properly work? Cheers, Thomas Goirand (zigo) From moguimar at redhat.com Fri May 14 09:17:17 2021 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Fri, 14 May 2021 11:17:17 +0200 Subject: [release-announce] oslo.log 4.5.0 In-Reply-To: References: Message-ID: Hey Thomas, I might not be corect on this one, but many Oslo libraries moved to the independent release model so this might have to do with it. 
[ ]'s Moisés On Fri, May 14, 2021 at 11:13 AM Thomas Goirand wrote: > On 5/14/21 11:01 AM, no-reply at openstack.org wrote: > > We enthusiastically announce the release of: > > > > oslo.log 4.5.0: oslo.log library > > > > The source is available from: > > > > https://opendev.org/openstack/oslo.log > > Hi, > > Something broke the subject line, as it's missing the "(xena)" thingy at > the end... Can someone repair it, so my filters can properly work? > > Cheers, > > Thomas Goirand (zigo) > > -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Fri May 14 09:18:50 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 14 May 2021 11:18:50 +0200 Subject: [release-announce] oslo.log 4.5.0 In-Reply-To: References: Message-ID: Oslo.log is cycle-independent now [1] [1] https://opendev.org/openstack/releases/src/branch/master/deliverables/_independent/oslo.log.yaml -yoctozepto On Fri, May 14, 2021 at 11:14 AM Thomas Goirand wrote: > On 5/14/21 11:01 AM, no-reply at openstack.org wrote: > > We enthusiastically announce the release of: > > > > oslo.log 4.5.0: oslo.log library > > > > The source is available from: > > > > https://opendev.org/openstack/oslo.log > > Hi, > > Something broke the subject line, as it's missing the "(xena)" thingy at > the end... Can someone repair it, so my filters can properly work? > > Cheers, > > Thomas Goirand (zigo) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From giacomo.lanciano at sns.it Fri May 14 09:51:22 2021 From: giacomo.lanciano at sns.it (Giacomo Lanciano) Date: Fri, 14 May 2021 11:51:22 +0200 Subject: [Senlin][Octavia] Health policy - LB_STATUS_POLLING detection mode Message-ID: <9815228e-07b2-540b-761b-62e26d0c2c45@sns.it> Hi folks, I'd like to know what is the status of making the LB_STATUS_POLLING detection mode available for the Senlin Health Policy [1]. According to the docs and this patch [2], the implementation of this feature is blocked by some issue on LBaaS/Octavia side, but I could not find any details on what this issue really is. As the docs state, it would be really useful to have this detection mode available, as it is much more reliable than the others at evaluating the status of an application. If I can be of any help, I would be willing to contribute. Thanks in advance. Kind regards. Giacomo [1] https://docs.openstack.org/senlin/latest/contributor/policies/health_v1.html#failure-detection [2] https://review.opendev.org/c/openstack/senlin/+/423012 -- Giacomo Lanciano Ph.D. Student in Data Science Scuola Normale Superiore, Pisa, Italy https://www.linkedin.com/in/giacomolanciano From jean-francois.taltavull at elca.ch Fri May 14 10:19:51 2021 From: jean-francois.taltavull at elca.ch (Taltavull Jean-Francois) Date: Fri, 14 May 2021 10:19:51 +0000 Subject: [Octavia dashboard] Impossible to fill the subnet field when creating a load balancer In-Reply-To: References: <37fc26ac7ba043418540c022c6d30e1b@elca.ch> Message-ID: <21e93b4027d24131a9f6dddddef7acd3@elca.ch> Hi Michael, I can actually see a subnet in the networking section of horizon but no drop-down list appears on the load balancer create screen. Instead, the blue ribbon keeps on turning. 
> -----Original Message----- > From: Michael Johnson > Sent: jeudi, 13 mai 2021 01:52 > To: Taltavull Jean-Francois > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [Octavia dashboard] Impossible to fill the subnet field when creating > a load balancer > > Hi Jean-François, > > The Subnet drop-down on the load balancer create screen in horizon should be > populated with the neutron subnets your project has access to. > > Can you check that you can see at least one subnet (or the specific subnet you > are looking for) in the networking section of the horizon UI? > > The field should be a drop-down listing the available neutron subnets. > You can see a demo of that in our Boston presentation here: > https://youtu.be/BBgP3_qhJ00?t=935 > > Michael > > On Wed, May 12, 2021 at 8:18 AM Taltavull Jean-Francois francois.taltavull at elca.ch> wrote: > > > > Hi All, > > > > On my OSA Victoria deployment (on Ubuntu 20.04), the "Create Load > Balancer" form does not allow me to fill the mandatory "subnet" field and > therefore I can't create a load balancer with Horizon. > > > > On the other hand, Octavia works fine and I can create load balancers with the > CLI, Terraform, etc. > > > > Has one of you already faced such a situation ? > > > > > > Best regards, > > > > Jean-François > > > > > > From marios at redhat.com Fri May 14 10:41:39 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 14 May 2021 13:41:39 +0300 Subject: [TripleO] Opting out of global-requirements.txt In-Reply-To: References: Message-ID: On Thu, May 13, 2021 at 9:41 PM James Slagle wrote: > > I'd like to propose that TripleO opt out of dependency management by removing tripleo-common from global-requirements.txt. I do not feel that the global dependency management brings any advantages or anything needed for TripleO. I can't think of any reason to enforce the ability to be globally pip installable with the rest of OpenStack. To add a bit more context as to how this discussion came about, we tried to remove tripleo-common from global-requirements and the check-requirements jobs in tripleoclient caused [1] as a result. So we need to decide whether to continue to be part of that requirements contract [2] or if we will do what is proposed here and remove ourselves altogether. If we decide _not_ to implement this proposal then we will also have to add the requirements-check jobs in tripleo-ansible [3] and tripleo-validations [4] as they are currently missing. > > Two of our most critical projects, tripleoclient and tripleo-common do not even put many of their data files in the right place where our code expects them when they are pip installed. So, I feel fairly confident that no one is pip installing TripleO and relying on global requirements enforcement. I don't think this is _just_ about pip installs. It is generally about the contents of each project requirements.txt. As part of the requirements contract, it means that those repos with which we are participating (the ones in projects.txt [5]) are protected against other projects making any breaking changes in _their_ requirements.txt. Don't the contents of requirements.txt also end up in the .spec file from which we are building rpm e.g. [6] for tht? In which case if we remove this and just stop catching any breaking changes in the check/gate check-requirements jobs, I suspect we will just move the problem to the rpm build and it will fail there. 
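For context on the mechanics being discussed here: a repo participates in the contract by being listed in openstack/requirements projects.txt and by carrying the shared Zuul template in its project definition, roughly the following (an illustration only, not copied from any particular repo):

- project:
    templates:
      - check-requirements

That template is what adds the requirements-check job to a repo's check and gate pipelines.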
> > One potential advantage of not being in global-requirements.txt is that our unit tests and functional tests could actually test the same code. As things stand today, our unit tests in projects that depend on tripleo-common are pinned to the version in global-requirements.txt, while our functional tests currently run with tripleo-common from master (or included depends-on). I don't think one in particular is a very valid point though - as things currently stand in global-requirements we aren't 'pinning' all we have there is "tripleo-common!=11.3.0 # Apache-2.0" [7] to avoid (I assume) some bad release we made. One main advantage that isn't mentioned here is that by removing ourselves from the requirements contract then we are no longer bound to release tripleo-common first when making a new stable/branch. This should make it easier to keep gates green on new releases. > > The changes needed would be (aiui): > - Remove tripleo repos from projects.txt > - Remove check-requirements jobs from those same repos > - Remove tripleo-common from global-requirements.txt yes that list looks right if we decide to go ahead with that. As noted above my main reservation is that we may end up seeing more incompatibilities in requirements (and later during package building, instead of earlier during the check/gate check-requirements jobs). regards, marios [1] https://bugs.launchpad.net/tripleo/+bug/1928190 [2] https://docs.openstack.org/project-team-guide/dependency-management.html#enforcement-in-projects [3] https://opendev.org/openstack/tripleo-ansible/src/commit/5b160e8d02acbaaec67df66a5a204e0c0d08366b/zuul.d/layout.yaml#L3 [4] https://opendev.org/openstack/tripleo-validations/src/commit/edf9ee1f34c92e3b8c70bf3d81c72de5ddc7cba0/zuul.d/layout.yaml#L3 [5] https://github.com/openstack/requirements/blob/ce19462764940a4ce99dae4ac2ec7a004c68e9a4/projects.txt#L245-L249 [6] https://review.rdoproject.org/r/gitweb?p=openstack/tripleo-heat-templates-distgit.git;a=blob;f=openstack-tripleo-heat-templates.spec;h=36ee20fb8042bbdd254b4eab6ab1f3272bd8a6c5;hb=HEAD [7] https://github.com/openstack/requirements/blob/f00ad51c3f4de9d956605d81db4ce34fa9a3ba1c/global-requirements.txt#L337 > > I think we should also plan to backport these changes to Wallaby. > > Let me know any concerns or feedback, or anything I might be overlooking. Thanks. > > -- > -- James Slagle > -- From marios at redhat.com Fri May 14 11:09:53 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 14 May 2021 14:09:53 +0300 Subject: [puppet][tripleo] Inviting tripleo CI cores to maintain tripleo jobs ? In-Reply-To: References: Message-ID: On Fri, May 14, 2021 at 8:40 AM Takashi Kajinami wrote: > > Hi team, > Hi Takashi > > As you know, we currently have TripleO jobs in some of the puppet repos > to ensure a change in puppet side doesn't break TripleO which consumes > some of the modules. in case it isn't clear and for anyone else reading, you are referring to things like [1]. > > Because these jobs hugely depend on the job definitions in TripleO repos, > I'm wondering whether we can invite a few cores from the TripleO CI team > to the puppet-openstack core group to maintain these jobs. > I expect the scope here is very limited to tripleo job definitions and doesn't > expect any +2 for other parts. > > I'd be nice if I can hear any thoughts on this topic. Main question is what kind of maintenance do you have in mind? Is it that these jobs are breaking often and they need fixes in the puppet-repos themselves so we need more cores there? (though... 
I would expect the fixes to be needed in tripleo-ci where the job definitions are, unless the repos are overriding those definitions)? Or is it that you don't have enough folks to get fixes merged so this is mostly about growing the pool of reviewers? I think limiting the scope to just the contents of zuul.d/ or .zuul.yaml can work; we already have a trust based system in TripleO with some cores only expected to exercise their voting rights in particular repos even though they have full voting rights across all tripleo repos). Are you able to join our next tripleo-ci community call? It is on Tuesday 1330 UTC @ [2] and we use [3] for the agenda. If you can join, perhaps we can work something out depending on what you need. Otherwise no problem let's continue to discuss here regards, marios [1] https://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ci-centos-8-scenario004-standalone&project=openstack/puppet-pacemaker [2] https://meet.google.com/bqx-xwht-wky [3] https://hackmd.io/MMg4WDbYSqOQUhU2Kj8zNg?both > > Thank you, > Takashi > From apetrich at redhat.com Fri May 14 11:18:33 2021 From: apetrich at redhat.com (Adriano Petrich) Date: Fri, 14 May 2021 13:18:33 +0200 Subject: [TripleO] Opting out of global-requirements.txt In-Reply-To: References: Message-ID: As far as I remember that was added there as a requirement because python-mistralclient was there also and that was broken CI ages ago. Now that we don't require more mistal removing it is likely a good idea. +1 On Fri, 14 May 2021 at 12:42, Marios Andreou wrote: > On Thu, May 13, 2021 at 9:41 PM James Slagle > wrote: > > > > I'd like to propose that TripleO opt out of dependency management by > removing tripleo-common from global-requirements.txt. I do not feel that > the global dependency management brings any advantages or anything needed > for TripleO. I can't think of any reason to enforce the ability to be > globally pip installable with the rest of OpenStack. > > To add a bit more context as to how this discussion came about, we > tried to remove tripleo-common from global-requirements and the > check-requirements jobs in tripleoclient caused [1] as a result. > > So we need to decide whether to continue to be part of that > requirements contract [2] or if we will do what is proposed here and > remove ourselves altogether. > If we decide _not_ to implement this proposal then we will also have > to add the requirements-check jobs in tripleo-ansible [3] and > tripleo-validations [4] as they are currently missing. > > > > > Two of our most critical projects, tripleoclient and tripleo-common do > not even put many of their data files in the right place where our code > expects them when they are pip installed. So, I feel fairly confident that > no one is pip installing TripleO and relying on global requirements > enforcement. > > I don't think this is _just_ about pip installs. It is generally about > the contents of each project requirements.txt. As part of the > requirements contract, it means that those repos with which we are > participating (the ones in projects.txt [5]) are protected against > other projects making any breaking changes in _their_ > requirements.txt. Don't the contents of requirements.txt also end up > in the .spec file from which we are building rpm e.g. [6] for tht? In > which case if we remove this and just stop catching any breaking > changes in the check/gate check-requirements jobs, I suspect we will > just move the problem to the rpm build and it will fail there. 
> > > > > One potential advantage of not being in global-requirements.txt is that > our unit tests and functional tests could actually test the same code. As > things stand today, our unit tests in projects that depend on > tripleo-common are pinned to the version in global-requirements.txt, while > our functional tests currently run with tripleo-common from master (or > included depends-on). > > I don't think one in particular is a very valid point though - as > things currently stand in global-requirements we aren't 'pinning' all > we have there is "tripleo-common!=11.3.0 # Apache-2.0" [7] to avoid > (I assume) some bad release we made. > > One main advantage that isn't mentioned here is that by removing > ourselves from the requirements contract then we are no longer bound > to release tripleo-common first when making a new stable/branch. This > should make it easier to keep gates green on new releases. > > > > > The changes needed would be (aiui): > > - Remove tripleo repos from projects.txt > > - Remove check-requirements jobs from those same repos > > - Remove tripleo-common from global-requirements.txt > > yes that list looks right if we decide to go ahead with that. As noted > above my main reservation is that we may end up seeing more > incompatibilities in requirements (and later during package building, > instead of earlier during the check/gate check-requirements jobs). > > regards, marios > > [1] https://bugs.launchpad.net/tripleo/+bug/1928190 > [2] > https://docs.openstack.org/project-team-guide/dependency-management.html#enforcement-in-projects > [3] > https://opendev.org/openstack/tripleo-ansible/src/commit/5b160e8d02acbaaec67df66a5a204e0c0d08366b/zuul.d/layout.yaml#L3 > [4] > https://opendev.org/openstack/tripleo-validations/src/commit/edf9ee1f34c92e3b8c70bf3d81c72de5ddc7cba0/zuul.d/layout.yaml#L3 > [5] > https://github.com/openstack/requirements/blob/ce19462764940a4ce99dae4ac2ec7a004c68e9a4/projects.txt#L245-L249 > [6] > https://review.rdoproject.org/r/gitweb?p=openstack/tripleo-heat-templates-distgit.git;a=blob;f=openstack-tripleo-heat-templates.spec;h=36ee20fb8042bbdd254b4eab6ab1f3272bd8a6c5;hb=HEAD > [7] > https://github.com/openstack/requirements/blob/f00ad51c3f4de9d956605d81db4ce34fa9a3ba1c/global-requirements.txt#L337 > > > > > > I think we should also plan to backport these changes to Wallaby. > > > > Let me know any concerns or feedback, or anything I might be > overlooking. Thanks. > > > > -- > > -- James Slagle > > -- > > > -- Adriano Vieira Petrich Software Engineer He/Him/His Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Fri May 14 11:33:52 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 14 May 2021 12:33:52 +0100 Subject: [docs] Double headings on every page In-Reply-To: References: Message-ID: On Tue, 2021-05-11 at 11:14 -0400, Peter Matulis wrote: > Hi, I'm hitting an oddity in one of my projects where the titles of all pages > show up twice. >   > Example: > > https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/wallaby/app-nova-cells.html > > Source file is here: > > https://opendev.org/openstack/charm-deployment-guide/src/branch/master/deploy-guide/source/app-nova-cells.rst > > Does anyone see what can be causing this? 
It appears to happen only for the > current stable release ('wallaby') and 'latest'. > > Thanks, > Peter I suspect you're bumping into issues introduced by a new version of Sphinx or docutils (new versions of both were released recently). Comparing the current nova docs [1] to what you have, I see the duplicate <h1> element is present but hidden by the following CSS rule:

.docs-body .section h1 {
    display: none;
}

That works because we have the following HTML in the nova docs:

<div class="docs-body">
  <div class="section">
    <h1>Extra Specs</h1>
    ...

while the docs you linked are using the HTML5 semantic '<section>' tag:

<section>
  <h1>Nova Cells</h1>
  ...
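Purely as an illustration of the kind of change needed (not the actual openstackdocstheme patch), extending the hide rule so it also matches the new semantic markup would restore the old behaviour:

.docs-body section h1 {
    display: none;
}

though the theme templates may need a matching update too.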
So to fix this, we'll have to update the openstackdocstheme to handle these changes. I can try to take a look at this next week but I really wouldn't mind if someone beat me to it. Stephen [1] https://docs.openstack.org/nova/latest/configuration/extra-specs.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Fri May 14 11:46:35 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Fri, 14 May 2021 20:46:35 +0900 Subject: [puppet][tripleo] Inviting tripleo CI cores to maintain tripleo jobs ? In-Reply-To: References: Message-ID: Hi Marios, On Fri, May 14, 2021 at 8:10 PM Marios Andreou wrote: > On Fri, May 14, 2021 at 8:40 AM Takashi Kajinami > wrote: > > > > Hi team, > > > > Hi Takashi > > > > As you know, we currently have TripleO jobs in some of the puppet repos > > to ensure a change in puppet side doesn't break TripleO which consumes > > some of the modules. > > in case it isn't clear and for anyone else reading, you are referring > to things like [1]. > This is a nitfixing but puppet-pacemaker is a repo under the TripleO project. I intend a job like https://zuul.opendev.org/t/openstack/builds?job_name=puppet-nova-tripleo-standalone&project=openstack/puppet-nova which is maintained under puppet repos. > > > > > Because these jobs hugely depend on the job definitions in TripleO repos, > > I'm wondering whether we can invite a few cores from the TripleO CI team > > to the puppet-openstack core group to maintain these jobs. > > I expect the scope here is very limited to tripleo job definitions and > doesn't > > expect any +2 for other parts. > > > > I'd be nice if I can hear any thoughts on this topic. > > Main question is what kind of maintenance do you have in mind? Is it > that these jobs are breaking often and they need fixes in the > puppet-repos themselves so we need more cores there? (though... I > would expect the fixes to be needed in tripleo-ci where the job > definitions are, unless the repos are overriding those definitions)? > We define our own base tripleo-puppet-ci-centos-8-standalone job[4] and each puppet module defines their own tripleo job[5] by overriding the base job, so that we can define some basic items like irellevant files or voting status for all puppet modules in a single place. [4] https://github.com/openstack/puppet-openstack-integration/blob/master/zuul.d/tripleo.yaml [5] https://github.com/openstack/puppet-nova/blob/master/.zuul.yaml > Or is it that you don't have enough folks to get fixes merged so this > is mostly about growing the pool of reviewers? > Yes. My main intention is to have more reviewers so that we can fix our CI jobs timely. Actually the proposal came to my mind when I was implementing the following changes to solve very frequent job timeouts which we currently observe in puppet-nova wallaby. IMO these changes need more attention from TripleO's perspective rather than puppet's perspective. https://review.opendev.org/q/topic:%22tripleo-tempest%22+(status:open) In the past when we introduced content provider jobs, we ended up with a bunch of patches submitted to both tripleo jobs and puppet jobs. Having some people from TripleO team would help moving forward such a transition more smoothly. In the past we have had three people (Alex, Emilien and I) involved in both TripleO and puppet but since Emilien has shifted this focus, we have now 2 activities left. Additional one or two people would help us move patches forward more efficiently. (Since I can't approve my own patch.) 
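As a concrete illustration of the override pattern mentioned above (a rough sketch, not copied verbatim from puppet-nova), an individual module's .zuul.yaml ends up looking something like:

- job:
    name: puppet-nova-tripleo-standalone
    parent: tripleo-puppet-ci-centos-8-standalone
    voting: true
    irrelevant-files:
      - ^doc/.*$
      - ^releasenotes/.*$

which is exactly the kind of content a TripleO CI core would be expected to review under this proposal.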
I think limiting the scope to just the contents of zuul.d/ or > .zuul.yaml can work; we already have a trust based system in TripleO > with some cores only expected to exercise their voting rights in > particular repos even though they have full voting rights across all > tripleo repos). > > Are you able to join our next tripleo-ci community call? It is on > Tuesday 1330 UTC @ [2] and we use [3] for the agenda. If you can join, > perhaps we can work something out depending on what you need. > Otherwise no problem let's continue to discuss here > Sure. I can join and bring up this topic. I'll keep this thread to hear some opinions from the puppet side as well. > > regards, marios > > [1] > https://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ci-centos-8-scenario004-standalone&project=openstack/puppet-pacemaker > [2] https://meet.google.com/bqx-xwht-wky > [3] https://hackmd.io/MMg4WDbYSqOQUhU2Kj8zNg?both > > > > > > > Thank you, > > Takashi > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri May 14 11:57:12 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 14 May 2021 14:57:12 +0300 Subject: [puppet][tripleo] Inviting tripleo CI cores to maintain tripleo jobs ? In-Reply-To: References: Message-ID: On Fri, May 14, 2021 at 2:46 PM Takashi Kajinami wrote: > > Hi Marios, > > On Fri, May 14, 2021 at 8:10 PM Marios Andreou wrote: >> >> On Fri, May 14, 2021 at 8:40 AM Takashi Kajinami wrote: >> > >> > Hi team, >> > >> >> Hi Takashi >> >> >> > As you know, we currently have TripleO jobs in some of the puppet repos >> > to ensure a change in puppet side doesn't break TripleO which consumes >> > some of the modules. >> >> in case it isn't clear and for anyone else reading, you are referring >> to things like [1]. > > This is a nitfixing but puppet-pacemaker is a repo under the TripleO project. > I intend a job like > https://zuul.opendev.org/t/openstack/builds?job_name=puppet-nova-tripleo-standalone&project=openstack/puppet-nova > which is maintained under puppet repos. > ack thanks for the clarification ;) makes more sense now >> >> >> > >> > Because these jobs hugely depend on the job definitions in TripleO repos, >> > I'm wondering whether we can invite a few cores from the TripleO CI team >> > to the puppet-openstack core group to maintain these jobs. >> > I expect the scope here is very limited to tripleo job definitions and doesn't >> > expect any +2 for other parts. >> > >> > I'd be nice if I can hear any thoughts on this topic. >> >> Main question is what kind of maintenance do you have in mind? Is it >> that these jobs are breaking often and they need fixes in the >> puppet-repos themselves so we need more cores there? (though... I >> would expect the fixes to be needed in tripleo-ci where the job >> definitions are, unless the repos are overriding those definitions)? > > > We define our own base tripleo-puppet-ci-centos-8-standalone job[4] and > each puppet module defines their own tripleo job[5] by overriding the base job, > so that we can define some basic items like irellevant files or voting status > for all puppet modules in a single place. > > [4] https://github.com/openstack/puppet-openstack-integration/blob/master/zuul.d/tripleo.yaml > [5] https://github.com/openstack/puppet-nova/blob/master/.zuul.yaml > > >> >> Or is it that you don't have enough folks to get fixes merged so this >> is mostly about growing the pool of reviewers? > > > Yes. 
My main intention is to have more reviewers so that we can fix our CI jobs timely. > > Actually the proposal came to my mind when I was implementing the following changes > to solve very frequent job timeouts which we currently observe in puppet-nova wallaby. > IMO these changes need more attention from TripleO's perspective rather than puppet's > perspective. > https://review.opendev.org/q/topic:%22tripleo-tempest%22+(status:open) > > In the past when we introduced content provider jobs, we ended up with a bunch of patches > submitted to both tripleo jobs and puppet jobs. Having some people from TripleO team > would help moving forward such a transition more smoothly. > > In the past we have had three people (Alex, Emilien and I) involved in both TripleO and puppet > but since Emilien has shifted this focus, we have now 2 activities left. > Additional one or two people would help us move patches forward more efficiently. > (Since I can't approve my own patch.) > >> I think limiting the scope to just the contents of zuul.d/ or >> .zuul.yaml can work; we already have a trust based system in TripleO >> with some cores only expected to exercise their voting rights in >> particular repos even though they have full voting rights across all >> tripleo repos). >> >> Are you able to join our next tripleo-ci community call? It is on >> Tuesday 1330 UTC @ [2] and we use [3] for the agenda. If you can join, >> perhaps we can work something out depending on what you need. >> Otherwise no problem let's continue to discuss here > > > Sure. I can join and bring up this topic. > I'll keep this thread to hear some opinions from the puppet side as well. > > ok thanks look forward to discussing on Tuesday then, regards, marios >> >> >> regards, marios >> >> [1] https://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ci-centos-8-scenario004-standalone&project=openstack/puppet-pacemaker >> [2] https://meet.google.com/bqx-xwht-wky >> [3] https://hackmd.io/MMg4WDbYSqOQUhU2Kj8zNg?both >> >> >> >> > >> > Thank you, >> > Takashi >> > >> From tobias.urdin at binero.com Fri May 14 12:54:08 2021 From: tobias.urdin at binero.com (Tobias Urdin) Date: Fri, 14 May 2021 12:54:08 +0000 Subject: [puppet][tripleo] Inviting tripleo CI cores to maintain tripleo jobs ? In-Reply-To: References: Message-ID: <987B8C72-C316-4D8B-A388-5EC5EE23E97E@binero.com> Hello Takashi, >From an Puppet OpenStack core perspective being outside of RedHat I have no problem with it since the mentioned trust from the TripleO side is to only approve patches directly related to it. Hopefully the person becomes interested and starts working on all the Puppet parts as well :) Best regards Tobias > On 14 May 2021, at 07:38, Takashi Kajinami wrote: > > Hi team, > > > As you know, we currently have TripleO jobs in some of the puppet repos > to ensure a change in puppet side doesn't break TripleO which consumes > some of the modules. > > Because these jobs hugely depend on the job definitions in TripleO repos, > I'm wondering whether we can invite a few cores from the TripleO CI team > to the puppet-openstack core group to maintain these jobs. > I expect the scope here is very limited to tripleo job definitions and doesn't > expect any +2 for other parts. > > I'd be nice if I can hear any thoughts on this topic. 
> > Thank you, > Takashi > From manchandavishal143 at gmail.com Fri May 14 13:26:52 2021 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Fri, 14 May 2021 18:56:52 +0530 Subject: [horizon][stable] [release] Proposal to make stable/ocata and stable/pike as EOL Message-ID: Hi all, As discussed in the last horizon weekly meeting[1], [2] on 2021-05-05, I would like to announce that the horizon team decided to make stable/ocata and stable/pike branches as EOL. Consider this mail as an official announcement for that. I would like to know if anyone still would like to keep those branches to be open otherwise, I will purpose an EOL patch for these branches. Thanks & Regards, Vishal Manchanda [1] http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-alt/%23openstack-meeting-alt.2021-05-05.log.html#t2021-05-05T15:55:59 [2] http://eavesdrop.openstack.org/irclogs/%23openstack-horizon/%23openstack-horizon.2021-05-05.log.html#t2021-05-05T16:03:28 -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.slagle at gmail.com Fri May 14 13:28:21 2021 From: james.slagle at gmail.com (James Slagle) Date: Fri, 14 May 2021 09:28:21 -0400 Subject: [TripleO] Opting out of global-requirements.txt In-Reply-To: References: Message-ID: On Fri, May 14, 2021 at 6:41 AM Marios Andreou wrote: > On Thu, May 13, 2021 at 9:41 PM James Slagle > wrote: > > > > I'd like to propose that TripleO opt out of dependency management by > removing tripleo-common from global-requirements.txt. I do not feel that > the global dependency management brings any advantages or anything needed > for TripleO. I can't think of any reason to enforce the ability to be > globally pip installable with the rest of OpenStack. > > To add a bit more context as to how this discussion came about, we > tried to remove tripleo-common from global-requirements and the > check-requirements jobs in tripleoclient caused [1] as a result. > > So we need to decide whether to continue to be part of that > requirements contract [2] or if we will do what is proposed here and > remove ourselves altogether. > If we decide _not_ to implement this proposal then we will also have > to add the requirements-check jobs in tripleo-ansible [3] and > tripleo-validations [4] as they are currently missing. > > > > > Two of our most critical projects, tripleoclient and tripleo-common do > not even put many of their data files in the right place where our code > expects them when they are pip installed. So, I feel fairly confident that > no one is pip installing TripleO and relying on global requirements > enforcement. > > I don't think this is _just_ about pip installs. It is generally about > the contents of each project requirements.txt. As part of the > requirements contract, it means that those repos with which we are > participating (the ones in projects.txt [5]) are protected against > other projects making any breaking changes in _their_ > requirements.txt. Don't the contents of requirements.txt also end up > in the .spec file from which we are building rpm e.g. [6] for tht? In > which case if we remove this and just stop catching any breaking > changes in the check/gate check-requirements jobs, I suspect we will > just move the problem to the rpm build and it will fail there. > I don't see that in the spec file. Unless there is some other automation somewhere that regenerates all of the BuildRequires/Requires and modifies them to match requirements.txt/test-requirements.txt? 
> > > > One potential advantage of not being in global-requirements.txt is that > our unit tests and functional tests could actually test the same code. As > things stand today, our unit tests in projects that depend on > tripleo-common are pinned to the version in global-requirements.txt, while > our functional tests currently run with tripleo-common from master (or > included depends-on). > > I don't think one in particular is a very valid point though - as > things currently stand in global-requirements we aren't 'pinning' all > we have there is "tripleo-common!=11.3.0 # Apache-2.0" [7] to avoid > (I assume) some bad release we made. > tripleo-common is pinned to the latest release when it's pip installed in the venv, instead of using latest git (and including depends-on). You're right that it's probably what we want to keep doing, and this is probably not related to opting out of g-r. Especially since we don't want to require latest git of a dependency when running unit tests locally. However it is worth noting that our unit tests (tox) and functional tests (tripleo-ci) use different code for the dependencies. That was not obvious to me and others on the surface. Perhaps we could add additional tox jobs that do require the latest tripleo-common from git to also cover that scenario. Here's an example: https://review.opendev.org/c/openstack/python-tripleoclient/+/787907 https://review.opendev.org/c/openstack/tripleo-common/+/787906 The tripleo-common patch fails unit tests as expected, the tripleoclient which depends-on the tripleo-common patch passes unit tests, but fails functional. I'd rather see that failure caught by a unit test as well. -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Fri May 14 15:05:55 2021 From: mihalis68 at gmail.com (Chris Morgan) Date: Fri, 14 May 2021 11:05:55 -0400 Subject: threat to freenode (where openstack irc hangs out) Message-ID: https://twitter.com/dmsimard/status/1393203159770804225?s=20 https://p.haavard.me/407 I have no independent validation of this. Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri May 14 15:14:25 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 14 May 2021 15:14:25 +0000 Subject: threat to freenode (where openstack irc hangs out) In-Reply-To: References: Message-ID: <20210514151425.zxgeym4lnhmbdrt3@yuggoth.org> On 2021-05-14 11:05:55 -0400 (-0400), Chris Morgan wrote: > https://twitter.com/dmsimard/status/1393203159770804225?s=20 > https://p.haavard.me/407 > > I have no independent validation of this. It seems like that may not be the whole story. Regardless, the infra team have registered a small foothold of copies of our critical channels on OFTC years ago, and have always considered that a reasonable place to relocate in the event something happens to Freenode which makes it no longer suitable. Letting people know to switch the IRC server name in their clients and updating lots of documentation mentioning Freenode would be the hardest part, honestly. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From luke.camilleri at zylacomputing.com Fri May 14 15:15:11 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Fri, 14 May 2021 17:15:11 +0200 Subject: [Octavia][Victoria] No service listening on port 9443 in the amphora instance In-Reply-To: References: <038c74c9-1365-0c08-3b5b-93b4d175dcb3@zylacomputing.com> <326471ef-287b-d937-a174-0b1ccbbd6273@zylacomputing.com> <8ce76cbd-ca2c-033a-5406-5a1557d84302@zylacomputing.com> Message-ID: Hi Michael, thanks as always for the below, I have watched the video and configured all the requirements as shown in those guides. Working great right now. I have noticed the following points and would like to know if you can give me some feedback please: 1. In the Octavia project at the networks screen --> lb-mgmt-net --> Ports --> octavia-health-manager-listen-port (the IP bound to the health-manager service) has its status Down. It does not create any sort of issue but was wondering if this was normal behavior? 2. Similarly to the above point, in the tenant networks screen, every port that is "Attached Device - Octavia" has its status reported as "Down" ( these are the VIP addresses assigned to the amphora). Just need to confirm that this is normal behaviour 3. Creating a health monitor of type ping fails to get the operating status of the nodes and the nodes are in error (horizon) and the amphora reports that there are no backends and hence it is not working (I am using the same backend nodes with another loadbalancer but with an HTTP check and it is working fine. a security group is setup to allow ping from 0.0.0.0/0 and from the amphora-haproxy network namespace on the amphora instance I can ping both nodes without issues ). Below the amphora's haproxy.log May 14 15:00:50 amphora-9658d9ec-3bf1-407f-a134-86304899c015 haproxy[1984]: Server c0092bf4-d2a2-431f-8b7f-9dc3ace52933:e268db93-2d20-4395-bd6f-f6d835bce769/f04824b7-6fdf-46dc-bc83-b98b3b9f5be0 is DOWN, reason: Socket error, info: "Resource temporarily unavailable", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue. May 14 15:00:50 amphora-9658d9ec-3bf1-407f-a134-86304899c015 haproxy[1984]: Server c0092bf4-d2a2-431f-8b7f-9dc3ace52933:e268db93-2d20-4395-bd6f-f6d835bce769/f04824b7-6fdf-46dc-bc83-b98b3b9f5be0 is DOWN, reason: Socket error, info: "Resource temporarily unavailable", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue. May 14 15:00:50 amphora-9658d9ec-3bf1-407f-a134-86304899c015 haproxy[1984]: backend c0092bf4-d2a2-431f-8b7f-9dc3ace52933:e268db93-2d20-4395-bd6f-f6d835bce769 has no server available! May 14 15:00:50 amphora-9658d9ec-3bf1-407f-a134-86304899c015 haproxy[1984]: backend c0092bf4-d2a2-431f-8b7f-9dc3ace52933:e268db93-2d20-4395-bd6f-f6d835bce769 has no server available! Thanks in advance On 13/05/2021 18:33, Michael Johnson wrote: > You are correct that two IPs are being allocated for the VIP, one is a > secondary IP which neutron implements as an "allowed address pairs" > port. We do this to allow failover of the amphora instance should nova > fail the service VM. We hold the VIP IP in a special port so the IP is > not lost while we rebuild the service VM. > If you are using the active/standby topology (or an Octavia flavor > with active/standby enabled), this failover is accelerated with nearly > no visible impact to the flows through the load balancer. 
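As a concrete way to see the relationship being described here (port IDs below are placeholders), the port that is actually plugged into the amphora lists the VIP in its allowed address pairs, while the separate port only parks the VIP IP, and the statuses I ask about in points 1 and 2 above are visible in the same output:

$ openstack port show <amphora-base-port-id> -c status -c allowed_address_pairs
$ openstack port show <vip-port-id> -c status -c device_owner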
> Active/Standby has been an Octavia feature since the Mitaka release. I > gave a demo of it at the Tokoyo summit here: > https://youtu.be/8n7FGhtOiXk?t=1420 > > You can enable active/standby as the default by setting the > "loadbalancer_topology" setting in the configuration file > (https://docs.openstack.org/octavia/latest/configuration/configref.html#controller_worker.loadbalancer_topology) > or by creating an Octavia flavor that creates the load balancer with > an active/standby topology > (https://docs.openstack.org/octavia/latest/admin/flavors.html). > > Michael > > On Thu, May 13, 2021 at 4:23 AM Luke Camilleri > wrote: >> HI Michael, thanks a lot for the below information it is very helpful. I >> ended up setting the o-hm0 interface statically in the >> octavia-interface.sh script which is called by the service and also >> added a delay to make sure that the bridges are up before trying to >> create a veth pair and connect the endpoints. >> >> Also I edited the unit section of the health-manager service and at the >> after option I added octavia-interface.service or else on startup the >> health manager will not bind to the lb-mgmt-net since it would not be up yet >> >> The floating IPs part was a bit tricky until I understood what was >> really going on with the VIP concept and how better and more flexible it >> is to set the VIP on the tenant network and then associate with public >> ip to the VIP. >> >> With this being said I noticed that 2 IPs are being assigned to the >> amphora instance and that the actual port assigned to the instance has >> an allowed pair with the VIP port. I checked online and it seems that >> there is an active/standby project going on with VRRP/keepalived and in >> fact the keepalived daemon is running in the amphora instance. >> >> Am I on the right track with the active/standby feature and if so do you >> have any installation/project links to share please so that I can test it? >> >> Regards >> >> On 12/05/2021 08:37, Michael Johnson wrote: >>> Answers inline below. >>> >>> Michael >>> >>> On Mon, May 10, 2021 at 5:15 PM Luke Camilleri >>> wrote: >>>> Hi Michael and thanks a lot for the detailed answer below. >>>> >>>> I believe I have got most of this sorted out apart from some small issues below: >>>> >>>> If the o-hm0 interface gets the IP information from the DHCP server setup by neutron for the lb-mgmt-net, then the management node will always have 2 default gateways and this will bring along issues, the same DHCP settings when deployed to the amphora do not have the same issue since the amphora only has 1 IP assigned on the lb-mgmt-net. Can you please confirm this? >>> The amphorae do not have issues with DHCP and gateways as we control >>> the DHCP client configuration inside the amphora. It does only have >>> one IP on the lb-mgmt-net, it will honor gateways provided by neutron >>> for the lb-mgmt-net traffic, but a gateway is not required on the >>> lb-mgmt-network unless you are routing the lb-mgmt-net traffic across >>> subnets. >>> >>>> How does the amphora know where to locate the worker and housekeeping processes or does the traffic originate from the services instead? Maybe the addresses are "injected" from the config file? >>> The worker and housekeeping processes only create connections to the >>> amphora, they do not receive connections from them. The amphora send a >>> heartbeat packet to the health manager endpoints every ten seconds by >>> default. 
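If it is useful to anyone following along, that heartbeat path is easy to confirm on the controller side with the interface and port already used in this thread (adjust the names to your own setup):

$ sudo tcpdump -ni o-hm0 udp port 5555

which should show one small UDP datagram from each amphora roughly every ten seconds.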
The list of valid health manager endpoints is included in the >>> amphora agent configuration file that is injected into the service VM >>> at boot time. It can be updated using the Octavia admin API for >>> refreshing the amphora agent configuration. >>> >>>> Can you please confirm if the same floating IP concept runs from public (external) IP to the private (tenant) and from private to lb-mgmt-net please? >>> Octavia does not use floating IPs. Users can create and assign >>> floating IPs via neutron if they would like, but they are not >>> necessary. Octavia VIPs can be created directly on neutron "external" >>> networks, avoiding the NAT overhead of floating IPs. >>> There is no practical reason to assign a floating IP to a port on the >>> lb-mgmt-net as tenant traffic is never on or accessible from that >>> network. >>> >>>> Thanks in advance for any feedback >>>> >>>> On 06/05/2021 22:46, Michael Johnson wrote: >>>> >>>> Hi Luke, >>>> >>>> 1. I agree that DHCP is technically unnecessary for the o-hm0 >>>> interface if you can manage your address allocation on the network you >>>> are using for the lb-mgmt-net. >>>> I don't have detailed information about the Ubuntu install >>>> instructions, but I suspect it was done to simplify the IPAM to be >>>> managed by whatever is providing DHCP on the lb-mgmt-net provided (be >>>> it neutron or some other resource on a provider network). >>>> The lb-mgmt-net is simply a neutron network that the amphora >>>> management address is on. It is routable and does not require external >>>> access. The only tricky part to it is the worker, health manager, and >>>> housekeeping processes need to be reachable from the amphora, and the >>>> controllers need to reach the amphora over the network(s). There are >>>> many ways to accomplish this. >>>> >>>> 2. See my above answer. Fundamentally the lb-mgmt-net is just a >>>> neutron network that nova can use to attach an interface to the >>>> amphora instances for command and control traffic. As long as the >>>> controllers can reach TCP 9433 on the amphora, and the amphora can >>>> send UDP 5555 back to the health manager endpoints, it will work fine. >>>> >>>> 3. Octavia, with the amphora driver, does not require any special >>>> configuration in Neutron (beyond the advanced services RBAC policy >>>> being available for the neutron service account used in your octavia >>>> configuration file). The neutron_lbaas.conf and services_lbaas.conf >>>> are legacy configuration files/settings that were used for >>>> neutron-lbaas which is now end of life. See the wiki page for >>>> information on the deprecation of neutron-lbaas: >>>> https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation. >>>> >>>> Michael >>>> >>>> On Thu, May 6, 2021 at 12:30 PM Luke Camilleri >>>> wrote: >>>> >>>> Hi Michael and thanks a lot for your help on this, after following your >>>> steps the agent got deployed successfully in the amphora-image. >>>> >>>> I have some other queries that I would like to ask mainly related to the >>>> health-manager/load-balancer network setup and IP assignment. First of >>>> all let me point out that I am using a manual installation process, and >>>> it might help others to understand the underlying infrastructure >>>> required to make this component work as expected. 
>>>> >>>> 1- The installation procedure contains this step: >>>> >>>> $ sudo cp octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia >>>> >>>> which is later on called to assign the IP to the o-hm0 interface which >>>> is connected to the lb-management network as shown below: >>>> >>>> $ sudo dhclient -v o-hm0 -cf /etc/dhcp/octavia >>>> >>>> Apart from having a dhcp config for a single IP seems a bit of an >>>> overkill, using these steps is injecting an additional routing table >>>> into the default namespace as shown below in my case: >>>> >>>> # route -n >>>> Kernel IP routing table >>>> Destination Gateway Genmask Flags Metric Ref Use >>>> Iface >>>> 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0 >>>> 0.0.0.0 10.X.X.1 0.0.0.0 UG 100 0 0 ensX >>>> 10.X.X.0 0.0.0.0 255.255.255.0 U 100 0 0 ensX >>>> 169.254.169.254 172.16.0.100 255.255.255.255 UGH 0 0 0 o-hm0 >>>> 172.16.0.0 0.0.0.0 255.240.0.0 U 0 0 0 o-hm0 >>>> >>>> Since the load-balancer management network does not need any external >>>> connectivity (but only communication between health-manager service and >>>> amphora-agent), why is a gateway required and why isn't the IP address >>>> allocated as part of the interface creation script which is called when >>>> the service is started or stopped (example below)? >>>> >>>> --- >>>> >>>> #!/bin/bash >>>> >>>> set -ex >>>> >>>> MAC=$MGMT_PORT_MAC >>>> BRNAME=$BRNAME >>>> >>>> if [ "$1" == "start" ]; then >>>> ip link add o-hm0 type veth peer name o-bhm0 >>>> brctl addif $BRNAME o-bhm0 >>>> ip link set o-bhm0 up >>>> ip link set dev o-hm0 address $MAC >>>> *** ip addr add 172.16.0.2/12 dev o-hm0 >>>> ***ip link set o-hm0 mtu 1500 >>>> ip link set o-hm0 up >>>> iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT >>>> elif [ "$1" == "stop" ]; then >>>> ip link del o-hm0 >>>> else >>>> brctl show $BRNAME >>>> ip a s dev o-hm0 >>>> fi >>>> >>>> --- >>>> >>>> 2- Is there a possibility to specify a fixed vlan outside of tenant >>>> range for the load balancer management network? >>>> >>>> 3- Are the configuration changes required only in neutron.conf or also >>>> in additional config files like neutron_lbaas.conf and >>>> services_lbaas.conf, similar to the vpnaas configuration? >>>> >>>> Thanks in advance for any assistance, but its like putting together a >>>> puzzle of information :-) >>>> >>>> On 05/05/2021 20:25, Michael Johnson wrote: >>>> >>>> Hi Luke. >>>> >>>> Yes, the amphora-agent will listen on 9443 in the amphorae instances. >>>> It uses TLS mutual authentication, so you can get a TLS response, but >>>> it will not let you into the API without a valid certificate. A simple >>>> "openssl s_client" is usually enough to prove that it is listening and >>>> requesting the client certificate. >>>> >>>> I can't talk to the "openstack-octavia-diskimage-create" package you >>>> found in centos, but I can discuss how to build an amphora image using >>>> the OpenStack tools. >>>> >>>> If you get Octavia from git or via a release tarball, we provide a >>>> script to build the amphora image. This is how we build our images for >>>> the testing gates, etc. and is the recommended way (at least from the >>>> OpenStack Octavia community) to create amphora images. 
>>>> >>>> https://opendev.org/openstack/octavia/src/branch/master/diskimage-create >>>> >>>> For CentOS 8, the command would be: >>>> >>>> diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 (3 >>>> is the minimum disk size for centos images, you may want more if you >>>> are not offloading logs) >>>> >>>> I just did a run on a fresh centos 8 instance: >>>> git clone https://opendev.org/openstack/octavia >>>> python3 -m venv dib >>>> source dib/bin/activate >>>> pip3 install diskimage-builder PyYAML six >>>> sudo dnf install yum-utils >>>> ./diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 >>>> >>>> This built an image. >>>> >>>> Off and on we have had issues building CentOS images due to issues in >>>> the tools we rely on. If you run into issues with this image, drop us >>>> a note back. >>>> >>>> Michael >>>> >>>> On Wed, May 5, 2021 at 9:37 AM Luke Camilleri >>>> wrote: >>>> >>>> Hi there, i am trying to get Octavia running on a Victoria deployment on >>>> CentOS 8. It was a bit rough getting to the point to launch an instance >>>> mainly due to the load-balancer management network and the lack of >>>> documentation >>>> (https://docs.openstack.org/octavia/victoria/install/install.html) to >>>> deploy this oN CentOS. I will try to fix this once I have my deployment >>>> up and running to help others on the way installing and configuring this :-) >>>> >>>> At this point a LB can be launched by the tenant and the instance is >>>> spawned in the Octavia project and I can ping and SSH into the amphora >>>> instance from the Octavia node where the octavia-health-manager service >>>> is running using the IP within the same subnet of the amphoras >>>> (172.16.0.0/12). >>>> >>>> Unfortunately I keep on getting these errors in the log file of the >>>> worker log (/var/log/octavia/worker.log): >>>> >>>> 2021-05-05 01:54:49.368 14521 WARNING >>>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect >>>> to instance. Retrying.: requests.exceptions.ConnectionError: >>>> HTTPSConnectionPool(host='172.16.4.46', p >>>> ort=9443): Max retries exceeded with url: // (Caused by >>>> NewConnectionError('>>> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] >>>> Connection ref >>>> used',)) >>>> >>>> 2021-05-05 01:54:54.374 14521 ERROR >>>> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries >>>> (currently set to 120) exhausted. The amphora is unavailable. Reason: >>>> HTTPSConnectionPool(host='172.16 >>>> .4.46', port=9443): Max retries exceeded with url: // (Caused by >>>> NewConnectionError('>>> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] Conne >>>> ction refused',)) >>>> >>>> 2021-05-05 01:54:54.374 14521 ERROR >>>> octavia.controller.worker.v1.tasks.amphora_driver_tasks [-] Amphora >>>> compute instance failed to become reachable. This either means the >>>> compute driver failed to fully boot the >>>> instance inside the timeout interval or the instance is not reachable >>>> via the lb-mgmt-net.: >>>> octavia.amphorae.driver_exceptions.exceptions.TimeOutException: >>>> contacting the amphora timed out >>>> >>>> obviously the instance is deleted then and the task fails from the >>>> tenant's perspective. >>>> >>>> The main issue here is that there is no service running on port 9443 on >>>> the amphora instance. 
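To double-check that from inside the amphora I would run something like the following (the unit name here is an assumption on my side, and it will only exist if the agent actually made it into the image):

$ sudo ss -tlnp | grep 9443
$ sudo systemctl status amphora-agent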
I am assuming that this is in fact the >>>> amphora-agent service that is running on the instance which should be >>>> listening on this port 9443 but the service does not seem to be up or >>>> not installed at all. >>>> >>>> To create the image I have installed the CentOS package >>>> "openstack-octavia-diskimage-create" which provides the utility >>>> disk-image-create but from what I can conclude the amphora-agent is not >>>> being installed (thought this was done automatically by default :-( ) >>>> >>>> Can anyone let me know if the amphora-agent is what gets queried on port >>>> 9443 ? >>>> >>>> If the agent is not installed/injected by default when building the >>>> amphora image? >>>> >>>> The command to inject the amphora-agent into the amphora image when >>>> using the disk-image-create command? >>>> >>>> Thanks in advance for any assistance >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Fri May 14 15:15:23 2021 From: marios at redhat.com (Marios Andreou) Date: Fri, 14 May 2021 18:15:23 +0300 Subject: [TripleO] Opting out of global-requirements.txt In-Reply-To: References: Message-ID: On Fri, May 14, 2021 at 4:28 PM James Slagle wrote: > > > > On Fri, May 14, 2021 at 6:41 AM Marios Andreou wrote: >> >> On Thu, May 13, 2021 at 9:41 PM James Slagle wrote: >> > >> > I'd like to propose that TripleO opt out of dependency management by removing tripleo-common from global-requirements.txt. I do not feel that the global dependency management brings any advantages or anything needed for TripleO. I can't think of any reason to enforce the ability to be globally pip installable with the rest of OpenStack. >> >> To add a bit more context as to how this discussion came about, we >> tried to remove tripleo-common from global-requirements and the >> check-requirements jobs in tripleoclient caused [1] as a result. >> >> So we need to decide whether to continue to be part of that >> requirements contract [2] or if we will do what is proposed here and >> remove ourselves altogether. >> If we decide _not_ to implement this proposal then we will also have >> to add the requirements-check jobs in tripleo-ansible [3] and >> tripleo-validations [4] as they are currently missing. >> >> > >> > Two of our most critical projects, tripleoclient and tripleo-common do not even put many of their data files in the right place where our code expects them when they are pip installed. So, I feel fairly confident that no one is pip installing TripleO and relying on global requirements enforcement. >> >> I don't think this is _just_ about pip installs. It is generally about >> the contents of each project requirements.txt. As part of the >> requirements contract, it means that those repos with which we are >> participating (the ones in projects.txt [5]) are protected against >> other projects making any breaking changes in _their_ >> requirements.txt. Don't the contents of requirements.txt also end up >> in the .spec file from which we are building rpm e.g. [6] for tht? In >> which case if we remove this and just stop catching any breaking >> changes in the check/gate check-requirements jobs, I suspect we will >> just move the problem to the rpm build and it will fail there. > > > I don't see that in the spec file. Unless there is some other automation somewhere that regenerates all of the BuildRequires/Requires and modifies them to match requirements.txt/test-requirements.txt? 
> I was looking at that in particular: "tripleo-common>=7.1.0 # Apache-2.0" at https://opendev.org/openstack/tripleo-heat-templates/src/commit/fe2373225f039d795970b70fe9b2f28e0e7cd6a4/requirements.txt#L8 and in the specfile: " 42 Requires: openstack-tripleo-common >= 7.1.0" at https://review.rdoproject.org/r/gitweb?p=openstack/tripleo-heat-templates-distgit.git;a=blob;f=openstack-tripleo-heat-templates.spec;h=36ee20fb8042bbdd254b4eab6ab1f3272bd8a6c5;hb=HEAD#l42 I assumed there was correlation there with the value originally coming from the project requirements.txt - but perhaps that is wrong; I realise now that the specfile is referencing 'openstack-tripleo-common' the built rpm, not 'tripleo-common'. Well OK, that was/is my biggest concern; i.e. we are getting value from having this requirements compatibility check, and if we remove it we will just end up having those compatibility problems further down the chain. On the flip side, it just doesn't seem to me to be that big a cost for us to stay inside this check. We aren't running check-requirements job on every posted change it is only ever triggered if you submit a change to requirements.txt https://opendev.org/openstack/requirements/src/commit/ce19462764940a4ce99dae4ac2ec7a004c68e9a4/.zuul.d/project-template.yaml#L16 . > >> >> > >> > One potential advantage of not being in global-requirements.txt is that our unit tests and functional tests could actually test the same code. As things stand today, our unit tests in projects that depend on tripleo-common are pinned to the version in global-requirements.txt, while our functional tests currently run with tripleo-common from master (or included depends-on). >> >> I don't think one in particular is a very valid point though - as >> things currently stand in global-requirements we aren't 'pinning' all >> we have there is "tripleo-common!=11.3.0 # Apache-2.0" [7] to avoid >> (I assume) some bad release we made. > > > tripleo-common is pinned to the latest release when it's pip installed in the venv, instead of using latest git (and including depends-on). You're right that it's probably what we want to keep doing, and this is probably not related to opting out of g-r. Especially since we don't want to require latest git of a dependency when running unit tests locally. However it is worth noting that our unit tests (tox) and functional tests (tripleo-ci) use different code for the dependencies. That was not obvious to me and others on the surface. Perhaps we could add additional tox jobs that do require the latest tripleo-common from git to also cover that scenario. ack I see what you mean now about pip install => tagged release vs dlrn current => git master/latest commit and also agree that is not directly related to this discussion about the check-requirements jobs. regards, marios > > Here's an example: > https://review.opendev.org/c/openstack/python-tripleoclient/+/787907 > https://review.opendev.org/c/openstack/tripleo-common/+/787906 > > The tripleo-common patch fails unit tests as expected, the tripleoclient which depends-on the tripleo-common patch passes unit tests, but fails functional. I'd rather see that failure caught by a unit test as well. 
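To make that concrete, such a job would essentially be a tox environment whose deps pull tripleo-common from git instead of the pinned release. A rough sketch only, the env name and exact wiring are not an agreed design:

$ pip install -e 'git+https://opendev.org/openstack/tripleo-common@master#egg=tripleo-common'
$ stestr run

i.e. something like a py38-tcommon-master env, so the unit tests would exercise tripleo-common master the same way the functional jobs already do.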
> > -- > -- James Slagle > -- From jpena at redhat.com Fri May 14 15:33:25 2021 From: jpena at redhat.com (Javier Pena) Date: Fri, 14 May 2021 11:33:25 -0400 (EDT) Subject: [TripleO] Opting out of global-requirements.txt In-Reply-To: References: Message-ID: <124843068.26462828.1621006405455.JavaMail.zimbra@redhat.com> > On Fri, May 14, 2021 at 6:41 AM Marios Andreou < marios at redhat.com > wrote: > > On Thu, May 13, 2021 at 9:41 PM James Slagle < james.slagle at gmail.com > > > wrote: > > > > > > > > I'd like to propose that TripleO opt out of dependency management by > > > removing tripleo-common from global-requirements.txt. I do not feel that > > > the global dependency management brings any advantages or anything needed > > > for TripleO. I can't think of any reason to enforce the ability to be > > > globally pip installable with the rest of OpenStack. > > > To add a bit more context as to how this discussion came about, we > > > tried to remove tripleo-common from global-requirements and the > > > check-requirements jobs in tripleoclient caused [1] as a result. > > > So we need to decide whether to continue to be part of that > > > requirements contract [2] or if we will do what is proposed here and > > > remove ourselves altogether. > > > If we decide _not_ to implement this proposal then we will also have > > > to add the requirements-check jobs in tripleo-ansible [3] and > > > tripleo-validations [4] as they are currently missing. > > > > > > > > Two of our most critical projects, tripleoclient and tripleo-common do > > > not > > > even put many of their data files in the right place where our code > > > expects them when they are pip installed. So, I feel fairly confident > > > that > > > no one is pip installing TripleO and relying on global requirements > > > enforcement. > > > I don't think this is _just_ about pip installs. It is generally about > > > the contents of each project requirements.txt. As part of the > > > requirements contract, it means that those repos with which we are > > > participating (the ones in projects.txt [5]) are protected against > > > other projects making any breaking changes in _their_ > > > requirements.txt. Don't the contents of requirements.txt also end up > > > in the .spec file from which we are building rpm e.g. [6] for tht? In > > > which case if we remove this and just stop catching any breaking > > > changes in the check/gate check-requirements jobs, I suspect we will > > > just move the problem to the rpm build and it will fail there. > > I don't see that in the spec file. Unless there is some other automation > somewhere that regenerates all of the BuildRequires/Requires and modifies > them to match requirements.txt/test-requirements.txt? We run periodic syncs once every cycle, and try our best to make spec requirements match requirements.txt/test-requirements.txt for the project. See https://review.rdoproject.org/r/c/openstack/tripleoclient-distgit/+/33367 for a recent example on tripleoclient. I don't have a special opinion on keeping tripleo-common inside global-requirements.txt or not. However, all TripleO projects still need to be co-installable with other OpenStack projects, otherwise we will not be able to build packages for them due to all the dependency issues that could arise. I'm not sure if that was implied in the original post. Regards, Javier > > > > > > > One potential advantage of not being in global-requirements.txt is that > > > our > > > unit tests and functional tests could actually test the same code. 
As > > > things stand today, our unit tests in projects that depend on > > > tripleo-common are pinned to the version in global-requirements.txt, > > > while > > > our functional tests currently run with tripleo-common from master (or > > > included depends-on). > > > I don't think one in particular is a very valid point though - as > > > things currently stand in global-requirements we aren't 'pinning' all > > > we have there is "tripleo-common!=11.3.0 # Apache-2.0" [7] to avoid > > > (I assume) some bad release we made. > > tripleo-common is pinned to the latest release when it's pip installed in the > venv, instead of using latest git (and including depends-on). You're right > that it's probably what we want to keep doing, and this is probably not > related to opting out of g-r. Especially since we don't want to require > latest git of a dependency when running unit tests locally. However it is > worth noting that our unit tests (tox) and functional tests (tripleo-ci) use > different code for the dependencies. That was not obvious to me and others > on the surface. Perhaps we could add additional tox jobs that do require the > latest tripleo-common from git to also cover that scenario. > Here's an example: > https://review.opendev.org/c/openstack/python-tripleoclient/+/787907 > https://review.opendev.org/c/openstack/tripleo-common/+/787906 > The tripleo-common patch fails unit tests as expected, the tripleoclient > which depends-on the tripleo-common patch passes unit tests, but fails > functional. I'd rather see that failure caught by a unit test as well. > -- > -- James Slagle > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri May 14 05:03:55 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 14 May 2021 07:03:55 +0200 Subject: [stein][neutron] gratuitous arp In-Reply-To: <35985fecc7b7658d70446aa816d8ed612f942115.camel@redhat.com> References: <9ac105e8b7176ecc085f57ec84d891afa927c637.camel@redhat.com> <7de015a7292674b4ed5aa4926f01de760d133de9.camel@redhat.com> <4fa3e29a7e654e74bc96ac67db0e755c@binero.com> <95ccfc366d4b497c8af232f38d07559f@binero.com> <35985fecc7b7658d70446aa816d8ed612f942115.camel@redhat.com> Message-ID: Hello, I am trying to apply suggested patches but after applying some python error codes do not allow neutron services to start. Since my python skill is poor, I wonder if that patches are for python3. I am on centos 7 and (probably???) patches are for centos 8. I also wonder if it possible to upgrade a centos 7 train to centos 8 train without reinstalling all nodes. This could be important for next release upgrade (ussuri, victoria and so on). Ignazio Il Ven 12 Mar 2021, 23:44 Sean Mooney ha scritto: > On Fri, 2021-03-12 at 08:13 +0000, Tobias Urdin wrote: > > Hello, > > > > If it's the same as us, then yes, the issue occurs on Train and is not > completely solved yet. 
> there is a downstream bug trackker for this > > https://bugzilla.redhat.com/show_bug.cgi?id=1917675 > > its fixed by a combination of 3 enturon patches and i think 1 nova one > > https://review.opendev.org/c/openstack/neutron/+/766277/ > https://review.opendev.org/c/openstack/neutron/+/753314/ > https://review.opendev.org/c/openstack/neutron/+/640258/ > > and > https://review.opendev.org/c/openstack/nova/+/770745 > > the first tree neutron patches would fix the evauate case but break live > migration > the nova patch means live migration will work too although to fully fix > the related > live migration packet loss issues you need > > https://review.opendev.org/c/openstack/nova/+/747454/4 > https://review.opendev.org/c/openstack/nova/+/742180/12 > to fix live migration with network abckend that dont suppor tmultiple port > binding > and > https://review.opendev.org/c/openstack/nova/+/602432 (the only one not > merged yet.) > for live migrateon with ovs and hybridg plug=false (e.g. ovs firewall > driver, noop or ovn instead of ml2/ovs. > > multiple port binding was not actully the reason for this there was a race > in neutorn itslef that would have haapend > even without multiple port binding between the dhcp agent and l2 agent. > > some of those patches have been backported already and all shoudl > eventually make ti to train the could be brought to stine potentially > if peopel are open to backport/review them. > > > > > > > > Best regards > > > > ________________________________ > > From: Ignazio Cassano > > Sent: Friday, March 12, 2021 7:43:22 AM > > To: Tobias Urdin > > Cc: openstack-discuss > > Subject: Re: [stein][neutron] gratuitous arp > > > > Hello Tobias, the result is the same as your. > > I do not know what happens in depth to evaluate if the behavior is the > same. > > I solved on stein with patch suggested by Sean : force_legacy_port_bind > workaround. > > So I am asking if the problem exists also on train. > > Ignazio > > > > Il Gio 11 Mar 2021, 19:27 Tobias Urdin tobias.urdin at binero.com>> ha scritto: > > > > Hello, > > > > > > Not sure if you are having the same issue as us, but we are following > https://bugs.launchpad.net/neutron/+bug/1901707 but > > > > are patching it with something similar to > https://review.opendev.org/c/openstack/nova/+/741529 to workaround the > issue until it's completely solved. > > > > > > Best regards > > > > ________________________________ > > From: Ignazio Cassano ignaziocassano at gmail.com>> > > Sent: Wednesday, March 10, 2021 7:57:21 AM > > To: Sean Mooney > > Cc: openstack-discuss; Slawek Kaplonski > > Subject: Re: [stein][neutron] gratuitous arp > > > > Hello All, > > please, are there news about bug 1815989 ? > > On stein I modified code as suggested in the patches. > > I am worried when I will upgrade to train: wil this bug persist ? > > On which openstack version this bug is resolved ? > > Ignazio > > > > > > > > Il giorno mer 18 nov 2020 alle ore 07:16 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > > Hello, I tried to update to last stein packages on yum and seems this > bug still exists. > > Before the yum update I patched some files as suggested and and ping to > vm worked fine. > > After yum update the issue returns. > > Please, let me know If I must patch files by hand or some new parameters > in configuration can solve and/or the issue is solved in newer openstack > versions. 
> > Thanks > > Ignazio > > > > > > Il Mer 29 Apr 2020, 19:49 Sean Mooney smooney at redhat.com>> ha scritto: > > On Wed, 2020-04-29 at 17:10 +0200, Ignazio Cassano wrote: > > > Many thanks. > > > Please keep in touch. > > here are the two patches. > > the first https://review.opendev.org/#/c/724386/ is the actual change > to add the new config opition > > this needs a release note and some tests but it shoudl be functional > hence the [WIP] > > i have not enable the workaround in any job in this patch so the ci run > will assert this does not break > > anything in the default case > > > > the second patch is https://review.opendev.org/#/c/724387/ which > enables the workaround in the multi node ci jobs > > and is testing that live migration exctra works when the workaround is > enabled. > > > > this should work as it is what we expect to happen if you are using a > moderne nova with an old neutron. > > its is marked [DNM] as i dont intend that patch to merge but if the > workaround is useful we migth consider enableing > > it for one of the jobs to get ci coverage but not all of the jobs. > > > > i have not had time to deploy a 2 node env today but ill try and test > this locally tomorow. > > > > > > > > > Ignazio > > > > > > Il giorno mer 29 apr 2020 alle ore 16:55 Sean Mooney < > smooney at redhat.com> > > > ha scritto: > > > > > > > so bing pragmatic i think the simplest path forward given my other > patches > > > > have not laned > > > > in almost 2 years is to quickly add a workaround config option to > disable > > > > mulitple port bindign > > > > which we can backport and then we can try and work on the actual fix > after. > > > > acording to https://bugs.launchpad.net/neutron/+bug/1815989 that > shoudl > > > > serve as a workaround > > > > for thos that hav this issue but its a regression in functionality. > > > > > > > > i can create a patch that will do that in an hour or so and submit a > > > > followup DNM patch to enabel the > > > > workaound in one of the gate jobs that tests live migration. > > > > i have a meeting in 10 mins and need to finish the pacht im > currently > > > > updating but ill submit a poc once that is done. > > > > > > > > im not sure if i will be able to spend time on the actul fix which i > > > > proposed last year but ill see what i can do. > > > > > > > > > > > > On Wed, 2020-04-29 at 16:37 +0200, Ignazio Cassano wrote: > > > > > PS > > > > > I have testing environment on queens,rocky and stein and I can > make test > > > > > as you need. 
> > > > > Ignazio > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 16:19 Ignazio Cassano < > > > > > ignaziocassano at gmail.com> ha > scritto: > > > > > > > > > > > Hello Sean, > > > > > > the following is the configuration on my compute nodes: > > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep libvirt > > > > > > libvirt-daemon-driver-storage-iscsi-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-kvm-4.5.0-33.el7.x86_64 > > > > > > libvirt-libs-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-network-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-nodedev-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-gluster-4.5.0-33.el7.x86_64 > > > > > > libvirt-client-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-core-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-logical-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-secret-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-nwfilter-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-scsi-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-rbd-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-config-nwfilter-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-disk-4.5.0-33.el7.x86_64 > > > > > > libvirt-bash-completion-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-4.5.0-33.el7.x86_64 > > > > > > libvirt-python-4.5.0-1.el7.x86_64 > > > > > > libvirt-daemon-driver-interface-4.5.0-33.el7.x86_64 > > > > > > libvirt-daemon-driver-storage-mpath-4.5.0-33.el7.x86_64 > > > > > > [root at podiscsivc-kvm01 network-scripts]# rpm -qa|grep qemu > > > > > > qemu-kvm-common-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > qemu-kvm-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > libvirt-daemon-driver-qemu-4.5.0-33.el7.x86_64 > > > > > > centos-release-qemu-ev-1.0-4.el7.centos.noarch > > > > > > ipxe-roms-qemu-20180825-2.git133f4c.el7.noarch > > > > > > qemu-img-ev-2.12.0-44.1.el7_8.1.x86_64 > > > > > > > > > > > > > > > > > > As far as firewall driver > > > > > > > > /etc/neutron/plugins/ml2/openvswitch_agent.ini: > > > > > > > > > > > > firewall_driver = iptables_hybrid > > > > > > > > > > > > I have same libvirt/qemu version on queens, on rocky and on stein > > > > > > > > testing > > > > > > environment and the > > > > > > same firewall driver. > > > > > > Live migration on provider network on queens works fine. > > > > > > It does not work fine on rocky and stein (vm lost connection > after it > > > > > > > > is > > > > > > migrated and start to respond only when the vm send a network > packet , > > > > > > > > for > > > > > > example when chrony pools the time server). > > > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > Il giorno mer 29 apr 2020 alle ore 14:36 Sean Mooney < > > > > > > > > smooney at redhat.com> > > > > > > ha scritto: > > > > > > > > > > > > > On Wed, 2020-04-29 at 10:39 +0200, Ignazio Cassano wrote: > > > > > > > > Hello, some updated about this issue. > > > > > > > > I read someone has got same issue as reported here: > > > > > > > > > > > > > > > > https://bugs.launchpad.net/neutron/+bug/1866139 > > > > > > > > > > > > > > > > If you read the discussion, someone tells that the garp must > be > > > > > > > > sent by > > > > > > > > qemu during live miration. > > > > > > > > If this is true, this means on rocky/stein the qemu/libvirt > are > > > > > > > > bugged. 
> > > > > > > > > > > > > > it is not correct. > > > > > > > qemu/libvir thas alsway used RARP which predates GARP to serve > as > > > > > > > > its mac > > > > > > > learning frames > > > > > > > instead > > > > > > > > https://en.wikipedia.org/wiki/Reverse_Address_Resolution_Protocol > > > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2009-10/msg01457.html > > > > > > > however it looks like this was broken in 2016 in qemu 2.6.0 > > > > > > > > https://lists.gnu.org/archive/html/qemu-devel/2016-07/msg04645.html > > > > > > > but was fixed by > > > > > > > > > > > > > > > > https://github.com/qemu/qemu/commit/ca1ee3d6b546e841a1b9db413eb8fa09f13a061b > > > > > > > can you confirm you are not using the broken 2.6.0 release and > are > > > > > > > > using > > > > > > > 2.7 or newer or 2.4 and older. > > > > > > > > > > > > > > > > > > > > > > So I tried to use stein and rocky with the same version of > > > > > > > > libvirt/qemu > > > > > > > > packages I installed on queens (I updated compute and > controllers > > > > > > > > node > > > > > > > > > > > > > > on > > > > > > > > queens for obtaining same libvirt/qemu version deployed on > rocky > > > > > > > > and > > > > > > > > > > > > > > stein). > > > > > > > > > > > > > > > > On queens live migration on provider network continues to > work > > > > > > > > fine. > > > > > > > > On rocky and stein not, so I think the issue is related to > > > > > > > > openstack > > > > > > > > components . > > > > > > > > > > > > > > on queens we have only a singel prot binding and nova blindly > assumes > > > > > > > that the port binding details wont > > > > > > > change when it does a live migration and does not update the > xml for > > > > > > > > the > > > > > > > netwrok interfaces. > > > > > > > > > > > > > > the port binding is updated after the migration is complete in > > > > > > > post_livemigration > > > > > > > in rocky+ neutron optionally uses the multiple port bindings > flow to > > > > > > > prebind the port to the destiatnion > > > > > > > so it can update the xml if needed and if post copy live > migration is > > > > > > > enable it will asyconsly activate teh dest port > > > > > > > binding before post_livemigration shortenting the downtime. > > > > > > > > > > > > > > if you are using the iptables firewall os-vif will have > precreated > > > > > > > > the > > > > > > > ovs port and intermediate linux bridge before the > > > > > > > migration started which will allow neutron to wire it up (put > it on > > > > > > > > the > > > > > > > correct vlan and install security groups) before > > > > > > > the vm completes the migraton. > > > > > > > > > > > > > > if you are using the ovs firewall os-vif still precreates teh > ovs > > > > > > > > port > > > > > > > but libvirt deletes it and recreats it too. > > > > > > > as a result there is a race when using openvswitch firewall > that can > > > > > > > result in the RARP packets being lost. > > > > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > Ignazio Cassano > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Il giorno lun 27 apr 2020 alle ore 19:50 Sean Mooney < > > > > > > > > > > > > > > smooney at redhat.com> > > > > > > > > ha scritto: > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 18:19 +0200, Ignazio Cassano wrote: > > > > > > > > > > Hello, I have this problem with rocky or newer with > > > > > > > > iptables_hybrid > > > > > > > > > > firewall. > > > > > > > > > > So, can I solve using post copy live migration ??? 
> > > > > > > > > > > > > > > > > > so this behavior has always been how nova worked but rocky > the > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/neutron-new-port-binding-api.html > > > > > > > > > spec intoduced teh ablity to shorten the outage by pre > biding the > > > > > > > > > > > > > > port and > > > > > > > > > activating it when > > > > > > > > > the vm is resumed on the destiation host before we get to > pos > > > > > > > > live > > > > > > > > > > > > > > migrate. > > > > > > > > > > > > > > > > > > this reduces the outage time although i cant be fully > elimiated > > > > > > > > as > > > > > > > > > > > > > > some > > > > > > > > > level of packet loss is > > > > > > > > > always expected when you live migrate. > > > > > > > > > > > > > > > > > > so yes enabliy post copy live migration should help but be > aware > > > > > > > > that > > > > > > > > > > > > > > if a > > > > > > > > > network partion happens > > > > > > > > > during a post copy live migration the vm will crash and > need to > > > > > > > > be > > > > > > > > > restarted. > > > > > > > > > it is generally safe to use and will imporve the migration > > > > > > > > performace > > > > > > > > > > > > > > but > > > > > > > > > unlike pre copy migration if > > > > > > > > > the guess resumes on the dest and the mempry page has not > been > > > > > > > > copied > > > > > > > > > > > > > > yet > > > > > > > > > then it must wait for it to be copied > > > > > > > > > and retrive it form the souce host. if the connection too > the > > > > > > > > souce > > > > > > > > > > > > > > host > > > > > > > > > is intrupted then the vm cant > > > > > > > > > do that and the migration will fail and the instance will > crash. > > > > > > > > if > > > > > > > > > > > > > > you > > > > > > > > > are using precopy migration > > > > > > > > > if there is a network partaion during the migration the > > > > > > > > migration will > > > > > > > > > fail but the instance will continue > > > > > > > > > to run on the source host. > > > > > > > > > > > > > > > > > > so while i would still recommend using it, i it just good > to be > > > > > > > > aware > > > > > > > > > > > > > > of > > > > > > > > > that behavior change. > > > > > > > > > > > > > > > > > > > Thanks > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > Il Lun 27 Apr 2020, 17:57 Sean Mooney < > smooney at redhat.com> ha > > > > > > > > > > > > > > scritto: > > > > > > > > > > > > > > > > > > > > > On Mon, 2020-04-27 at 17:06 +0200, Ignazio Cassano > wrote: > > > > > > > > > > > > Hello, I have a problem on stein neutron. When a vm > migrate > > > > > > > > > > > > > > from one > > > > > > > > > > > > > > > > > > node > > > > > > > > > > > > to another I cannot ping it for several minutes. If > in the > > > > > > > > vm I > > > > > > > > > > > > > > put a > > > > > > > > > > > > script that ping the gateway continously, the live > > > > > > > > migration > > > > > > > > > > > > > > works > > > > > > > > > > > > > > > > > > fine > > > > > > > > > > > > > > > > > > > > > > and > > > > > > > > > > > > I can ping it. Why this happens ? I read something > about > > > > > > > > > > > > > > gratuitous > > > > > > > > > > > > > > > > > > arp. 
> > > > > > > > > > > > > > > > > > > > > > qemu does not use gratuitous arp but instead uses an > older > > > > > > > > > > > > > > protocal > > > > > > > > > > > > > > > > > > called > > > > > > > > > > > RARP > > > > > > > > > > > to do mac address learning. > > > > > > > > > > > > > > > > > > > > > > what release of openstack are you using. and are you > using > > > > > > > > > > > > > > iptables > > > > > > > > > > > firewall of openvswitch firewall. > > > > > > > > > > > > > > > > > > > > > > if you are using openvswtich there is is nothing we > can do > > > > > > > > until > > > > > > > > > > > > > > we > > > > > > > > > > > finally delegate vif pluging to os-vif. > > > > > > > > > > > currently libvirt handels interface plugging for > kernel ovs > > > > > > > > when > > > > > > > > > > > > > > using > > > > > > > > > > > > > > > > > > the > > > > > > > > > > > openvswitch firewall driver > > > > > > > > > > > https://review.opendev.org/#/c/602432/ would adress > that > > > > > > > > but it > > > > > > > > > > > > > > and > > > > > > > > > > > > > > > > > > the > > > > > > > > > > > neutron patch are > > > > > > > > > > > https://review.opendev.org/#/c/640258 rather out > dated. > > > > > > > > while > > > > > > > > > > > > > > libvirt > > > > > > > > > > > > > > > > > > is > > > > > > > > > > > pluging the vif there will always be > > > > > > > > > > > a race condition where the RARP packets sent by qemu > and > > > > > > > > then mac > > > > > > > > > > > > > > > > > > learning > > > > > > > > > > > packets will be lost. > > > > > > > > > > > > > > > > > > > > > > if you are using the iptables firewall and you have > opnestack > > > > > > > > > > > > > > rock or > > > > > > > > > > > later then if you enable post copy live migration > > > > > > > > > > > it should reduce the downtime. in this conficution we > do not > > > > > > > > have > > > > > > > > > > > > > > the > > > > > > > > > > > > > > > > > > race > > > > > > > > > > > betwen neutron and libvirt so the rarp > > > > > > > > > > > packets should not be lost. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Please, help me ? > > > > > > > > > > > > Any workaround , please ? > > > > > > > > > > > > > > > > > > > > > > > > Best Regards > > > > > > > > > > > > Ignazio > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From SSelf at performair.com Fri May 14 17:29:52 2021 From: SSelf at performair.com (SSelf at performair.com) Date: Fri, 14 May 2021 17:29:52 +0000 Subject: [victoria] Windows Server 2008 SP2 - No bootble device Message-ID: All; I'm having some trouble getting a Windows Server 2008 SP2 image to boot. On instance start I am greeted with the following: Booting from Hard Disk... Boot failed: not a bootable disk No bootable device. I've followed the instructions, referenced below, with a minor exception; the "Red Hat VirtIO SCSI controller" driver was installed after the OS had been installed as the Pre-installation Environment had troubles recognizing them (yay 2008?). Any recommendations would be welcome. Reference: https://superuser.openstack.org/articles/how-to-deploy-windows-on-openstack/ Thank you, Stephen Self IT Manager Perform Air International Inc. sself at performair.com www.performair.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnsomor at gmail.com Fri May 14 18:12:23 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 14 May 2021 11:12:23 -0700 Subject: [Octavia][Victoria] No service listening on port 9443 in the amphora instance In-Reply-To: References: <038c74c9-1365-0c08-3b5b-93b4d175dcb3@zylacomputing.com> <326471ef-287b-d937-a174-0b1ccbbd6273@zylacomputing.com> <8ce76cbd-ca2c-033a-5406-5a1557d84302@zylacomputing.com> Message-ID: Hi Luke, 1. The "octavia-health-manager listent-port"(s) should be "ACTIVE" in neutron. Something may have gone wrong in the deployment tooling or in neutron for those ports. 2. As for the VIP ports on the amphora, the base port should be "ACTIVE", but the VIP port we use to store the VIP IP address should be "DOWN". The "ACTIVE" base ports will list the VIP IP as it's "allowed-address-pairs" port/ip. 3. On the issue with the health monitor of type PING, it's rarely used outside of some of our tests as it's a poor gauge of the health of an endpoint (https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html#other-health-monitors). That said, it should be working. It's an external test in the amphora provider that runs "ping/ping6", so it's interesting that you can ping from the netns successfully. Inside the amphora network namespace, can you try running the ping script directly? export HAPROXY_SERVER_ADDR= /var/lib/octavia/ping-wrapper.sh echo $? An answer of 0 means the ping was successful, 1 a failure. Michael On Fri, May 14, 2021 at 8:15 AM Luke Camilleri wrote: > > Hi Michael, thanks as always for the below, I have watched the video and configured all the requirements as shown in those guides. Working great right now. > > I have noticed the following points and would like to know if you can give me some feedback please: > > In the Octavia project at the networks screen --> lb-mgmt-net --> Ports --> octavia-health-manager-listen-port (the IP bound to the health-manager service) has its status Down. It does not create any sort of issue but was wondering if this was normal behavior? > Similarly to the above point, in the tenant networks screen, every port that is "Attached Device - Octavia" has its status reported as "Down" ( these are the VIP addresses assigned to the amphora). Just need to confirm that this is normal behaviour > Creating a health monitor of type ping fails to get the operating status of the nodes and the nodes are in error (horizon) and the amphora reports that there are no backends and hence it is not working (I am using the same backend nodes with another loadbalancer but with an HTTP check and it is working fine. a security group is setup to allow ping from 0.0.0.0/0 and from the amphora-haproxy network namespace on the amphora instance I can ping both nodes without issues ). Below the amphora's haproxy.log > > May 14 15:00:50 amphora-9658d9ec-3bf1-407f-a134-86304899c015 haproxy[1984]: Server c0092bf4-d2a2-431f-8b7f-9dc3ace52933:e268db93-2d20-4395-bd6f-f6d835bce769/f04824b7-6fdf-46dc-bc83-b98b3b9f5be0 is DOWN, reason: Socket error, info: "Resource temporarily unavailable", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue. > > May 14 15:00:50 amphora-9658d9ec-3bf1-407f-a134-86304899c015 haproxy[1984]: Server c0092bf4-d2a2-431f-8b7f-9dc3ace52933:e268db93-2d20-4395-bd6f-f6d835bce769/f04824b7-6fdf-46dc-bc83-b98b3b9f5be0 is DOWN, reason: Socket error, info: "Resource temporarily unavailable", check duration: 0ms. 0 active and 0 backup servers left. 
0 sessions active, 0 requeued, 0 remaining in queue. > > May 14 15:00:50 amphora-9658d9ec-3bf1-407f-a134-86304899c015 haproxy[1984]: backend c0092bf4-d2a2-431f-8b7f-9dc3ace52933:e268db93-2d20-4395-bd6f-f6d835bce769 has no server available! > > May 14 15:00:50 amphora-9658d9ec-3bf1-407f-a134-86304899c015 haproxy[1984]: backend c0092bf4-d2a2-431f-8b7f-9dc3ace52933:e268db93-2d20-4395-bd6f-f6d835bce769 has no server available! > > Thanks in advance > > On 13/05/2021 18:33, Michael Johnson wrote: > > You are correct that two IPs are being allocated for the VIP, one is a > secondary IP which neutron implements as an "allowed address pairs" > port. We do this to allow failover of the amphora instance should nova > fail the service VM. We hold the VIP IP in a special port so the IP is > not lost while we rebuild the service VM. > If you are using the active/standby topology (or an Octavia flavor > with active/standby enabled), this failover is accelerated with nearly > no visible impact to the flows through the load balancer. > Active/Standby has been an Octavia feature since the Mitaka release. I > gave a demo of it at the Tokoyo summit here: > https://youtu.be/8n7FGhtOiXk?t=1420 > > You can enable active/standby as the default by setting the > "loadbalancer_topology" setting in the configuration file > (https://docs.openstack.org/octavia/latest/configuration/configref.html#controller_worker.loadbalancer_topology) > or by creating an Octavia flavor that creates the load balancer with > an active/standby topology > (https://docs.openstack.org/octavia/latest/admin/flavors.html). > > Michael > > On Thu, May 13, 2021 at 4:23 AM Luke Camilleri > wrote: > > HI Michael, thanks a lot for the below information it is very helpful. I > ended up setting the o-hm0 interface statically in the > octavia-interface.sh script which is called by the service and also > added a delay to make sure that the bridges are up before trying to > create a veth pair and connect the endpoints. > > Also I edited the unit section of the health-manager service and at the > after option I added octavia-interface.service or else on startup the > health manager will not bind to the lb-mgmt-net since it would not be up yet > > The floating IPs part was a bit tricky until I understood what was > really going on with the VIP concept and how better and more flexible it > is to set the VIP on the tenant network and then associate with public > ip to the VIP. > > With this being said I noticed that 2 IPs are being assigned to the > amphora instance and that the actual port assigned to the instance has > an allowed pair with the VIP port. I checked online and it seems that > there is an active/standby project going on with VRRP/keepalived and in > fact the keepalived daemon is running in the amphora instance. > > Am I on the right track with the active/standby feature and if so do you > have any installation/project links to share please so that I can test it? > > Regards > > On 12/05/2021 08:37, Michael Johnson wrote: > > Answers inline below. > > Michael > > On Mon, May 10, 2021 at 5:15 PM Luke Camilleri > wrote: > > Hi Michael and thanks a lot for the detailed answer below. 
> > I believe I have got most of this sorted out apart from some small issues below: > > If the o-hm0 interface gets the IP information from the DHCP server setup by neutron for the lb-mgmt-net, then the management node will always have 2 default gateways and this will bring along issues, the same DHCP settings when deployed to the amphora do not have the same issue since the amphora only has 1 IP assigned on the lb-mgmt-net. Can you please confirm this? > > The amphorae do not have issues with DHCP and gateways as we control > the DHCP client configuration inside the amphora. It does only have > one IP on the lb-mgmt-net, it will honor gateways provided by neutron > for the lb-mgmt-net traffic, but a gateway is not required on the > lb-mgmt-network unless you are routing the lb-mgmt-net traffic across > subnets. > > How does the amphora know where to locate the worker and housekeeping processes or does the traffic originate from the services instead? Maybe the addresses are "injected" from the config file? > > The worker and housekeeping processes only create connections to the > amphora, they do not receive connections from them. The amphora send a > heartbeat packet to the health manager endpoints every ten seconds by > default. The list of valid health manager endpoints is included in the > amphora agent configuration file that is injected into the service VM > at boot time. It can be updated using the Octavia admin API for > refreshing the amphora agent configuration. > > Can you please confirm if the same floating IP concept runs from public (external) IP to the private (tenant) and from private to lb-mgmt-net please? > > Octavia does not use floating IPs. Users can create and assign > floating IPs via neutron if they would like, but they are not > necessary. Octavia VIPs can be created directly on neutron "external" > networks, avoiding the NAT overhead of floating IPs. > There is no practical reason to assign a floating IP to a port on the > lb-mgmt-net as tenant traffic is never on or accessible from that > network. > > Thanks in advance for any feedback > > On 06/05/2021 22:46, Michael Johnson wrote: > > Hi Luke, > > 1. I agree that DHCP is technically unnecessary for the o-hm0 > interface if you can manage your address allocation on the network you > are using for the lb-mgmt-net. > I don't have detailed information about the Ubuntu install > instructions, but I suspect it was done to simplify the IPAM to be > managed by whatever is providing DHCP on the lb-mgmt-net provided (be > it neutron or some other resource on a provider network). > The lb-mgmt-net is simply a neutron network that the amphora > management address is on. It is routable and does not require external > access. The only tricky part to it is the worker, health manager, and > housekeeping processes need to be reachable from the amphora, and the > controllers need to reach the amphora over the network(s). There are > many ways to accomplish this. > > 2. See my above answer. Fundamentally the lb-mgmt-net is just a > neutron network that nova can use to attach an interface to the > amphora instances for command and control traffic. As long as the > controllers can reach TCP 9433 on the amphora, and the amphora can > send UDP 5555 back to the health manager endpoints, it will work fine. > > 3. Octavia, with the amphora driver, does not require any special > configuration in Neutron (beyond the advanced services RBAC policy > being available for the neutron service account used in your octavia > configuration file). 
The neutron_lbaas.conf and services_lbaas.conf > are legacy configuration files/settings that were used for > neutron-lbaas which is now end of life. See the wiki page for > information on the deprecation of neutron-lbaas: > https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation. > > Michael > > On Thu, May 6, 2021 at 12:30 PM Luke Camilleri > wrote: > > Hi Michael and thanks a lot for your help on this, after following your > steps the agent got deployed successfully in the amphora-image. > > I have some other queries that I would like to ask mainly related to the > health-manager/load-balancer network setup and IP assignment. First of > all let me point out that I am using a manual installation process, and > it might help others to understand the underlying infrastructure > required to make this component work as expected. > > 1- The installation procedure contains this step: > > $ sudo cp octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia > > which is later on called to assign the IP to the o-hm0 interface which > is connected to the lb-management network as shown below: > > $ sudo dhclient -v o-hm0 -cf /etc/dhcp/octavia > > Apart from having a dhcp config for a single IP seems a bit of an > overkill, using these steps is injecting an additional routing table > into the default namespace as shown below in my case: > > # route -n > Kernel IP routing table > Destination Gateway Genmask Flags Metric Ref Use > Iface > 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0 > 0.0.0.0 10.X.X.1 0.0.0.0 UG 100 0 0 ensX > 10.X.X.0 0.0.0.0 255.255.255.0 U 100 0 0 ensX > 169.254.169.254 172.16.0.100 255.255.255.255 UGH 0 0 0 o-hm0 > 172.16.0.0 0.0.0.0 255.240.0.0 U 0 0 0 o-hm0 > > Since the load-balancer management network does not need any external > connectivity (but only communication between health-manager service and > amphora-agent), why is a gateway required and why isn't the IP address > allocated as part of the interface creation script which is called when > the service is started or stopped (example below)? > > --- > > #!/bin/bash > > set -ex > > MAC=$MGMT_PORT_MAC > BRNAME=$BRNAME > > if [ "$1" == "start" ]; then > ip link add o-hm0 type veth peer name o-bhm0 > brctl addif $BRNAME o-bhm0 > ip link set o-bhm0 up > ip link set dev o-hm0 address $MAC > *** ip addr add 172.16.0.2/12 dev o-hm0 > ***ip link set o-hm0 mtu 1500 > ip link set o-hm0 up > iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT > elif [ "$1" == "stop" ]; then > ip link del o-hm0 > else > brctl show $BRNAME > ip a s dev o-hm0 > fi > > --- > > 2- Is there a possibility to specify a fixed vlan outside of tenant > range for the load balancer management network? > > 3- Are the configuration changes required only in neutron.conf or also > in additional config files like neutron_lbaas.conf and > services_lbaas.conf, similar to the vpnaas configuration? > > Thanks in advance for any assistance, but its like putting together a > puzzle of information :-) > > On 05/05/2021 20:25, Michael Johnson wrote: > > Hi Luke. > > Yes, the amphora-agent will listen on 9443 in the amphorae instances. > It uses TLS mutual authentication, so you can get a TLS response, but > it will not let you into the API without a valid certificate. A simple > "openssl s_client" is usually enough to prove that it is listening and > requesting the client certificate. > > I can't talk to the "openstack-octavia-diskimage-create" package you > found in centos, but I can discuss how to build an amphora image using > the OpenStack tools. 
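As a concrete form of the "openssl s_client" check mentioned above, something like the following can be run from the controller (the amphora IP is taken from the log excerpts elsewhere in this thread; the certificate path is only an assumption and should match client_cert in the [haproxy_amphora] section of octavia.conf):

openssl s_client -connect 172.16.4.46:9443 \
  -cert /etc/octavia/certs/client.cert-and-key.pem \
  -key /etc/octavia/certs/client.cert-and-key.pem

If the agent is up, the handshake completes and the amphora's certificate chain is printed; a "Connection refused" here matches the worker.log errors quoted further down.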
> > If you get Octavia from git or via a release tarball, we provide a > script to build the amphora image. This is how we build our images for > the testing gates, etc. and is the recommended way (at least from the > OpenStack Octavia community) to create amphora images. > > https://opendev.org/openstack/octavia/src/branch/master/diskimage-create > > For CentOS 8, the command would be: > > diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 (3 > is the minimum disk size for centos images, you may want more if you > are not offloading logs) > > I just did a run on a fresh centos 8 instance: > git clone https://opendev.org/openstack/octavia > python3 -m venv dib > source dib/bin/activate > pip3 install diskimage-builder PyYAML six > sudo dnf install yum-utils > ./diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 > > This built an image. > > Off and on we have had issues building CentOS images due to issues in > the tools we rely on. If you run into issues with this image, drop us > a note back. > > Michael > > On Wed, May 5, 2021 at 9:37 AM Luke Camilleri > wrote: > > Hi there, i am trying to get Octavia running on a Victoria deployment on > CentOS 8. It was a bit rough getting to the point to launch an instance > mainly due to the load-balancer management network and the lack of > documentation > (https://docs.openstack.org/octavia/victoria/install/install.html) to > deploy this oN CentOS. I will try to fix this once I have my deployment > up and running to help others on the way installing and configuring this :-) > > At this point a LB can be launched by the tenant and the instance is > spawned in the Octavia project and I can ping and SSH into the amphora > instance from the Octavia node where the octavia-health-manager service > is running using the IP within the same subnet of the amphoras > (172.16.0.0/12). > > Unfortunately I keep on getting these errors in the log file of the > worker log (/var/log/octavia/worker.log): > > 2021-05-05 01:54:49.368 14521 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect > to instance. Retrying.: requests.exceptions.ConnectionError: > HTTPSConnectionPool(host='172.16.4.46', p > ort=9443): Max retries exceeded with url: // (Caused by > NewConnectionError(' at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] > Connection ref > used',)) > > 2021-05-05 01:54:54.374 14521 ERROR > octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries > (currently set to 120) exhausted. The amphora is unavailable. Reason: > HTTPSConnectionPool(host='172.16 > .4.46', port=9443): Max retries exceeded with url: // (Caused by > NewConnectionError(' at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] Conne > ction refused',)) > > 2021-05-05 01:54:54.374 14521 ERROR > octavia.controller.worker.v1.tasks.amphora_driver_tasks [-] Amphora > compute instance failed to become reachable. This either means the > compute driver failed to fully boot the > instance inside the timeout interval or the instance is not reachable > via the lb-mgmt-net.: > octavia.amphorae.driver_exceptions.exceptions.TimeOutException: > contacting the amphora timed out > > obviously the instance is deleted then and the task fails from the > tenant's perspective. > > The main issue here is that there is no service running on port 9443 on > the amphora instance. 
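A hedged way to confirm that from inside the amphora itself (the thread already establishes SSH access over the lb-mgmt-net; the unit name is an assumption and may differ depending on how the image was built):

sudo ss -tlnp | grep 9443
sudo systemctl status amphora-agent
sudo journalctl -u amphora-agent --no-pager | tail -n 50

If nothing is listening and no such unit exists, the image was built without the agent, which is what the rest of this thread works through.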
I am assuming that this is in fact the > amphora-agent service that is running on the instance which should be > listening on this port 9443 but the service does not seem to be up or > not installed at all. > > To create the image I have installed the CentOS package > "openstack-octavia-diskimage-create" which provides the utility > disk-image-create but from what I can conclude the amphora-agent is not > being installed (thought this was done automatically by default :-( ) > > Can anyone let me know if the amphora-agent is what gets queried on port > 9443 ? > > If the agent is not installed/injected by default when building the > amphora image? > > The command to inject the amphora-agent into the amphora image when > using the disk-image-create command? > > Thanks in advance for any assistance > > From elod.illes at est.tech Fri May 14 20:09:17 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 14 May 2021 22:09:17 +0200 Subject: [all][stable] Delete $series-eol tagged branches Message-ID: <96e1cb5a-99c5-7a05-01c0-1635743d9c1d@est.tech> Hi, As I wrote previously [1] the long-waited deletion of $series-eol tagged branches started with the removal of ocata-eol tagged ones first (for the list of deleted branches, see: [2]) Then I also sent out a warning [3] about the next step, to delete pike-eol tagged branches, which finally happened today (for the list of deleted branches, see: [4]). So now I'm sending out a *warning* again, that as a 3rd step, the deletion of already tagged but still open branches will continue in 1 or 2 weeks time frame. If everything works as expected, then branches with *queens-eol*, *rocky-eol* and *stein-eol* tags can be processed in one batch. Also I would like to ask the teams who have $series-eol tagged branches to abandon all open patches on those branches, otherwise the branch cannot be deleted. Thanks, Előd [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021949.html [2] http://paste.openstack.org/show/804953/ [3] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022173.html [4] http://paste.openstack.org/show/805404/ From elod.illes at est.tech Fri May 14 20:18:18 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 14 May 2021 22:18:18 +0200 Subject: [neutron][stadium][stable] Proposal to make stable/ocata and stable/pike branches EOL In-Reply-To: <6767170.WaEBY35tY3@p1> References: <15209060.0YdeOJI3E6@p1> <55ff12c8-1e9c-16b5-578b-834d1ccf2563@est.tech> <6767170.WaEBY35tY3@p1> Message-ID: <5d31de6f-162c-56c8-eee1-efeb212df05d@est.tech> Hi, The patch was merged, so the ocata-eol tags were created for neutron projects. After the successful tagging I have executed the branch deletion, which has the following result: Branch stable/ocata successfully deleted from openstack/networking-ovn! Branch stable/ocata successfully deleted from openstack/neutron-dynamic-routing! Branch stable/ocata successfully deleted from openstack/neutron-fwaas! Branch stable/ocata successfully deleted from openstack/neutron-lbaas! Branch stable/ocata successfully deleted from openstack/neutron-lib! Branch stable/ocata successfully deleted from openstack/neutron! Thanks, Előd On 2021. 05. 12. 9:22, Slawek Kaplonski wrote: > > Hi, > > > Dnia środa, 5 maja 2021 20:35:48 CEST Előd Illés pisze: > > > Hi, > > > > > > Ocata is unfortunately unmaintained for a long time as some general test > > > jobs are broken there, so as a stable-maint-core member I support to tag > > > neutron's stable/ocata as End of Life. 
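For anyone who later needs code from one of the branches being deleted in these threads, the history stays reachable through the eol tags; a minimal sketch (the local branch name is just an example):

git fetch origin --tags
git checkout -b ocata-eol-archive ocata-eol

The same applies to the pike-eol, queens-eol, rocky-eol and stein-eol tags discussed here.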
After the branch is tagged, > > > please ping me and I can arrange the deletion of the branch. > > > > > > For Pike, I volunteered at the PTG in 2020 to help with reviews there, I > > > still keep that offer, however I am clearly not enough to keep it > > > maintained, besides backports are not arriving for stable/pike in > > > neutron. Anyway, if the gate is functional there, then I say we could > > > keep it open (but as far as I see how gate situation is worsen now, as > > > more and more things go wrong, I don't expect that will take long). If > > > not, then I only ask that let's do the EOL'ing first with Ocata and when > > > it is done, then continue with neutron's stable/pike. > > > > > > For the process please follow the steps here: > > > > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life > > > (with the only exception, that in the last step, instead of infra team, > > > please turn to me/release team - patch for the documentation change is > > > on the way: > > > https://review.opendev.org/c/openstack/project-team-guide/+/789932 ) > > > Thx. I just proposed patch > https://review.opendev.org/c/openstack/releases/+/790904 >  to make > ocata-eol in all neutron projects. > > > > > > > Thanks, > > > > > > Előd > > > > > > On 2021. 05. 05. 16:13, Slawek Kaplonski wrote: > > > > Hi, > > > > > > > > > > > > I checked today that stable/ocata and stable/pike branches in both > > > > Neutron and neutron stadium projects are pretty inactive since > long time. > > > > > > > > * according to [1], last patch merged patch in Neutron for stable/pike > > > > was in July 2020 and in ocata October 2019, > > > > > > > > * for stadium projects, according to [2] it was September 2020. > > > > > > > > > > > > According to [3] and [4] there are no opened patches for any of those > > > > branches for Neutron and any stadium project except neutron-lbaas. > > > > > > > > > > > > So based on that info I want to propose that we will close both those > > > > branches are EOL now and before doing that, I would like to know if > > > > anyone would like to keep those branches to be open still. > > > > > > > > > > > > [1] > > > > > https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > > > > > > > > > > > [2] > > > > > https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networki > > > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > > > > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged> > > > > > > > > [3] > > > > > https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ocata+OR+branch:stable/pike)+status:open > > > > > > > > > > > > > [4] > > > > > https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networki > > > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open > > > > > > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open> > > > > > > > > > > > > -- > > > > > > > > Slawek Kaplonski > > > > > > > > Principal Software Engineer > > > > > > > > Red Hat > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From klemen at psi-net.si Fri May 14 20:26:31 2021 From: klemen at psi-net.si (Klemen Pogacnik) Date: Fri, 14 May 2021 22:26:31 +0200 Subject: [kolla] Reorganization of kolla-ansible documentation Message-ID: Hello! I promised to prepare my view as a user of kolla-ansible on its documentation. In my opinion the division between admin guides and user guides is artificial, as the user of kolla-ansible is actually the cloud administrator. Maybe it would be good to think about reorganizing the structure of documentation. Many good chapters are already written, they only have to be positioned in the right place to be found more easily. So here is my proposal of kolla-ansible doc's structure: 1. Introduction 1.1. mission 1.2. benefits 1.3. support matrix 2. Architecture 2.1. basic architecture 2.2. HA architecture 2.3. network architecture 2.4. storage architecture 3. Workflows 3.1. preparing the surroundings (networking, docker registry, ...) 3.2. preparing servers (packages installation) 3.3. configuration (of kolla-ansible and description of basic logic for configuration of Openstack modules) 3.4. 1st day procedures (bootstrap, deploy, destroy) 3.5. 2nd day procedures (reconfigure, upgrade, add, remove nodes ...) 3.6. multiple regions 3.7. multiple cloud 3.8. security 3.9. troubleshooting (how to check, if cloud works, what to do, if it doesn't) 4. Use Cases 4.1. all-in-one 4.2. basic vm multinode 4.3. some production use cases 5. Reference guide Mostly the same structure as already is. Except it would be desirable that description of each module has: - purpose of the module - configuration of the module - how to use it with links to module docs - basic troubleshooting 6. Contributor guide The documentation also needs figures, pictures, diagrams to be more understandable. So at least in the first chapters some of them shall be added. I'm also thinking about convergence of documentation of kayobe, kolla and kolla-ansible projects. It's true that there's no strict connection between kayobe and other two and kolla containers can be used without kolla-ansible playbooks. But the real benefit the user can get is to use all three projects together. But let's leave that for the second phase. So please comment on this proposal. Do you think it's going in the right direction? If yes, I can refine it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Fri May 14 20:39:30 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 14 May 2021 22:39:30 +0200 Subject: [octavia][tripleo][kolla][stable][release] $series-eol delete problem Message-ID: Hi teams in $SUBJECT, during the deletion of $series-eol tagged branches it turned out that the below listed branches / repositories contains merged patches on top of $series-eol tag. The issue is with this that whenever the branch is deleted only the $series-eol (and other) tags can be checked out, so the changes that were merged after the eol tags, will be *lost*. There are two options now: 1. Create another tag (something like: "$series-eol-extra"), so that the extra patches will not be lost completely, because they can be checked out with the newly created tags 2. 
Delete the branch anyway and don't care about the lost patch(es) Here are the list of such branches, please consider which option is good for the team and reply to this mail: openstack/octavia * stable/stein has patches on top of the stein-eol tag * stable/queens has patches on top of the queens-eol tag openstack/kolla * stable/pike has patches on top of the pike-eol tag * stable/ocata has patches on top of the ocata-eol tag openstack/tripleo-common * stable/rocky has patches on top of the rocky-eol tag openstack/os-apply-config * stable/pike has patches on top of the pike-eol tag * stable/ocata has patches on top of the ocata-eol tag openstack/os-cloud-config stable/ocata has patches on top of the ocata-eol tag Thanks, Előd From sebastian.luna.valero at gmail.com Sat May 15 08:08:24 2021 From: sebastian.luna.valero at gmail.com (Sebastian Luna Valero) Date: Sat, 15 May 2021 10:08:24 +0200 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: <771F27B8-6C13-4F04-85D3-331E2AF7D89F@binero.com> References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> <20210512094943.nfttmyxoss3zut2n@localhost> <90542A09-3A7D-4FE2-83FD-10D46CCEF5A2@iaa.es> <20210512213446.7c222mlcdwxiosly@localhost> <0292176B-BE6D-448E-8948-EE10300E2520@iaa.es> <20210513073722.x3z3qkcpvg5am6ia@localhost> <771F27B8-6C13-4F04-85D3-331E2AF7D89F@binero.com> Message-ID: Hi All, Thanks for your inputs so far. I am also trying to help Manu with this issue. The "cinder-volume" service was working properly with the existing configuration. However, after a power outage the service is no longer reported as "up". Looking at the source code, the service status is reported as "down" by "cinder-scheduler" in here: https://github.com/openstack/cinder/blob/stable/train/cinder/scheduler/host_manager.py#L618 With message: "WARNING cinder.scheduler.host_manager [req-<>- default default] volume service is down. (host: rbd:volumes at ceph-rbd)" I printed out the "service" tuple https://github.com/openstack/cinder/blob/stable/train/cinder/scheduler/host_manager.py#L615 and we get: "2021-05-15 09:57:24.918 7 WARNING cinder.scheduler.host_manager [<> - default default] Service(active_backend_id=None,availability_zone='nova',binary='cinder-volume',cluster=,cluster_name=None,created_at=2020-06-12T07:53:42Z,deleted=False,deleted_at=None,disabled=False,disabled_reason=None,frozen=False,host='rbd:volumes at ceph-rbd ',id=12,modified_at=None,object_current_version='1.38',replication_status='disabled',report_count=8067424,rpc_current_version='3.16',topic='cinder-volume',updated_at=2021-05-12T15:37:52Z,uuid='604668e8-c2e7-46ed-a2b8-086e588079ac')" Cinder is configured with a Ceph RBD backend, as explained in https://github.com/openstack/kolla-ansible/blob/stable/train/doc/source/reference/storage/external-ceph-guide.rst#cinder That's where the "backend_host=rbd:volumes" configuration is coming from. We are using 3 controller nodes for OpenStack and 3 monitor nodes for Ceph. The Ceph cluster doesn't report any error. The "cinder-volume" containers don't report any error. Moreover, when we go inside the "cinder-volume" container we are able to list existing volumes with: rbd -p cinder.volumes --id cinder -k /etc/ceph/ceph.client.cinder.keyring ls So the connection to the Ceph cluster works. Why is "cinder-scheduler" reporting the that the backend Ceph cluster is down? 
Many thanks, Sebastian On Thu, 13 May 2021 at 13:12, Tobias Urdin wrote: > Hello, > > I just saw that you are running Ceph Octopus with Train release and wanted > to let you know that we saw issues with the os-brick version shipped with > Train not supporting client version of Ceph Octopus. > > So for our Ceph cluster running Octopus we had to keep the client version > on Nautilus until upgrading to Victoria which included a newer version of > os-brick. > > Maybe this is unrelated to your issue but just wanted to put it out there. > > Best regards > Tobias > > > On 13 May 2021, at 12:55, ManuParra wrote: > > > > Hello Gorka, not yet, let me update cinder configuration, add the > option, restart cinder and I’ll update the status. > > Do you recommend other things to try for this cycle? > > Regards. > > > >> On 13 May 2021, at 09:37, Gorka Eguileor wrote: > >> > >>> On 13/05, ManuParra wrote: > >>> Hi Gorka again, yes, the first thing is to know why you can't connect > to that host (Ceph is actually set up for HA) so that's the way to do it. I > tell you this because previously from the beginning of the setup of our > setup it has always been like that, with that hostname and there has been > no problem. > >>> > >>> As for the errors, the strangest thing is that in Monasca I have not > found any error log, only warning on “volume service is down. (host: > rbd:volumes at ceph-rbd)" and info, which is even stranger. > >> > >> Have you tried the configuration change I recommended? > >> > >> > >>> > >>> Regards. > >>> > >>>> On 12 May 2021, at 23:34, Gorka Eguileor wrote: > >>>> > >>>> On 12/05, ManuParra wrote: > >>>>> Hi Gorka, let me show the cinder config: > >>>>> > >>>>> [ceph-rbd] > >>>>> rbd_ceph_conf = /etc/ceph/ceph.conf > >>>>> rbd_user = cinder > >>>>> backend_host = rbd:volumes > >>>>> rbd_pool = cinder.volumes > >>>>> volume_backend_name = ceph-rbd > >>>>> volume_driver = cinder.volume.drivers.rbd.RBDDriver > >>>>> … > >>>>> > >>>>> So, using rbd_exclusive_cinder_pool=True it will be used just for > volumes? but the log is saying no connection to the backend_host. > >>>> > >>>> Hi, > >>>> > >>>> Your backend_host doesn't have a valid hostname, please set a proper > >>>> hostname in that configuration option. > >>>> > >>>> Then the next thing you need to have is the cinder-volume service > >>>> running correctly before making any requests. > >>>> > >>>> I would try adding rbd_exclusive_cinder_pool=true then tailing the > >>>> volume logs, and restarting the service. > >>>> > >>>> See if the logs show any ERROR level entries. > >>>> > >>>> I would also check the service-list output right after the service is > >>>> restarted, if it's up then I would check it again after 2 minutes. > >>>> > >>>> Cheers, > >>>> Gorka. > >>>> > >>>> > >>>>> > >>>>> Regards. > >>>>> > >>>>> > >>>>>> On 12 May 2021, at 11:49, Gorka Eguileor > wrote: > >>>>>> > >>>>>> On 12/05, ManuParra wrote: > >>>>>>> Thanks, I have restarted the service and I see that after a few > minutes then cinder-volume service goes down again when I check it with the > command openstack volume service list. > >>>>>>> The host/service that contains the cinder-volumes is > rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come > from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the > volumes. I have checked Ceph and the status of everything is correct, no > errors or warnings. > >>>>>>> The error I have is that cinder can’t connect to > rbd:volumes at ceph-rbd. Any further suggestions? 
Thanks in advance. > >>>>>>> Kind regards. > >>>>>>> > >>>>>> > >>>>>> Hi, > >>>>>> > >>>>>> You are most likely using an older release, have a high number of > cinder > >>>>>> RBD volumes, and have not changed configuration option > >>>>>> "rbd_exclusive_cinder_pool" from its default "false" value. > >>>>>> > >>>>>> Please add to your driver's section in cinder.conf the following: > >>>>>> > >>>>>> rbd_exclusive_cinder_pool = true > >>>>>> > >>>>>> > >>>>>> And restart the service. > >>>>>> > >>>>>> Cheers, > >>>>>> Gorka. > >>>>>> > >>>>>>>> On 11 May 2021, at 22:30, Eugen Block wrote: > >>>>>>>> > >>>>>>>> Hi, > >>>>>>>> > >>>>>>>> so restart the volume service;-) > >>>>>>>> > >>>>>>>> systemctl restart openstack-cinder-volume.service > >>>>>>>> > >>>>>>>> > >>>>>>>> Zitat von ManuParra : > >>>>>>>> > >>>>>>>>> Dear OpenStack community, > >>>>>>>>> > >>>>>>>>> I have encountered a problem a few days ago and that is that > when creating new volumes with: > >>>>>>>>> > >>>>>>>>> "openstack volume create --size 20 testmv" > >>>>>>>>> > >>>>>>>>> the volume creation status shows an error. If I go to the error > log detail it indicates: > >>>>>>>>> > >>>>>>>>> "Schedule allocate volume: Could not find any available weighted > backend". > >>>>>>>>> > >>>>>>>>> Indeed then I go to the cinder log and it indicates: > >>>>>>>>> > >>>>>>>>> "volume service is down - host: rbd:volumes at ceph-rbd”. > >>>>>>>>> > >>>>>>>>> I check with: > >>>>>>>>> > >>>>>>>>> "openstack volume service list” in which state are the services > and I see that indeed this happens: > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down > | 2021-04-29T09:48:42.000000 | > >>>>>>>>> > >>>>>>>>> And stopped since 2021-04-29 ! > >>>>>>>>> > >>>>>>>>> I have checked Ceph (monitors,managers, osds. etc) and there are > no problems with the Ceph BackEnd, everything is apparently working. > >>>>>>>>> > >>>>>>>>> This happened after an uncontrolled outage.So my question is how > do I restart only cinder-volumes (I also have cinder-backup, > cinder-scheduler but they are ok). > >>>>>>>>> > >>>>>>>>> Thank you very much in advance. Regards. > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>> > >>>>>>> > >>>>>> > >>>>>> > >>>>> > >>>> > >>> > >> > >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Sat May 15 17:39:57 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sat, 15 May 2021 13:39:57 -0400 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> <20210512094943.nfttmyxoss3zut2n@localhost> <90542A09-3A7D-4FE2-83FD-10D46CCEF5A2@iaa.es> <20210512213446.7c222mlcdwxiosly@localhost> <0292176B-BE6D-448E-8948-EE10300E2520@iaa.es> <20210513073722.x3z3qkcpvg5am6ia@localhost> <771F27B8-6C13-4F04-85D3-331E2AF7D89F@binero.com> Message-ID: That is a bit strange. I don't use the Ceph backend so I don't know any magic tricks. - I'm surprised that the Debug logging level doesn't add anything else. Is there any other lines besides the "connecting" one? - Can we narrow down the port/IP destination for the Ceph RBD traffic? - Can we failover the cinder-volume service to another controller and check the status of the volume service? - Did the power outage impact the Ceph cluster + network gear + all the controllers? 
- Does the content of /etc/ceph/ceph.conf appear to be valid inside the container? Looking at the code - https://github.com/openstack/cinder/blob/stable/train/cinder/volume/drivers/rbd.py#L432 It should raise an exception if there is a timeout when the connection client is built. except self.rados.Error: msg = _("Error connecting to ceph cluster.") LOG.exception(msg) client.shutdown() raise exception.VolumeBackendAPIException(data=msg) On Sat, May 15, 2021 at 4:16 AM Sebastian Luna Valero < sebastian.luna.valero at gmail.com> wrote: > > Hi All, > > Thanks for your inputs so far. I am also trying to help Manu with this > issue. > > The "cinder-volume" service was working properly with the existing > configuration. However, after a power outage the service is no longer > reported as "up". > > Looking at the source code, the service status is reported as "down" by > "cinder-scheduler" in here: > > > https://github.com/openstack/cinder/blob/stable/train/cinder/scheduler/host_manager.py#L618 > > With message: "WARNING cinder.scheduler.host_manager [req-<>- default > default] volume service is down. (host: rbd:volumes at ceph-rbd)" > > I printed out the "service" tuple > https://github.com/openstack/cinder/blob/stable/train/cinder/scheduler/host_manager.py#L615 > and we get: > > "2021-05-15 09:57:24.918 7 WARNING cinder.scheduler.host_manager [<> - > default default] > Service(active_backend_id=None,availability_zone='nova',binary='cinder-volume',cluster=,cluster_name=None,created_at=2020-06-12T07:53:42Z,deleted=False,deleted_at=None,disabled=False,disabled_reason=None,frozen=False,host='rbd:volumes at ceph-rbd > ',id=12,modified_at=None,object_current_version='1.38',replication_status='disabled',report_count=8067424,rpc_current_version='3.16',topic='cinder-volume',updated_at=2021-05-12T15:37:52Z,uuid='604668e8-c2e7-46ed-a2b8-086e588079ac')" > > Cinder is configured with a Ceph RBD backend, as explained in > https://github.com/openstack/kolla-ansible/blob/stable/train/doc/source/reference/storage/external-ceph-guide.rst#cinder > > That's where the "backend_host=rbd:volumes" configuration is coming from. > > We are using 3 controller nodes for OpenStack and 3 monitor nodes for Ceph. > > The Ceph cluster doesn't report any error. The "cinder-volume" containers > don't report any error. Moreover, when we go inside the "cinder-volume" > container we are able to list existing volumes with: > > rbd -p cinder.volumes --id cinder -k /etc/ceph/ceph.client.cinder.keyring > ls > > So the connection to the Ceph cluster works. > > Why is "cinder-scheduler" reporting the that the backend Ceph cluster is > down? > > Many thanks, > Sebastian > > > On Thu, 13 May 2021 at 13:12, Tobias Urdin > wrote: > >> Hello, >> >> I just saw that you are running Ceph Octopus with Train release and >> wanted to let you know that we saw issues with the os-brick version shipped >> with Train not supporting client version of Ceph Octopus. >> >> So for our Ceph cluster running Octopus we had to keep the client version >> on Nautilus until upgrading to Victoria which included a newer version of >> os-brick. >> >> Maybe this is unrelated to your issue but just wanted to put it out there. >> >> Best regards >> Tobias >> >> > On 13 May 2021, at 12:55, ManuParra wrote: >> > >> > Hello Gorka, not yet, let me update cinder configuration, add the >> option, restart cinder and I’ll update the status. >> > Do you recommend other things to try for this cycle? >> > Regards. 
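Since the rbd.py excerpt above shows the driver failing inside the RADOS client setup, one more check is to reproduce that exact connection from inside the cinder-volume container; a sketch only, with conffile and rados_id taken from the cinder.conf quoted in this thread (the keyring is assumed to be found in the default /etc/ceph location):

python3 - <<'EOF'
import rados
# same parameters the RBD driver uses: ceph.conf plus the cinder cephx user
client = rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='cinder')
client.connect(timeout=5)
print('connected, fsid:', client.get_fsid())
client.shutdown()
EOF

If this hangs or raises, the problem sits in the librados/python-rados layer of the container (for example the client version mismatch mentioned earlier) rather than in the Ceph cluster itself.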
>> > >> >> On 13 May 2021, at 09:37, Gorka Eguileor wrote: >> >> >> >>> On 13/05, ManuParra wrote: >> >>> Hi Gorka again, yes, the first thing is to know why you can't connect >> to that host (Ceph is actually set up for HA) so that's the way to do it. I >> tell you this because previously from the beginning of the setup of our >> setup it has always been like that, with that hostname and there has been >> no problem. >> >>> >> >>> As for the errors, the strangest thing is that in Monasca I have not >> found any error log, only warning on “volume service is down. (host: >> rbd:volumes at ceph-rbd)" and info, which is even stranger. >> >> >> >> Have you tried the configuration change I recommended? >> >> >> >> >> >>> >> >>> Regards. >> >>> >> >>>> On 12 May 2021, at 23:34, Gorka Eguileor >> wrote: >> >>>> >> >>>> On 12/05, ManuParra wrote: >> >>>>> Hi Gorka, let me show the cinder config: >> >>>>> >> >>>>> [ceph-rbd] >> >>>>> rbd_ceph_conf = /etc/ceph/ceph.conf >> >>>>> rbd_user = cinder >> >>>>> backend_host = rbd:volumes >> >>>>> rbd_pool = cinder.volumes >> >>>>> volume_backend_name = ceph-rbd >> >>>>> volume_driver = cinder.volume.drivers.rbd.RBDDriver >> >>>>> … >> >>>>> >> >>>>> So, using rbd_exclusive_cinder_pool=True it will be used just for >> volumes? but the log is saying no connection to the backend_host. >> >>>> >> >>>> Hi, >> >>>> >> >>>> Your backend_host doesn't have a valid hostname, please set a proper >> >>>> hostname in that configuration option. >> >>>> >> >>>> Then the next thing you need to have is the cinder-volume service >> >>>> running correctly before making any requests. >> >>>> >> >>>> I would try adding rbd_exclusive_cinder_pool=true then tailing the >> >>>> volume logs, and restarting the service. >> >>>> >> >>>> See if the logs show any ERROR level entries. >> >>>> >> >>>> I would also check the service-list output right after the service is >> >>>> restarted, if it's up then I would check it again after 2 minutes. >> >>>> >> >>>> Cheers, >> >>>> Gorka. >> >>>> >> >>>> >> >>>>> >> >>>>> Regards. >> >>>>> >> >>>>> >> >>>>>> On 12 May 2021, at 11:49, Gorka Eguileor >> wrote: >> >>>>>> >> >>>>>> On 12/05, ManuParra wrote: >> >>>>>>> Thanks, I have restarted the service and I see that after a few >> minutes then cinder-volume service goes down again when I check it with the >> command openstack volume service list. >> >>>>>>> The host/service that contains the cinder-volumes is >> rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come >> from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the >> volumes. I have checked Ceph and the status of everything is correct, no >> errors or warnings. >> >>>>>>> The error I have is that cinder can’t connect to >> rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. >> >>>>>>> Kind regards. >> >>>>>>> >> >>>>>> >> >>>>>> Hi, >> >>>>>> >> >>>>>> You are most likely using an older release, have a high number of >> cinder >> >>>>>> RBD volumes, and have not changed configuration option >> >>>>>> "rbd_exclusive_cinder_pool" from its default "false" value. >> >>>>>> >> >>>>>> Please add to your driver's section in cinder.conf the following: >> >>>>>> >> >>>>>> rbd_exclusive_cinder_pool = true >> >>>>>> >> >>>>>> >> >>>>>> And restart the service. >> >>>>>> >> >>>>>> Cheers, >> >>>>>> Gorka. 
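Putting Gorka's suggestion together with the backend section quoted earlier in the thread, the driver stanza would end up looking roughly like this (a sketch only; keep whatever backend_host value existing volumes were created with, because changing it strands them until they are re-homed):

[ceph-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph-rbd
rbd_pool = cinder.volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
backend_host = rbd:volumes
rbd_exclusive_cinder_pool = true

If backend_host does have to change, existing volumes can be re-pointed afterwards with "cinder-manage volume update_host --currenthost <old> --newhost <new>".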
>> >>>>>> >> >>>>>>>> On 11 May 2021, at 22:30, Eugen Block wrote: >> >>>>>>>> >> >>>>>>>> Hi, >> >>>>>>>> >> >>>>>>>> so restart the volume service;-) >> >>>>>>>> >> >>>>>>>> systemctl restart openstack-cinder-volume.service >> >>>>>>>> >> >>>>>>>> >> >>>>>>>> Zitat von ManuParra : >> >>>>>>>> >> >>>>>>>>> Dear OpenStack community, >> >>>>>>>>> >> >>>>>>>>> I have encountered a problem a few days ago and that is that >> when creating new volumes with: >> >>>>>>>>> >> >>>>>>>>> "openstack volume create --size 20 testmv" >> >>>>>>>>> >> >>>>>>>>> the volume creation status shows an error. If I go to the >> error log detail it indicates: >> >>>>>>>>> >> >>>>>>>>> "Schedule allocate volume: Could not find any available >> weighted backend". >> >>>>>>>>> >> >>>>>>>>> Indeed then I go to the cinder log and it indicates: >> >>>>>>>>> >> >>>>>>>>> "volume service is down - host: rbd:volumes at ceph-rbd”. >> >>>>>>>>> >> >>>>>>>>> I check with: >> >>>>>>>>> >> >>>>>>>>> "openstack volume service list” in which state are the >> services and I see that indeed this happens: >> >>>>>>>>> >> >>>>>>>>> >> >>>>>>>>> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down >> | 2021-04-29T09:48:42.000000 | >> >>>>>>>>> >> >>>>>>>>> And stopped since 2021-04-29 ! >> >>>>>>>>> >> >>>>>>>>> I have checked Ceph (monitors,managers, osds. etc) and there >> are no problems with the Ceph BackEnd, everything is apparently working. >> >>>>>>>>> >> >>>>>>>>> This happened after an uncontrolled outage.So my question is >> how do I restart only cinder-volumes (I also have cinder-backup, >> cinder-scheduler but they are ok). >> >>>>>>>>> >> >>>>>>>>> Thank you very much in advance. Regards. >> >>>>>>>> >> >>>>>>>> >> >>>>>>>> >> >>>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>> >> >>>>>> >> >>>>> >> >>>> >> >>> >> >> >> >> >> > >> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat May 15 17:41:24 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 15 May 2021 19:41:24 +0200 Subject: [stein][neutron] l3 agent error Message-ID: Hello Guys, I've just upgraded from openstack queens to rocky and then to stein on centos 7. In my configuration I have router high availability. After the upgrade and rebooting each controller one by one I get the following errors on all my 3 controllers under /var/log/neutron/l3-agent.log http://paste.openstack.org/show/805407/ If I run openstack router show for for one of uuid in the log: http://paste.openstack.org/show/805408/ Namespace for router in present on all 3 controllers. After the controllers reboot,some routers lost their routing tables, but restarting l3-agent they went ok. Is it possible router ha stopped working? Any idea,please ? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From luke.camilleri at zylacomputing.com Sun May 16 19:36:13 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Sun, 16 May 2021 21:36:13 +0200 Subject: [Octavia][Victoria] No service listening on port 9443 in the amphora instance In-Reply-To: References: <038c74c9-1365-0c08-3b5b-93b4d175dcb3@zylacomputing.com> <326471ef-287b-d937-a174-0b1ccbbd6273@zylacomputing.com> <8ce76cbd-ca2c-033a-5406-5a1557d84302@zylacomputing.com> Message-ID: <7fe049df-098f-1378-d65e-d52817da729d@zylacomputing.com> HI Michael thanks as always for your input, below please find my replies: 1- I do not know why is this status being show as DOWN, the deployment for the entire cloud platform is a manual one (or bare metal installation as sometimes I have seen this being called). There must be a script somewhere that is checking the status of this port. Although I have no apparent issue from this I would like to see how I can get the status of this port in an ACTIVE state so any pointers would be helpful. 2- Agreed and that is in fact how it is setup, thanks. 3- Below please find commands run from amphora (192.168.1.11 is an ubuntu web server configured as a member server): # export HAPROXY_SERVER_ADDR=192.168.1.11 # ip netns exec amphora-haproxy /var/lib/octavia/ping-wrapper.sh # ip netns exec amphora-haproxy echo $? 0 # ip netns exec amphora-haproxy /usr/sbin/ping -q -n -w 1 -c 1 $HAPROXY_SERVER_ADDR PING 192.168.1.11 (192.168.1.11) 56(84) bytes of data. --- 192.168.1.11 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms And yet I still receive the below: (openstack) loadbalancer healthmonitor list | 34e1e2cf-826d-41b4-98b7-7866b680eea6 | hm3-ping | 35a0fa65de1741619709485c5f6d989b | PING | True (openstack) loadbalancer member list pool3 +--------------------------------------+----------+----------------------------------+---------------------+---------------+---------------+------------------+--------+ | id                                   | name     | project_id                       | provisioning_status | address       | protocol_port | operating_status | weight | +--------------------------------------+----------+----------------------------------+---------------------+---------------+---------------+------------------+--------+ | 2ce664e2-c84c-4e71-a903-d5f650b0f0e7 | ubuntu-2 | 35a0fa65de1741619709485c5f6d989b | ACTIVE              | 192.168.1.11  |            80 | ERROR            |      1 | | 376070b6-d290-455f-a718-aa957864e456 | ubuntu-1 | 35a0fa65de1741619709485c5f6d989b | ACTIVE              | 192.168.1.235 |            80 | ERROR            |      1 | +--------------------------------------+----------+----------------------------------+---------------------+---------------+---------------+------------------+--------+ Ping is no rocket science and the amphora interface does not even need any routing to reach the member since they are on the same layer-2 network. Below you can also see the MAC addresses of both members from the amphora: # ip netns exec amphora-haproxy arp -n Address                  HWtype  HWaddress           Flags Mask            Iface 192.168.1.235            ether   fa:16:3e:70:70:8e C                     eth1 192.168.1.11             ether   fa:16:3e:cc:03:05 C                     eth1 Deleting the health-monitor of type ping and adding an HTTP healthcheck on "/" works immediately (which clearly shows reachability to the member nodes). 
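For completeness, the HTTP health monitor referred to here can be created along these lines (pool name and timings follow the examples above; adjust as needed):

openstack loadbalancer healthmonitor create --name hm3-http \
  --delay 5 --timeout 5 --max-retries 3 \
  --type HTTP --url-path / --expected-codes 200 pool3

The --expected-codes option can be omitted, in which case 200 is assumed.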
I agree with you 100% that a ping health-check in this day and age is something that one should not even consider but I want to make sure that I am not excluding a bigger issue here.... Thanks in advance On 14/05/2021 20:12, Michael Johnson wrote: > Hi Luke, > > 1. The "octavia-health-manager listent-port"(s) should be "ACTIVE" in > neutron. Something may have gone wrong in the deployment tooling or in > neutron for those ports. > 2. As for the VIP ports on the amphora, the base port should be > "ACTIVE", but the VIP port we use to store the VIP IP address should > be "DOWN". The "ACTIVE" base ports will list the VIP IP as it's > "allowed-address-pairs" port/ip. > 3. On the issue with the health monitor of type PING, it's rarely used > outside of some of our tests as it's a poor gauge of the health of an > endpoint (https://docs.openstack.org/octavia/latest/user/guides/basic-cookbook.html#other-health-monitors). > That said, it should be working. It's an external test in the amphora > provider that runs "ping/ping6", so it's interesting that you can ping > from the netns successfully. > Inside the amphora network namespace, can you try running the ping > script directly? > export HAPROXY_SERVER_ADDR= > /var/lib/octavia/ping-wrapper.sh > echo $? > An answer of 0 means the ping was successful, 1 a failure. > > Michael > > On Fri, May 14, 2021 at 8:15 AM Luke Camilleri > wrote: >> Hi Michael, thanks as always for the below, I have watched the video and configured all the requirements as shown in those guides. Working great right now. >> >> I have noticed the following points and would like to know if you can give me some feedback please: >> >> In the Octavia project at the networks screen --> lb-mgmt-net --> Ports --> octavia-health-manager-listen-port (the IP bound to the health-manager service) has its status Down. It does not create any sort of issue but was wondering if this was normal behavior? >> Similarly to the above point, in the tenant networks screen, every port that is "Attached Device - Octavia" has its status reported as "Down" ( these are the VIP addresses assigned to the amphora). Just need to confirm that this is normal behaviour >> Creating a health monitor of type ping fails to get the operating status of the nodes and the nodes are in error (horizon) and the amphora reports that there are no backends and hence it is not working (I am using the same backend nodes with another loadbalancer but with an HTTP check and it is working fine. a security group is setup to allow ping from 0.0.0.0/0 and from the amphora-haproxy network namespace on the amphora instance I can ping both nodes without issues ). Below the amphora's haproxy.log >> >> May 14 15:00:50 amphora-9658d9ec-3bf1-407f-a134-86304899c015 haproxy[1984]: Server c0092bf4-d2a2-431f-8b7f-9dc3ace52933:e268db93-2d20-4395-bd6f-f6d835bce769/f04824b7-6fdf-46dc-bc83-b98b3b9f5be0 is DOWN, reason: Socket error, info: "Resource temporarily unavailable", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue. >> >> May 14 15:00:50 amphora-9658d9ec-3bf1-407f-a134-86304899c015 haproxy[1984]: Server c0092bf4-d2a2-431f-8b7f-9dc3ace52933:e268db93-2d20-4395-bd6f-f6d835bce769/f04824b7-6fdf-46dc-bc83-b98b3b9f5be0 is DOWN, reason: Socket error, info: "Resource temporarily unavailable", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue. 
>> >> May 14 15:00:50 amphora-9658d9ec-3bf1-407f-a134-86304899c015 haproxy[1984]: backend c0092bf4-d2a2-431f-8b7f-9dc3ace52933:e268db93-2d20-4395-bd6f-f6d835bce769 has no server available! >> >> May 14 15:00:50 amphora-9658d9ec-3bf1-407f-a134-86304899c015 haproxy[1984]: backend c0092bf4-d2a2-431f-8b7f-9dc3ace52933:e268db93-2d20-4395-bd6f-f6d835bce769 has no server available! >> >> Thanks in advance >> >> On 13/05/2021 18:33, Michael Johnson wrote: >> >> You are correct that two IPs are being allocated for the VIP, one is a >> secondary IP which neutron implements as an "allowed address pairs" >> port. We do this to allow failover of the amphora instance should nova >> fail the service VM. We hold the VIP IP in a special port so the IP is >> not lost while we rebuild the service VM. >> If you are using the active/standby topology (or an Octavia flavor >> with active/standby enabled), this failover is accelerated with nearly >> no visible impact to the flows through the load balancer. >> Active/Standby has been an Octavia feature since the Mitaka release. I >> gave a demo of it at the Tokoyo summit here: >> https://youtu.be/8n7FGhtOiXk?t=1420 >> >> You can enable active/standby as the default by setting the >> "loadbalancer_topology" setting in the configuration file >> (https://docs.openstack.org/octavia/latest/configuration/configref.html#controller_worker.loadbalancer_topology) >> or by creating an Octavia flavor that creates the load balancer with >> an active/standby topology >> (https://docs.openstack.org/octavia/latest/admin/flavors.html). >> >> Michael >> >> On Thu, May 13, 2021 at 4:23 AM Luke Camilleri >> wrote: >> >> HI Michael, thanks a lot for the below information it is very helpful. I >> ended up setting the o-hm0 interface statically in the >> octavia-interface.sh script which is called by the service and also >> added a delay to make sure that the bridges are up before trying to >> create a veth pair and connect the endpoints. >> >> Also I edited the unit section of the health-manager service and at the >> after option I added octavia-interface.service or else on startup the >> health manager will not bind to the lb-mgmt-net since it would not be up yet >> >> The floating IPs part was a bit tricky until I understood what was >> really going on with the VIP concept and how better and more flexible it >> is to set the VIP on the tenant network and then associate with public >> ip to the VIP. >> >> With this being said I noticed that 2 IPs are being assigned to the >> amphora instance and that the actual port assigned to the instance has >> an allowed pair with the VIP port. I checked online and it seems that >> there is an active/standby project going on with VRRP/keepalived and in >> fact the keepalived daemon is running in the amphora instance. >> >> Am I on the right track with the active/standby feature and if so do you >> have any installation/project links to share please so that I can test it? >> >> Regards >> >> On 12/05/2021 08:37, Michael Johnson wrote: >> >> Answers inline below. >> >> Michael >> >> On Mon, May 10, 2021 at 5:15 PM Luke Camilleri >> wrote: >> >> Hi Michael and thanks a lot for the detailed answer below. 
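As a follow-up to the active/standby discussion above, two hedged sketches of how it can be enabled. Deployment-wide, in octavia.conf:

[controller_worker]
loadbalancer_topology = ACTIVE_STANDBY

Or per load balancer through an Octavia flavor (names here are only examples):

openstack loadbalancer flavorprofile create --name amphora-ha \
  --provider amphora --flavor-data '{"loadbalancer_topology": "ACTIVE_STANDBY"}'
openstack loadbalancer flavor create --name amphora-ha \
  --flavorprofile amphora-ha --enable
openstack loadbalancer create --name lb-ha --flavor amphora-ha --vip-subnet-id <subnet-id>

Either way this only affects load balancers created afterwards; an existing single-amphora load balancer is not converted in place.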
>> >> I believe I have got most of this sorted out apart from some small issues below: >> >> If the o-hm0 interface gets the IP information from the DHCP server setup by neutron for the lb-mgmt-net, then the management node will always have 2 default gateways and this will bring along issues, the same DHCP settings when deployed to the amphora do not have the same issue since the amphora only has 1 IP assigned on the lb-mgmt-net. Can you please confirm this? >> >> The amphorae do not have issues with DHCP and gateways as we control >> the DHCP client configuration inside the amphora. It does only have >> one IP on the lb-mgmt-net, it will honor gateways provided by neutron >> for the lb-mgmt-net traffic, but a gateway is not required on the >> lb-mgmt-network unless you are routing the lb-mgmt-net traffic across >> subnets. >> >> How does the amphora know where to locate the worker and housekeeping processes or does the traffic originate from the services instead? Maybe the addresses are "injected" from the config file? >> >> The worker and housekeeping processes only create connections to the >> amphora, they do not receive connections from them. The amphora send a >> heartbeat packet to the health manager endpoints every ten seconds by >> default. The list of valid health manager endpoints is included in the >> amphora agent configuration file that is injected into the service VM >> at boot time. It can be updated using the Octavia admin API for >> refreshing the amphora agent configuration. >> >> Can you please confirm if the same floating IP concept runs from public (external) IP to the private (tenant) and from private to lb-mgmt-net please? >> >> Octavia does not use floating IPs. Users can create and assign >> floating IPs via neutron if they would like, but they are not >> necessary. Octavia VIPs can be created directly on neutron "external" >> networks, avoiding the NAT overhead of floating IPs. >> There is no practical reason to assign a floating IP to a port on the >> lb-mgmt-net as tenant traffic is never on or accessible from that >> network. >> >> Thanks in advance for any feedback >> >> On 06/05/2021 22:46, Michael Johnson wrote: >> >> Hi Luke, >> >> 1. I agree that DHCP is technically unnecessary for the o-hm0 >> interface if you can manage your address allocation on the network you >> are using for the lb-mgmt-net. >> I don't have detailed information about the Ubuntu install >> instructions, but I suspect it was done to simplify the IPAM to be >> managed by whatever is providing DHCP on the lb-mgmt-net provided (be >> it neutron or some other resource on a provider network). >> The lb-mgmt-net is simply a neutron network that the amphora >> management address is on. It is routable and does not require external >> access. The only tricky part to it is the worker, health manager, and >> housekeeping processes need to be reachable from the amphora, and the >> controllers need to reach the amphora over the network(s). There are >> many ways to accomplish this. >> >> 2. See my above answer. Fundamentally the lb-mgmt-net is just a >> neutron network that nova can use to attach an interface to the >> amphora instances for command and control traffic. As long as the >> controllers can reach TCP 9433 on the amphora, and the amphora can >> send UDP 5555 back to the health manager endpoints, it will work fine. >> >> 3. 
Octavia, with the amphora driver, does not require any special >> configuration in Neutron (beyond the advanced services RBAC policy >> being available for the neutron service account used in your octavia >> configuration file). The neutron_lbaas.conf and services_lbaas.conf >> are legacy configuration files/settings that were used for >> neutron-lbaas which is now end of life. See the wiki page for >> information on the deprecation of neutron-lbaas: >> https://wiki.openstack.org/wiki/Neutron/LBaaS/Deprecation. >> >> Michael >> >> On Thu, May 6, 2021 at 12:30 PM Luke Camilleri >> wrote: >> >> Hi Michael and thanks a lot for your help on this, after following your >> steps the agent got deployed successfully in the amphora-image. >> >> I have some other queries that I would like to ask mainly related to the >> health-manager/load-balancer network setup and IP assignment. First of >> all let me point out that I am using a manual installation process, and >> it might help others to understand the underlying infrastructure >> required to make this component work as expected. >> >> 1- The installation procedure contains this step: >> >> $ sudo cp octavia/etc/dhcp/dhclient.conf /etc/dhcp/octavia >> >> which is later on called to assign the IP to the o-hm0 interface which >> is connected to the lb-management network as shown below: >> >> $ sudo dhclient -v o-hm0 -cf /etc/dhcp/octavia >> >> Apart from having a dhcp config for a single IP seems a bit of an >> overkill, using these steps is injecting an additional routing table >> into the default namespace as shown below in my case: >> >> # route -n >> Kernel IP routing table >> Destination Gateway Genmask Flags Metric Ref Use >> Iface >> 0.0.0.0 172.16.0.1 0.0.0.0 UG 0 0 0 o-hm0 >> 0.0.0.0 10.X.X.1 0.0.0.0 UG 100 0 0 ensX >> 10.X.X.0 0.0.0.0 255.255.255.0 U 100 0 0 ensX >> 169.254.169.254 172.16.0.100 255.255.255.255 UGH 0 0 0 o-hm0 >> 172.16.0.0 0.0.0.0 255.240.0.0 U 0 0 0 o-hm0 >> >> Since the load-balancer management network does not need any external >> connectivity (but only communication between health-manager service and >> amphora-agent), why is a gateway required and why isn't the IP address >> allocated as part of the interface creation script which is called when >> the service is started or stopped (example below)? >> >> --- >> >> #!/bin/bash >> >> set -ex >> >> MAC=$MGMT_PORT_MAC >> BRNAME=$BRNAME >> >> if [ "$1" == "start" ]; then >> ip link add o-hm0 type veth peer name o-bhm0 >> brctl addif $BRNAME o-bhm0 >> ip link set o-bhm0 up >> ip link set dev o-hm0 address $MAC >> *** ip addr add 172.16.0.2/12 dev o-hm0 >> ***ip link set o-hm0 mtu 1500 >> ip link set o-hm0 up >> iptables -I INPUT -i o-hm0 -p udp --dport 5555 -j ACCEPT >> elif [ "$1" == "stop" ]; then >> ip link del o-hm0 >> else >> brctl show $BRNAME >> ip a s dev o-hm0 >> fi >> >> --- >> >> 2- Is there a possibility to specify a fixed vlan outside of tenant >> range for the load balancer management network? >> >> 3- Are the configuration changes required only in neutron.conf or also >> in additional config files like neutron_lbaas.conf and >> services_lbaas.conf, similar to the vpnaas configuration? >> >> Thanks in advance for any assistance, but its like putting together a >> puzzle of information :-) >> >> On 05/05/2021 20:25, Michael Johnson wrote: >> >> Hi Luke. >> >> Yes, the amphora-agent will listen on 9443 in the amphorae instances. 
>> It uses TLS mutual authentication, so you can get a TLS response, but >> it will not let you into the API without a valid certificate. A simple >> "openssl s_client" is usually enough to prove that it is listening and >> requesting the client certificate. >> >> I can't talk to the "openstack-octavia-diskimage-create" package you >> found in centos, but I can discuss how to build an amphora image using >> the OpenStack tools. >> >> If you get Octavia from git or via a release tarball, we provide a >> script to build the amphora image. This is how we build our images for >> the testing gates, etc. and is the recommended way (at least from the >> OpenStack Octavia community) to create amphora images. >> >> https://opendev.org/openstack/octavia/src/branch/master/diskimage-create >> >> For CentOS 8, the command would be: >> >> diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 (3 >> is the minimum disk size for centos images, you may want more if you >> are not offloading logs) >> >> I just did a run on a fresh centos 8 instance: >> git clone https://opendev.org/openstack/octavia >> python3 -m venv dib >> source dib/bin/activate >> pip3 install diskimage-builder PyYAML six >> sudo dnf install yum-utils >> ./diskimage-create.sh -g stable/victoria -i centos-minimal -d 8 -s 3 >> >> This built an image. >> >> Off and on we have had issues building CentOS images due to issues in >> the tools we rely on. If you run into issues with this image, drop us >> a note back. >> >> Michael >> >> On Wed, May 5, 2021 at 9:37 AM Luke Camilleri >> wrote: >> >> Hi there, i am trying to get Octavia running on a Victoria deployment on >> CentOS 8. It was a bit rough getting to the point to launch an instance >> mainly due to the load-balancer management network and the lack of >> documentation >> (https://docs.openstack.org/octavia/victoria/install/install.html) to >> deploy this oN CentOS. I will try to fix this once I have my deployment >> up and running to help others on the way installing and configuring this :-) >> >> At this point a LB can be launched by the tenant and the instance is >> spawned in the Octavia project and I can ping and SSH into the amphora >> instance from the Octavia node where the octavia-health-manager service >> is running using the IP within the same subnet of the amphoras >> (172.16.0.0/12). >> >> Unfortunately I keep on getting these errors in the log file of the >> worker log (/var/log/octavia/worker.log): >> >> 2021-05-05 01:54:49.368 14521 WARNING >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Could not connect >> to instance. Retrying.: requests.exceptions.ConnectionError: >> HTTPSConnectionPool(host='172.16.4.46', p >> ort=9443): Max retries exceeded with url: // (Caused by >> NewConnectionError('> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] >> Connection ref >> used',)) >> >> 2021-05-05 01:54:54.374 14521 ERROR >> octavia.amphorae.drivers.haproxy.rest_api_driver [-] Connection retries >> (currently set to 120) exhausted. The amphora is unavailable. Reason: >> HTTPSConnectionPool(host='172.16 >> .4.46', port=9443): Max retries exceeded with url: // (Caused by >> NewConnectionError('> at 0x7f83e0181550>: Failed to establish a new connection: [Errno 111] Conne >> ction refused',)) >> >> 2021-05-05 01:54:54.374 14521 ERROR >> octavia.controller.worker.v1.tasks.amphora_driver_tasks [-] Amphora >> compute instance failed to become reachable. 
This either means the >> compute driver failed to fully boot the >> instance inside the timeout interval or the instance is not reachable >> via the lb-mgmt-net.: >> octavia.amphorae.driver_exceptions.exceptions.TimeOutException: >> contacting the amphora timed out >> >> obviously the instance is deleted then and the task fails from the >> tenant's perspective. >> >> The main issue here is that there is no service running on port 9443 on >> the amphora instance. I am assuming that this is in fact the >> amphora-agent service that is running on the instance which should be >> listening on this port 9443 but the service does not seem to be up or >> not installed at all. >> >> To create the image I have installed the CentOS package >> "openstack-octavia-diskimage-create" which provides the utility >> disk-image-create but from what I can conclude the amphora-agent is not >> being installed (thought this was done automatically by default :-( ) >> >> Can anyone let me know if the amphora-agent is what gets queried on port >> 9443 ? >> >> If the agent is not installed/injected by default when building the >> amphora image? >> >> The command to inject the amphora-agent into the amphora image when >> using the disk-image-create command? >> >> Thanks in advance for any assistance >> >> From ignaziocassano at gmail.com Sat May 15 17:13:27 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 15 May 2021 19:13:27 +0200 Subject: [neutron][stein] l3 agent errors Message-ID: Hello Guys, I've just upgraded from openstack queens to rocky and then to stein on centos 7. In my configuration I have router high availability. After the upgrade I get the following errors on all my 3 controllers under /var/log/neutron/l3-agent.log 2021-05-15 19:04:28.599 13219 ERROR neutron.agent.linux.external_process [-] ip_monitor for router with uuid 8c137f17-52e6-47f9-bcaa-0eac94a9acd7 not found. The process should not have died 2021-05-15 19:04:28.600 13219 WARNING neutron.agent.linux.external_process [-] Respawning ip_monitor for uuid 8c137f17-52e6-47f9-bcaa-0eac94a9acd7 2021-05-15 19:04:29.905 13219 ERROR neutron.agent.linux.external_process [-] ip_monitor for router with uuid 303c88e4-1c94-4ebc-b592-91063b371db9 not found. The process should not have died 2021-05-15 19:04:29.905 13219 WARNING neutron.agent.linux.external_process [-] Respawning ip_monitor for uuid 303c88e4-1c94-4ebc-b592-91063b371db9 2021-05-15 19:04:31.267 13219 ERROR neutron.agent.linux.external_process [-] ip_monitor for router with uuid 103f3588-a775-4bca-b7a5-093a05068f64 not found. The process should not have died 2021-05-15 19:04:31.268 13219 WARNING neutron.agent.linux.external_process [-] Respawning ip_monitor for uuid 103f3588-a775-4bca-b7a5-093a05068f64 2021-05-15 19:04:32.684 13219 ERROR neutron.agent.linux.external_process [-] ip_monitor for router with uuid c6136e08-16a3-4d56-9440-59a7fe90d0b0 not found. The process should not have died 2021-05-15 19:04:32.685 13219 WARNING neutron.agent.linux.external_process [-] Respawning ip_monitor for uuid c6136e08-16a3-4d56-9440-59a7fe90d0b0 2021-05-15 19:04:34.140 13219 ERROR neutron.agent.linux.external_process [-] ip_monitor for router with uuid 9abaad3a-2439-4036-8611-5f93a8a3501f not found. 
The process should not have died 2021-05-15 19:04:34.140 13219 WARNING neutron.agent.linux.external_process [-] Respawning ip_monitor for uuid 9abaad3a-2439-4036-8611-5f93a8a3501f Getting for example the last uuid if I execute "openstack router show openstack router show 9abaad3a-2439-4036-8611-5f93a8a3501f +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field | Value | +-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | nova | | created_at | 2020-12-17T10:11:24Z | | description | | | distributed | False | | external_gateway_info | None | | flavor_id | None | | ha | True | | id | 9abaad3a-2439-4036-8611-5f93a8a3501f | | interfaces_info | [{"subnet_id": "a60b631d-d13c-4d02-a51b-2c86c0b38614", "ip_address": "192.168.201.1", "port_id": "08120fb0-be86-47a0-afd6-932801a2d194"}, {"subnet_id": "0ffcb170-66bb-4e65-9df0-80052deb1299", "ip_address": "169.254.192.11", "port_id": "3aa676aa-91cc-43c4-87ea-3b93340950ad"}, {"subnet_id": "0ffcb170-66bb-4e65-9df0-80052deb1299", "ip_address": "169.254.192.21", "port_id": "d5c0cff2-363d-470a-b77c-155d06e2654b"}, {"subnet_id": "0ffcb170-66bb-4e65-9df0-80052deb1299", "ip_address": "169.254.192.3", "port_id": "dab50b1a-d720-410a-b358-20c53a097bd7"}, {"subnet_id": "ae117b0e-fce9-42ca-88b3-a32ba239d795", "ip_address": "192.168.96.59", "port_id": "e49945aa-3e71-4f38-8c76-1c246cf18ff6"}] | | location | Munch({'project': Munch({'domain_name': None, 'domain_id': None, 'name': None, 'id': u'07573f587ceb4d9da007cec4ebc5dcf8'}), 'cloud': '', 'region_name': '', 'zone': None}) | | name | InternetGateway-a820a62fae-avz714-3-router | | project_id | 07573f587ceb4d9da007cec4ebc5dcf8 | | revision_number | 11 | | routes | destination='0.0.0.0/0', gateway='192.168.96.60' | | | destination='192.168.200.0/24', gateway='192.168.96.60' | | | destination='192.168.202.0/24', gateway='192.168.96.58' | | | destination='192.168.203.0/24', gateway='192.168.96.57' | | status | ACTIVE | | tags | | | updated_at | 2020-12-18T15:22:57Z | 
+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ Namespace for router exists in all 3 controllers. Any idea, please ? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon May 17 01:06:04 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 16 May 2021 20:06:04 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 14th May, 21: Reading: 5 min Message-ID: <17977dbcf2a.11f3cae81605211.6321792316324805574@ghanshyammann.com> Hello Everyone, Here is last week's summary of the Technical Committee activities. 1. What we completed this week: ========================= Project updates: ------------------- ** puppet-glare is retired[1] Other updates: ------------------ ** Merged the 'Y' release naming process schedule[2]. 2. TC Meetings: ============ * TC held this week meeting on Thursday; you can find the full meeting logs in the below link: - http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-05-13-15.00.log.html * We will have next week's meeting on May 20th, Thursday 15:00 UTC[3]. 3. Activities In progress: ================== TC Tracker for Xena cycle ------------------------------ TC is using the etherpad[4] for Xena cycle working item. We will be checking and updating the status biweekly in the same etherpad. Open Reviews ----------------- * Two open reviews for ongoing activities[5]. Starting the 'Y' release naming process --------------------------------------------- * Y release naming process is started[6]. Nomination is open until June 10th feel free to propose names in below wiki ** https://wiki.openstack.org/wiki/Release_Naming/Y_Proposals Others --------- * Replacing ATC terminology with AC[7]. As UC is merged into TC, this is to include the AUC into ATC so that they can be eligible for TC election voting. We are having a good amount of discussion on Gerrit, feel free to review the patch if you have any points regarding this. 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[8]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [9] 3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [10] 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. 
[1] https://review.opendev.org/c/openstack/governance/+/790582
[2] https://review.opendev.org/c/openstack/governance/+/789385
[3] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[4] https://etherpad.opendev.org/p/tc-xena-tracker
[5] https://review.opendev.org/q/project:openstack/governance+status:open
[6] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022383.html
[7] https://review.opendev.org/c/openstack/governance/+/790092
[8] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[9] http://eavesdrop.openstack.org/#Technical_Committee_Meeting
[10] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours

-gmann

From ignaziocassano at gmail.com Mon May 17 05:43:20 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Mon, 17 May 2021 07:43:20 +0200
Subject: [stein][neutron] l3 agent error
In-Reply-To:
References:
Message-ID:

Hello All, the l3 agent respawning errors cause neutron to fill the
controller's memory; the controller then stops responding and is fenced by
the others. The HA routers fail over to another controller, which fills its
memory in turn, and so on.
I stopped the neutron services and cleaned the HA directory under
/var/lib/neutron.
After restarting the neutron services the respawning errors disappeared, but
I had to re-create some router static routes (not all of them).
Could this be a bug?
Ignazio

On Sat, 15 May 2021 at 19:41, Ignazio Cassano wrote:
> Hello Guys,
> I've just upgraded from openstack queens to rocky and then to stein on
> centos 7.
> In my configuration I have router high availability.
> After the upgrade and rebooting each controller one by one I get the
> following errors on all my 3 controllers under /var/log/neutron/l3-agent.log
>
> http://paste.openstack.org/show/805407/
>
> If I run openstack router show for for one of uuid in the log:
> http://paste.openstack.org/show/805408/
>
> Namespace for router in present on all 3 controllers.
> After the controllers reboot,some routers lost their routing tables, but
> restarting l3-agent they went ok.
> Is it possible router ha stopped working?
> Any idea,please ?
> Ignazio
>

From bshewale at redhat.com Mon May 17 05:50:44 2021
From: bshewale at redhat.com (Bhagyashri Shewale)
Date: Mon, 17 May 2021 11:20:44 +0530
Subject: [tripleo] TripleO CI Summary: Unified Sprint 43
Message-ID:

Greetings,

The TripleO CI team has just completed **Unified Sprint 43** (Apr 8 through Apr 28, 2021).
The following is a summary of completed work during this sprint cycle: - Successfully deployed promoter server for all c8 and c7 releases using new code (next gen) promoter code - Continue to adopt new changes and improvements in the promoter : - https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/32135 - https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/32058 - Reduce our upstream resource usage with zuul jobs including optimizations across the zuul layouts in all the tripleo-* repos - https://review.opendev.org/q/topic:tripleo-ci-reduce - Removing Rocky jobs (EOL) - https://review.opendev.org/q/topic:%22tripleo-ci-reduce-rocky%22 - https://review.rdoproject.org/r/q/topic:%22tripleo-ci-reduce-rocky%22 - Content provider jobs across all branches - Wallaby branching -Component / integration rdo pipelines for wallaby - https://hackmd.io/2sxlx1XzTa-Te47_zLv42Q - Repo sanity check as part of the integration lines - https://review.opendev.org/c/openstack/tripleo-quickstart-extras/+/785040 - Correct the tagging in copy-quay tool - the copy and tagging was one single operation, now copy command is copying every hour all the hashes available and tag is running separately now collecting the current-tripleo tag directly from the delorean api. - copy-quay script: https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/32478 - Added all the monitoring work items for wallaby in Grafana: - https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/32890 - https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/33125 - https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/33238 - Python version of cloud infra cleanup script - https://review.rdoproject.org/r/c/rdo-infra/ci-config/+/31410 The planned work for the next sprint and leftover previous sprint work as following: - Deploy the promoter server for upstream promotions on vexxhost - Dependency Pipeline updates: - Update RHEL_U_Next to handle RHEL8.4 + ct3.0 + rhos-release scenario - elastic-recheck containerization - https://hackmd.io/dmxF-brbS-yg7tkFB_kxXQ - Openstack health for tripleo - https://hackmd.io/HQ5hyGAOSuG44Le2x6YzUw - Tripleo-repos spec and implementation - Adds tripleo-get-hash module get tripleo-ci hash info from tag - https://review.opendev.org/c/openstack/tripleo-ci/+/784392 - https://hackmd.io/v2jCX9RwSeuP8EEFDHRa8g?view - https://review.opendev.org/c/openstack/tripleo-specs/+/772442 - Tempest skiplist: Started to work on tempest-skiplist allowed list of tests (no patches yet) - Patrole stable release- Started to work on it - Ensure pip installs do not impact the tripleo deployment - https://review.opendev.org/q/topic:%22tqe_requires_cleanup%22 The Ruck and Rover for this sprint are Amol Kahat(akahat) and Pooja Jadhav(pojadhav). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes to be tracked in hackmd. Thanks, Bhagyashri Shewale -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sebastian.luna.valero at gmail.com Mon May 17 06:04:09 2021 From: sebastian.luna.valero at gmail.com (Sebastian Luna Valero) Date: Mon, 17 May 2021 08:04:09 +0200 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> <20210512094943.nfttmyxoss3zut2n@localhost> <90542A09-3A7D-4FE2-83FD-10D46CCEF5A2@iaa.es> <20210512213446.7c222mlcdwxiosly@localhost> <0292176B-BE6D-448E-8948-EE10300E2520@iaa.es> <20210513073722.x3z3qkcpvg5am6ia@localhost> <771F27B8-6C13-4F04-85D3-331E2AF7D89F@binero.com> Message-ID: Thanks, Laurent. Long story short, we have been able to bring the "cinder-volume" service back up. We restarted the "cinder-volume" and "cinder-scheduler" services with "debug=True", got back the same debug message: 2021-05-15 23:15:27.091 31 DEBUG cinder.volume.drivers.rbd [req-f43e30ae-2bdc-4690-9c1b-3e58081fdc9e - - - - -] connecting to cinder at ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431 Then, I had a look at the docs looking for "timeout" configuration options: https://docs.openstack.org/cinder/train/configuration/block-storage/drivers/ceph-rbd-volume-driver.html#driver-options "rados_connect_timeout = -1; (Integer) Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used." I added it to the "cinder.conf" file for the "cinder-volume" service with: "rados_connect_timeout=15". Before this change the "cinder-volume" logs ended with this message: 2021-05-15 23:02:48.821 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Starting volume driver RBDDriver (1.2.0) After the change: 2021-05-15 23:02:48.821 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Starting volume driver RBDDriver (1.2.0) 2021-05-15 23:04:23.180 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Driver initialization completed successfully. 2021-05-15 23:04:23.190 31 INFO cinder.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Initiating service 12 cleanup 2021-05-15 23:04:23.196 31 INFO cinder.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Service 12 cleanup completed. 2021-05-15 23:04:23.315 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Initializing RPC dependent components of volume driver RBDDriver (1.2.0) 2021-05-15 23:05:10.381 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Driver post RPC initialization completed successfully. And now the service is reported as "up" in "openstack volume service list" and we can successfully create Ceph volumes now. Many will do more validation tests today to confirm. So it looks like the "cinder-volume" service didn't start up properly in the first place and that's why the service was "down". Why adding "rados_connect_timeout=15" to cinder.conf solved the issue? I honestly don't know and it was a matter of luck to try this out. If anyone knows the reason, we would love to know more. Thank you very much again for your kind help! Best regards, Sebastian On Sat, 15 May 2021 at 19:40, Laurent Dumont wrote: > That is a bit strange. I don't use the Ceph backend so I don't know any > magic tricks. > > - I'm surprised that the Debug logging level doesn't add anything > else. 
Is there any other lines besides the "connecting" one? > - Can we narrow down the port/IP destination for the Ceph RBD traffic? > - Can we failover the cinder-volume service to another controller and > check the status of the volume service? > - Did the power outage impact the Ceph cluster + network gear + all > the controllers? > - Does the content of /etc/ceph/ceph.conf appear to be valid inside > the container? > > Looking at the code - > https://github.com/openstack/cinder/blob/stable/train/cinder/volume/drivers/rbd.py#L432 > > It should raise an exception if there is a timeout when the connection > client is built. > > except self.rados.Error: > msg = _("Error connecting to ceph cluster.") > LOG.exception(msg) > client.shutdown() > raise exception.VolumeBackendAPIException(data=msg) > > On Sat, May 15, 2021 at 4:16 AM Sebastian Luna Valero < > sebastian.luna.valero at gmail.com> wrote: > >> >> Hi All, >> >> Thanks for your inputs so far. I am also trying to help Manu with this >> issue. >> >> The "cinder-volume" service was working properly with the existing >> configuration. However, after a power outage the service is no longer >> reported as "up". >> >> Looking at the source code, the service status is reported as "down" by >> "cinder-scheduler" in here: >> >> >> https://github.com/openstack/cinder/blob/stable/train/cinder/scheduler/host_manager.py#L618 >> >> With message: "WARNING cinder.scheduler.host_manager [req-<>- default >> default] volume service is down. (host: rbd:volumes at ceph-rbd)" >> >> I printed out the "service" tuple >> https://github.com/openstack/cinder/blob/stable/train/cinder/scheduler/host_manager.py#L615 >> and we get: >> >> "2021-05-15 09:57:24.918 7 WARNING cinder.scheduler.host_manager [<> - >> default default] >> Service(active_backend_id=None,availability_zone='nova',binary='cinder-volume',cluster=,cluster_name=None,created_at=2020-06-12T07:53:42Z,deleted=False,deleted_at=None,disabled=False,disabled_reason=None,frozen=False,host='rbd:volumes at ceph-rbd >> ',id=12,modified_at=None,object_current_version='1.38',replication_status='disabled',report_count=8067424,rpc_current_version='3.16',topic='cinder-volume',updated_at=2021-05-12T15:37:52Z,uuid='604668e8-c2e7-46ed-a2b8-086e588079ac')" >> >> Cinder is configured with a Ceph RBD backend, as explained in >> https://github.com/openstack/kolla-ansible/blob/stable/train/doc/source/reference/storage/external-ceph-guide.rst#cinder >> >> That's where the "backend_host=rbd:volumes" configuration is coming from. >> >> We are using 3 controller nodes for OpenStack and 3 monitor nodes for >> Ceph. >> >> The Ceph cluster doesn't report any error. The "cinder-volume" containers >> don't report any error. Moreover, when we go inside the "cinder-volume" >> container we are able to list existing volumes with: >> >> rbd -p cinder.volumes --id cinder -k /etc/ceph/ceph.client.cinder.keyring >> ls >> >> So the connection to the Ceph cluster works. >> >> Why is "cinder-scheduler" reporting the that the backend Ceph cluster is >> down? >> >> Many thanks, >> Sebastian >> >> >> On Thu, 13 May 2021 at 13:12, Tobias Urdin >> wrote: >> >>> Hello, >>> >>> I just saw that you are running Ceph Octopus with Train release and >>> wanted to let you know that we saw issues with the os-brick version shipped >>> with Train not supporting client version of Ceph Octopus. >>> >>> So for our Ceph cluster running Octopus we had to keep the client >>> version on Nautilus until upgrading to Victoria which included a newer >>> version of os-brick. 
>>> >>> Maybe this is unrelated to your issue but just wanted to put it out >>> there. >>> >>> Best regards >>> Tobias >>> >>> > On 13 May 2021, at 12:55, ManuParra wrote: >>> > >>> > Hello Gorka, not yet, let me update cinder configuration, add the >>> option, restart cinder and I’ll update the status. >>> > Do you recommend other things to try for this cycle? >>> > Regards. >>> > >>> >> On 13 May 2021, at 09:37, Gorka Eguileor wrote: >>> >> >>> >>> On 13/05, ManuParra wrote: >>> >>> Hi Gorka again, yes, the first thing is to know why you can't >>> connect to that host (Ceph is actually set up for HA) so that's the way to >>> do it. I tell you this because previously from the beginning of the setup >>> of our setup it has always been like that, with that hostname and there has >>> been no problem. >>> >>> >>> >>> As for the errors, the strangest thing is that in Monasca I have not >>> found any error log, only warning on “volume service is down. (host: >>> rbd:volumes at ceph-rbd)" and info, which is even stranger. >>> >> >>> >> Have you tried the configuration change I recommended? >>> >> >>> >> >>> >>> >>> >>> Regards. >>> >>> >>> >>>> On 12 May 2021, at 23:34, Gorka Eguileor >>> wrote: >>> >>>> >>> >>>> On 12/05, ManuParra wrote: >>> >>>>> Hi Gorka, let me show the cinder config: >>> >>>>> >>> >>>>> [ceph-rbd] >>> >>>>> rbd_ceph_conf = /etc/ceph/ceph.conf >>> >>>>> rbd_user = cinder >>> >>>>> backend_host = rbd:volumes >>> >>>>> rbd_pool = cinder.volumes >>> >>>>> volume_backend_name = ceph-rbd >>> >>>>> volume_driver = cinder.volume.drivers.rbd.RBDDriver >>> >>>>> … >>> >>>>> >>> >>>>> So, using rbd_exclusive_cinder_pool=True it will be used just for >>> volumes? but the log is saying no connection to the backend_host. >>> >>>> >>> >>>> Hi, >>> >>>> >>> >>>> Your backend_host doesn't have a valid hostname, please set a proper >>> >>>> hostname in that configuration option. >>> >>>> >>> >>>> Then the next thing you need to have is the cinder-volume service >>> >>>> running correctly before making any requests. >>> >>>> >>> >>>> I would try adding rbd_exclusive_cinder_pool=true then tailing the >>> >>>> volume logs, and restarting the service. >>> >>>> >>> >>>> See if the logs show any ERROR level entries. >>> >>>> >>> >>>> I would also check the service-list output right after the service >>> is >>> >>>> restarted, if it's up then I would check it again after 2 minutes. >>> >>>> >>> >>>> Cheers, >>> >>>> Gorka. >>> >>>> >>> >>>> >>> >>>>> >>> >>>>> Regards. >>> >>>>> >>> >>>>> >>> >>>>>> On 12 May 2021, at 11:49, Gorka Eguileor >>> wrote: >>> >>>>>> >>> >>>>>> On 12/05, ManuParra wrote: >>> >>>>>>> Thanks, I have restarted the service and I see that after a few >>> minutes then cinder-volume service goes down again when I check it with the >>> command openstack volume service list. >>> >>>>>>> The host/service that contains the cinder-volumes is >>> rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come >>> from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the >>> volumes. I have checked Ceph and the status of everything is correct, no >>> errors or warnings. >>> >>>>>>> The error I have is that cinder can’t connect to >>> rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. >>> >>>>>>> Kind regards. 
>>> >>>>>>> >>> >>>>>> >>> >>>>>> Hi, >>> >>>>>> >>> >>>>>> You are most likely using an older release, have a high number of >>> cinder >>> >>>>>> RBD volumes, and have not changed configuration option >>> >>>>>> "rbd_exclusive_cinder_pool" from its default "false" value. >>> >>>>>> >>> >>>>>> Please add to your driver's section in cinder.conf the following: >>> >>>>>> >>> >>>>>> rbd_exclusive_cinder_pool = true >>> >>>>>> >>> >>>>>> >>> >>>>>> And restart the service. >>> >>>>>> >>> >>>>>> Cheers, >>> >>>>>> Gorka. >>> >>>>>> >>> >>>>>>>> On 11 May 2021, at 22:30, Eugen Block wrote: >>> >>>>>>>> >>> >>>>>>>> Hi, >>> >>>>>>>> >>> >>>>>>>> so restart the volume service;-) >>> >>>>>>>> >>> >>>>>>>> systemctl restart openstack-cinder-volume.service >>> >>>>>>>> >>> >>>>>>>> >>> >>>>>>>> Zitat von ManuParra : >>> >>>>>>>> >>> >>>>>>>>> Dear OpenStack community, >>> >>>>>>>>> >>> >>>>>>>>> I have encountered a problem a few days ago and that is that >>> when creating new volumes with: >>> >>>>>>>>> >>> >>>>>>>>> "openstack volume create --size 20 testmv" >>> >>>>>>>>> >>> >>>>>>>>> the volume creation status shows an error. If I go to the >>> error log detail it indicates: >>> >>>>>>>>> >>> >>>>>>>>> "Schedule allocate volume: Could not find any available >>> weighted backend". >>> >>>>>>>>> >>> >>>>>>>>> Indeed then I go to the cinder log and it indicates: >>> >>>>>>>>> >>> >>>>>>>>> "volume service is down - host: rbd:volumes at ceph-rbd”. >>> >>>>>>>>> >>> >>>>>>>>> I check with: >>> >>>>>>>>> >>> >>>>>>>>> "openstack volume service list” in which state are the >>> services and I see that indeed this happens: >>> >>>>>>>>> >>> >>>>>>>>> >>> >>>>>>>>> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | >>> down | 2021-04-29T09:48:42.000000 | >>> >>>>>>>>> >>> >>>>>>>>> And stopped since 2021-04-29 ! >>> >>>>>>>>> >>> >>>>>>>>> I have checked Ceph (monitors,managers, osds. etc) and there >>> are no problems with the Ceph BackEnd, everything is apparently working. >>> >>>>>>>>> >>> >>>>>>>>> This happened after an uncontrolled outage.So my question is >>> how do I restart only cinder-volumes (I also have cinder-backup, >>> cinder-scheduler but they are ok). >>> >>>>>>>>> >>> >>>>>>>>> Thank you very much in advance. Regards. >>> >>>>>>>> >>> >>>>>>>> >>> >>>>>>>> >>> >>>>>>>> >>> >>>>>>> >>> >>>>>>> >>> >>>>>> >>> >>>>>> >>> >>>>> >>> >>>> >>> >>> >>> >> >>> >> >>> > >>> > >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ramishra at redhat.com Mon May 17 06:49:02 2021 From: ramishra at redhat.com (Rabi Mishra) Date: Mon, 17 May 2021 12:19:02 +0530 Subject: [TripleO] Opting out of global-requirements.txt In-Reply-To: <124843068.26462828.1621006405455.JavaMail.zimbra@redhat.com> References: <124843068.26462828.1621006405455.JavaMail.zimbra@redhat.com> Message-ID: On Fri, May 14, 2021 at 9:07 PM Javier Pena wrote: > > On Fri, May 14, 2021 at 6:41 AM Marios Andreou wrote: > >> On Thu, May 13, 2021 at 9:41 PM James Slagle >> wrote: >> > >> > I'd like to propose that TripleO opt out of dependency management by >> removing tripleo-common from global-requirements.txt. I do not feel that >> the global dependency management brings any advantages or anything needed >> for TripleO. I can't think of any reason to enforce the ability to be >> globally pip installable with the rest of OpenStack. 
>> >> To add a bit more context as to how this discussion came about, we >> tried to remove tripleo-common from global-requirements and the >> check-requirements jobs in tripleoclient caused [1] as a result. >> >> So we need to decide whether to continue to be part of that >> requirements contract [2] or if we will do what is proposed here and >> remove ourselves altogether. >> If we decide _not_ to implement this proposal then we will also have >> to add the requirements-check jobs in tripleo-ansible [3] and >> tripleo-validations [4] as they are currently missing. >> >> > >> > Two of our most critical projects, tripleoclient and tripleo-common do >> not even put many of their data files in the right place where our code >> expects them when they are pip installed. So, I feel fairly confident that >> no one is pip installing TripleO and relying on global requirements >> enforcement. >> >> I don't think this is _just_ about pip installs. It is generally about >> the contents of each project requirements.txt. As part of the >> requirements contract, it means that those repos with which we are >> participating (the ones in projects.txt [5]) are protected against >> other projects making any breaking changes in _their_ >> requirements.txt. Don't the contents of requirements.txt also end up >> in the .spec file from which we are building rpm e.g. [6] for tht? In >> which case if we remove this and just stop catching any breaking >> changes in the check/gate check-requirements jobs, I suspect we will >> just move the problem to the rpm build and it will fail there. >> > > I don't see that in the spec file. Unless there is some other automation > somewhere that regenerates all of the BuildRequires/Requires and modifies > them to match requirements.txt/test-requirements.txt? > > We run periodic syncs once every cycle, and try our best to make spec > requirements match requirements.txt/test-requirements.txt for the project. > See > https://review.rdoproject.org/r/c/openstack/tripleoclient-distgit/+/33367 > for a recent example on tripleoclient. > > I don't have a special opinion on keeping tripleo-common inside > global-requirements.txt or not. However, all TripleO projects still need to > be co-installable with other OpenStack projects, otherwise we will not be > able to build packages for them due to all the dependency issues that could > arise. > I think this is probably less of an issue with containerized services(?). At present, there are only two service containers that install tripleo-common (mistral, nova-scheduler). With mistral deprecated and nova removed from the undercloud (tripleo-common has a tripleo specific scheduler filter used by undercloud nova), we probably won't have many issues. However, as we sync requirements from project requirements to package specs regularly, there could be issues with broken requirements. > I'm not sure if that was implied in the original post. > > Regards, > Javier > > > > > >> > One potential advantage of not being in global-requirements.txt is that >> our unit tests and functional tests could actually test the same code. As >> things stand today, our unit tests in projects that depend on >> tripleo-common are pinned to the version in global-requirements.txt, while >> our functional tests currently run with tripleo-common from master (or >> included depends-on). 
>> >> I don't think one in particular is a very valid point though - as >> things currently stand in global-requirements we aren't 'pinning' all >> we have there is "tripleo-common!=11.3.0 # Apache-2.0" [7] to avoid >> (I assume) some bad release we made. >> > > tripleo-common is pinned to the latest release when it's pip installed in > the venv, instead of using latest git (and including depends-on). You're > right that it's probably what we want to keep doing, and this is probably > not related to opting out of g-r. Especially since we don't want to require > latest git of a dependency when running unit tests locally. However it is > worth noting that our unit tests (tox) and functional tests (tripleo-ci) > use different code for the dependencies. That was not obvious to me and > others on the surface. Perhaps we could add additional tox jobs that do > require the latest tripleo-common from git to also cover that scenario. > > Here's an example: > https://review.opendev.org/c/openstack/python-tripleoclient/+/787907 > https://review.opendev.org/c/openstack/tripleo-common/+/787906 > > The tripleo-common patch fails unit tests as expected, the tripleoclient > which depends-on the tripleo-common patch passes unit tests, but fails > functional. I'd rather see that failure caught by a unit test as well. > > > -- > -- James Slagle > -- > > > -- Regards, Rabi Mishra -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon May 17 07:00:15 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 17 May 2021 09:00:15 +0200 Subject: [stein][neutron] l3 agent error In-Reply-To: References: Message-ID: <19115118.uMW6F6j3WE@p1> Hi, Dnia poniedziałek, 17 maja 2021 07:43:20 CEST Ignazio Cassano pisze: > Hello All, the l3 agent respawning error, cause neutron to fill controller > memory and the controller stops to responding and it is fenced by the > others. > So the router ha move some jobs to another controller and it fills its > memory and so on. Do I understand correctly that You have some memory leak in the neutron? If so, is it in neutron-server or neutron-l3-agent? And also, if that is true, can You open LP bug for that and provide some more info, like how many routers do You have there how to maybe reproduce it, etc. > I stopped neutron services and I cleaned HA directory under > /var/lib/neutron. > Restarting neutron services respawning errors disappeared but I had to > create again some router static routes (not all). > Probably it is a bug ? > Ignazio > > > Il Sab 15 Mag 2021, 19:41 Ignazio Cassano ha > > scritto: > > Hello Guys, > > I've just upgraded from openstack queens to rocky and then to stein on > > centos 7. > > In my configuration I have router high availability. > > After the upgrade and rebooting each controller one by one I get the > > following errors on all my 3 controllers under /var/log/neutron/l3- agent.log > > > > http://paste.openstack.org/show/805407/ > > > > If I run openstack router show for for one of uuid in the log: > > http://paste.openstack.org/show/805408/ > > > > Namespace for router in present on all 3 controllers. > > After the controllers reboot,some routers lost their routing tables, but > > restarting l3-agent they went ok. > > Is it possible router ha stopped working? > > Any idea,please ? > > Ignazio -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From mark at stackhpc.com Mon May 17 08:17:26 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 17 May 2021 09:17:26 +0100 Subject: [octavia][tripleo][kolla][stable][release] $series-eol delete problem In-Reply-To: References: Message-ID: On Fri, 14 May 2021 at 21:40, Előd Illés wrote: > > Hi teams in $SUBJECT, > > during the deletion of $series-eol tagged branches it turned out that > the below listed branches / repositories contains merged patches on top > of $series-eol tag. The issue is with this that whenever the branch is > deleted only the $series-eol (and other) tags can be checked out, so the > changes that were merged after the eol tags, will be *lost*. > > There are two options now: > > 1. Create another tag (something like: "$series-eol-extra"), so that the > extra patches will not be lost completely, because they can be checked > out with the newly created tags > > 2. Delete the branch anyway and don't care about the lost patch(es) > > Here are the list of such branches, please consider which option is good > for the team and reply to this mail: > > openstack/octavia > * stable/stein has patches on top of the stein-eol tag > * stable/queens has patches on top of the queens-eol tag > > openstack/kolla > * stable/pike has patches on top of the pike-eol tag > * stable/ocata has patches on top of the ocata-eol tag Hi Előd, Thank you for running these checks before deleting the branches. In the case of kolla, the commits are relevant to CI only, disabling jobs. Please go ahead and delete those branches. Mark > > openstack/tripleo-common > * stable/rocky has patches on top of the rocky-eol tag > > openstack/os-apply-config > * stable/pike has patches on top of the pike-eol tag > * stable/ocata has patches on top of the ocata-eol tag > > openstack/os-cloud-config > stable/ocata has patches on top of the ocata-eol tag > > Thanks, > > Előd > > > From tjoen at dds.nl Mon May 17 09:45:56 2021 From: tjoen at dds.nl (tjoen) Date: Mon, 17 May 2021 11:45:56 +0200 Subject: [wallaby] patch needed for neutron-linuxbridge-agent: arp_protect.py Message-ID: neutron-18.0.0 tenacity-7.0.0 Python-3.9.4 neutron-linuxbridge-agent exits with AttributeError: 'FileNotFoundError' object has no attribute 'returncode' neutron/plugins/ml2/drivers/linuxbridge/agent/arp_protect.py: .. @tenacity.retry( wait=tenacity.wait_exponential(multiplier=0.02), retry=tenacity.retry_if_exception(lambda e: e.returncode == 255), reraise=True ) .. Not a python programmer. But I guess it has something to do with Python39 From ralonsoh at redhat.com Mon May 17 10:08:16 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 17 May 2021 12:08:16 +0200 Subject: [wallaby] patch needed for neutron-linuxbridge-agent: arp_protect.py In-Reply-To: References: Message-ID: Hello tjoen: Just a couple of comments: 1) There is a Neutron sanity check script that should be executed before running OpenStack. That will make sure Neutron can run on your system. One of the tests checks the "ebtables" version. If "ebtables" is not present, that test will fail. 2) "FileNotFoundError" means this binary is not present or is not in one of the binary directories. "ebtables" or its "nftables" equivalent should be in your system. Check for "ebtables-nft" or "ebtables-legacy". Regards. 
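For reference, a rough sketch of both checks (config file paths are assumed;
adjust them to your deployment):

    # 1) Neutron's own sanity checks, which include the ebtables test:
    neutron-sanity-check --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    # 2) confirm that an ebtables binary is actually on the agent's PATH:
    command -v ebtables ebtables-nft ebtables-legacy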
On Mon, May 17, 2021 at 11:52 AM tjoen wrote:
> neutron-18.0.0
> tenacity-7.0.0
> Python-3.9.4
>
> neutron-linuxbridge-agent exits with
> AttributeError: 'FileNotFoundError' object has no attribute 'returncode'
>
> neutron/plugins/ml2/drivers/linuxbridge/agent/arp_protect.py:
> ..
> @tenacity.retry(
>     wait=tenacity.wait_exponential(multiplier=0.02),
>     retry=tenacity.retry_if_exception(lambda e: e.returncode == 255),
>     reraise=True
> )
> ..
>
> Not a python programmer. But I guess it has something to do with
> Python39
>

From bharat at stackhpc.com Mon May 17 10:50:06 2021
From: bharat at stackhpc.com (Bharat Kunwar)
Date: Mon, 17 May 2021 11:50:06 +0100
Subject: [magnum] User meeting at 9:00 UTC Tuesday 18 May
Message-ID:

Hi all,

We are planning to meet at 9:00 UTC Tuesday 18 May on #openstack-magnum (freenode). If you have anything pressing you’d like to discuss, please add to the agenda: https://etherpad.opendev.org/p/magnum-weekly-meeting .

Cheers

Bharat

From bharat at stackhpc.com Mon May 17 10:54:38 2021
From: bharat at stackhpc.com (Bharat Kunwar)
Date: Mon, 17 May 2021 11:54:38 +0100
Subject: [magnum] User meeting at 9:00 UTC Tuesday 18 May
In-Reply-To:
References:
Message-ID: <1E55C396-45C3-4420-AE8C-030ECF118B29@stackhpc.com>

> On 17 May 2021, at 11:50, Bharat Kunwar wrote:
>
> Hi all,
>
> We are planning to meet at 9:00 UTC Tuesday 18 May on #openstack-magnum (freenode). If you have anything pressing you’d like to discuss, please add to the agenda: https://etherpad.opendev.org/p/magnum-weekly-meeting .

Correction: #openstack-containers, not #openstack-magnum.

> Cheers
>
> Bharat

From tjoen at dds.nl Mon May 17 12:00:59 2021
From: tjoen at dds.nl (tjoen)
Date: Mon, 17 May 2021 14:00:59 +0200
Subject: Thanks! Re: [wallaby] problem neutron-linuxbridge-agent: arp_protect.py
In-Reply-To:
References:
Message-ID: <85393194-5e89-f14e-0d27-bcb1fbfb979c@dds.nl>

On 5/17/21 12:08 PM, Rodolfo Alonso Hernandez wrote:
> Just a couple of comments:
> 1) There is a Neutron sanity check script that should be executed before
> running OpenStack. That will make sure Neutron can run on your system. One
> of the tests checks the "ebtables" version. If "ebtables" is not present,
> that test will fail.
> 2) "FileNotFoundError" means this binary is not present or is not in one of
> the binary directories. "ebtables" or its "nftables" equivalent should be
> in your system. Check for "ebtables-nft" or "ebtables-legacy".

Thank you for the hint! I had the same problem with dnsmasq and haproxy;
the error messages there were more specific.
There are many changes in Wallaby compared to Ussuri: ebtables, dnsmasq and
haproxy were not needed on the controller node in Ussuri.

From ignaziocassano at gmail.com Mon May 17 12:13:39 2021
From: ignaziocassano at gmail.com (Ignazio Cassano)
Date: Mon, 17 May 2021 14:13:39 +0200
Subject: [stein][neutron] l3 agent error
In-Reply-To:
References: <19115118.uMW6F6j3WE@p1>
Message-ID:

Yes, you understood well. I have not verified whether neutron-server or the
l3-agent fills the memory. The number of routers is 21.
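A quick, generic way to confirm which of the two is actually growing is to
watch per-process memory on the affected controller, for example:

    # nothing Neutron-specific, just resident memory sorted descending:
    ps -eo rss,vsz,comm --sort=-rss | grep -E 'neutron|keepalived' | head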
I fired the following bug: https://bugs.launchpad.net/neutron/+bug/1928675 Ignazio Il giorno lun 17 mag 2021 alle ore 09:01 Slawek Kaplonski < skaplons at redhat.com> ha scritto: > Hi, > > Dnia poniedziałek, 17 maja 2021 07:43:20 CEST Ignazio Cassano pisze: > > Hello All, the l3 agent respawning error, cause neutron to fill > controller > > memory and the controller stops to responding and it is fenced by the > > others. > > So the router ha move some jobs to another controller and it fills its > > memory and so on. > > Do I understand correctly that You have some memory leak in the neutron? > If > so, is it in neutron-server or neutron-l3-agent? And also, if that is > true, > can You open LP bug for that and provide some more info, like how many > routers > do You have there how to maybe reproduce it, etc. > > > I stopped neutron services and I cleaned HA directory under > > /var/lib/neutron. > > Restarting neutron services respawning errors disappeared but I had to > > create again some router static routes (not all). > > Probably it is a bug ? > > Ignazio > > > > > > Il Sab 15 Mag 2021, 19:41 Ignazio Cassano ha > > > > scritto: > > > Hello Guys, > > > I've just upgraded from openstack queens to rocky and then to stein on > > > centos 7. > > > In my configuration I have router high availability. > > > After the upgrade and rebooting each controller one by one I get the > > > following errors on all my 3 controllers under /var/log/neutron/l3- > agent.log > > > > > > http://paste.openstack.org/show/805407/ > > > > > > If I run openstack router show for for one of uuid in the log: > > > http://paste.openstack.org/show/805408/ > > > > > > Namespace for router in present on all 3 controllers. > > > After the controllers reboot,some routers lost their routing tables, > but > > > restarting l3-agent they went ok. > > > Is it possible router ha stopped working? > > > Any idea,please ? > > > Ignazio > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Mon May 17 13:08:20 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 17 May 2021 06:08:20 -0700 Subject: threat to freenode (where openstack irc hangs out) In-Reply-To: <20210514151425.zxgeym4lnhmbdrt3@yuggoth.org> References: <20210514151425.zxgeym4lnhmbdrt3@yuggoth.org> Message-ID: If the situation doesn't sort itself out, perhaps a logical course of action is to create a CNAME record, and have something like irc.openinfra.dev moving forward, That way the community could move again without updating documentation... at least past the very first mass-doc update. On Fri, May 14, 2021 at 8:17 AM Jeremy Stanley wrote: > > On 2021-05-14 11:05:55 -0400 (-0400), Chris Morgan wrote: > > https://twitter.com/dmsimard/status/1393203159770804225?s=20 > > https://p.haavard.me/407 > > > > I have no independent validation of this. > > It seems like that may not be the whole story. Regardless, the infra > team have registered a small foothold of copies of our critical > channels on OFTC years ago, and have always considered that a > reasonable place to relocate in the event something happens to > Freenode which makes it no longer suitable. Letting people know to > switch the IRC server name in their clients and updating lots of > documentation mentioning Freenode would be the hardest part, > honestly. 
> -- > Jeremy Stanley From marios at redhat.com Mon May 17 13:29:11 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 17 May 2021 16:29:11 +0300 Subject: [octavia][tripleo][kolla][stable][release] $series-eol delete problem In-Reply-To: References: Message-ID: On Fri, May 14, 2021 at 11:41 PM Előd Illés wrote: > > Hi teams in $SUBJECT, > > during the deletion of $series-eol tagged branches it turned out that > the below listed branches / repositories contains merged patches on top > of $series-eol tag. The issue is with this that whenever the branch is > deleted only the $series-eol (and other) tags can be checked out, so the > changes that were merged after the eol tags, will be *lost*. > > There are two options now: > > 1. Create another tag (something like: "$series-eol-extra"), so that the > extra patches will not be lost completely, because they can be checked > out with the newly created tags > > 2. Delete the branch anyway and don't care about the lost patch(es) > > Here are the list of such branches, please consider which option is good > for the team and reply to this mail: > Hello Elod thank you for all your work on this and apologies for the commits after the eol tag Personally I vote for the easiest path which I believe is option 2 here just discard those commits and remove the branch. I think 2 of the three flagged here are mine (:/ sorry I should know better ;)) so ack from me but adding owalsh/dbengt into the cc as the other commit is his. Some more comments/pointers inline thanks: > openstack/octavia > * stable/stein has patches on top of the stein-eol tag > * stable/queens has patches on top of the queens-eol tag > > openstack/kolla > * stable/pike has patches on top of the pike-eol tag > * stable/ocata has patches on top of the ocata-eol tag > > openstack/tripleo-common > * stable/rocky has patches on top of the rocky-eol tag > for tripleo-common this is the patch in question: * https://github.com/openstack/tripleo-common/compare/rocky-eol...stable/rocky * https://github.com/openstack/tripleo-common/commit/77a0c827cbb02c3374d72f48973ba24d6c34d50c * Ensure tripleo ansible inventory file update is atomic https://review.opendev.org/q/Ifa41bfcb921496978f82aee4e67fdb419cf9ffc5 (cherry picked from commit 8e082f4 * (cherry picked from commit c1af9b7) * (squashing commits as the 1st patch is failing in the stein gate without the fix from the 2nd patch) * https://review.opendev.org/c/openstack/tripleo-common/+/765502 so cc'ing owalsh to allow him to raise an objection will also reach out on irc after I send this and point to it ;) > openstack/os-apply-config > * stable/pike has patches on top of the pike-eol tag * https://github.com/openstack/os-apply-config/compare/pike-eol...stable/pike * https://github.com/openstack/os-apply-config/commit/1fcccb880e30522d66238b205a48d553a050c562 * Remove tripleo-multinode-container|baremetal-minimal from layout * Change-Id: I6715edd673b45dad6fba7d1987eac8677f61eaa2 * 2 zuul.d/layout.yaml * https://review.opendev.org/c/openstack/os-apply-config/+/777527 > * stable/ocata has patches on top of the ocata-eol tag > * https://github.com/openstack/os-apply-config/compare/ocata-eol...stable/ocata * https://github.com/openstack/os-apply-config/commit/31768f04a30023a0d54099c1aeb80134ffe5dd64 * Remove tripleo-multinode-container|baremetal-minimal from layout * Change-Id: Iefe3eed322f1102344c6b54531c61a41ce4d227b * 9 zuul.d/layout.yaml * https://review.opendev.org/c/openstack/os-apply-config/+/777533 both of those are mine so ack from me on 
nuking them thanks to you and all the release team for checking and for your work on this and all the release things regards, marios > openstack/os-cloud-config > stable/ocata has patches on top of the ocata-eol tag > > Thanks, > > Előd > > > From fungi at yuggoth.org Mon May 17 14:12:43 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 17 May 2021 14:12:43 +0000 Subject: [infra] threat to freenode (where openstack irc hangs out) In-Reply-To: References: <20210514151425.zxgeym4lnhmbdrt3@yuggoth.org> Message-ID: <20210517141242.fi7unbz7dbxs5gis@yuggoth.org> On 2021-05-17 06:08:20 -0700 (-0700), Julia Kreger wrote: > If the situation doesn't sort itself out, perhaps a logical course > of action is to create a CNAME record, and have something like > irc.openinfra.dev moving forward, That way the community could > move again without updating documentation... at least past the > very first mass-doc update. [...] Yes, thanks for the reminder. I think we had previously agreed this would be a sensible way of engineering it. I suppose the last time it came up was in the pre-OpenDev days, but I still think having it be a general OpenDev signal as to where we're maintaining our service bots makes sense, and makes future relocations easier. This is what irc.gnome.org does, for example (it's currently a CNAME to irc.gimp.org). Some folks will probably still want to use the traditional IRC network names in their clients if they're joining channels for other communities which happen to exist on that same network, but for anyone who is only joining channels related to projects hosted in OpenDev I suppose it's a reasonable solution. If you're involved in multiple IRC-using communities on the same network, you're probably proficient enough with IRC and querying DNS to know what you're doing in that regard and work it out for yourself anyway. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dtantsur at redhat.com Mon May 17 14:14:37 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 17 May 2021 16:14:37 +0200 Subject: [ironic] IPA image does not want to boot with UEFI In-Reply-To: References: Message-ID: Hi, I'm not sure. We have never hit this problem with DIB-built images before. I know that TripleO uses an even larger image than one we publish on tarballs.o.o. Dmitry On Wed, May 12, 2021 at 8:10 PM Vuk Gojnic wrote: > Hi Dmitry, > > Thanks for additional tipps. When investigating the initrd I have > noticed that the most of the space goes on firmware and > modules/drivers. If we notice something not working with TinyIPA we > can probably cherry-pick the modules that we need and leave everything > else out and that way get smaller image. > > I have another question though - do you know how could we make > Kernel/Grub accept to boot large initrd? How are other folks doing it? > I assume not everybody is just using TinyIPA for production... > > Tnx! > > Vuk > > On Wed, May 12, 2021 at 5:36 PM Dmitry Tantsur > wrote: > > > > I'm glad that it worked for you! > > > > Before others follow your advice: the difference in size in DIB builds > and tinyIPA is mostly because of firmware and kernel modules. If tinyIPA > does not work for you or behaves in a weird way (no disks detected, some > NICs not detected), then you're stuck with DIB builds. > > > > Vuk, there is one more option you could exercise. 
IPA-builder supports > an --lzma flag to pack the initramfs with a more efficient algorithm: > https://opendev.org/openstack/ironic-python-agent-builder/src/branch/master/ironic_python_agent_builder/__init__.py#L56 > . > > > > Dmitry > > > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon May 17 14:30:34 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 17 May 2021 16:30:34 +0200 Subject: threat to freenode (where openstack irc hangs out) In-Reply-To: References: Message-ID: On Fri, May 14, 2021 at 5:07 PM Chris Morgan wrote: > > https://twitter.com/dmsimard/status/1393203159770804225?s=20 > https://p.haavard.me/407 The references seem gone. What were they about? -yoctozepto From cboylan at sapwetik.org Mon May 17 14:41:52 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 17 May 2021 07:41:52 -0700 Subject: threat to freenode (where openstack irc hangs out) In-Reply-To: References: Message-ID: On Mon, May 17, 2021, at 7:30 AM, Radosław Piliszek wrote: > On Fri, May 14, 2021 at 5:07 PM Chris Morgan wrote: > > > > https://twitter.com/dmsimard/status/1393203159770804225?s=20 > > https://p.haavard.me/407 > > The references seem gone. > What were they about? It was a draft document that was not supposed to be exposed publicly, but web crawlers found it anyway. I didn't read the document, but the summary seemed to be that the author and a number of freenode maintainers/admins/not sure of the proper term were at one point considering quitting. This document was discovered and posted to hacker news. The document's author asserted on the hacker news thread that they had no intention of following through with those actions and that the document was a draft that should have never been publicly listed. > > -yoctozepto > > From radoslaw.piliszek at gmail.com Mon May 17 14:46:14 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 17 May 2021 16:46:14 +0200 Subject: threat to freenode (where openstack irc hangs out) In-Reply-To: References: Message-ID: On Mon, May 17, 2021 at 4:43 PM Clark Boylan wrote: > > On Mon, May 17, 2021, at 7:30 AM, Radosław Piliszek wrote: > > On Fri, May 14, 2021 at 5:07 PM Chris Morgan wrote: > > > > > > https://twitter.com/dmsimard/status/1393203159770804225?s=20 > > > https://p.haavard.me/407 > > > > The references seem gone. > > What were they about? > > It was a draft document that was not supposed to be exposed publicly, but web crawlers found it anyway. I didn't read the document, but the summary seemed to be that the author and a number of freenode maintainers/admins/not sure of the proper term were at one point considering quitting. This document was discovered and posted to hacker news. The document's author asserted on the hacker news thread that they had no intention of following through with those actions and that the document was a draft that should have never been publicly listed. Thank you! 
-yoctozepto From fungi at yuggoth.org Mon May 17 14:56:35 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 17 May 2021 14:56:35 +0000 Subject: [infra] threat to freenode (where openstack irc hangs out) In-Reply-To: <20210517141242.fi7unbz7dbxs5gis@yuggoth.org> References: <20210514151425.zxgeym4lnhmbdrt3@yuggoth.org> <20210517141242.fi7unbz7dbxs5gis@yuggoth.org> Message-ID: <20210517145635.pux2u2ezwlmzsiqz@yuggoth.org> On 2021-05-17 14:12:42 +0000 (+0000), Jeremy Stanley wrote: > On 2021-05-17 06:08:20 -0700 (-0700), Julia Kreger wrote: > > If the situation doesn't sort itself out, perhaps a logical course > > of action is to create a CNAME record, and have something like > > irc.openinfra.dev moving forward, That way the community could > > move again without updating documentation... at least past the > > very first mass-doc update. > [...] > > Yes, thanks for the reminder. I think we had previously agreed this > would be a sensible way of engineering it. I suppose the last time > it came up was in the pre-OpenDev days, but I still think having it > be a general OpenDev signal as to where we're maintaining our > service bots makes sense, and makes future relocations easier. This > is what irc.gnome.org does, for example (it's currently a CNAME to > irc.gimp.org). [...] Though as dansmith correctly pointed out in #openstack-infra just now, this would break some IRC-over-SSL configurations (unless the CNAME could get added as a SubjectAltName in the server cert somehow, which would require us to negotiate that with the operators of the networks in question, making this a not particularly nimble solution). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From arnaud.morin at gmail.com Mon May 17 14:59:33 2021 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Mon, 17 May 2021 14:59:33 +0000 Subject: [largescale-sig][neutron] What driver are you using? In-Reply-To: References: Message-ID: Hi Laurent, Thanks for your reply! I agree that it depends on the scale usage. About the VLAN you are using for external networks, do you have/want to share the number of public IP you have in this L2 for a region? Cheers, On 11.05.21 - 19:21, Laurent Dumont wrote: > I feel like it depends a lot on the scale/target usage (public vs private > cloud). > > But at $dayjob, we are leveraging > > - vlans for external networking (linux-bridge + OVS) > - vxlans for internal Openstack networks. > > We like the simplicity of vxlan with minimal overlay configuration. There > are some scaling/performance issues with stuff like l2 population. > > VLANs are okay but it's hard to predict the next 5 years of growth. > > On Mon, May 10, 2021 at 8:34 AM Arnaud Morin wrote: > > > Hey large-scalers, > > > > We had a discusion in my company (OVH) about neutron drivers. > > We are using a custom driver based on BGP for public networking, and > > another custom driver for private networking (based on vlan). > > > > Benefits from this are obvious: > > - we maintain the code > > - we do what we want, not more, not less > > - it fits perfectly to the network layer our company is using > > - we have full control of the networking stack > > > > But it also have some downsides: > > - we have to maintain the code... (rebasing, etc.) 
> > - we introduce bugs that are not upstream (more code, more bugs) > > - a change in code is taking longer, we have few people working on this > > (compared to a community based) > > - this is not upstream (so not opensource) > > - we are not sharing (bad) > > > > So, we were wondering which drivers are used upstream in large scale > > environment (not sure a vlan driver can be used with more than 500 > > hypervisors / I dont know about vxlan or any other solution). > > > > Is there anyone willing to share this info? > > > > Thanks in advance! > > > > From marios at redhat.com Mon May 17 15:59:04 2021 From: marios at redhat.com (Marios Andreou) Date: Mon, 17 May 2021 18:59:04 +0300 Subject: [all][stable] Delete $series-eol tagged branches In-Reply-To: <96e1cb5a-99c5-7a05-01c0-1635743d9c1d@est.tech> References: <96e1cb5a-99c5-7a05-01c0-1635743d9c1d@est.tech> Message-ID: On Fri, May 14, 2021 at 11:11 PM Előd Illés wrote: > > Hi, > > As I wrote previously [1] the long-waited deletion of $series-eol tagged > branches started with the removal of ocata-eol tagged ones first (for > the list of deleted branches, see: [2]) > Then I also sent out a warning [3] about the next step, to delete > pike-eol tagged branches, which finally happened today (for the list of > deleted branches, see: [4]). > > So now I'm sending out a *warning* again, that as a 3rd step, the > deletion of already tagged but still open branches will continue in 1 or > 2 weeks time frame. If everything works as expected, then branches with > *queens-eol*, *rocky-eol* and *stein-eol* tags can be processed in one > batch. > > Also I would like to ask the teams who have $series-eol tagged branches > to abandon all open patches on those branches, otherwise the branch > cannot be deleted. Thank you for this Elod, I did a quick survey on our tripleo repos for rocky (tagged eol waiting for deletion) https://releases.openstack.org/teams/tripleo.html#rocky I found a few patches in progress and commented there * https://review.opendev.org/c/openstack/paunch/+/764901/4#message-6ad7eb01c314c908164cab541318388eb460121d * https://review.opendev.org/c/openstack/python-tripleoclient/+/723634/2#message-dacb09ed164e1c44bdea86b56a42d08c892fd4df * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/601598/12#message-dcaf8c787e4c2da8bc70c0a234efd811e1c2d484 * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/724148/4#message-0d019bb76d384ff32b1a5ea3b3bc2f34f9d29bae let's hope the authors will respond in time - I will follow up and try to reach out to them again if they don't regards, marios > > Thanks, > > Előd > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021949.html > [2] http://paste.openstack.org/show/804953/ > [3] > http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022173.html > [4] http://paste.openstack.org/show/805404/ > > > > From duc.openstack at gmail.com Mon May 17 16:08:43 2021 From: duc.openstack at gmail.com (Duc Truong) Date: Mon, 17 May 2021 09:08:43 -0700 Subject: [Senlin][Octavia] Health policy - LB_STATUS_POLLING detection mode In-Reply-To: <9815228e-07b2-540b-761b-62e26d0c2c45@sns.it> References: <9815228e-07b2-540b-761b-62e26d0c2c45@sns.it> Message-ID: H Giacomo, The patch set that you linked is 4 years old, so I don't remember what the issue was with implementing the LB poll in the Senlin health check. I don't see a problem with implementing that feature so feel free to check the contribution guide and submit some code. 
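Purely as an illustration of the end goal (this detection mode does not exist yet), a health policy spec using the proposed mode would presumably follow the same shape as the current senlin.policy.health 1.1 format, with LB_STATUS_POLLING listed as one of the detection modes; the exact property names would be settled during spec review:

    type: senlin.policy.health
    version: 1.1
    properties:
      detection:
        interval: 120
        detection_modes:
          - type: LB_STATUS_POLLING
      recovery:
        actions:
          - name: RECREATE
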
A few things to think about are: - Would the LB_STATUS_POLLING require the user to use the Senlin Load Balancer as well? - How would Senlin know which health monitor to check for a given Senlin node? A common use case for Senlin is autoscaling, so nodes will be added or deleted on the fly. Most likely that requires that the LB_STATUS_POLLING needs to be tied to the Senlin LB policy that is responsible for adding/removing LB pools and LB health monitors. If we go down that route, it is easy to retrieve the LB health monitor ID because that is stored in the Senlin node properties. Anyways, just some things to think about. Duc On Fri, May 14, 2021 at 2:51 AM Giacomo Lanciano wrote: > > Hi folks, > > I'd like to know what is the status of making the LB_STATUS_POLLING > detection mode available for the Senlin Health Policy [1]. According to > the docs and this patch [2], the implementation of this feature is > blocked by some issue on LBaaS/Octavia side, but I could not find any > details on what this issue really is. > > As the docs state, it would be really useful to have this detection mode > available, as it is much more reliable than the others at evaluating the > status of an application. If I can be of any help, I would be willing to > contribute. > > Thanks in advance. > > Kind regards. > > Giacomo > > [1] > https://docs.openstack.org/senlin/latest/contributor/policies/health_v1.html#failure-detection > [2] https://review.opendev.org/c/openstack/senlin/+/423012 > > -- > Giacomo Lanciano > Ph.D. Student in Data Science > Scuola Normale Superiore, Pisa, Italy > https://www.linkedin.com/in/giacomolanciano > > From laurentfdumont at gmail.com Mon May 17 16:10:06 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 17 May 2021 12:10:06 -0400 Subject: Restart cinder-volume with Ceph rdb In-Reply-To: References: <20210511203053.Horde.nJ-7FFjvzdcxuyQKn9UmErJ@webmail.nde.ag> <08C8BC3F-3930-4803-B007-3E3C6BD1F411@iaa.es> <20210512094943.nfttmyxoss3zut2n@localhost> <90542A09-3A7D-4FE2-83FD-10D46CCEF5A2@iaa.es> <20210512213446.7c222mlcdwxiosly@localhost> <0292176B-BE6D-448E-8948-EE10300E2520@iaa.es> <20210513073722.x3z3qkcpvg5am6ia@localhost> <771F27B8-6C13-4F04-85D3-331E2AF7D89F@binero.com> Message-ID: Glad to know it was resolved! It's a bit weird that explicitly setting the parameter works, but good to know! On Mon, May 17, 2021 at 2:11 AM Sebastian Luna Valero < sebastian.luna.valero at gmail.com> wrote: > > Thanks, Laurent. > > Long story short, we have been able to bring the "cinder-volume" service > back up. > > We restarted the "cinder-volume" and "cinder-scheduler" services with > "debug=True", got back the same debug message: > > 2021-05-15 23:15:27.091 31 DEBUG cinder.volume.drivers.rbd > [req-f43e30ae-2bdc-4690-9c1b-3e58081fdc9e - - - - -] connecting to > cinder at ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn > /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431 > > Then, I had a look at the docs looking for "timeout" configuration options: > > > https://docs.openstack.org/cinder/train/configuration/block-storage/drivers/ceph-rbd-volume-driver.html#driver-options > > "rados_connect_timeout = -1; (Integer) Timeout value (in seconds) used > when connecting to ceph cluster. If value < 0, no timeout is set and > default librados value is used." > > I added it to the "cinder.conf" file for the "cinder-volume" service with: > "rados_connect_timeout=15". 
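For reference, a minimal sketch of where that option would sit in cinder.conf, assuming the RBD backend section is named [ceph-rbd] as it is elsewhere in this thread (only the rados_connect_timeout line is the actual change; the other keys are just context):

    [ceph-rbd]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = cinder.volumes
    rados_connect_timeout = 15

The cinder-volume service then has to be restarted for the new timeout to be picked up.
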
> > Before this change the "cinder-volume" logs ended with this message: > > 2021-05-15 23:02:48.821 31 INFO cinder.volume.manager > [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Starting volume driver > RBDDriver (1.2.0) > > After the change: > > 2021-05-15 23:02:48.821 31 INFO cinder.volume.manager > [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Starting volume driver > RBDDriver (1.2.0) > 2021-05-15 23:04:23.180 31 INFO cinder.volume.manager > [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Driver initialization > completed successfully. > 2021-05-15 23:04:23.190 31 INFO cinder.manager > [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Initiating service 12 > cleanup > 2021-05-15 23:04:23.196 31 INFO cinder.manager > [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Service 12 cleanup > completed. > 2021-05-15 23:04:23.315 31 INFO cinder.volume.manager > [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Initializing RPC > dependent components of volume driver RBDDriver (1.2.0) > 2021-05-15 23:05:10.381 31 INFO cinder.volume.manager > [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Driver post RPC > initialization completed successfully. > > And now the service is reported as "up" in "openstack volume service list" > and we can successfully create Ceph volumes now. Many will do more > validation tests today to confirm. > > So it looks like the "cinder-volume" service didn't start up properly in > the first place and that's why the service was "down". > > Why adding "rados_connect_timeout=15" to cinder.conf solved the issue? I > honestly don't know and it was a matter of luck to try this out. If anyone > knows the reason, we would love to know more. > > Thank you very much again for your kind help! > > Best regards, > Sebastian > > On Sat, 15 May 2021 at 19:40, Laurent Dumont > wrote: > >> That is a bit strange. I don't use the Ceph backend so I don't know any >> magic tricks. >> >> - I'm surprised that the Debug logging level doesn't add anything >> else. Is there any other lines besides the "connecting" one? >> - Can we narrow down the port/IP destination for the Ceph RBD traffic? >> - Can we failover the cinder-volume service to another controller and >> check the status of the volume service? >> - Did the power outage impact the Ceph cluster + network gear + all >> the controllers? >> - Does the content of /etc/ceph/ceph.conf appear to be valid inside >> the container? >> >> Looking at the code - >> https://github.com/openstack/cinder/blob/stable/train/cinder/volume/drivers/rbd.py#L432 >> >> It should raise an exception if there is a timeout when the connection >> client is built. >> >> except self.rados.Error: >> msg = _("Error connecting to ceph cluster.") >> LOG.exception(msg) >> client.shutdown() >> raise exception.VolumeBackendAPIException(data=msg) >> >> On Sat, May 15, 2021 at 4:16 AM Sebastian Luna Valero < >> sebastian.luna.valero at gmail.com> wrote: >> >>> >>> Hi All, >>> >>> Thanks for your inputs so far. I am also trying to help Manu with this >>> issue. >>> >>> The "cinder-volume" service was working properly with the existing >>> configuration. However, after a power outage the service is no longer >>> reported as "up". 
>>> >>> Looking at the source code, the service status is reported as "down" by >>> "cinder-scheduler" in here: >>> >>> >>> https://github.com/openstack/cinder/blob/stable/train/cinder/scheduler/host_manager.py#L618 >>> >>> With message: "WARNING cinder.scheduler.host_manager [req-<>- default >>> default] volume service is down. (host: rbd:volumes at ceph-rbd)" >>> >>> I printed out the "service" tuple >>> https://github.com/openstack/cinder/blob/stable/train/cinder/scheduler/host_manager.py#L615 >>> and we get: >>> >>> "2021-05-15 09:57:24.918 7 WARNING cinder.scheduler.host_manager [<> - >>> default default] >>> Service(active_backend_id=None,availability_zone='nova',binary='cinder-volume',cluster=,cluster_name=None,created_at=2020-06-12T07:53:42Z,deleted=False,deleted_at=None,disabled=False,disabled_reason=None,frozen=False,host='rbd:volumes at ceph-rbd >>> ',id=12,modified_at=None,object_current_version='1.38',replication_status='disabled',report_count=8067424,rpc_current_version='3.16',topic='cinder-volume',updated_at=2021-05-12T15:37:52Z,uuid='604668e8-c2e7-46ed-a2b8-086e588079ac')" >>> >>> Cinder is configured with a Ceph RBD backend, as explained in >>> https://github.com/openstack/kolla-ansible/blob/stable/train/doc/source/reference/storage/external-ceph-guide.rst#cinder >>> >>> That's where the "backend_host=rbd:volumes" configuration is coming from. >>> >>> We are using 3 controller nodes for OpenStack and 3 monitor nodes for >>> Ceph. >>> >>> The Ceph cluster doesn't report any error. The "cinder-volume" >>> containers don't report any error. Moreover, when we go inside the >>> "cinder-volume" container we are able to list existing volumes with: >>> >>> rbd -p cinder.volumes --id cinder -k >>> /etc/ceph/ceph.client.cinder.keyring ls >>> >>> So the connection to the Ceph cluster works. >>> >>> Why is "cinder-scheduler" reporting the that the backend Ceph cluster is >>> down? >>> >>> Many thanks, >>> Sebastian >>> >>> >>> On Thu, 13 May 2021 at 13:12, Tobias Urdin >>> wrote: >>> >>>> Hello, >>>> >>>> I just saw that you are running Ceph Octopus with Train release and >>>> wanted to let you know that we saw issues with the os-brick version shipped >>>> with Train not supporting client version of Ceph Octopus. >>>> >>>> So for our Ceph cluster running Octopus we had to keep the client >>>> version on Nautilus until upgrading to Victoria which included a newer >>>> version of os-brick. >>>> >>>> Maybe this is unrelated to your issue but just wanted to put it out >>>> there. >>>> >>>> Best regards >>>> Tobias >>>> >>>> > On 13 May 2021, at 12:55, ManuParra wrote: >>>> > >>>> > Hello Gorka, not yet, let me update cinder configuration, add the >>>> option, restart cinder and I’ll update the status. >>>> > Do you recommend other things to try for this cycle? >>>> > Regards. >>>> > >>>> >> On 13 May 2021, at 09:37, Gorka Eguileor >>>> wrote: >>>> >> >>>> >>> On 13/05, ManuParra wrote: >>>> >>> Hi Gorka again, yes, the first thing is to know why you can't >>>> connect to that host (Ceph is actually set up for HA) so that's the way to >>>> do it. I tell you this because previously from the beginning of the setup >>>> of our setup it has always been like that, with that hostname and there has >>>> been no problem. >>>> >>> >>>> >>> As for the errors, the strangest thing is that in Monasca I have >>>> not found any error log, only warning on “volume service is down. (host: >>>> rbd:volumes at ceph-rbd)" and info, which is even stranger. 
>>>> >> >>>> >> Have you tried the configuration change I recommended? >>>> >> >>>> >> >>>> >>> >>>> >>> Regards. >>>> >>> >>>> >>>> On 12 May 2021, at 23:34, Gorka Eguileor >>>> wrote: >>>> >>>> >>>> >>>> On 12/05, ManuParra wrote: >>>> >>>>> Hi Gorka, let me show the cinder config: >>>> >>>>> >>>> >>>>> [ceph-rbd] >>>> >>>>> rbd_ceph_conf = /etc/ceph/ceph.conf >>>> >>>>> rbd_user = cinder >>>> >>>>> backend_host = rbd:volumes >>>> >>>>> rbd_pool = cinder.volumes >>>> >>>>> volume_backend_name = ceph-rbd >>>> >>>>> volume_driver = cinder.volume.drivers.rbd.RBDDriver >>>> >>>>> … >>>> >>>>> >>>> >>>>> So, using rbd_exclusive_cinder_pool=True it will be used just for >>>> volumes? but the log is saying no connection to the backend_host. >>>> >>>> >>>> >>>> Hi, >>>> >>>> >>>> >>>> Your backend_host doesn't have a valid hostname, please set a >>>> proper >>>> >>>> hostname in that configuration option. >>>> >>>> >>>> >>>> Then the next thing you need to have is the cinder-volume service >>>> >>>> running correctly before making any requests. >>>> >>>> >>>> >>>> I would try adding rbd_exclusive_cinder_pool=true then tailing the >>>> >>>> volume logs, and restarting the service. >>>> >>>> >>>> >>>> See if the logs show any ERROR level entries. >>>> >>>> >>>> >>>> I would also check the service-list output right after the service >>>> is >>>> >>>> restarted, if it's up then I would check it again after 2 minutes. >>>> >>>> >>>> >>>> Cheers, >>>> >>>> Gorka. >>>> >>>> >>>> >>>> >>>> >>>>> >>>> >>>>> Regards. >>>> >>>>> >>>> >>>>> >>>> >>>>>> On 12 May 2021, at 11:49, Gorka Eguileor >>>> wrote: >>>> >>>>>> >>>> >>>>>> On 12/05, ManuParra wrote: >>>> >>>>>>> Thanks, I have restarted the service and I see that after a few >>>> minutes then cinder-volume service goes down again when I check it with the >>>> command openstack volume service list. >>>> >>>>>>> The host/service that contains the cinder-volumes is >>>> rbd:volumes at ceph-rbd that is RDB in Ceph, so the problem does not come >>>> from Cinder, rather from Ceph or from the RDB (Ceph) pools that stores the >>>> volumes. I have checked Ceph and the status of everything is correct, no >>>> errors or warnings. >>>> >>>>>>> The error I have is that cinder can’t connect to >>>> rbd:volumes at ceph-rbd. Any further suggestions? Thanks in advance. >>>> >>>>>>> Kind regards. >>>> >>>>>>> >>>> >>>>>> >>>> >>>>>> Hi, >>>> >>>>>> >>>> >>>>>> You are most likely using an older release, have a high number >>>> of cinder >>>> >>>>>> RBD volumes, and have not changed configuration option >>>> >>>>>> "rbd_exclusive_cinder_pool" from its default "false" value. >>>> >>>>>> >>>> >>>>>> Please add to your driver's section in cinder.conf the following: >>>> >>>>>> >>>> >>>>>> rbd_exclusive_cinder_pool = true >>>> >>>>>> >>>> >>>>>> >>>> >>>>>> And restart the service. >>>> >>>>>> >>>> >>>>>> Cheers, >>>> >>>>>> Gorka. >>>> >>>>>> >>>> >>>>>>>> On 11 May 2021, at 22:30, Eugen Block wrote: >>>> >>>>>>>> >>>> >>>>>>>> Hi, >>>> >>>>>>>> >>>> >>>>>>>> so restart the volume service;-) >>>> >>>>>>>> >>>> >>>>>>>> systemctl restart openstack-cinder-volume.service >>>> >>>>>>>> >>>> >>>>>>>> >>>> >>>>>>>> Zitat von ManuParra : >>>> >>>>>>>> >>>> >>>>>>>>> Dear OpenStack community, >>>> >>>>>>>>> >>>> >>>>>>>>> I have encountered a problem a few days ago and that is that >>>> when creating new volumes with: >>>> >>>>>>>>> >>>> >>>>>>>>> "openstack volume create --size 20 testmv" >>>> >>>>>>>>> >>>> >>>>>>>>> the volume creation status shows an error. 
If I go to the >>>> error log detail it indicates: >>>> >>>>>>>>> >>>> >>>>>>>>> "Schedule allocate volume: Could not find any available >>>> weighted backend". >>>> >>>>>>>>> >>>> >>>>>>>>> Indeed then I go to the cinder log and it indicates: >>>> >>>>>>>>> >>>> >>>>>>>>> "volume service is down - host: rbd:volumes at ceph-rbd”. >>>> >>>>>>>>> >>>> >>>>>>>>> I check with: >>>> >>>>>>>>> >>>> >>>>>>>>> "openstack volume service list” in which state are the >>>> services and I see that indeed this happens: >>>> >>>>>>>>> >>>> >>>>>>>>> >>>> >>>>>>>>> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | >>>> down | 2021-04-29T09:48:42.000000 | >>>> >>>>>>>>> >>>> >>>>>>>>> And stopped since 2021-04-29 ! >>>> >>>>>>>>> >>>> >>>>>>>>> I have checked Ceph (monitors,managers, osds. etc) and there >>>> are no problems with the Ceph BackEnd, everything is apparently working. >>>> >>>>>>>>> >>>> >>>>>>>>> This happened after an uncontrolled outage.So my question is >>>> how do I restart only cinder-volumes (I also have cinder-backup, >>>> cinder-scheduler but they are ok). >>>> >>>>>>>>> >>>> >>>>>>>>> Thank you very much in advance. Regards. >>>> >>>>>>>> >>>> >>>>>>>> >>>> >>>>>>>> >>>> >>>>>>>> >>>> >>>>>>> >>>> >>>>>>> >>>> >>>>>> >>>> >>>>>> >>>> >>>>> >>>> >>>> >>>> >>> >>>> >> >>>> >> >>>> > >>>> > >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From alifshit at redhat.com Mon May 17 16:18:59 2021 From: alifshit at redhat.com (Artom Lifshitz) Date: Mon, 17 May 2021 12:18:59 -0400 Subject: [Nova] Meeting time poll In-Reply-To: References: Message-ID: I'm going to leave the poll up for a few more days, but it looks like Tuesdays 15:00 - 16:00 UTC or 16:00 - 17:00 are the leading options. If there are no objections before our next IRC meeting on the 20th of May, could we pick a time between these two and make the change official? On Thu, May 13, 2021 at 1:10 PM Artom Lifshitz wrote: > > On Thu, May 13, 2021 at 1:07 PM Artom Lifshitz wrote: > > > > Hey all, > > > > As discussed during the IRC meeting today, the Red Hat Nova team would > > like to know if it's possible to shift the IRC meeting to a different > > day and/or time. This would facilitate our own internal calls, but I > > want to be very clear that we'll structure our internal calls around > > upstream, not the other way around. So please do not perceive any > > pressure to change, this is just a question :) > > > > To help us figure this out, I've created a Doodle poll [1]. I believe > > the regular attendees of the IRC meeting are spread between Central > > Europe and NA West Coast, so I've tried to list times that kinda make > > sense in both of those places, with a couple of hours on each side as > > a safety margin. > > > > Please vote on when you'd like the Nova IRC meeting to take place. > > Ignore the actual dates (like May 10), the important bits are the days > > of the week (Monday, Tuesday, etc). This is obviously a recurring > > meeting, something that Doodle doesn't seem to understand. > > > > I've not included Mondays and Wednesdays in the list of possibilities, > > as they would not work for Red Hat Nova. You can also vote to keep the > > status quo :) > > > > The times are listed in UTC, like the current meeting time, so > > unfortunately you have to be mindful of the effects of daylight > > savings time :( > > > > Thanks in advance! 
> > > > [1] https://doodle.com/poll/45ptnyn85iuw7pxz > > And I just noticed that there's a calendar view [2] that you can use > to convert to your own time zone. Nifty! (You'll still have to be > mindful of daylight saving time though). > > [2] https://doodle.com/poll/45ptnyn85iuw7pxz#calendar From gmann at ghanshyammann.com Mon May 17 22:45:52 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 17 May 2021 17:45:52 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 20th at 1500 UTC Message-ID: <1797c81ce4e.c5493880678425.8347318298781364114@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for May 20th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, May 19th, at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From gmann at ghanshyammann.com Tue May 18 00:48:08 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 17 May 2021 19:48:08 -0500 Subject: [all][tc][goals] Project Specific PTL and Contributor Documentation: Week R-20 Update Message-ID: <1797cf1bde6.b09dbcff679085.3250944423641646471@ghanshyammann.com> Hello Everyone, As you know, we did not select any community-wide goal for Xena cycle and in PTG, TC decided to spend more time on the previous cycle pending community-wide goal work[1] One of the previous cycle (Ussuri) goals is 'Project Specific PTL and Contributor Documentation' which still not completed by many projects. - https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html I am starting the work to finish this goal which is pretty straight forwards and sending the updates in this ML thread. Please review the patches and merge them or push the patches for your project's repo if not yet done. Gerrit Topic: https://review.opendev.org/q/topic:%22project-ptl-and-contrib-docs%22+(status:open%20OR%20status:merged) Tracking: https://storyboard.openstack.org/#!/story/2007236 Progress Summary: =============== * Projects completed: 32 * Projects for which patches are up and required to merge those: 10 * Projects for which patches are not yet up: 12 [1] https://etherpad.opendev.org/p/tc-xena-tracker (Item#2) -gmann From gagehugo at gmail.com Tue May 18 04:45:12 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 17 May 2021 23:45:12 -0500 Subject: [openstack-helm] Meeting Cancelled Message-ID: Hey team, Since there are no agenda items [0] for the IRC meeting tomorrow May 18th, the meeting is cancelled. Our next meeting will be May 25th. Thanks [0] https://etherpad.opendev.org/p/openstack-helm-weekly-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleg.bondarev at huawei.com Tue May 18 07:48:37 2021 From: oleg.bondarev at huawei.com (Oleg Bondarev) Date: Tue, 18 May 2021 07:48:37 +0000 Subject: [neutron] Bug Deputy Report May 10 - 16th Message-ID: <7670d98070af478bbeaea232d1fa1f1c@huawei.com> Hi everyone, Please find Bug Deputy report for the week May 10 - 16th below. Nothing Critical, just a few OVN bugs looking for triage from OVN folks (see the end of the list). 
Medium: - https://bugs.launchpad.net/neutron/+bug/1928345 - "neutron_tempest_plugin.api.test_trunk*" tests failing Edit o Assigned to Rodolfo Alonso o In progress: https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/791255 - https://bugs.launchpad.net/neutron/+bug/1928299 - centos7 train vm live migration stops network on vm for some minutes o Unassigned o Incomplete: same issue as in https://bugs.launchpad.net/neutron/+bug/1815989 - https://bugs.launchpad.net/neutron/+bug/1928450 - ovn_migration.sh script doesn't detect neutron_dhcp agents when... o Fix released - https://bugs.launchpad.net/neutron/+bug/1928465 - Geneve allocation is not update during migration from ml2/ovs to ovn o Assigned to Jakub Libosvar o Fixed by https://review.opendev.org/c/openstack/neutron/+/784352, in progress for Wallaby - https://bugs.launchpad.net/neutron/+bug/1928466 - Allowed address pairs aren't populated to the new host with DVR router o Assigned to Slawek o In progress: https://review.opendev.org/c/openstack/neutron/+/791492 - https://bugs.launchpad.net/neutron/+bug/1927926 - Ha port is not cleared when cleanup l3router ns o Assigned to fanxiujian o Invalid Low: - https://bugs.launchpad.net/neutron/+bug/1928471 - Wrong assertion methods in unit test o Assigned to Takashi Natsume o Fix released - https://review.opendev.org/c/openstack/neutron/+/791497 Wishlist: - https://bugs.launchpad.net/neutron/+bug/1928211 - Remove quota "ConfDriver", deprecated in Liberty o Assigned to Rodolfo Alonso o In progress: https://review.opendev.org/c/openstack/neutron/+/790999 OVN Triage needed: - https://bugs.launchpad.net/neutron/+bug/1928330 - [OVN] Unable to ping router from test IPV6 subnet o Unassigned o New: OVN folks please triage - https://bugs.launchpad.net/neutron/+bug/1927977 - OVN IDLs not initialized for all worker types o Unassigned o New - https://bugs.launchpad.net/neutron/+bug/1928164 - [OVN] Ovn-controller dose not update the flows table when localport tap device is rebuilt o Unassigned o New Thanks, Oleg --- Advanced Software Technology Lab Huawei -------------- next part -------------- An HTML attachment was scrubbed... URL: From xxxcloudlearner at gmail.com Tue May 18 08:43:51 2021 From: xxxcloudlearner at gmail.com (cloud learner) Date: Tue, 18 May 2021 14:13:51 +0530 Subject: spice console configuration instead of vnc Message-ID: Dear Experts, I have installed Victoria On centos through packstack single node, i want to use spice console instead of vnc. And Installed the openstack-nova-spicehtml5proxy service and started and stop novncproxy service controller=10.1.75.20 In nova.conf [vnc] enabled=False [spice] agent_enabled = False enabled = True html5proxy_base_url = http://10.1.75.20:6082/spice_auto.html html5proxy_host = 0.0.0.0 html5proxy_port = 6082 keymap = en-us server_listen = 0.0.0.0 server_proxyclient_address = controller Above configuration made, but unable to get the spice console.. Kindly help -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjoen at dds.nl Tue May 18 11:54:45 2021 From: tjoen at dds.nl (tjoen) Date: Tue, 18 May 2021 13:54:45 +0200 Subject: [wallaby]nova, oslo.db or SQLAlchemy bug? 
"no attribute statement" Message-ID: <91d2851d-5059-2b89-bb4a-4615a615bc39@dds.nl> Python-3.9.4 nova-23.0.1 oslo.db-8.5.0 SQLAlchemy-1.4.15 docs.openstack.org/install-guide/launch-instance-provider.html $ openstack server list Status is SHUTDOWN $ openstack server start provider-instance results in site-packages/oslo_db/sqlalchemy/update_match.py", line 489, in _update_context.statement.froms[0] ERROR nova.api.openstack.wsgi AttributeError: 'QueryContext' object has no attribute 'statement' site-packages/oslo_db/sqlalchemy/update_match.py line 488 and 489: context = query._compile_context() primary_table = context.statement.froms[0] site-packages/sqlalchemy/orm/query.py def _compile_context(self, for_statement=False): compile_state = self._compile_state(for_statement=for_statement) context = QueryContext( context = QueryContext( compile_state, compile_state.statement, self._params, self.session, self.load_options, ) return context nova, oslo.db or SQLAlchemy bug? From fungi at yuggoth.org Tue May 18 12:40:38 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 18 May 2021 12:40:38 +0000 Subject: [all][qa][cinder][octavia][murano][sahara][manila][magnum][kuryr][neutron] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <20210504220655.hd5q4zzlpe2s7t4k@yuggoth.org> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> <179327e4f91.ee9c07fa889469.6980115070754232706@ghanshyammann.com> <20210504001111.52o2fgjeyizhiwts@barron.net> <17937a935ab.c1df754d5757.2201956277196352904@ghanshyammann.com> <20210504143608.mcmov6clb6vgkrpl@barron.net> <20210504220655.hd5q4zzlpe2s7t4k@yuggoth.org> Message-ID: <20210518124038.wqzv3fwzqzurcgle@yuggoth.org> On 2021-05-04 22:06:55 +0000 (+0000), Jeremy Stanley wrote: > On 2021-05-04 14:23:26 -0700 (-0700), Goutham Pacha Ravi wrote: > [...] > > The unittest job inherits from the base jobs that fungi's > > modifying here: > > https://review.opendev.org/c/opendev/base-jobs/+/789097/ and here: > > https://review.opendev.org/c/opendev/base-jobs/+/789098 ; so no > > need to pin a nodeset - we'll get the changes transparently when > > the patches merge. > [...] > > Specifically 789098, yeah, which is now scheduled for approval two > weeks from today: > > http://lists.opendev.org/pipermail/service-announce/2021-May/000019.html This has now merged as scheduled, ubuntu-focal nodes are the default for any jobs which don't override their nodeset to something else. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From hberaud at redhat.com Tue May 18 13:08:14 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 18 May 2021 15:08:14 +0200 Subject: [cinder][kolla][OSA][release] Wallaby cycle-trailing release deadline Message-ID: Hello teams with deliverables following the cycle-trailing release model! This is just a reminder about wrapping those Wallaby trailing deliverables up. A few cycles ago we extended the deadline for cycle-trailing to give more time, so the actual deadline isn't until July 02, 2021: https://releases.openstack.org/xena/schedule.html#x-cycle-trail If things are ready sooner than that though, all the better for our downstream consumers. Just for awareness, the following cycle-trailing deliverables will need their final releases at some point in the next few months: cinderlib kayobe kolla-ansible kolla openstack-ansible-roles openstack-ansible Thanks! 
Hervé Beraud (hberaud) and the Release Management Team -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Tue May 18 14:10:19 2021 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 18 May 2021 16:10:19 +0200 Subject: [wallaby]nova, oslo.db or SQLAlchemy bug? "no attribute statement" In-Reply-To: <91d2851d-5059-2b89-bb4a-4615a615bc39@dds.nl> References: <91d2851d-5059-2b89-bb4a-4615a615bc39@dds.nl> Message-ID: Hello, SQLAlchemy 1.4 requested some adaptations in various places (oslo.db, nova, cinder etc). Without digging into your issue, oslo.db 8.6.0 at least is required to support this version of SQLAlchemy [1]. Also you should notice that various patches are required on Nova and other services too [2][3][4][5]. I think that the services deliverables (nova, cinder, etc) should be released to propagate their fixes more widely and surely fix your context. For further reading please have a look to [6]. [1] https://review.opendev.org/c/openstack/releases/+/788488 [2] https://review.opendev.org/c/openstack/nova/+/788471 [3] https://review.opendev.org/c/openstack/placement/+/789921 [4] https://review.opendev.org/c/openstack/masakari/+/790216 [5] https://review.opendev.org/c/openstack/cinder/+/790797 [6] https://review.opendev.org/c/openstack/requirements/+/788339 Le mar. 18 mai 2021 à 13:57, tjoen a écrit : > Python-3.9.4 > nova-23.0.1 > oslo.db-8.5.0 > SQLAlchemy-1.4.15 > > docs.openstack.org/install-guide/launch-instance-provider.html > $ openstack server list > Status is SHUTDOWN > $ openstack server start provider-instance > results in > site-packages/oslo_db/sqlalchemy/update_match.py", line 489, in > _update_context.statement.froms[0] > ERROR nova.api.openstack.wsgi AttributeError: 'QueryContext' object > has no attribute 'statement' > > site-packages/oslo_db/sqlalchemy/update_match.py line 488 and 489: > context = query._compile_context() > primary_table = context.statement.froms[0] > > site-packages/sqlalchemy/orm/query.py > def _compile_context(self, for_statement=False): > compile_state = self._compile_state(for_statement=for_statement) > context = QueryContext( > context = QueryContext( > compile_state, > compile_state.statement, > self._params, > self.session, > self.load_options, > ) > return context > > nova, oslo.db or SQLAlchemy bug? 
> > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Tue May 18 14:31:29 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Tue, 18 May 2021 16:31:29 +0200 Subject: [wallaby]nova, oslo.db or SQLAlchemy bug? "no attribute statement" In-Reply-To: References: <91d2851d-5059-2b89-bb4a-4615a615bc39@dds.nl> Message-ID: On Tue, May 18, 2021 at 16:10, Herve Beraud wrote: > Hello, > > SQLAlchemy 1.4 requested some adaptations in various places (oslo.db, > nova, cinder etc). > > Without digging into your issue, oslo.db 8.6.0 at least is required > to support this version of SQLAlchemy [1]. > > Also you should notice that various patches are required on Nova and > other services too [2][3][4][5]. I think that the services > deliverables (nova, cinder, etc) should be released to propagate > their fixes more widely and surely fix your context. > > For further reading please have a look to [6]. > > [1] https://review.opendev.org/c/openstack/releases/+/788488 > [2] https://review.opendev.org/c/openstack/nova/+/788471 > [3] https://review.opendev.org/c/openstack/placement/+/789921 > [4] https://review.opendev.org/c/openstack/masakari/+/790216 > [5] https://review.opendev.org/c/openstack/cinder/+/790797 > [6] https://review.opendev.org/c/openstack/requirements/+/788339 > > Le mar. 18 mai 2021 à 13:57, tjoen a écrit : >> Python-3.9.4 >> nova-23.0.1 >> oslo.db-8.5.0 >> SQLAlchemy-1.4.15 >> Please note that Nova on stable/wallaby does not officially support SQLAlchemy 1.4 as per the global upper constraint in requirements https://github.com/openstack/requirements/blob/stable/wallaby/upper-constraints.txt#L150 Cheers, gibi >> >> docs.openstack.org/install-guide/launch-instance-provider.html >> $ openstack server list >> Status is SHUTDOWN >> $ openstack server start provider-instance >> results in >> site-packages/oslo_db/sqlalchemy/update_match.py", line 489, in >> _update_context.statement.froms[0] >> ERROR nova.api.openstack.wsgi AttributeError: 'QueryContext' object >> has no attribute 'statement' >> >> site-packages/oslo_db/sqlalchemy/update_match.py line 488 and 489: >> context = query._compile_context() >> primary_table = context.statement.froms[0] >> >> site-packages/sqlalchemy/orm/query.py >> def _compile_context(self, for_statement=False): >> compile_state = >> self._compile_state(for_statement=for_statement) >> context = QueryContext( >> context = QueryContext( >> compile_state, >> compile_state.statement, >> self._params, >> self.session, >> self.load_options, >> ) >> return context >> >> nova, oslo.db or SQLAlchemy bug? 
>> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From tkajinam at redhat.com Tue May 18 14:42:09 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 18 May 2021 23:42:09 +0900 Subject: [puppet][tripleo] Inviting tripleo CI cores to maintain tripleo jobs ? In-Reply-To: References: Message-ID: Thank you, Marios and the team for your time in the meeting. Based on our discussion, I'll nominate the following three volunteers from tripleo core team to the puppet-openstack core team. - Marios Andreou - Ronelle Landy - Wes Hayutin Their scope of +2 will be limited to tripleo job definitions (which are written in .zuul.yaml or zuul.d/*.yaml) at this moment. I've not received any objections so far (Thank you Tobias for sharing your thoughts !) but will wait for one week to be open for any feedback from the other cores or people around. My current plan is to add a specific hashtag so that these reviewers can easily find the related changes like [1] but please let me know if anybody has preference. [1] https://review.opendev.org/q/hashtag:%22puppet-tripleo-job%22+(status:open%20OR%20status:merged) P.S. I received some interest about maintaining puppet modules (especially our own integration jobs), so will have some people involved in that part as well. On Fri, May 14, 2021 at 8:57 PM Marios Andreou wrote: > On Fri, May 14, 2021 at 2:46 PM Takashi Kajinami > wrote: > > > > Hi Marios, > > > > On Fri, May 14, 2021 at 8:10 PM Marios Andreou > wrote: > >> > >> On Fri, May 14, 2021 at 8:40 AM Takashi Kajinami > wrote: > >> > > >> > Hi team, > >> > > >> > >> Hi Takashi > >> > >> > >> > As you know, we currently have TripleO jobs in some of the puppet > repos > >> > to ensure a change in puppet side doesn't break TripleO which consumes > >> > some of the modules. > >> > >> in case it isn't clear and for anyone else reading, you are referring > >> to things like [1]. > > > > This is a nitfixing but puppet-pacemaker is a repo under the TripleO > project. > > I intend a job like > > > https://zuul.opendev.org/t/openstack/builds?job_name=puppet-nova-tripleo-standalone&project=openstack/puppet-nova > > which is maintained under puppet repos. > > > > ack thanks for the clarification ;) makes more sense now > > >> > >> > >> > > >> > Because these jobs hugely depend on the job definitions in TripleO > repos, > >> > I'm wondering whether we can invite a few cores from the TripleO CI > team > >> > to the puppet-openstack core group to maintain these jobs. > >> > I expect the scope here is very limited to tripleo job definitions > and doesn't > >> > expect any +2 for other parts. 
> >> > > >> > I'd be nice if I can hear any thoughts on this topic. > >> > >> Main question is what kind of maintenance do you have in mind? Is it > >> that these jobs are breaking often and they need fixes in the > >> puppet-repos themselves so we need more cores there? (though... I > >> would expect the fixes to be needed in tripleo-ci where the job > >> definitions are, unless the repos are overriding those definitions)? > > > > > > We define our own base tripleo-puppet-ci-centos-8-standalone job[4] and > > each puppet module defines their own tripleo job[5] by overriding the > base job, > > so that we can define some basic items like irellevant files or voting > status > > for all puppet modules in a single place. > > > > [4] > https://github.com/openstack/puppet-openstack-integration/blob/master/zuul.d/tripleo.yaml > > [5] https://github.com/openstack/puppet-nova/blob/master/.zuul.yaml > > > > > >> > >> Or is it that you don't have enough folks to get fixes merged so this > >> is mostly about growing the pool of reviewers? > > > > > > Yes. My main intention is to have more reviewers so that we can fix our > CI jobs timely. > > > > Actually the proposal came to my mind when I was implementing the > following changes > > to solve very frequent job timeouts which we currently observe in > puppet-nova wallaby. > > IMO these changes need more attention from TripleO's perspective rather > than puppet's > > perspective. > > https://review.opendev.org/q/topic:%22tripleo-tempest%22+(status:open) > > > > In the past when we introduced content provider jobs, we ended up with a > bunch of patches > > submitted to both tripleo jobs and puppet jobs. Having some people from > TripleO team > > would help moving forward such a transition more smoothly. > > > > In the past we have had three people (Alex, Emilien and I) involved in > both TripleO and puppet > > but since Emilien has shifted this focus, we have now 2 activities left. > > Additional one or two people would help us move patches forward more > efficiently. > > (Since I can't approve my own patch.) > > > >> I think limiting the scope to just the contents of zuul.d/ or > >> .zuul.yaml can work; we already have a trust based system in TripleO > >> with some cores only expected to exercise their voting rights in > >> particular repos even though they have full voting rights across all > >> tripleo repos). > >> > >> Are you able to join our next tripleo-ci community call? It is on > >> Tuesday 1330 UTC @ [2] and we use [3] for the agenda. If you can join, > >> perhaps we can work something out depending on what you need. > >> Otherwise no problem let's continue to discuss here > > > > > > Sure. I can join and bring up this topic. > > I'll keep this thread to hear some opinions from the puppet side as well. > > > > > > ok thanks look forward to discussing on Tuesday then, > > regards, marios > > > >> > >> > >> regards, marios > >> > >> [1] > https://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ci-centos-8-scenario004-standalone&project=openstack/puppet-pacemaker > >> [2] https://meet.google.com/bqx-xwht-wky > >> [3] https://hackmd.io/MMg4WDbYSqOQUhU2Kj8zNg?both > >> > >> > >> > >> > > >> > Thank you, > >> > Takashi > >> > > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjoen at dds.nl Tue May 18 15:18:01 2021 From: tjoen at dds.nl (tjoen) Date: Tue, 18 May 2021 17:18:01 +0200 Subject: [wallaby]nova, oslo.db or SQLAlchemy bug? 
"no attribute statement" In-Reply-To: References: <91d2851d-5059-2b89-bb4a-4615a615bc39@dds.nl> Message-ID: On 5/18/21 4:10 PM, Herve Beraud wrote: > SQLAlchemy 1.4 requested some adaptations in various places (oslo.db, nova, > cinder etc). I understand from reply (not to the list) from Balazs Gibizer that Wallaby needs SQLAlchemy < 1.4 > Without digging into your issue, oslo.db 8.6.0 at least is required to > support this version of SQLAlchemy [1]. Yes, Wallaby only oslo.db = 8.5. 8.6 is for next release I understand now Thanks both for the clarification. I'll downgrade SQLAlchemy and report later if I get Wallaby running > Le mar. 18 mai 2021 à 13:57, tjoen a écrit : > >> Python-3.9.4 >> nova-23.0.1 >> oslo.db-8.5.0 >> SQLAlchemy-1.4.15 From rosmaita.fossdev at gmail.com Tue May 18 15:47:35 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 18 May 2021 11:47:35 -0400 Subject: [cinder] festival of XS reviews 21 April 2021 Message-ID: Hello Cinder community members, This is a reminder that the most recent edition of the Cinder Festival of XS Reviews will be held at the end of this week on Friday 21 April. who: Everyone! what: The Cinder Festival of XS Reviews when: Friday 21 April 2021 from 1400-1600 UTC where: https://meetpad.opendev.org/cinder-festival-of-reviews This recurring meeting can be placed on your calendar by using this handy ICS file: http://eavesdrop.openstack.org/calendars/cinder-festival-of-reviews.ics See you there! brian From bence.romsics at gmail.com Tue May 18 15:49:14 2021 From: bence.romsics at gmail.com (Bence Romsics) Date: Tue, 18 May 2021 17:49:14 +0200 Subject: [xena][neutron][ovn] Follow up to BGP with OVN PTG discussions In-Reply-To: <27a90b1b-19b4-286b-0e9b-9bb04a44a7a4@redhat.com> References: <27a90b1b-19b4-286b-0e9b-9bb04a44a7a4@redhat.com> Message-ID: Hi Dan, > Red Hat has begun gathering a team of engineers to add OpenStack support > for BGP dynamic routing using the Free Range Routing (FRR) set of > daemons. Acting as a technical lead for the project, I led one session > in the TripleO room to discuss the installer components and two sessions > in the Neutron room to discuss BGP routing with OVN, and BGP EVPN with OVN. There may be quite a lot of overlap with what we (Ericsson) are working on right now, we would be really interested in your long term vision and also the details of your plans. > There will likely be opportunities to leverage APIs and contribute to > existing work being done with Neutron Dynamic Routing, BGPVPN, and > other work being done to implement BGP EVPN. We would like to > collaborate with Ericsson and others and come up with a solution that > fits us all! There are a few related specs proposed already. Below are two links that may be the most relevant to you. All your input is welcome. BFD: https://review.opendev.org/c/openstack/neutron-specs/+/767337 BGP: https://review.opendev.org/c/openstack/neutron-specs/+/783791 > BGP may be used for equal-cost multipath (ECMP) load balancing of > outbound links, and bi-directional forwarding detection (BFD) for > resiliency to ensure that a path provides connectivity. > BGP may also be used for routing inbound traffic to provider network IPs > or floating IPs for instance connectivity. I believe we also share these two use cases - with some caveats, please see below. > The Compute nodes will run > FRR to advertise routes to the local VM IPs or floating IPs hosted on > the node. 
FRR has a daemon named Zebra that is responsible for > exchanging routes between routing daemons such as BGP and the kernel. > The redistribute connected statement in the FRR configuration will cause > local IP addresses on the host to be advertised via BGP. Floating IP > addresses are attached to a loopback interface in a namespace, so they > will be redistributed using this method. Changes to OVN will be required > to ensure provider network IPs assigned to VMs will be assigned to a > loopback interface in a namespace in a similar fashion. Am I getting it right that your primary objective is to route the traffic directly to the hypervisors and there hoist it to the tunnel networks? Some of the links in your email also gave me the impression that occasionaly you'd want to route the traffic to a neutron router's gateway port. Is that right? In which cases? Currently neutron-dynamic-routing advertises routes with their nexthop being the router's gw port. We have a use case for arbitrary VM ports being the nexthop. And you seem to have a use case for the hypervisor being the nexthop. Maybe we could come up with an extension of the n-d-r API that can express these variations... Similar thoughts could be applied to the BFD proposal too. > https://github.com/luis5tb/bgp-agent In the further development of the proof-of-concept, how much do you plan to make this API driven? The PoC seems to be reacting to port binding events, but most other information (peers, filters, maybe nexthops) seem to be coming from TripleO deployed configuration and not from the API. How would you like this to look in the long term? > BGP EVPN with multitenancy will require separate VRFs per tenant. This > will allow separate routing tables to be maintained, and allow for > overlapping IP addresses for different Neutron tenant networks. FRR may > have the capability to utilize a single BGP peering session to combine > advertisements for all these VRFs, but there is still work to be done to > prototype this design. This may result in more efficient BGP dynamic > updates, and could potentially make troubleshooting more straightforward. BGP to API endpoints and BGPVPN related things are not on our plate right now. However support in Neutron for VRFs could be interesting to us too. Thanks for the great writeup! Cheers, Bence Romsics irc: rubasov Ericsson Software Technology From geguileo at redhat.com Tue May 18 15:49:25 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Tue, 18 May 2021 17:49:25 +0200 Subject: cinder.volume.drivers.rbd connecting In-Reply-To: References: Message-ID: <20210518154925.jvrnja6geg7kg3mv@localhost> On 12/05, ManuParra wrote: > Hi, we have faced some problems when creating volumes to add to VMs, to see what was happening I activated the Debug=True mode of Cinder in the cinder.conf file. > I see that when I try to create a new volume I get the following in the log: > > "DEBUG cinder.volume.drivers.rbd connecting to (conf=/etc/ceph/ceph.conf, timeout=-1) _do_conn /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431” > > I’m using OpenStack Train and Ceph Octopus. 
> When I check with openstack volume service list > > +------------------+----------------------+------+---------+-------+----------------------------+ > | Binary | Host | Zone | Status | State | Updated At | > +------------------+----------------------+------+---------+-------+----------------------------+ > | cinder-scheduler | spsrc-controller-1 | nova | enabled | up | 2021-05-11T10:06:39.000000 | > | cinder-scheduler | spsrc-controller-2 | nova | enabled | up | 2021-05-11T10:06:47.000000 | > | cinder-scheduler | spsrc-controller-3 | nova | enabled | up | 2021-05-11T10:06:39.000000 | > | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-11T10:48:42.000000 | > | cinder-backup | spsrc-mon-2 | nova | enabled | up | 2021-05-11T10:06:47.000000 | > | cinder-backup | spsrc-mon-1 | nova | enabled | up | 2021-05-11T10:06:44.000000 | > | cinder-backup | spsrc-mon-3 | nova | enabled | up | 2021-05-11T10:06:47.000000 | > +------------------+----------------------+------+---------+-------+——————————————+ > > So cinder-volume is Down, > > I compare "cinder-backup" Ceph config with "cinder-volume", and they are equal! so why only one of them works? > diff /etc/kolla/cinder-backup/ceph.conf /etc/kolla/cinder-volume/ceph.conf > > I go inside the "cinder_volume" container > docker exec -it cinder_volume /bin/bash > > Try listing cinder volumes, works! > rbd -p cinder.volumes --id cinder -k /etc/ceph/ceph.client.cinder.keyring ls > > Any Ideas. Kind regards. Hi, Cinder volume could be down because the stats polling is taking too long. If that's the case, then you can set: rbd_exclusive_cinder_pool = true in your driver's section in cinder.conf to fix it. Cheers, Gorka. From mparra at iaa.es Tue May 18 15:55:28 2021 From: mparra at iaa.es (ManuParra) Date: Tue, 18 May 2021 17:55:28 +0200 Subject: cinder.volume.drivers.rbd connecting In-Reply-To: <20210518154925.jvrnja6geg7kg3mv@localhost> References: <20210518154925.jvrnja6geg7kg3mv@localhost> Message-ID: Hi Gorka, Thank you very much for your help, we checked that option but before testing it we worked on the following (there is another message in the list where we have discussed it and you point it out): This was the solution proposed by my colleague Sebastian: > We restarted the "cinder-volume" and "cinder-scheduler" services with "debug=True", got back the same debug message: >2021-05-15 23:15:27.091 31 DEBUG cinder.volume.drivers.rbd [req-f43e30ae-2bdc-4690-9c1b-3e58081fdc9e - - - - -] connecting to cinder at ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431 >Then, I had a look at the docs looking for "timeout" configuration options: >https://docs.openstack.org/cinder/train/configuration/block-storage/drivers/ceph-rbd-volume-driver.html#driver-options >"rados_connect_timeout = -1; (Integer) Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used." >I added it to the "cinder.conf" file for the "cinder-volume" service with: "rados_connect_timeout=15". 
>Before this change the "cinder-volume" logs ended with this message: >2021-05-15 23:02:48.821 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Starting volume driver RBDDriver (1.2.0) >After the change: >2021-05-15 23:02:48.821 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Starting volume driver RBDDriver (1.2.0) >2021-05-15 23:04:23.180 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Driver initialization completed successfully. >2021-05-15 23:04:23.190 31 INFO cinder.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Initiating service 12 cleanup >2021-05-15 23:04:23.196 31 INFO cinder.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Service 12 cleanup completed. >2021-05-15 23:04:23.315 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Initializing RPC dependent components of volume driver RBDDriver (1.2.0) >2021-05-15 23:05:10.381 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Driver post RPC initialization completed successfully. >And now the service is reported as "up" in "openstack volume service list" and we can successfully create Ceph volumes now. Many will do more validation tests today to confirm. >So it looks like the "cinder-volume" service didn't start up properly in the first place and that's why the service was "down”. Kind regards. > On 18 May 2021, at 17:49, Gorka Eguileor wrote: > > On 12/05, ManuParra wrote: >> Hi, we have faced some problems when creating volumes to add to VMs, to see what was happening I activated the Debug=True mode of Cinder in the cinder.conf file. >> I see that when I try to create a new volume I get the following in the log: >> >> "DEBUG cinder.volume.drivers.rbd connecting to (conf=/etc/ceph/ceph.conf, timeout=-1) _do_conn /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431” >> >> I’m using OpenStack Train and Ceph Octopus. >> When I check with openstack volume service list >> >> +------------------+----------------------+------+---------+-------+----------------------------+ >> | Binary | Host | Zone | Status | State | Updated At | >> +------------------+----------------------+------+---------+-------+----------------------------+ >> | cinder-scheduler | spsrc-controller-1 | nova | enabled | up | 2021-05-11T10:06:39.000000 | >> | cinder-scheduler | spsrc-controller-2 | nova | enabled | up | 2021-05-11T10:06:47.000000 | >> | cinder-scheduler | spsrc-controller-3 | nova | enabled | up | 2021-05-11T10:06:39.000000 | >> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-11T10:48:42.000000 | >> | cinder-backup | spsrc-mon-2 | nova | enabled | up | 2021-05-11T10:06:47.000000 | >> | cinder-backup | spsrc-mon-1 | nova | enabled | up | 2021-05-11T10:06:44.000000 | >> | cinder-backup | spsrc-mon-3 | nova | enabled | up | 2021-05-11T10:06:47.000000 | >> +------------------+----------------------+------+---------+-------+——————————————+ >> >> So cinder-volume is Down, >> >> I compare "cinder-backup" Ceph config with "cinder-volume", and they are equal! so why only one of them works? >> diff /etc/kolla/cinder-backup/ceph.conf /etc/kolla/cinder-volume/ceph.conf >> >> I go inside the "cinder_volume" container >> docker exec -it cinder_volume /bin/bash >> >> Try listing cinder volumes, works! >> rbd -p cinder.volumes --id cinder -k /etc/ceph/ceph.client.cinder.keyring ls >> >> Any Ideas. Kind regards. 
> > Hi, > > Cinder volume could be down because the stats polling is taking too > long. > > If that's the case, then you can set: > > rbd_exclusive_cinder_pool = true > > in your driver's section in cinder.conf to fix it. > > Cheers, > Gorka. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Tue May 18 16:03:14 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 18 May 2021 12:03:14 -0400 Subject: [cinder] festival of XS reviews 21 May 2021 In-Reply-To: References: Message-ID: <1a40b655-5ea6-3e43-1cda-87c8cebeb3e2@gmail.com> I'm living in the past. Correct date is Friday 21 May. Sorry for the confusion. Same time (1400 UTC) and place: https://meetpad.opendev.org/cinder-festival-of-reviews On 5/18/21 11:47 AM, Brian Rosmaita wrote: > Hello Cinder community members, > > This is a reminder that the most recent edition of the Cinder Festival > of XS Reviews will be held at the end of this week on Friday 21 April. > > who: Everyone! > what: The Cinder Festival of XS Reviews > when: Friday 21 April 2021 from 1400-1600 UTC > where: https://meetpad.opendev.org/cinder-festival-of-reviews > > This recurring meeting can be placed on your calendar by using this > handy ICS file: >   http://eavesdrop.openstack.org/calendars/cinder-festival-of-reviews.ics > > > See you there! > brian From bekir.fajkovic at citynetwork.eu Mon May 17 08:46:20 2021 From: bekir.fajkovic at citynetwork.eu (Bekir Fajkovic) Date: Mon, 17 May 2021 10:46:20 +0200 Subject: Scheduling backups in Trove In-Reply-To: References: <2466322c572e931fd52e767684ee81e2@citynetwork.eu> Message-ID: <424f17d2a9ba4a18cb26796171e49010@citynetwork.eu> Hello! Thanks for the answer, however i am not sure that i exactly understand the approach of scheduling the backups by leveraging container running inside trove guest instance. I will try to figure out. Anyway, would this approach have a global impact, meaning only one type of schedule(s) will be applicable for all the tenants or is there still some kind of flexibility for each tenant to apply own schedules in this way? I would be very thankful for some more details,  if possible :) We have now deployed Trove in one of our regions and are going to let certain customers test the functionality and features. We currently deployed mysql, mariadb and postgresql datastores with latest production-ready datastore versions for mysql and mariadb (5.7.34 and 10.4.18) and for postgresql it is version 12.4.  As You might understand, we have many questions unanswered, and if there is anyone capable and willing to answer some of them we would be very thankful: - What is next in pipe in terms of production-ready datastore types and datastore versions? - Clustering - according to the official documentation pages it is still experimental feature. When can we expect this to be supported and for what datastore types? - According to some info we received earlier, PostgreSQL 12.4 is only partially supported - what part of functionality is not fully supported here - replication or something else? - Creation of users and databases through OpenStack/Trove API is only supported with mysql datastore type. When can we expect the same level of functionality for at least the other two datastore types? - MongoDB in particular, when can this datastore type be expected to be supported for deployment? 
- In the case of database instance failure (for example failure due to the failure of the Compute node hosting the instance), is there any built-in mechanisms in Trove trying to   automatically bring up and recover the instance, that i am not aware of? I am so sorry for this question bombarding but i simply have to ask :) Thanks in advance! Best regards Bekir Fajkovic Senior DBA Mobile: +46 70 019 48 47 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED ----- Original Message ----- From: Lingxian Kong (anlin.kong at gmail.com) Date: 04/27/21 00:30 To: Bekir Fajkovic (bekir.fajkovic at citynetwork.eu) Cc: openstack-discuss (openstack-discuss at lists.openstack.org) Subject: Re: Scheduling backups in Trove Hi Bekir, You can definitely create Mistral workflow to periodically trigger Trove backup if Mistral supports Trove action and you have already deployed Mistral in your cloud. Otherwise, another option is to implement schedule backups in Trove itself (by leveraging container running inside trove guest instance). --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) On Sat, Apr 24, 2021 at 3:58 AM Bekir Fajkovic wrote: Hello! A question regarding the best practices when it comes to scheduling backups: Is there any built-in mechanism implemented today in the service or do the customer or cloud service provider have to schedule the backup themselves? I see some proposals about implementing backup schedules through Mistral workflows: https://specs.openstack.org/openstack/trove-specs/specs/newton/scheduled-backup.html But i am not sure about the status of that. Best Regards Bekir Fajkovic Senior DBA Mobile: +46 70 019 48 47 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED ----- Original Message ----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From yangkaimoda at qq.com Tue May 18 10:31:46 2021 From: yangkaimoda at qq.com (=?ISO-8859-1?B?eWFuZ2thaQ==?=) Date: Tue, 18 May 2021 18:31:46 +0800 Subject: [neutron]l3 vrotuer use dpdk ports making Forwarding performance too low Message-ID: Hi,     I have a problem, when l3 vrouter use dpdk ports,  the performance is too low, eg: the bandwidth of vm's east-west. This is a aware issue? Have some solution if need l3 router and vm on one dpdk host? Best Regards, Yangkai -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Tue May 18 18:44:39 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Tue, 18 May 2021 18:44:39 +0000 Subject: [interop] patches to review Message-ID: Team, Can use a few reviews on these: https://review.opendev.org/c/osf/interop/+/784622 https://review.opendev.org/c/osf/interop/+/789399 https://review.opendev.org/c/osf/interop/+/787646 Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jpodivin at redhat.com Wed May 19 08:06:43 2021 From: jpodivin at redhat.com (Jiri Podivin) Date: Wed, 19 May 2021 10:06:43 +0200 Subject: [TripleO] Opting out of global-requirements.txt In-Reply-To: References: <124843068.26462828.1621006405455.JavaMail.zimbra@redhat.com> Message-ID: Hi, I didn't really want to get into this, but we (VF squad) have recently encountered an issue that might provide some context to this discussion. Due to omission on my part, we have been building pdf documentation for validations-common without querying openstack/requirements upper-constrainst. This meant that we were always pulling the newest version of packages, and as the job was triggered only rarely, we didn't see the problem until promotes began to fail. A new version of one of our dependencies (I believe it was sphinx) introduced, by proxy, dependency on the "tgtermes.sty" style file. Lack of upper constraints made the issue harder to diagnose, as the version of packages to installed changed, depending on their relative dependencies and continuous updates. With several dozen packages, there is considerable chance of one getting a version bump every week or so. Which brings me to my point. The openstack/requirements does provide one rather essential service for us. In the form of upper-constraints for our pip builds. While we are mostly installing software through rpm, many CI jobs use pip in some fashion. Without upper constraints, pip pulls aggressively the newest version available and compatible with other packages. Which causes several issues, noted by even pip people. There is also a question of security. There is a possibility of a bad actor introducing a package with an extremely high version number. Such a package would get precedence over the legitimate releases. In fact, just this sort of attack was spotted in the wild.[1] Now, nothing is preventing us from using upper requirements, without being in the openstack/requirements projects. On the other hand, if we remove ourselves from the covenant, nothing is stopping the openstack/requirements people from changing versions of the accepted packages without considering the impact it could have on our projects. I believe this is something we need to consider. [1] (https://medium.com/@alex.birsan/dependency-confusion-4a5d60fec610) On Mon, May 17, 2021 at 8:55 AM Rabi Mishra wrote: > > > On Fri, May 14, 2021 at 9:07 PM Javier Pena wrote: > >> >> On Fri, May 14, 2021 at 6:41 AM Marios Andreou wrote: >> >>> On Thu, May 13, 2021 at 9:41 PM James Slagle >>> wrote: >>> > >>> > I'd like to propose that TripleO opt out of dependency management by >>> removing tripleo-common from global-requirements.txt. I do not feel that >>> the global dependency management brings any advantages or anything needed >>> for TripleO. I can't think of any reason to enforce the ability to be >>> globally pip installable with the rest of OpenStack. >>> >>> To add a bit more context as to how this discussion came about, we >>> tried to remove tripleo-common from global-requirements and the >>> check-requirements jobs in tripleoclient caused [1] as a result. >>> >>> So we need to decide whether to continue to be part of that >>> requirements contract [2] or if we will do what is proposed here and >>> remove ourselves altogether. >>> If we decide _not_ to implement this proposal then we will also have >>> to add the requirements-check jobs in tripleo-ansible [3] and >>> tripleo-validations [4] as they are currently missing. 
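(An aside on the mechanism Jiri describes above, for readers who have not used it: the upper-constraints file is applied as a pip constraints file, usually via tox. A rough sketch using the generic master constraints URL; the environment name and requirements file paths are assumptions and will differ per project:

    pip install -c https://releases.openstack.org/constraints/upper/master -r doc/requirements.txt

or, in tox.ini:

    [testenv:pdf-docs]
    deps =
      -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
      -r{toxinidir}/doc/requirements.txt

With the constraints file in place, pip resolves each dependency to the version tested by openstack/requirements instead of the newest release it can find.)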
>>> >>> > >>> > Two of our most critical projects, tripleoclient and tripleo-common do >>> not even put many of their data files in the right place where our code >>> expects them when they are pip installed. So, I feel fairly confident that >>> no one is pip installing TripleO and relying on global requirements >>> enforcement. >>> >>> I don't think this is _just_ about pip installs. It is generally about >>> the contents of each project requirements.txt. As part of the >>> requirements contract, it means that those repos with which we are >>> participating (the ones in projects.txt [5]) are protected against >>> other projects making any breaking changes in _their_ >>> requirements.txt. Don't the contents of requirements.txt also end up >>> in the .spec file from which we are building rpm e.g. [6] for tht? In >>> which case if we remove this and just stop catching any breaking >>> changes in the check/gate check-requirements jobs, I suspect we will >>> just move the problem to the rpm build and it will fail there. >>> >> >> I don't see that in the spec file. Unless there is some other automation >> somewhere that regenerates all of the BuildRequires/Requires and modifies >> them to match requirements.txt/test-requirements.txt? >> >> We run periodic syncs once every cycle, and try our best to make spec >> requirements match requirements.txt/test-requirements.txt for the project. >> See >> https://review.rdoproject.org/r/c/openstack/tripleoclient-distgit/+/33367 >> for a recent example on tripleoclient. >> >> I don't have a special opinion on keeping tripleo-common inside >> global-requirements.txt or not. However, all TripleO projects still need to >> be co-installable with other OpenStack projects, otherwise we will not be >> able to build packages for them due to all the dependency issues that could >> arise. >> > > I think this is probably less of an issue with containerized services(?). > At present, there are only two service containers that install > tripleo-common (mistral, nova-scheduler). With mistral deprecated and nova > removed from the undercloud (tripleo-common has a tripleo specific > scheduler filter used by undercloud nova), we probably won't have many > issues. > > However, as we sync requirements from project requirements to package > specs regularly, there could be issues with broken requirements. > > > > >> I'm not sure if that was implied in the original post. >> >> Regards, >> Javier >> >> >> >> > >>> > One potential advantage of not being in global-requirements.txt is >>> that our unit tests and functional tests could actually test the same code. >>> As things stand today, our unit tests in projects that depend on >>> tripleo-common are pinned to the version in global-requirements.txt, while >>> our functional tests currently run with tripleo-common from master (or >>> included depends-on). >>> >>> I don't think one in particular is a very valid point though - as >>> things currently stand in global-requirements we aren't 'pinning' all >>> we have there is "tripleo-common!=11.3.0 # Apache-2.0" [7] to avoid >>> (I assume) some bad release we made. >>> >> >> tripleo-common is pinned to the latest release when it's pip installed in >> the venv, instead of using latest git (and including depends-on). You're >> right that it's probably what we want to keep doing, and this is probably >> not related to opting out of g-r. Especially since we don't want to require >> latest git of a dependency when running unit tests locally. 
However it is >> worth noting that our unit tests (tox) and functional tests (tripleo-ci) >> use different code for the dependencies. That was not obvious to me and >> others on the surface. Perhaps we could add additional tox jobs that do >> require the latest tripleo-common from git to also cover that scenario. >> >> Here's an example: >> https://review.opendev.org/c/openstack/python-tripleoclient/+/787907 >> https://review.opendev.org/c/openstack/tripleo-common/+/787906 >> >> The tripleo-common patch fails unit tests as expected, the tripleoclient >> which depends-on the tripleo-common patch passes unit tests, but fails >> functional. I'd rather see that failure caught by a unit test as well. >> >> >> -- >> -- James Slagle >> -- >> >> >> > > -- > Regards, > Rabi Mishra > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed May 19 08:48:11 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 19 May 2021 10:48:11 +0200 Subject: [nova][train] live migration issue Message-ID: Hello Guys, on train centos7 I am facing live migration issue only for some instances (not all). The error reported is: 2021-05-19 08:45:57.096 142537 ERROR nova.compute.manager [-] [instance: b18450e8-b3db-4886-a737-c161d99c6a46] Live migration failed.: libvirtError: Unable to read from monitor: Connection reset by peer The instance remains in pause on both source and destination host. Any help,please ? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed May 19 08:55:39 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 19 May 2021 10:55:39 +0200 Subject: [nova][train] live migration issue In-Reply-To: References: Message-ID: I am sorry, the openstack version is stein Il giorno mer 19 mag 2021 alle ore 10:48 Ignazio Cassano < ignaziocassano at gmail.com> ha scritto: > Hello Guys, > on train centos7 I am facing live migration issue only for some instances > (not all). > The error reported is: > 2021-05-19 08:45:57.096 142537 ERROR nova.compute.manager [-] [instance: > b18450e8-b3db-4886-a737-c161d99c6a46] Live migration failed.: libvirtError: > Unable to read from monitor: Connection reset by peer > > The instance remains in pause on both source and destination host. > > Any help,please ? > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From syedammad83 at gmail.com Wed May 19 09:52:38 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Wed, 19 May 2021 14:52:38 +0500 Subject: [wallaby][trove] Instance Volume Resize Message-ID: Hi, I am using wallaby / trove on ubuntu 20.04. I am trying to extend volume of database instance. Its having trouble that instance cannot exceed volume size of 10GB. My flavor has 2vcpus 4GB RAM and 10GB disk. I created a database instance with 5GB database size and mysql datastore. The deployment has created 10GB root and 5GB /var/lib/mysql. I have tried to extend volume to 11GB, it failed with error that "Volume 'size' cannot exceed maximum of 10 GB, 11 cannot be accepted". I want to keep root disk size to 10GB and only want to extend /var/lib/mysql keeping the same flavor. Is it possible or should I need to upgrade flavor as well ? -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From artem.goncharov at gmail.com Wed May 19 10:37:01 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Wed, 19 May 2021 12:37:01 +0200 Subject: [sdk] R1.0 preparation work Message-ID: <2FD4717F-E42B-4C5D-8AC8-67B8D26FA4DB@gmail.com> Hey all, As we were discussing during the PTG preparation work for the openstacksdk R1.0 has been started. There is now feature branch “feature/r1” and a set of patches already in place ([1]). (While https://review.opendev.org/c/openstack/devstack/+/791541 is not merged there are no functional tests running, but that is not blocking from doing the main work) Things to be done: - get rid of all direct REST calls in the cloud layer. Instead reuse corresponding proxies [must] - generalise tag, metadata, quota(set), limits [must] - clean the code from some deprecated things and py2 remaining (if any) [must] - review resource caching to be implemented on the resource layer [optional] - introduction of read-only resource properties [optional] - restructure documentation to make it more used friendly [optional] Planned R1 interface changes Every cloud layer method (Connection.connection.***, and not i.e. Connection.connection.compute.***) will consistently return Resource objects. At the moment there is a mix of Munch and Resource types depending on the case. Resource class itself fulfil dict, Munch and attribute interface, so there should be no breaking changes for getting info out of it. The only known limitation is that compared to bare dict/Munch it might be not allowed to modify some properties directly on the object. Ansible collection modules [2] would be modified to explicitly convert return of sdk into dict (Resource.to_dict()) to further operate on it. This means that in some cases older Ansible modules (2.7-2.9) will not properly work with newer SDK. Zuul jobs and everybody stuck without possibility to use newer Ansible collection [2] are potential victims here (depends on the real usage pattern). Sadly there is no way around it, since historically ansible modules operate whatever SDK returns and Ansible sometimes decides to alter objects (i.e. no_log case) what we might want to forbid (read-only or virtual properties). Sorry about that. In some rare cases attribute naming (whatever returned by the cloud layer) might be affected (i.e. is_bootable vs bootable). We are going to strictly bring all names to the convention. Due to the big amount of changes touching lot of files here I propose to stop adding new features into the master branch directly and instead put them into feature/r1 branch. I want to keep master during this time more like a stable branch for bugfixes and other important things. Once r1 branch feel ready we will merge it into master and most likely release something like RC to be able to continue integration work with Ansible. Everybody interested in the future of sdk is welcome in doing reviews and code [1] to know what comes and to speed up the work. Any concerns? Suggestions? Regards, Artem [1] https://review.opendev.org/q/project:openstack%252Fopenstacksdk+branch:feature%252Fr1 [2] https://opendev.org/openstack/ansible-collections-openstack -------------- next part -------------- An HTML attachment was scrubbed... 
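(To illustrate the interface change described above, here is a minimal sketch of user code under the planned R1 behaviour; the cloud and server names are placeholders, and this is an illustration of the intent rather than a finished API reference:

    import openstack

    conn = openstack.connect(cloud='mycloud')

    # cloud layer call: returns a Resource object instead of a Munch in R1
    server = conn.get_server('my-server')

    print(server.id)           # attribute access keeps working
    print(server['status'])    # dict/Munch-style access keeps working

    # plain dict, which is what the Ansible modules would explicitly convert to
    data = server.to_dict()

The main difference users may notice is that directly mutating some properties on the returned object may no longer be allowed, as noted above.)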
URL: From geguileo at redhat.com Wed May 19 10:55:45 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 19 May 2021 12:55:45 +0200 Subject: cinder.volume.drivers.rbd connecting In-Reply-To: References: <20210518154925.jvrnja6geg7kg3mv@localhost> Message-ID: <20210519105545.d3yrf3hlxkvbooji@localhost> On 18/05, ManuParra wrote: > Hi Gorka, > Thank you very much for your help, we checked that option but before testing it we worked on the following (there is another message in the list where we have discussed it and you point it out): > This was the solution proposed by my colleague Sebastian: > Hi, That's good to know. Maybe we need to explore changing the default Cinder settings. Cheers, Gorka. > > We restarted the "cinder-volume" and "cinder-scheduler" services with "debug=True", got back the same debug message: > > >2021-05-15 23:15:27.091 31 DEBUG cinder.volume.drivers.rbd [req-f43e30ae-2bdc-4690-9c1b-3e58081fdc9e - - - - -] connecting to cinder at ceph (conf=/etc/ceph/ceph.conf, timeout=-1). _do_conn /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431 > > >Then, I had a look at the docs looking for "timeout" configuration options: > > >https://docs.openstack.org/cinder/train/configuration/block-storage/drivers/ceph-rbd-volume-driver.html#driver-options > > >"rados_connect_timeout = -1; (Integer) Timeout value (in seconds) used when connecting to ceph cluster. If value < 0, no timeout is set and default librados value is used." > > >I added it to the "cinder.conf" file for the "cinder-volume" service with: "rados_connect_timeout=15". > > >Before this change the "cinder-volume" logs ended with this message: > > >2021-05-15 23:02:48.821 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Starting volume driver RBDDriver (1.2.0) > > >After the change: > > >2021-05-15 23:02:48.821 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Starting volume driver RBDDriver (1.2.0) > >2021-05-15 23:04:23.180 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Driver initialization completed successfully. > >2021-05-15 23:04:23.190 31 INFO cinder.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Initiating service 12 cleanup > >2021-05-15 23:04:23.196 31 INFO cinder.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Service 12 cleanup completed. > >2021-05-15 23:04:23.315 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Initializing RPC dependent components of volume driver RBDDriver (1.2.0) > >2021-05-15 23:05:10.381 31 INFO cinder.volume.manager [req-6e8f9f46-ee34-4925-9fc8-dea8729d0d93 - - - - -] Driver post RPC initialization completed successfully. > > >And now the service is reported as "up" in "openstack volume service list" and we can successfully create Ceph volumes now. Many will do more validation tests today to confirm. > > >So it looks like the "cinder-volume" service didn't start up properly in the first place and that's why the service was "down”. > > Kind regards. > > > > On 18 May 2021, at 17:49, Gorka Eguileor wrote: > > > > On 12/05, ManuParra wrote: > >> Hi, we have faced some problems when creating volumes to add to VMs, to see what was happening I activated the Debug=True mode of Cinder in the cinder.conf file. 
> >> I see that when I try to create a new volume I get the following in the log: > >> > >> "DEBUG cinder.volume.drivers.rbd connecting to (conf=/etc/ceph/ceph.conf, timeout=-1) _do_conn /usr/lib/python3.6/site-packages/cinder/volume/drivers/rbd.py:431” > >> > >> I’m using OpenStack Train and Ceph Octopus. > >> When I check with openstack volume service list > >> > >> +------------------+----------------------+------+---------+-------+----------------------------+ > >> | Binary | Host | Zone | Status | State | Updated At | > >> +------------------+----------------------+------+---------+-------+----------------------------+ > >> | cinder-scheduler | spsrc-controller-1 | nova | enabled | up | 2021-05-11T10:06:39.000000 | > >> | cinder-scheduler | spsrc-controller-2 | nova | enabled | up | 2021-05-11T10:06:47.000000 | > >> | cinder-scheduler | spsrc-controller-3 | nova | enabled | up | 2021-05-11T10:06:39.000000 | > >> | cinder-volume | rbd:volumes at ceph-rbd | nova | enabled | down | 2021-04-11T10:48:42.000000 | > >> | cinder-backup | spsrc-mon-2 | nova | enabled | up | 2021-05-11T10:06:47.000000 | > >> | cinder-backup | spsrc-mon-1 | nova | enabled | up | 2021-05-11T10:06:44.000000 | > >> | cinder-backup | spsrc-mon-3 | nova | enabled | up | 2021-05-11T10:06:47.000000 | > >> +------------------+----------------------+------+---------+-------+——————————————+ > >> > >> So cinder-volume is Down, > >> > >> I compare "cinder-backup" Ceph config with "cinder-volume", and they are equal! so why only one of them works? > >> diff /etc/kolla/cinder-backup/ceph.conf /etc/kolla/cinder-volume/ceph.conf > >> > >> I go inside the "cinder_volume" container > >> docker exec -it cinder_volume /bin/bash > >> > >> Try listing cinder volumes, works! > >> rbd -p cinder.volumes --id cinder -k /etc/ceph/ceph.client.cinder.keyring ls > >> > >> Any Ideas. Kind regards. > > > > Hi, > > > > Cinder volume could be down because the stats polling is taking too > > long. > > > > If that's the case, then you can set: > > > > rbd_exclusive_cinder_pool = true > > > > in your driver's section in cinder.conf to fix it. > > > > Cheers, > > Gorka. > > > From thierry at openstack.org Wed May 19 12:30:18 2021 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 19 May 2021 14:30:18 +0200 Subject: [largescale-sig] OpenInfra.Live May 20: "Upgrades" Message-ID: <984799aa-fd59-c3d9-4b0b-adbaea328f81@openstack.org> Hi everyone, For the last few months, the Large Scale SIG has been organizing video meetings around specific scaling topics, with operators of large scale deployments of OpenStack. We've decided to join forces with OpenInfra.Live and do future such meetings as OpenInfra.Live episodes. Tomorrow at 9am CT / 14:00 UTC we'll have the following live discussion: "Upgrades in Large Scale OpenStack infrastructure" including presentations from Belmiro Moreira (CERN), Arnaud Morin (OVHCloud), Mohammed Naser (Vexxhost), Imtiaz Chowdhury (Workday), Chris Morgan (Bloomberg) and Joshua Slater (Blizzard Entertainment). Come learn how those operators do it! 
The live audience will be able to ask questions to that awesome lineup of speakers, so don't miss the live show: https://www.youtube.com/watch?v=yf5iFiCg_Tw See you all tomorrow :) -- Thierry Carrez From erin at openstack.org Wed May 19 12:45:56 2021 From: erin at openstack.org (Erin Disney) Date: Wed, 19 May 2021 07:45:56 -0500 Subject: OpenInfra Live - May 20 at 9am CT (1400 UTC) Message-ID: Hi everyone, This week’s OpenInfra Live episode is brought to you by the OpenStack community! Keeping up with new OpenStack releases can be a challenge. At a very large scale, it can be daunting. In this episode of OpenInfra.Live, operators from some of the largest OpenStack deployments at Blizzard Entertainment, OVH, Bloomberg, Workday, Vexxhost and CERN will explain their upgrades methodology, share their experience, and answer the questions of our live audience. Come hear from the best how they do it! Episode: Upgrades in Large Scale OpenStack Infrastructure Date and time: May 20 at 9am CT (1400 UTC) You can watch us live on: YouTube: https://www.youtube.com/watch?v=yf5iFiCg_Tw LinkedIn: https://www.linkedin.com/feed/update/urn:li:ugcPost:6798703707925168128/ Facebook: https://www.facebook.com/104139126308032/posts/3974109482644291/ WeChat: recording will be posted on OpenStack WeChat after the live stream Thanks, Erin Erin Disney Sr. Event Marketing Manager Open Infrastructure Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekuvaja at redhat.com Wed May 19 13:19:02 2021 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Wed, 19 May 2021 14:19:02 +0100 Subject: Freenode and libera.chat Message-ID: Hi all, For those of you who have not woken up to this sad day yet. Andrew Lee has taken his stance as owner of freenode ltd. and by the (one sided) story of the former volunteer staff members basically forced the whole community out. As there is history of LTM shutting down networks before (snoonet), it is appropriate to expect that the intentions here are not aligned with the communities and specially the users who's data he has access to via this administrative takeover. I think it's our time to take swift action and show our support to all the hard working volunteers who were behind freenode and move all our activities to irc.libera.chat. Please see https://twitter.com/freenodestaff and Christian's letter which links to the others as well https://fuchsnet.ch/freenode-resign-letter.txt Best, Erno 'jokke' Kuvaja -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed May 19 13:30:04 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 19 May 2021 15:30:04 +0200 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On Wed, May 19, 2021 at 3:21 PM Erno Kuvaja wrote: > Hi all, > > For those of you who have not woken up to this sad day yet. Andrew Lee has > taken his stance as owner of freenode ltd. and by the (one sided) story of > the former volunteer staff members basically forced the whole community out. > > As there is history of LTM shutting down networks before (snoonet), it is > appropriate to expect that the intentions here are not aligned with the > communities and specially the users who's data he has access to via this > administrative takeover. > > I think it's our time to take swift action and show our support to all the > hard working volunteers who were behind freenode and move all our > activities to irc.libera.chat. 
> Probably not the best timing, but should we consider (again) running a more advanced free software chat system? E.g. Outreachy uses Zulip, Mozilla - Matrix, there are probably more. Dmitry > > Please see https://twitter.com/freenodestaff and Christian's letter which > links to the others as well https://fuchsnet.ch/freenode-resign-letter.txt > > Best, > Erno 'jokke' Kuvaja > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed May 19 13:40:53 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 May 2021 13:40:53 +0000 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: <20210519134053.bmnahleiyufozy5x@yuggoth.org> On 2021-05-19 15:30:04 +0200 (+0200), Dmitry Tantsur wrote: [...] > Probably not the best timing, but should we consider (again) > running a more advanced free software chat system? E.g. Outreachy > uses Zulip, Mozilla - Matrix, there are probably more. [...] Feel free to try running anything you like, but discussions will generally take place wherever they take place. Adding a new communication platform doesn't mean everyone will switch to it. After all, there are already OpenStack discussion spaces on Discord, Slack, Reddit... but IRC as a protocol has survived and outlived its earlier "killed replacements" by decades, so I expect it will persist regardless of what else comes along. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From artem.goncharov at gmail.com Wed May 19 13:41:32 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Wed, 19 May 2021 15:41:32 +0200 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On Wed, May 19, 2021, 15:35 Dmitry Tantsur wrote: > > > On Wed, May 19, 2021 at 3:21 PM Erno Kuvaja wrote: > >> Hi all, >> >> For those of you who have not woken up to this sad day yet. Andrew Lee >> has taken his stance as owner of freenode ltd. and by the (one sided) story >> of the former volunteer staff members basically forced the whole community >> out. >> >> As there is history of LTM shutting down networks before (snoonet), it is >> appropriate to expect that the intentions here are not aligned with the >> communities and specially the users who's data he has access to via this >> administrative takeover. >> >> I think it's our time to take swift action and show our support to all >> the hard working volunteers who were behind freenode and move all our >> activities to irc.libera.chat. >> > > Probably not the best timing, but should we consider (again) running a > more advanced free software chat system? E.g. Outreachy uses Zulip, Mozilla > - Matrix, there are probably more. > +1, IRC is great, but it's technologically too outdated. 
> > Dmitry > > >> >> Please see https://twitter.com/freenodestaff and Christian's letter >> which links to the others as well >> https://fuchsnet.ch/freenode-resign-letter.txt >> >> Best, >> Erno 'jokke' Kuvaja >> > > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekuvaja at redhat.com Wed May 19 13:42:02 2021 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Wed, 19 May 2021 14:42:02 +0100 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On Wed, May 19, 2021 at 2:34 PM Dmitry Tantsur wrote: > > > On Wed, May 19, 2021 at 3:21 PM Erno Kuvaja wrote: > >> Hi all, >> >> For those of you who have not woken up to this sad day yet. Andrew Lee >> has taken his stance as owner of freenode ltd. and by the (one sided) story >> of the former volunteer staff members basically forced the whole community >> out. >> >> As there is history of LTM shutting down networks before (snoonet), it is >> appropriate to expect that the intentions here are not aligned with the >> communities and specially the users who's data he has access to via this >> administrative takeover. >> >> I think it's our time to take swift action and show our support to all >> the hard working volunteers who were behind freenode and move all our >> activities to irc.libera.chat. >> > > Probably not the best timing, but should we consider (again) running a > more advanced free software chat system? E.g. Outreachy uses Zulip, Mozilla > - Matrix, there are probably more. > > Yeah I'd say not the greatest timing to start such a conversation under pressure. FWIF I'd rather not. The signal to noise ratio tends to be very poor on all of them with their stickers, gifs and all the active media crap embedded in them. - jokke > Dmitry > > >> >> Please see https://twitter.com/freenodestaff and Christian's letter >> which links to the others as well >> https://fuchsnet.ch/freenode-resign-letter.txt >> >> Best, >> Erno 'jokke' Kuvaja >> > > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed May 19 13:49:33 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 May 2021 13:49:33 +0000 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> On 2021-05-19 14:19:02 +0100 (+0100), Erno Kuvaja wrote: [...] > I think it's our time to take swift action and show our support to > all the hard working volunteers who were behind freenode and move > all our activities to irc.libera.chat. [...] In past years when the stability of Freenode's service came into question, we've asserted that OFTC would probably have been a better home for our channels from the beginning (as they're more aligned with our community philosophies), but we ended up on Freenode mostly due to the Ubuntu community's presence there. We'd previously been unable to justify the impact to users of switching networks, but there seemed to be consensus that if Freenode shut down we'd move to OFTC. 
The earliest concrete proposal I can find for this was made in March 2014, but it's come up multiple times in the years since: http://lists.openstack.org/pipermail/openstack-dev/2014-March/028783.html Honestly I'd be concerned about moving to a newly-established IRC network, and would much prefer the stability of a known and established one. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From senrique at redhat.com Wed May 19 14:05:33 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 19 May 2021 11:05:33 -0300 Subject: [cinder] Bug deputy report for week of 2021-05-19 Message-ID: Hello, This is a bug report from 2021-05-12 to 2021-05-19. You're welcome to join the next Cinder Bug Meeting later today. Weekly on Wednesday at 1500 UTC on #openstack-cinder Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Critical:- High:- Medium: - https://bugs.launchpad.net/cinder/+bug/1928232 "Infinidat volume driver shouldn't use mock in production code". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1928400 "scheduler _create_snapshot filter_function fails when filtering on user_id or project_id". Unassigned. - https://bugs.launchpad.net/os-brick/+bug/1928944 "NVMe-oF connector returning the wrong nqn". Assigned to Gorka Eguileor. Low: - https://bugs.launchpad.net/cinder/+bug/1928929 "When using Brocade-FC-Zone-Driver with zoning_policy as initiator, attaching 2nd volume to a VM fails". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1928649 "Storwize/SVC driver report fast-formatting error when trying to edit the volume immediately after volume creation". Unassigned. Whishlist: - https://bugs.launchpad.net/cinder/+bug/1928431 "Add documentation for image-volume cache". Unassigned. Cheers, Sofi -- L. Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Wed May 19 14:12:50 2021 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 19 May 2021 16:12:50 +0200 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On Wed, May 19, 2021 at 02:42:02PM +0100, Erno Kuvaja wrote: > On Wed, May 19, 2021 at 2:34 PM Dmitry Tantsur wrote: [...] > > Probably not the best timing, but should we consider (again) running a > > more advanced free software chat system? E.g. Outreachy uses Zulip, Mozilla > > - Matrix, there are probably more. > > > > Yeah I'd say not the greatest timing to start such a conversation under > pressure. > > FWIF I'd rather not. The signal to noise ratio tends to be very poor on all > of them with their stickers, gifs and all the active media crap embedded in > them. I agree that the dancing poop GIFs are very distracting. So "yes" to sticking with IRC given its strengths and simplicity. Though, I'm biased here as a long-time happy IRC user. That said, two things: - Those who _do_ want to use Matrix seem to have an option in terms of Matrix <-> IRC bridge. I see several people with [m] in their IRC nicks on #virt channel on OFTC network. So people seem to successfully use IRC with Matrix. - IRC seems to give an outdated feel for many newcomers. 
So if there are other "better" FOSS alternatives that accomodates our community's needs — and without alienating existing IRC users' needs — we should be willing to explore. But I agree that this is not the right timing to start this discussion, and I'm not even sure if this is a "real problem". [...] -- /kashyap From dms at danplanet.com Wed May 19 14:22:51 2021 From: dms at danplanet.com (Dan Smith) Date: Wed, 19 May 2021 07:22:51 -0700 Subject: Freenode and libera.chat In-Reply-To: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> (Jeremy Stanley's message of "Wed, 19 May 2021 13:49:33 +0000") References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> Message-ID: > Honestly I'd be concerned about moving to a newly-established IRC > network, and would much prefer the stability of a known and > established one. Agree. It seems to me that *if* we need to go anywhere, OFTC is the obvious place. It'll be a "take one step to the left" move for most people, and avoids the need to discuss "which" and "where" and "is there an open client for that" and "is there a *good* client for that for my platform", etc. --Dan From jpodivin at redhat.com Wed May 19 14:34:50 2021 From: jpodivin at redhat.com (Jiri Podivin) Date: Wed, 19 May 2021 16:34:50 +0200 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: +1 on sticking with the IRC. It's simple, established, and a lot of us have tuned our workflow around it. As a person who was using discord and slack for some time in a professional capacity, I can only say that it's hadly an improvement over IRC. And in more general terms. When it comes to change, I don't think "it's newer" is much of an argument by itself. On Wed, May 19, 2021 at 4:17 PM Kashyap Chamarthy wrote: > On Wed, May 19, 2021 at 02:42:02PM +0100, Erno Kuvaja wrote: > > On Wed, May 19, 2021 at 2:34 PM Dmitry Tantsur > wrote: > > [...] > > > > Probably not the best timing, but should we consider (again) running a > > > more advanced free software chat system? E.g. Outreachy uses Zulip, > Mozilla > > > - Matrix, there are probably more. > > > > > > Yeah I'd say not the greatest timing to start such a conversation under > > pressure. > > > > FWIF I'd rather not. The signal to noise ratio tends to be very poor on > all > > of them with their stickers, gifs and all the active media crap embedded > in > > them. > > I agree that the dancing poop GIFs are very distracting. So "yes" to > sticking with IRC given its strengths and simplicity. Though, I'm > biased here as a long-time happy IRC user. > > That said, two things: > > - Those who _do_ want to use Matrix seem to have an option in terms of > Matrix <-> IRC bridge. I see several people with [m] in their IRC > nicks on #virt channel on OFTC network. So people seem to > successfully use IRC with Matrix. > > - IRC seems to give an outdated feel for many newcomers. So if there > are other "better" FOSS alternatives that accomodates our > community's needs — and without alienating existing IRC users' needs > — we should be willing to explore. But I agree that this is not the > right timing to start this discussion, and I'm not even sure if > this is a "real problem". > > [...] > > -- > /kashyap > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kchamart at redhat.com Wed May 19 14:35:02 2021 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 19 May 2021 16:35:02 +0200 Subject: [nova][train] live migration issue In-Reply-To: References: Message-ID: (Hi, we've talked on #openstack-nova; updating on list too.) On Wed, May 19, 2021 at 10:48:11AM +0200, Ignazio Cassano wrote: > Hello Guys, > on train centos7 I am facing live migration issue only for some instances > (not all). > The error reported is: > 2021-05-19 08:45:57.096 142537 ERROR nova.compute.manager [-] [instance: > b18450e8-b3db-4886-a737-c161d99c6a46] Live migration failed.: libvirtError: > Unable to read from monitor: Connection reset by peer > > The instance remains in pause on both source and destination host. > > Any help,please ? Summarizing the issue for those who are following along this conversation: The debugging chat tral from #openstack-nova starts here: http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2021-05-19.log.html#t2021-05-19T08:50:11 Version ------- - libvirt: 4.5.0, package: 36.el7_9.5 - QEMU: 2.12.0qemu-kvm-ev-2.12.0-44.1.el7_8.1 - kernel: 3.10.0-1160.25.1.el7.x86_64 Problem ------- It seems to be some guests (on NFS) seem to crash during live migration with the below errors in the QEMU guest log: [...] 2021-05-19T08:12:30.396878Z qemu-kvm: Failed to load virtqueue_state:vring.used 2021-05-19T08:12:30.397555Z qemu-kvm: Failed to load virtio/virtqueues:vq 2021-05-19T08:12:30.397581Z qemu-kvm: Failed to load virtio-blk:virtio 2021-05-19T08:12:30.397606Z qemu-kvm: error while loading state for instance 0x0 of device '0000:00:08.0/virtio-blk' 2021-05-19T08:12:30.399542Z qemu-kvm: load of migration failed: Input/output error 2021-05-19 08:12:31.022+0000: shutting down, reason=crashed [...] And this error from libvirt (as obtained via `journalctl -u libvirtd -l --since=yesterday -p err`): error : qemuDomainObjBeginJobInternal:6825 : Timed out during operation: cannot acquire state change lock (held by monitor=remo Diagnosis --------- Further, these "cannot acquire state change lock" error from libvirt is notoriously hard to debug without a reliable reproducer. As it could be due to QEMU getting hung, which in turn could be caused by stuck I/O. See also the discussion (but no conclusion) on this related QEMU bug[1]. Particularly comment#11. In short, without a solid reproducer, these virtio issues are hard to track down, I'm afraid. [1] https://bugs.launchpad.net/nova/+bug/1761798 -- live migration intermittently fails in CI with "VQ 0 size 0x80 Guest index 0x12c inconsistent with Host index 0x134: delta 0xfff8" -- /kashyap From jay.faulkner at verizonmedia.com Wed May 19 14:40:31 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Wed, 19 May 2021 07:40:31 -0700 Subject: [E] Re: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> Message-ID: I am in agreement we should move off freenode as a result of these resignations and information coming to light. IMO, we should execute a minimalistic server change to OFTC or libera, and not gum up this emergency migration with attempts to change protocol as well as server for OpenStack chats. - Jay Faulkner On Wed, May 19, 2021 at 7:28 AM Dan Smith wrote: > > Honestly I'd be concerned about moving to a newly-established IRC > > network, and would much prefer the stability of a known and > > established one. > > Agree. It seems to me that *if* we need to go anywhere, OFTC is the > obvious place. 
It'll be a "take one step to the left" move for most > people, and avoids the need to discuss "which" and "where" and "is there > an open client for that" and "is there a *good* client for that for my > platform", etc. > > --Dan > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andr.kurilin at gmail.com Wed May 19 14:52:17 2021 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Wed, 19 May 2021 17:52:17 +0300 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: Big +1. Also, it would be nice to organize some poll to get an answer of whether the community wants to check alternatives of IRC or continue using this protocol. ср, 19 мая 2021 г. в 16:37, Dmitry Tantsur : > > > On Wed, May 19, 2021 at 3:21 PM Erno Kuvaja wrote: > >> Hi all, >> >> For those of you who have not woken up to this sad day yet. Andrew Lee >> has taken his stance as owner of freenode ltd. and by the (one sided) story >> of the former volunteer staff members basically forced the whole community >> out. >> >> As there is history of LTM shutting down networks before (snoonet), it is >> appropriate to expect that the intentions here are not aligned with the >> communities and specially the users who's data he has access to via this >> administrative takeover. >> >> I think it's our time to take swift action and show our support to all >> the hard working volunteers who were behind freenode and move all our >> activities to irc.libera.chat. >> > > Probably not the best timing, but should we consider (again) running a > more advanced free software chat system? E.g. Outreachy uses Zulip, Mozilla > - Matrix, there are probably more. > > Dmitry > > >> >> Please see https://twitter.com/freenodestaff and Christian's letter >> which links to the others as well >> https://fuchsnet.ch/freenode-resign-letter.txt >> >> Best, >> Erno 'jokke' Kuvaja >> > > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Wed May 19 14:56:39 2021 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Wed, 19 May 2021 16:56:39 +0200 Subject: Freenode and libera.chat In-Reply-To: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> Message-ID: On Wed, May 19, 2021 at 01:49:33PM +0000, Jeremy Stanley wrote: [...] > In past years when the stability of Freenode's service came into > question, we've asserted that OFTC would probably have been a better > home for our channels from the beginning (as they're more aligned > with our community philosophies), but we ended up on Freenode mostly > due to the Ubuntu community's presence there. We'd previously been > unable to justify the impact to users of switching networks, but > there seemed to be consensus that if Freenode shut down we'd move to > OFTC. The earliest concrete proposal I can find for this was made in > March 2014, but it's come up multiple times in the years since: > > http://lists.openstack.org/pipermail/openstack-dev/2014-March/028783.html > > Honestly I'd be concerned about moving to a newly-established IRC > network, and would much prefer the stability of a known and > established one. Yeah, moving to OFTC makes a lot of sense. 
FWIW, I've been participating on #qemu and #virt channels on OFTC for more than six years now and I've rarely seen glitches or random drops there. (Also, agree with Dan Smith on "move one step to the left", i.e. low-to-no friction.) -- /kashyap From ignaziocassano at gmail.com Wed May 19 15:17:52 2021 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 19 May 2021 17:17:52 +0200 Subject: [nova][train] live migration issue In-Reply-To: References: Message-ID: Hello, some news ....I wonder if they can help: I am testing with some virtual machine again. If I follows this steps it works (but I lost network connection): 1) Detach network interface from instance 2) Attach network interface to instance 3) Migrate instance 4) Loggin into instance using console and restart networking while if I restart networking before live migration it does not work. So, when someone mentioned ######################## we get this "guest index inconsistent" error when the migrated RAM is inconsistent with the migrated 'virtio' device state. And a common case is where a 'virtio' device does an operation after the vCPU is stopped and after RAM has been transmitted. #############################à the network traffic could be the problem ? Ignazio Il giorno mer 19 mag 2021 alle ore 16:35 Kashyap Chamarthy < kchamart at redhat.com> ha scritto: > (Hi, we've talked on #openstack-nova; updating on list too.) > > On Wed, May 19, 2021 at 10:48:11AM +0200, Ignazio Cassano wrote: > > Hello Guys, > > on train centos7 I am facing live migration issue only for some instances > > (not all). > > The error reported is: > > 2021-05-19 08:45:57.096 142537 ERROR nova.compute.manager [-] [instance: > > b18450e8-b3db-4886-a737-c161d99c6a46] Live migration failed.: > libvirtError: > > Unable to read from monitor: Connection reset by peer > > > > The instance remains in pause on both source and destination host. > > > > Any help,please ? > > Summarizing the issue for those who are following along this conversation: > > The debugging chat tral from #openstack-nova starts here: > > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2021-05-19.log.html#t2021-05-19T08:50:11 > > Version > ------- > > - libvirt: 4.5.0, package: 36.el7_9.5 > - QEMU: 2.12.0qemu-kvm-ev-2.12.0-44.1.el7_8.1 > - kernel: 3.10.0-1160.25.1.el7.x86_64 > > Problem > ------- > > It seems to be some guests (on NFS) seem to crash during live migration > with the below errors in the QEMU guest log: > > [...] > 2021-05-19T08:12:30.396878Z qemu-kvm: Failed to load > virtqueue_state:vring.used > 2021-05-19T08:12:30.397555Z qemu-kvm: Failed to load > virtio/virtqueues:vq > 2021-05-19T08:12:30.397581Z qemu-kvm: Failed to load virtio-blk:virtio > 2021-05-19T08:12:30.397606Z qemu-kvm: error while loading state for > instance 0x0 of device '0000:00:08.0/virtio-blk' > 2021-05-19T08:12:30.399542Z qemu-kvm: load of migration failed: > Input/output error > 2021-05-19 08:12:31.022+0000: shutting down, reason=crashed > [...] > > And this error from libvirt (as obtained via `journalctl -u libvirtd -l > --since=yesterday -p err`): > > error : qemuDomainObjBeginJobInternal:6825 : Timed out during > operation: cannot acquire state change lock (held by monitor=remo > > Diagnosis > --------- > > Further, these "cannot acquire state change lock" error from libvirt is > notoriously hard to debug without a reliable reproducer. As it could be > due to QEMU getting hung, which in turn could be caused by stuck I/O. 
> > See also the discussion (but no conclusion) on this related QEMU bug[1]. > Particularly comment#11. > > In short, without a solid reproducer, these virtio issues are hard to > track down, I'm afraid. > > > [1] https://bugs.launchpad.net/nova/+bug/1761798 -- live migration > intermittently fails in CI with "VQ 0 size 0x80 Guest index 0x12c > inconsistent with Host index 0x134: delta 0xfff8" > > -- > /kashyap > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Wed May 19 16:22:28 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Wed, 19 May 2021 18:22:28 +0200 Subject: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> Message-ID: <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> Yes, pool would be great. Please do not take this offensive, but just stating IRC survived till now and thus we should keep it is not really productive from my pov. Why is everything what OpenStack doing/using is so complex? (Please do not comment on the items below, I’m not really interested in any answers/explanations. This is a rhetorical question) - gerrit. Yes it is great, yes it is fulfilling our needs. But how much we would lower the entry barrier for the contributions not using such complex setup that we have. - irc. Yes it survived till now. Yes it does simple things the best way. When I am online - everything is perfect (except of often connection drops). But the fun starts when I am not online (one of the simplest things for the communication platform with normally 60% of the day duration). Why should anyone care of searching any reasonably maintained IRC bouncer (or grep through eavesdrop logs), would should anyone pay for a simple mobile client? - issue tracker. You know yourself... Onboarding new people into the OpenStack contribution is a process of multiple months (so many times done that, also with all the Student programs we do). Once you are in it for years - everything seems to be absolutely fine. But entering this world is nearly a nightmare. I do not want to say - let’s change everything at once (or anything at all), but if we have chance we should not abandon idea of doing things better this time. In a daily work we all swim in workarounds we did for nearly everything. Cheers > On 19. May 2021, at 16:56, Kashyap Chamarthy wrote: > > On Wed, May 19, 2021 at 01:49:33PM +0000, Jeremy Stanley wrote: > > [...] > >> In past years when the stability of Freenode's service came into >> question, we've asserted that OFTC would probably have been a better >> home for our channels from the beginning (as they're more aligned >> with our community philosophies), but we ended up on Freenode mostly >> due to the Ubuntu community's presence there. We'd previously been >> unable to justify the impact to users of switching networks, but >> there seemed to be consensus that if Freenode shut down we'd move to >> OFTC. The earliest concrete proposal I can find for this was made in >> March 2014, but it's come up multiple times in the years since: >> >> http://lists.openstack.org/pipermail/openstack-dev/2014-March/028783.html >> >> Honestly I'd be concerned about moving to a newly-established IRC >> network, and would much prefer the stability of a known and >> established one. > > Yeah, moving to OFTC makes a lot of sense. FWIW, I've been > participating on #qemu and #virt channels on OFTC for more than six > years now and I've rarely seen glitches or random drops there. 
> > (Also, agree with Dan Smith on "move one step to the left", i.e. > low-to-no friction.) > > -- > /kashyap > > From gthiemonge at redhat.com Wed May 19 16:29:18 2021 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Wed, 19 May 2021 18:29:18 +0200 Subject: [octavia][tripleo][kolla][stable][release] $series-eol delete problem In-Reply-To: References: Message-ID: On Fri, May 14, 2021 at 10:41 PM Előd Illés wrote: > Hi teams in $SUBJECT, > > during the deletion of $series-eol tagged branches it turned out that > the below listed branches / repositories contains merged patches on top > of $series-eol tag. The issue is with this that whenever the branch is > deleted only the $series-eol (and other) tags can be checked out, so the > changes that were merged after the eol tags, will be *lost*. > > There are two options now: > > 1. Create another tag (something like: "$series-eol-extra"), so that the > extra patches will not be lost completely, because they can be checked > out with the newly created tags > > 2. Delete the branch anyway and don't care about the lost patch(es) > > Here are the list of such branches, please consider which option is good > for the team and reply to this mail: > > openstack/octavia > * stable/stein has patches on top of the stein-eol tag > * stable/queens has patches on top of the queens-eol tag > As discussed during today's Octavia weekly meeting, we agreed to delete those branches. Those patches shouldn't have been backported here. Thanks, > openstack/kolla > * stable/pike has patches on top of the pike-eol tag > * stable/ocata has patches on top of the ocata-eol tag > > openstack/tripleo-common > * stable/rocky has patches on top of the rocky-eol tag > > openstack/os-apply-config > * stable/pike has patches on top of the pike-eol tag > * stable/ocata has patches on top of the ocata-eol tag > > openstack/os-cloud-config > stable/ocata has patches on top of the ocata-eol tag > > Thanks, > > Előd > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed May 19 16:36:32 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 May 2021 16:36:32 +0000 Subject: Freenode and libera.chat In-Reply-To: <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> Message-ID: <20210519163631.r2e2zwsddtldqqpd@yuggoth.org> On 2021-05-19 18:22:28 +0200 (+0200), Artem Goncharov wrote: > Yes, pool would be great. > > Please do not take this offensive, but just stating IRC survived > till now and thus we should keep it is not really productive from > my pov. > > Why is everything what OpenStack doing/using is so complex? > (Please do not comment on the items below, I’m not really > interested in any answers/explanations. This is a rhetorical > question) [...] Thanks for letting us know that there was no point in reading the rest of your message. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From artem.goncharov at gmail.com Wed May 19 16:49:24 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Wed, 19 May 2021 18:49:24 +0200 Subject: Freenode and libera.chat In-Reply-To: <20210519163631.r2e2zwsddtldqqpd@yuggoth.org> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <20210519163631.r2e2zwsddtldqqpd@yuggoth.org> Message-ID: <2E02A887-90F4-439C-8CC0-9278FCE8D304@gmail.com> > On 19. May 2021, at 18:36, Jeremy Stanley wrote: > > On 2021-05-19 18:22:28 +0200 (+0200), Artem Goncharov wrote: >> Yes, pool would be great. >> >> Please do not take this offensive, but just stating IRC survived >> till now and thus we should keep it is not really productive from >> my pov. >> >> Why is everything what OpenStack doing/using is so complex? >> (Please do not comment on the items below, I’m not really >> interested in any answers/explanations. This is a rhetorical >> question) > [...] > > Thanks for letting us know that there was no point in reading the > rest of your message. Uhm, harmed myself only ;-) Need to be extra careful picking wording. From tjoen at dds.nl Wed May 19 16:50:01 2021 From: tjoen at dds.nl (tjoen) Date: Wed, 19 May 2021 18:50:01 +0200 Subject: [wallaby]nova, oslo.db or SQLAlchemy bug? "no attribute statement" In-Reply-To: References: <91d2851d-5059-2b89-bb4a-4615a615bc39@dds.nl> Message-ID: <3b0d139d-fa4a-2ad3-3ecd-0eb12a68c732@dds.nl> On 5/18/21 5:18 PM, tjoen wrote: > Thanks both for the clarification. I'll downgrade SQLAlchemy > and report later if I get Wallaby running Got Wallaby running. Not without problems. Thanks for the help From C-Albert.Braden at charter.com Wed May 19 17:01:15 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Wed, 19 May 2021 17:01:15 +0000 Subject: [EXTERNAL] Re: Freenode and libera.chat In-Reply-To: References: Message-ID: <8f87213b41f5448fb6c1fc6fe1155a7b@ncwmexgp009.CORP.CHARTERCOM.com> +1 for finding another IRC network. -1 for switching to an exciting new chat client. From: Andrey Kurilin Sent: Wednesday, May 19, 2021 10:52 AM To: Dmitry Tantsur Cc: openstack-discuss Subject: [EXTERNAL] Re: Freenode and libera.chat CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. Big +1. Also, it would be nice to organize some poll to get an answer of whether the community wants to check alternatives of IRC or continue using this protocol. ср, 19 мая 2021 г. в 16:37, Dmitry Tantsur >: On Wed, May 19, 2021 at 3:21 PM Erno Kuvaja > wrote: Hi all, For those of you who have not woken up to this sad day yet. Andrew Lee has taken his stance as owner of freenode ltd. and by the (one sided) story of the former volunteer staff members basically forced the whole community out. As there is history of LTM shutting down networks before (snoonet), it is appropriate to expect that the intentions here are not aligned with the communities and specially the users who's data he has access to via this administrative takeover. I think it's our time to take swift action and show our support to all the hard working volunteers who were behind freenode and move all our activities to irc.libera.chat. Probably not the best timing, but should we consider (again) running a more advanced free software chat system? E.g. Outreachy uses Zulip, Mozilla - Matrix, there are probably more. 
Dmitry Please see https://twitter.com/freenodestaff and Christian's letter which links to the others as well https://fuchsnet.ch/freenode-resign-letter.txt Best, Erno 'jokke' Kuvaja -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -- Best regards, Andrey Kurilin. E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed May 19 17:26:47 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 19 May 2021 18:26:47 +0100 Subject: Freenode and libera.chat In-Reply-To: <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> Message-ID: <22d4928a1842e5ac407348fb330ed3e8c24f305c.camel@redhat.com> On Wed, 2021-05-19 at 18:22 +0200, Artem Goncharov wrote: > Yes, pool would be great. > > Please do not take this offensive, but just stating IRC survived till now and thus we should keep it is not really productive from my pov. > > Why is everything what OpenStack doing/using is so complex? (Please do not comment on the items below, I’m not really interested in any answers/explanations. This is a rhetorical question) > - gerrit. Yes it is great, yes it is fulfilling our needs. But how much we would lower the entry barrier for the contributions not using such complex setup that we have. Well, it's significantly better than using a pull request model as used in GitHub, or a mailing list, when it comes to code review tools (or the lack of them in the email case). So if Gerrit is your low barrier, then you have set the mark pretty high; there are few tools that actually do this as well as Gerrit does. > - irc. Yes it survived till now. Yes it does simple things the best way. When I am online - everything is perfect (except of often connection drops). But the fun starts when I am not online (one of the simplest things for the communication platform with normally 60% of the day duration). Why should anyone care of searching any reasonably maintained IRC bouncer (or grep through eavesdrop logs), would should anyone pay for a simple mobile client? > - issue tracker. You know yourself... I personally would have preferred to use GitHub's issue tracker or just go back to Launchpad for all projects, but we all have different preferences. I prefer how we track issues upstream to how we track them downstream, for example with a mix of Bugzilla, Jira and Trello concurrently. Having one tracker for everything, or possibly two if you work on multiple projects that span both the StoryBoard camp and the Launchpad camp, is better than many, but I still like GitHub's issue tracker a lot. Since we don't use GitHub for development, though, using it for issue tracking would send mixed messages.
> > Onboarding new people into the OpenStack contribution is a process of multiple months (so many times done that, also with all the Student programs we do). > Once you are in it for years - everything seems to be absolutely fine. But entering this world is nearly a nightmare. That depends on where you are coming from. My experience with onboarding interns who were on work placement was that we could normally get them up to speed in 1-2 weeks. Most of the time was not spent on IRC or email or Gerrit; that was the simple bit that they got more or less straight away (especially if you teach them to use git review). The challenge with onboarding was always the scope: explaining how the different parts interact and getting their first version of OpenStack installed so they could start working with it, although we normally started smaller, with just cloning a project and running the unit tests with tox, then a devstack install, and then the rest. > > I do not want to say - let’s change everything at once (or anything at all), but if we have chance we should not abandon idea of doing things better this time. In a daily work we all swim in workarounds we did for nearly everything. Improvements are good, but for many of the alternatives I genuinely don't think they would actually be an improvement over what we have today. IRC is probably the case where we might have the most to gain, but many of the tools like Slack would be a regression in functionality, since we would lose much of the ease of use and instead gain unwanted features (posting documents and images inline in messages) and higher resource requirements to run the clients. I agree that searchability of the IRC logs is non-existent, but sending a simple URL to the IRC logs when you know where they are is simple; i.e. putting a link to the conversation I just had with someone about a feature in a Gerrit review comment is trivial - I go to today's logs and scroll down to where the conversation was. Most of the alternatives lose that shareability, or require you to have an account to view the logs or participate. Anyway, I do think this is off topic, and if we make any decision on this it needs TC involvement as it will affect all projects. For now I don't think we should change all our ways of working, and I don't generally think the tooling we use is bad. We can make improvements, but I generally think that we are already using some of the better options available today. > > Cheers > > > > On 19. May 2021, at 16:56, Kashyap Chamarthy wrote: > > > > On Wed, May 19, 2021 at 01:49:33PM +0000, Jeremy Stanley wrote: > > > > [...] > > > > > In past years when the stability of Freenode's service came into > > > question, we've asserted that OFTC would probably have been a better > > > home for our channels from the beginning (as they're more aligned > > > with our community philosophies), but we ended up on Freenode mostly > > > due to the Ubuntu community's presence there. We'd previously been > > > unable to justify the impact to users of switching networks, but > > > there seemed to be consensus that if Freenode shut down we'd move to > > > OFTC. The earliest concrete proposal I can find for this was made in > > > March 2014, but it's come up multiple times in the years since: > > > > > > http://lists.openstack.org/pipermail/openstack-dev/2014-March/028783.html > > > > > > Honestly I'd be concerned about moving to a newly-established IRC > > > network, and would much prefer the stability of a known and > > > established one. > > > > Yeah, moving to OFTC makes a lot of sense.
FWIW, I've been > > participating on #qemu and #virt channels on OFTC for more than six > > years now and I've rarely seen glitches or random drops there. > > > > (Also, agree with Dan Smith on "move one step to the left", i.e. > > low-to-no friction.) > > > > -- > > /kashyap > > > > > > From jlabarre at redhat.com Wed May 19 18:05:06 2021 From: jlabarre at redhat.com (James LaBarre) Date: Wed, 19 May 2021 14:05:06 -0400 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On 5/19/21 10:12 AM, Kashyap Chamarthy wrote: > On Wed, May 19, 2021 at 02:42:02PM +0100, Erno Kuvaja wrote: >> FWIF I'd rather not. The signal to noise ratio tends to be very poor on all >> of them with their stickers, gifs and all the active media crap embedded in >> them. > I agree that the dancing poop GIFs are very distracting. So "yes" to > sticking with IRC given its strengths and simplicity. Though, I'm > biased here as a long-time happy IRC user. Unless of course you were using the Comic Chat client for IRC... (I think on some systems, using Comic Chat on a non-CC server could get you banned). -- James LaBarre Software Engineer, OpenStack MultiArch Red Hat jlabarre at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed May 19 20:26:22 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 19 May 2021 13:26:22 -0700 Subject: Freenode and libera.chat In-Reply-To: <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> Message-ID: <338eff02-c210-4b75-94ce-8c27c0414f6f@www.fastmail.com> On Wed, May 19, 2021, at 9:22 AM, Artem Goncharov wrote: > Yes, pool would be great. > > Please do not take this offensive, but just stating IRC survived till > now and thus we should keep it is not really productive from my pov. > > Why is everything what OpenStack doing/using is so complex? (Please do > not comment on the items below, I’m not really interested in any > answers/explanations. This is a rhetorical question) I know you explicitly asked that we not respond, but I don't find this particular approach to applying criticism to be particularly productive. Nor is it helpful to further the discussion around the Freenode situation. I'm happy to try and re-frame these discussions on separate threads if we'd like to. Keep in mind that the tools and infrastructure we use today are largely maintained by an ever shrinking group of individuals. We do our best to meet the needs of our users (and my completely biased opinion is that we do a kick ass job with the resources we're given). That said I'm sure we can improve in a number of ways and framing that in a constructive way rather than telling us to not respond would be preferred. > - gerrit. Yes it is great, yes it is fulfilling our needs. But how much > we would lower the entry barrier for the contributions not using such > complex setup that we have. > - irc. Yes it survived till now. Yes it does simple things the best > way. When I am online - everything is perfect (except of often > connection drops). But the fun starts when I am not online (one of the > simplest things for the communication platform with normally 60% of the > day duration). Why should anyone care of searching any reasonably > maintained IRC bouncer (or grep through eavesdrop logs), would should > anyone pay for a simple mobile client? > - issue tracker. You know yourself... 
> > Onboarding new people into the OpenStack contribution is a process of > multiple months (so many times done that, also with all the Student > programs we do). > Once you are in it for years - everything seems to be absolutely fine. > But entering this world is nearly a nightmare. > > I do not want to say - let’s change everything at once (or anything at > all), but if we have chance we should not abandon idea of doing things > better this time. In a daily work we all swim in workarounds we did for > nearly everything. > > Cheers > From gael.therond at bitswalk.com Wed May 19 21:31:13 2021 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Wed, 19 May 2021 23:31:13 +0200 Subject: Freenode and libera.chat Message-ID: Big +1 on my side too. Honestly, as someone else already put it rather bluntly, IRC is a nightmare to set up and manage for anyone not coming from the 80s/90s age of the internet. I have personally used it for decades and so I’m used to it, BUT just because I know how to use it doesn’t mean I’ll hide the truth. IRC and any client for it are challenging for newcomers. No, people shouldn’t have to once again read up on and learn another tool just for our community. We’re in 2021 and things have improved a lot in terms of messaging systems; rich exchange platforms have emerged (many of them open source), and so I consider (as many others do, I bet) that as a community looking for diversity and inclusion it’s time to bring the community of existing and new contributors better tools, the messaging system revamp being one of the most important pieces to be reworked. I’m supporting (with another person from the internet) a small Discord server of around 50 members and trying to answer many questions on Reddit. Why did we set up such an alternative? Because most of those people were young students striving to get straight answers and support from the community. Why didn’t they easily find answers? Because IRC channels are most of the time dead towns (except for the kolla one), and for people starting to learn OpenStack it’s challenging to get access to IRC, as it’s not as straightforward and feature-rich as what alternative messaging solutions can provide. It’s a matter of UI/UX, of time consumed, and of the amount of knowledge required just to ask one simple/quick question. Same thing with the list, by the way. I deeply think that if we want to continue to bring more newcomers on board we will need alternatives to the current communication platforms such as the IRC channels and the list. Those are tools from the past. Sure they’re efficient, sure many of us know how to manage them, but I really feel that adding another hurdle for people trying to jump in isn’t a good way to welcome them. I know perfectly well that some of us find it useful as it kind of gates off «moronic» questions and is kind of seen as a way to mark a tribal validation of your effort to join us, but in the meantime I think we’re losing opportunities to bring more people in. It’s like in real life: you’ll never want to use a tool, or help someone, if that actually requires you to follow a long and painful process first. Sorry for the long message, but I really wanted to explain the issue with those tools, as one of my concerns actually surfaced today. PS: I don’t want us to move to Discord or any «SaaS solution»; we can definitely host our own open source alternative if we really want to gain autonomy on this topic. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Wed May 19 22:37:08 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 19 May 2021 17:37:08 -0500 Subject: [all][qa][cinder][octavia][murano] Devstack dropping support for Ubuntu Bionic 18.04 In-Reply-To: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> References: <1791fbc6a69.c7ea6225784791.5650809726341177154@ghanshyammann.com> Message-ID: <17986c68852.118298fb727634.834491897221540117@ghanshyammann.com> ---- On Thu, 29 Apr 2021 17:25:12 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > As per the testing runtime since Victoria [1], we need to move our CI/CD to Ubuntu Focal 20.04 but > it seems there are few jobs still running on Bionic. As devstack team is planning to drop the Bionic support > you need to move those to Focal otherwise they will start failing. We are planning to merge the devstack patch > by 2nd week of May. > > - https://review.opendev.org/c/openstack/devstack/+/788754 Devstack patch to drop Bionic support (788754) is merged now and if any of your projects or 3rd party jobs is running on Bionic, it will start failing and time to migrate it to ubuntu-focal. -gmann > > I have not listed all the job but few of them which were failing with ' rtslib-fb-targetctl error' are below: > > Cinder- cinder-plugin-ceph-tempest-mn-aa > - https://opendev.org/openstack/cinder/src/commit/7441694cd42111d8f24912f03f669eec72fee7ce/.zuul.yaml#L166 > > python-cinderclient - python-cinderclient-functional-py36 > - https://review.opendev.org/c/openstack/python-cinderclient/+/788834 > > Octavia- https://opendev.org/openstack/octavia-tempest-plugin/src/branch/master/zuul.d/jobs.yaml#L182 > > Murani- murano-dashboard-sanity-check > -https://opendev.org/openstack/murano-dashboard/src/commit/b88b32abdffc171e6650450273004a41575d2d68/.zuul.yaml#L15 > > Also if your 3rd party CI is still running on Bionic, you can plan to migrate it to Focal before devstack patch merge. > > [1] https://governance.openstack.org/tc/reference/runtimes/victoria.html > > -gmann > > From fungi at yuggoth.org Wed May 19 23:29:14 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 May 2021 23:29:14 +0000 Subject: [automation-sig][freezer][i18n-sig][powervmstacker-sig][public-cloud-sig] IRC channel cleanup Message-ID: <20210519232914.t2im6yugufgcyg7y@yuggoth.org> I've proposed a pair of changes removing the OpenDev Collaboratory IRC bots (including logging) from inactive channels. Some of these channels correspond to current groups within the OpenStack community, so it's only fair to give them a heads up and let them know. The following channels have had no human discussion at all (no comments from anything other than our bots) for all of 2021 so far: * #openstack-auto-scaling * #openstack-fr * #openstack-freezer * #openstack-powervm * #openstack-publiccloud * #openstack-self-healing It's fine if these efforts aren't utilizing IRC, but there's probably little point in us logging or reporting information in them if they're unused. The patches for removing our services from those channels are available for review here (they also include cleanup of channels for other known defunct efforts or vestiges of channels which have since move to new names): https://review.opendev.org/792301 https://review.opendev.org/792302 If you're concerned and would like to resume using one or more of these channels, feel free to follow up here on the mailing list or with review comments some time in the next few days. 
If there are no objections I plan to merge these no later than Monday, May 24. Of course it's quite easy to reinstate services on a channel at any time, so if you don't see this until the changes have already merged, please follow up anyway and we can restore the bots as needed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Wed May 19 23:42:21 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 19 May 2021 18:42:21 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 20th at 1500 UTC In-Reply-To: <1797c81ce4e.c5493880678425.8347318298781364114@ghanshyammann.com> References: <1797c81ce4e.c5493880678425.8347318298781364114@ghanshyammann.com> Message-ID: <17987023ee5.c46e52b828136.1660249967610239015@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting schedule on May 20th at 1500 UTC in #openstack-tc IRC channel. == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check (dansmith/yoctozepto) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Planning for TC + PTL interaction (gmann) ** https://etherpad.opendev.org/p/tc-ptl-interaction * Open Reviews ** https://review.opendev.org/q/project:openstack/governance+is:open -https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann ---- On Mon, 17 May 2021 17:45:52 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for May 20th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, May 19th, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From gmann at ghanshyammann.com Thu May 20 00:44:48 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 19 May 2021 19:44:48 -0500 Subject: [all][infra] Topic change request for the retired projects IRC channel Message-ID: <179873b6816.10baeb36a28384.6822408342638897985@ghanshyammann.com> Hi infra-root, We have retired the few projects in OpenStack and their IRC channel are still there. I chatted with fungi about it on TC channel and he suggested that unregistering these channels is not a good idea or recommendation. In that case, can we change the Topic of these channels to something saying that "XYZ project is retired and this channel is also not active, contact openstack-discuss ML for any query" Below is the list of retired projects channel: #openstack-karbor #openstack-searchlight #openstack-qinling #openstack-tricircle #congress -gmann From zaitcev at redhat.com Thu May 20 01:03:31 2021 From: zaitcev at redhat.com (Pete Zaitcev) Date: Wed, 19 May 2021 20:03:31 -0500 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: <20210519200331.27629cf1@suzdal.zaitcev.lan> On Wed, 19 May 2021 14:19:02 +0100 Erno Kuvaja wrote: > I think it's our time to take swift action and show our support to all the > hard working volunteers who were behind freenode and move all our > activities to irc.libera.chat. I would prefer to take a deliberate action instead. The leaders of the split admit that their actions were planned in advance. It says so at their website: "In early 2021, that changed. ... This was the writing on the wall. 
As a precautionary measure, we began laying the groundwork for what would become Libera.Chat." So they were plotting this for a long time and we're supposed to jump swiftly? That doesn't seem very fair. -- Pete From Arkady.Kanevsky at dell.com Thu May 20 01:23:52 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 20 May 2021 01:23:52 +0000 Subject: [all][infra] Topic change request for the retired projects IRC channel In-Reply-To: <179873b6816.10baeb36a28384.6822408342638897985@ghanshyammann.com> References: <179873b6816.10baeb36a28384.6822408342638897985@ghanshyammann.com> Message-ID: +1 -----Original Message----- From: Ghanshyam Mann Sent: Wednesday, May 19, 2021 7:45 PM To: openstack-discuss Subject: [all][infra] Topic change request for the retired projects IRC channel [EXTERNAL EMAIL] Hi infra-root, We have retired the few projects in OpenStack and their IRC channel are still there. I chatted with fungi about it on TC channel and he suggested that unregistering these channels is not a good idea or recommendation. In that case, can we change the Topic of these channels to something saying that "XYZ project is retired and this channel is also not active, contact openstack-discuss ML for any query" Below is the list of retired projects channel: #openstack-karbor #openstack-searchlight #openstack-qinling #openstack-tricircle #congress -gmann From Istvan.Szabo at agoda.com Thu May 20 05:59:13 2021 From: Istvan.Szabo at agoda.com (Szabo, Istvan (Agoda)) Date: Thu, 20 May 2021 05:59:13 +0000 Subject: Load back a vm to openstack Message-ID: Hi, I have 1 vm which was deleted from openstack but still have the libvirt xml so I can redefine it, I also have the instance disk file so I would like to know is there a way to load back to openstack the vm? Istvan Szabo Senior Infrastructure Engineer --------------------------------------------------- Agoda Services Co., Ltd. e: istvan.szabo at agoda.com --------------------------------------------------- ________________________________ This message is confidential and is for the sole use of the intended recipient(s). It may also be privileged or otherwise protected by copyright or other legal rules. If you have received it by mistake please let us know by reply email and delete it from your system. It is prohibited to copy this message or disclose its content to anyone. Any confidentiality or privilege is not waived or lost by any mistaken delivery or unauthorized disclosure of the message. All messages sent to and from Agoda may be monitored to ensure compliance with company policies, to protect the company's interests and to remove potential malware. Electronic messages may be intercepted, amended, lost or deleted, or contain viruses. -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu May 20 07:07:49 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 20 May 2021 09:07:49 +0200 Subject: [neutron] Drivers meeting 21.05.2021 and 28.05.2021 cancelled Message-ID: <9430363.YgL4pZciUx@p1> Hi, We don't have any new RFEs to discuss [1] so let's cancel tomorrow's drivers meeting. Next week (28.05.2021) we have Recharge Day in Red Hat so we will probably not have quorum on the drivers meeting and I will also be offline so let's cancel it too. See You on the drivers meeting in 2 weeks (4.06.2021). 
Have a great weekend :) [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers[1] -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From eblock at nde.ag Thu May 20 07:13:06 2021 From: eblock at nde.ag (Eugen Block) Date: Thu, 20 May 2021 07:13:06 +0000 Subject: Load back a vm to openstack In-Reply-To: Message-ID: <20210520071306.Horde.vfGZUzNC7cs1_J4Flt28XgE@webmail.nde.ag> Hi, my first idea would be to upload the disk file as a new image to glance and then launch a new instance. Another idea would be to remove the deleted flag in the nova database and see if it starts again. I haven't tried that myself, though, I also don't know how many changes that would require. But manipulating the database is not supported so I wouldn't recommend it. If you want to try it anyway make sure you have a database backup, or try it in a test environment beforehand. Regards, Eugen Zitat von "Szabo, Istvan (Agoda)" : > Hi, > > I have 1 vm which was deleted from openstack but still have the > libvirt xml so I can redefine it, I also have the instance disk file > so I would like to know is there a way to load back to openstack the > vm? > > Istvan Szabo > Senior Infrastructure Engineer > --------------------------------------------------- > Agoda Services Co., Ltd. > e: istvan.szabo at agoda.com > --------------------------------------------------- > > > ________________________________ > This message is confidential and is for the sole use of the intended > recipient(s). It may also be privileged or otherwise protected by > copyright or other legal rules. If you have received it by mistake > please let us know by reply email and delete it from your system. It > is prohibited to copy this message or disclose its content to > anyone. Any confidentiality or privilege is not waived or lost by > any mistaken delivery or unauthorized disclosure of the message. All > messages sent to and from Agoda may be monitored to ensure > compliance with company policies, to protect the company's interests > and to remove potential malware. Electronic messages may be > intercepted, amended, lost or deleted, or contain viruses. From dtantsur at redhat.com Thu May 20 07:30:00 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 20 May 2021 09:30:00 +0200 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On Wed, May 19, 2021 at 3:42 PM Erno Kuvaja wrote: > On Wed, May 19, 2021 at 2:34 PM Dmitry Tantsur > wrote: > >> >> >> On Wed, May 19, 2021 at 3:21 PM Erno Kuvaja wrote: >> >>> Hi all, >>> >>> For those of you who have not woken up to this sad day yet. Andrew Lee >>> has taken his stance as owner of freenode ltd. and by the (one sided) story >>> of the former volunteer staff members basically forced the whole community >>> out. >>> >>> As there is history of LTM shutting down networks before (snoonet), it >>> is appropriate to expect that the intentions here are not aligned with the >>> communities and specially the users who's data he has access to via this >>> administrative takeover. 
>>> >>> I think it's our time to take swift action and show our support to all >>> the hard working volunteers who were behind freenode and move all our >>> activities to irc.libera.chat. >>> >> >> Probably not the best timing, but should we consider (again) running a >> more advanced free software chat system? E.g. Outreachy uses Zulip, Mozilla >> - Matrix, there are probably more. >> >> Yeah I'd say not the greatest timing to start such a conversation under > pressure. > > FWIF I'd rather not. The signal to noise ratio tends to be very poor on > all of them with their stickers, gifs and all the active media crap > embedded in them. > FWIW we face the need to paste images very often (screenshots of booting bare metal), so having a native image pasting is a plus to me. Actually, text pasting as well. If you've ever encountered someone pasting 50 lines of a traceback to IRC, you know why. Other features I'm looking for include: 1) Native authentication 2) Chat history and offline messages 3) Editing and deleting messages 4) Threads (or any other way of sub-division of channels) 5) Moderation tools (better than #openstack-unregistered which is hostile to newcomers) Nice to have: 6) Mobile client 7) Non-trivial syntax And yes, I do know that all of these (except for #3) can be simulated more or less with 3rd party tools. But this is not friendly to newcomers who don't have an own bouncer and familiarity with how things work in IRC (which does not match how things work in any other current chat - see #openstack-unregistered for an example). Dmitry P.S. I don't suggest the infra team maintains a matrix server. I do think that we, given how much money is made from OpenStack, can afford to do the same thing as Mozilla: pay Element for a hosted Matrix instance (and let them bother with scaling). I find the current situation of putting the burden on Freenode volunteers a bit unfair and vote against moving to Libera for this very reason. > > - jokke > > >> Dmitry >> >> >>> >>> Please see https://twitter.com/freenodestaff and Christian's letter >>> which links to the others as well >>> https://fuchsnet.ch/freenode-resign-letter.txt >>> >>> Best, >>> Erno 'jokke' Kuvaja >>> >> >> >> -- >> Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, >> Commercial register: Amtsgericht Muenchen, HRB 153243, >> Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael >> O'Neill >> > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Thu May 20 08:35:37 2021 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 20 May 2021 10:35:37 +0200 Subject: [E] Re: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> Message-ID: On Wed, May 19, 2021 at 4:48 PM Jay Faulkner wrote: > I am in agreement we should move off freenode as a result of these > resignations and information coming to light. IMO, we should execute a > minimalistic server change to OFTC or libera, and not gum up this emergency > migration with attempts to change protocol as well as server for OpenStack > chats. > > I absolutely back this idea of a very minimalistic and least move to another network, be OFTC. 
For the exact reason that we have a large community that periodically uses our IRC channels, we can't just make a big bang in urgency and we need to define a reasonable transition plan for ensuring that nobody gets lost in the middle. Also, from what I can read, there is no matter of urgency for such of a big move. I'd suggest us to continue using Freenode channels for a while, with a clear redirect message explaining visitors that we moved (and where they can find us now). -Sylvain - > Jay Faulkner > > On Wed, May 19, 2021 at 7:28 AM Dan Smith wrote: > >> > Honestly I'd be concerned about moving to a newly-established IRC >> > network, and would much prefer the stability of a known and >> > established one. >> >> Agree. It seems to me that *if* we need to go anywhere, OFTC is the >> obvious place. It'll be a "take one step to the left" move for most >> people, and avoids the need to discuss "which" and "where" and "is there >> an open client for that" and "is there a *good* client for that for my >> platform", etc. >> >> --Dan >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Thu May 20 09:33:53 2021 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Thu, 20 May 2021 09:33:53 +0000 Subject: [mistral] glance image upload Message-ID: Hey team, Is there any way to upload/download an image to/from glance using a mistral workflow? I am seeing the glance.images_upload and glance.images_data, but I cant figure out how it works, I was thinking that we could provide some sort of swift URL to download/upload from. Is there any example available somewhere or maybe it's not yet implemented? Thanks From elod.illes at est.tech Thu May 20 09:56:07 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 20 May 2021 11:56:07 +0200 Subject: [neutron][stadium][stable] Proposal to make stable/ocata and stable/pike branches EOL In-Reply-To: <5d31de6f-162c-56c8-eee1-efeb212df05d@est.tech> References: <15209060.0YdeOJI3E6@p1> <55ff12c8-1e9c-16b5-578b-834d1ccf2563@est.tech> <6767170.WaEBY35tY3@p1> <5d31de6f-162c-56c8-eee1-efeb212df05d@est.tech> Message-ID: <248c563c-e382-a986-f47a-6d3b3c31c559@est.tech> Hi, Now that neutron's stable/ocata is deleted some non-stadium project's stable periodic job also started to fail (on stable/ocata) [1]: - networking-bagpipe - networking-bgpvpn - networking-midonet - neutron-vpnaas There are two options to fix this: 1. transition these projects also to ocata-eol 2. fix them by using neutron's ocata-eol tag (I guess the 1st option is the easiest, considering that these projects are not among the most active ones) Does neutron team have a plan how to continue with these? Thanks, Előd [1] http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22error:%20pathspec%20'stable%2Focata'%20did%20not%20match%20any%20file(s)%20known%20to%20git.%5C%22&from=86400s On 2021. 05. 14. 22:18, Előd Illés wrote: > Hi, > > The patch was merged, so the ocata-eol tags were created for neutron > projects. After the successful tagging I have executed the branch > deletion, which has the following result: > > Branch stable/ocata successfully deleted from openstack/networking-ovn! > Branch stable/ocata successfully deleted from > openstack/neutron-dynamic-routing! > Branch stable/ocata successfully deleted from openstack/neutron-fwaas! > Branch stable/ocata successfully deleted from openstack/neutron-lbaas! > Branch stable/ocata successfully deleted from openstack/neutron-lib! 
> Branch stable/ocata successfully deleted from openstack/neutron! > > Thanks, > > Előd > > > On 2021. 05. 12. 9:22, Slawek Kaplonski wrote: >> >> Hi, >> >> >> Dnia środa, 5 maja 2021 20:35:48 CEST Előd Illés pisze: >> >> > Hi, >> >> > >> >> > Ocata is unfortunately unmaintained for a long time as some general >> test >> >> > jobs are broken there, so as a stable-maint-core member I support >> to tag >> >> > neutron's stable/ocata as End of Life. After the branch is tagged, >> >> > please ping me and I can arrange the deletion of the branch. >> >> > >> >> > For Pike, I volunteered at the PTG in 2020 to help with reviews >> there, I >> >> > still keep that offer, however I am clearly not enough to keep it >> >> > maintained, besides backports are not arriving for stable/pike in >> >> > neutron. Anyway, if the gate is functional there, then I say we could >> >> > keep it open (but as far as I see how gate situation is worsen now, as >> >> > more and more things go wrong, I don't expect that will take long). If >> >> > not, then I only ask that let's do the EOL'ing first with Ocata and >> when >> >> > it is done, then continue with neutron's stable/pike. >> >> > >> >> > For the process please follow the steps here: >> >> > >> https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life >> >> > (with the only exception, that in the last step, instead of infra team, >> >> > please turn to me/release team - patch for the documentation change is >> >> > on the way: >> >> > https://review.opendev.org/c/openstack/project-team-guide/+/789932 ) >> >> >> Thx. I just proposed patch >> https://review.opendev.org/c/openstack/releases/+/790904 >>  to make >> ocata-eol in all neutron projects. >> >> >> > >> >> > Thanks, >> >> > >> >> > Előd >> >> > >> >> > On 2021. 05. 05. 16:13, Slawek Kaplonski wrote: >> >> > > Hi, >> >> > > >> >> > > >> >> > > I checked today that stable/ocata and stable/pike branches in both >> >> > > Neutron and neutron stadium projects are pretty inactive since >> long time. >> >> > > >> >> > > * according to [1], last patch merged patch in Neutron for >> stable/pike >> >> > > was in July 2020 and in ocata October 2019, >> >> > > >> >> > > * for stadium projects, according to [2] it was September 2020. >> >> > > >> >> > > >> >> > > According to [3] and [4] there are no opened patches for any of those >> >> > > branches for Neutron and any stadium project except neutron-lbaas. >> >> > > >> >> > > >> >> > > So based on that info I want to propose that we will close both those >> >> > > branches are EOL now and before doing that, I would like to know if >> >> > > anyone would like to keep those branches to be open still. 
>> >> > > >> >> > > >> >> > > [1] >> >> > > >> https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ocata+OR+branch:stable/pike)+status:merged >> >> > > >> >> >> > > >> >> > > [2] >> >> > > >> https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networki >> >> > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged >> >> > > >> > >> > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged> >> >> > > >> >> > > [3] >> >> > > >> https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ocata+OR+branch:stable/pike)+status:open >> >> > > >> >> >> > > >> >> > > [4] >> >> > > >> https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networki >> >> > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open >> >> > > >> > >> > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open> >> >> > > >> >> > > >> >> > > -- >> >> > > >> >> > > Slawek Kaplonski >> >> > > >> >> > > Principal Software Engineer >> >> > > >> >> > > Red Hat >> >> >> >> -- >> >> Slawek Kaplonski >> >> Principal Software Engineer >> >> Red Hat >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Thu May 20 11:22:13 2021 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 20 May 2021 13:22:13 +0200 Subject: [neutron][stadium][stable] Proposal to make stable/ocata and stable/pike branches EOL In-Reply-To: <248c563c-e382-a986-f47a-6d3b3c31c559@est.tech> References: <15209060.0YdeOJI3E6@p1> <55ff12c8-1e9c-16b5-578b-834d1ccf2563@est.tech> <6767170.WaEBY35tY3@p1> <5d31de6f-162c-56c8-eee1-efeb212df05d@est.tech> <248c563c-e382-a986-f47a-6d3b3c31c559@est.tech> Message-ID: Hi, Thanks for bringing this up. I would vote on the 1. option to transition to ocata-eol, but would do for all Neutron Stadium projects: https://governance.openstack.org/tc/reference/projects/neutron.html - networking-bagpipe - networking-bgpvpn - networking-midonet - networking-odl - networking-ovn - networking-sfc - neutron-fwaas neutron-fwaas and networking-midonet are not maintained but the process can work for the old branches I suppose. Lajos Katona (lajoskatona) Előd Illés ezt írta (időpont: 2021. máj. 20., Cs, 11:58): > Hi, > > Now that neutron's stable/ocata is deleted some non-stadium project's > stable periodic job also started to fail (on stable/ocata) [1]: > > - networking-bagpipe > - networking-bgpvpn > - networking-midonet > - neutron-vpnaas > > There are two options to fix this: > > 1. transition these projects also to ocata-eol > 2. fix them by using neutron's ocata-eol tag > > (I guess the 1st option is the easiest, considering that these projects > are not among the most active ones) > > Does neutron team have a plan how to continue with these? > > Thanks, > > Előd > [1] > http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22error:%20pathspec%20'stable%2Focata'%20did%20not%20match%20any%20file(s)%20known%20to%20git.%5C%22&from=86400s > > > On 2021. 05. 14. 22:18, Előd Illés wrote: > > Hi, > > The patch was merged, so the ocata-eol tags were created for neutron > projects. After the successful tagging I have executed the branch deletion, > which has the following result: > > Branch stable/ocata successfully deleted from openstack/networking-ovn! 
> Branch stable/ocata successfully deleted from > openstack/neutron-dynamic-routing! > Branch stable/ocata successfully deleted from openstack/neutron-fwaas! > Branch stable/ocata successfully deleted from openstack/neutron-lbaas! > Branch stable/ocata successfully deleted from openstack/neutron-lib! > Branch stable/ocata successfully deleted from openstack/neutron! > > Thanks, > > Előd > > On 2021. 05. 12. 9:22, Slawek Kaplonski wrote: > > Hi, > > Dnia środa, 5 maja 2021 20:35:48 CEST Előd Illés pisze: > > > Hi, > > > > > > Ocata is unfortunately unmaintained for a long time as some general test > > > jobs are broken there, so as a stable-maint-core member I support to tag > > > neutron's stable/ocata as End of Life. After the branch is tagged, > > > please ping me and I can arrange the deletion of the branch. > > > > > > For Pike, I volunteered at the PTG in 2020 to help with reviews there, I > > > still keep that offer, however I am clearly not enough to keep it > > > maintained, besides backports are not arriving for stable/pike in > > > neutron. Anyway, if the gate is functional there, then I say we could > > > keep it open (but as far as I see how gate situation is worsen now, as > > > more and more things go wrong, I don't expect that will take long). If > > > not, then I only ask that let's do the EOL'ing first with Ocata and when > > > it is done, then continue with neutron's stable/pike. > > > > > > For the process please follow the steps here: > > > > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life > > > (with the only exception, that in the last step, instead of infra team, > > > please turn to me/release team - patch for the documentation change is > > > on the way: > > > https://review.opendev.org/c/openstack/project-team-guide/+/789932 ) > > Thx. I just proposed patch > https://review.opendev.org/c/openstack/releases/+/790904 to make > ocata-eol in all neutron projects. > > > > > > Thanks, > > > > > > Előd > > > > > > On 2021. 05. 05. 16:13, Slawek Kaplonski wrote: > > > > Hi, > > > > > > > > > > > > I checked today that stable/ocata and stable/pike branches in both > > > > Neutron and neutron stadium projects are pretty inactive since long > time. > > > > > > > > * according to [1], last patch merged patch in Neutron for stable/pike > > > > was in July 2020 and in ocata October 2019, > > > > > > > > * for stadium projects, according to [2] it was September 2020. > > > > > > > > > > > > According to [3] and [4] there are no opened patches for any of those > > > > branches for Neutron and any stadium project except neutron-lbaas. > > > > > > > > > > > > So based on that info I want to propose that we will close both those > > > > branches are EOL now and before doing that, I would like to know if > > > > anyone would like to keep those branches to be open still. 
> > > > > > > > > > > > [1] > > > > > https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > > > > > > > > > > > > [2] > > > > > https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networki > > > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged > > > > < > https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networ > > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged> > > > > > > > > [3] > > > > > https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ocata+OR+branch:stable/pike)+status:open > > > > > > > > > > > > > > [4] > > > > > https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networki > > > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open > > > > < > https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networ > > > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open> > > > > > > > > > > > > -- > > > > > > > > Slawek Kaplonski > > > > > > > > Principal Software Engineer > > > > > > > > Red Hat > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anlin.kong at gmail.com Thu May 20 11:39:22 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Thu, 20 May 2021 23:39:22 +1200 Subject: [wallaby][trove] Instance Volume Resize In-Reply-To: References: Message-ID: Modify trove service config file: [DEFAULT] max_accepted_volume_size = 10 is the default value if the config option is not specified. --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) On Wed, May 19, 2021 at 9:56 PM Ammad Syed wrote: > Hi, > > I am using wallaby / trove on ubuntu 20.04. I am trying to extend volume > of database instance. Its having trouble that instance cannot exceed volume > size of 10GB. > > My flavor has 2vcpus 4GB RAM and 10GB disk. I created a database instance > with 5GB database size and mysql datastore. The deployment has created 10GB > root and 5GB /var/lib/mysql. I have tried to extend volume to 11GB, it > failed with error that "Volume 'size' cannot exceed maximum of 10 GB, 11 > cannot be accepted". > > I want to keep root disk size to 10GB and only want to extend > /var/lib/mysql keeping the same flavor. Is it possible or should I need to > upgrade flavor as well ? > > -- > Regards, > > > Syed Ammad Ali > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu May 20 12:08:16 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 20 May 2021 12:08:16 +0000 Subject: Freenode and libera.chat In-Reply-To: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> Message-ID: <20210520120815.wdtzveu7ykxtthts@yuggoth.org> On 2021-05-19 13:49:33 +0000 (+0000), Jeremy Stanley wrote: > On 2021-05-19 14:19:02 +0100 (+0100), Erno Kuvaja wrote: > [...] 
> > I think it's our time to take swift action and show our support to > > all the hard working volunteers who were behind freenode and move > > all our activities to irc.libera.chat. > [...] > > In past years when the stability of Freenode's service came into > question, we've asserted that OFTC would probably have been a better > home for our channels from the beginning (as they're more aligned > with our community philosophies), but we ended up on Freenode mostly > due to the Ubuntu community's presence there. We'd previously been > unable to justify the impact to users of switching networks, but > there seemed to be consensus that if Freenode shut down we'd move to > OFTC. The earliest concrete proposal I can find for this was made in > March 2014, but it's come up multiple times in the years since: > > http://lists.openstack.org/pipermail/openstack-dev/2014-March/028783.html > > Honestly I'd be concerned about moving to a newly-established IRC > network, and would much prefer the stability of a known and > established one. I've also spent last night catching up all our channel registrations on OFTC. The ~150 active channels in which we operate our IRC bots are now registered to us there and under our control, with a handful of exceptions where I need to follow up with folks today to get our accessbot account added. Switching to OFTC at this point would be fairly quick, at least from an infrastructure perspective. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mnaser at vexxhost.com Thu May 20 12:21:14 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 20 May 2021 08:21:14 -0400 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On Wed, May 19, 2021 at 9:23 AM Erno Kuvaja wrote: > > Hi all, > > For those of you who have not woken up to this sad day yet. Andrew Lee has taken his stance as owner of freenode ltd. and by the (one sided) story of the former volunteer staff members basically forced the whole community out. > > As there is history of LTM shutting down networks before (snoonet), it is appropriate to expect that the intentions here are not aligned with the communities and specially the users who's data he has access to via this administrative takeover. > > I think it's our time to take swift action and show our support to all the hard working volunteers who were behind freenode and move all our activities to irc.libera.chat. > > Please see https://twitter.com/freenodestaff and Christian's letter which links to the others as well https://fuchsnet.ch/freenode-resign-letter.txt > > Best, > Erno 'jokke' Kuvaja There is two sides to each story, this is the other one: https://freenode.net/news/freenode-is-foss I recommend that so long that we don't have any problems, we keep things as is. -- Mohammed Naser VEXXHOST, Inc. From smooney at redhat.com Thu May 20 12:23:07 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 20 May 2021 13:23:07 +0100 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On Thu, 2021-05-20 at 09:30 +0200, Dmitry Tantsur wrote: > On Wed, May 19, 2021 at 3:42 PM Erno Kuvaja wrote: > > > On Wed, May 19, 2021 at 2:34 PM Dmitry Tantsur > > wrote: > > > > > > > > > > > On Wed, May 19, 2021 at 3:21 PM Erno Kuvaja wrote: > > > > > > > Hi all, > > > > > > > > For those of you who have not woken up to this sad day yet. 
Andrew Lee > > > > has taken his stance as owner of freenode ltd. and by the (one sided) story > > > > of the former volunteer staff members basically forced the whole community > > > > out. > > > > > > > > As there is history of LTM shutting down networks before (snoonet), it > > > > is appropriate to expect that the intentions here are not aligned with the > > > > communities and specially the users who's data he has access to via this > > > > administrative takeover. > > > > > > > > I think it's our time to take swift action and show our support to all > > > > the hard working volunteers who were behind freenode and move all our > > > > activities to irc.libera.chat. > > > > > > > > > > Probably not the best timing, but should we consider (again) running a > > > more advanced free software chat system? E.g. Outreachy uses Zulip, Mozilla > > > - Matrix, there are probably more. > > > > > > Yeah I'd say not the greatest timing to start such a conversation under > > pressure. > > > > FWIF I'd rather not. The signal to noise ratio tends to be very poor on > > all of them with their stickers, gifs and all the active media crap > > embedded in them. > > > > FWIW we face the need to paste images very often (screenshots of booting > bare metal), so having a native image pasting is a plus to me. > > Actually, text pasting as well. If you've ever encountered someone pasting > 50 lines of a traceback to IRC, you know why. > > Other features I'm looking for include: > 1) Native authentication > 2) Chat history and offline messages > 3) Editing and deleting messages i actully think this ^ in partacalar is not something we want to have. at least not for arbiary tiem because it can retoactivly cahnge the context or tone of a conversation so deleteion and modifcation should be avoid retoactivly. > 4) Threads (or any other way of sub-division of channels) im not sure about this but in limited cases it might be useful generally thouhg spliting a channel i think woudl be a disadvantage as generally you want to have input form the channel as a whole breakout rooms can be useful from tim eto time but that is not typically the norm. > 5) Moderation tools (better than #openstack-unregistered which is hostile > to newcomers) we have generel done well without needing to do active moderation. #openstack-unregistered only was put in place to stop spam which is slightly differnt the moderation in terems of kicks and bans form a channel which i dont recall ever needing to use upstream at least in the nova channel. perhaps it has happened but that type of moderation largely has been unneed and i hope it will remain that way. > > Nice to have: > 6) Mobile client > 7) Non-trivial syntax > > And yes, I do know that all of these (except for #3) can be simulated more > or less with 3rd party tools. But this is not friendly to newcomers who > don't have an own bouncer and familiarity with how things work in IRC > (which does not match how things work in any other current chat - see > #openstack-unregistered for an example). > > Dmitry > > P.S. > I don't suggest the infra team maintains a matrix server. I do think that > we, given how much money is made from OpenStack, can afford to do the same > thing as Mozilla: pay Element for a hosted Matrix instance (and let them > bother with scaling). > > I find the current situation of putting the burden on Freenode volunteers a > bit unfair and vote against moving to Libera for this very reason. 
> > > > > > - jokke > > > > > > > Dmitry > > > > > > > > > > > > > > Please see https://twitter.com/freenodestaff and Christian's letter > > > > which links to the others as well > > > > https://fuchsnet.ch/freenode-resign-letter.txt > > > > > > > > Best, > > > > Erno 'jokke' Kuvaja > > > > > > > > > > > > > -- > > > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > > > Commercial register: Amtsgericht Muenchen, HRB 153243, > > > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > > > O'Neill > > > > > > From dtantsur at redhat.com Thu May 20 12:35:24 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 20 May 2021 14:35:24 +0200 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On Thu, May 20, 2021 at 2:23 PM Sean Mooney wrote: > On Thu, 2021-05-20 at 09:30 +0200, Dmitry Tantsur wrote: > > On Wed, May 19, 2021 at 3:42 PM Erno Kuvaja wrote: > > > > > On Wed, May 19, 2021 at 2:34 PM Dmitry Tantsur > > > wrote: > > > > > > > > > > > > > > > On Wed, May 19, 2021 at 3:21 PM Erno Kuvaja > wrote: > > > > > > > > > Hi all, > > > > > > > > > > For those of you who have not woken up to this sad day yet. Andrew > Lee > > > > > has taken his stance as owner of freenode ltd. and by the (one > sided) story > > > > > of the former volunteer staff members basically forced the whole > community > > > > > out. > > > > > > > > > > As there is history of LTM shutting down networks before > (snoonet), it > > > > > is appropriate to expect that the intentions here are not aligned > with the > > > > > communities and specially the users who's data he has access to > via this > > > > > administrative takeover. > > > > > > > > > > I think it's our time to take swift action and show our support to > all > > > > > the hard working volunteers who were behind freenode and move all > our > > > > > activities to irc.libera.chat. > > > > > > > > > > > > > Probably not the best timing, but should we consider (again) running > a > > > > more advanced free software chat system? E.g. Outreachy uses Zulip, > Mozilla > > > > - Matrix, there are probably more. > > > > > > > > Yeah I'd say not the greatest timing to start such a conversation > under > > > pressure. > > > > > > FWIF I'd rather not. The signal to noise ratio tends to be very poor on > > > all of them with their stickers, gifs and all the active media crap > > > embedded in them. > > > > > > > FWIW we face the need to paste images very often (screenshots of booting > > bare metal), so having a native image pasting is a plus to me. > > > > Actually, text pasting as well. If you've ever encountered someone > pasting > > 50 lines of a traceback to IRC, you know why. > > > > > Other features I'm looking for include: > > 1) Native authentication > > 2) Chat history and offline messages > > 3) Editing and deleting messages > i actully think this ^ in partacalar is not something we want to have. > at least not for arbiary tiem because it can retoactivly cahnge the > context or tone of a > conversation so deleteion and modifcation should be avoid retoactivly. > This risk is mitigated by proper logging. On the other hand, it may be a good thing if people could soften their tone 1 minute after they post something (happens to me). In any case, I don't know if it outweighs the inconvenience of somebody pasting 500 LoC accidentally and not being able to remove it. 
> > > 4) Threads (or any other way of sub-division of channels) > im not sure about this but in limited cases it might be useful generally > thouhg spliting > a channel i think woudl be a disadvantage as generally you want to have > input form the channel as a whole > breakout rooms can be useful from tim eto time but that is not typically > the norm. > Well, it could be a norm for us. Pretty much every IRC meeting someone interrupts with their question. If the meeting was in a thread, it wouldn't be an issue. Interleaving communications also happen very often. > > 5) Moderation tools (better than #openstack-unregistered which is hostile > > to newcomers) > we have generel done well without needing to do active moderation. > #openstack-unregistered only was put in place to stop spam which is > slightly differnt the moderation > Okay, this probably falls under #1 - native authentication. Dmitry in terems of kicks and bans form a channel which i dont recall ever needing > to use upstream at least in the nova channel. > perhaps it has happened but that type of moderation largely has been > unneed and i hope it will remain that way. > > > > > Nice to have: > > 6) Mobile client > > 7) Non-trivial syntax > > > > And yes, I do know that all of these (except for #3) can be simulated > more > > or less with 3rd party tools. But this is not friendly to newcomers who > > don't have an own bouncer and familiarity with how things work in IRC > > (which does not match how things work in any other current chat - see > > #openstack-unregistered for an example). > > > > Dmitry > > > > P.S. > > I don't suggest the infra team maintains a matrix server. I do think that > > we, given how much money is made from OpenStack, can afford to do the > same > > thing as Mozilla: pay Element for a hosted Matrix instance (and let them > > bother with scaling). > > > > I find the current situation of putting the burden on Freenode > volunteers a > > bit unfair and vote against moving to Libera for this very reason. > > > > > > > > > > - jokke > > > > > > > > > > Dmitry > > > > > > > > > > > > > > > > > > Please see https://twitter.com/freenodestaff and Christian's > letter > > > > > which links to the others as well > > > > > https://fuchsnet.ch/freenode-resign-letter.txt > > > > > > > > > > Best, > > > > > Erno 'jokke' Kuvaja > > > > > > > > > > > > > > > > > -- > > > > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > > > > Commercial register: Amtsgericht Muenchen, HRB 153243, > > > > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, > Michael > > > > O'Neill > > > > > > > > > > > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Thu May 20 12:55:05 2021 From: zigo at debian.org (Thomas Goirand) Date: Thu, 20 May 2021 14:55:05 +0200 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: <0a336728-8ce1-2f79-1f58-7537ea716928@debian.org> On 5/19/21 4:34 PM, Jiri Podivin wrote: > +1 on sticking with the IRC. It's simple, established, and a lot of us > have tuned our workflow around it. > As a person who was using discord and slack for some time in a > professional capacity, I can only say that it's hadly an improvement > over IRC. > > And in more general terms. 
When it comes to change, I don't think "it's > newer" is much of an argument by itself. +1 to all you wrote. Thomas From jungleboyj at gmail.com Thu May 20 13:01:18 2021 From: jungleboyj at gmail.com (Jay Bryant) Date: Thu, 20 May 2021 08:01:18 -0500 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: <4bad82ed-7c4f-be29-0987-046ec3bdfc97@gmail.com> On 5/20/2021 7:21 AM, Mohammed Naser wrote: > On Wed, May 19, 2021 at 9:23 AM Erno Kuvaja wrote: >> Hi all, >> >> For those of you who have not woken up to this sad day yet. Andrew Lee has taken his stance as owner of freenode ltd. and by the (one sided) story of the former volunteer staff members basically forced the whole community out. >> >> As there is history of LTM shutting down networks before (snoonet), it is appropriate to expect that the intentions here are not aligned with the communities and specially the users who's data he has access to via this administrative takeover. >> >> I think it's our time to take swift action and show our support to all the hard working volunteers who were behind freenode and move all our activities to irc.libera.chat. >> >> Please see https://twitter.com/freenodestaff and Christian's letter which links to the others as well https://fuchsnet.ch/freenode-resign-letter.txt >> >> Best, >> Erno 'jokke' Kuvaja > There is two sides to each story, this is the other one: > > https://freenode.net/news/freenode-is-foss > > I recommend that so long that we don't have any problems, we keep things as is. All, I feel better now that we have heard from the other side of the equation.  Agree with mnaser that it appears that we do not need to make any quick decisions and can keep an eye on things for the time being. Based on the other threads that this discussion has started it appears that members of the community would like to discuss other possible forms of communication.  Doing so is a healthy exercise, but I think it would be a mistake to make any hasty decisions driven by the current state of Freenode. Jay (jungleboyj) From james.slagle at gmail.com Thu May 20 13:17:46 2021 From: james.slagle at gmail.com (James Slagle) Date: Thu, 20 May 2021 09:17:46 -0400 Subject: [TripleO] tripleo-ci-centos-8-containers-multinode now using ephemeral Heat for master/wallaby Message-ID: For awareness, I wanted to point out that tripleo-ci-centos-8-containers-multinode is now using ephemeral Heat on the master and wallaby branches. This change[1] merged yesterday that made the switch. Of note, the Heat logs for the overcloud deployment are now under the directory undercloud/home/zuul/overcloud-deploy/overcloud/heat-launcher/log in the job logs. There is also a pending docs patch that covers a bit more how the feature works: https://review.opendev.org/c/openstack/tripleo-docs/+/783008 [1] https://review.opendev.org/c/openstack/tripleo-quickstart/+/777108 -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From C-Albert.Braden at charter.com Thu May 20 13:24:11 2021 From: C-Albert.Braden at charter.com (Braden, Albert) Date: Thu, 20 May 2021 13:24:11 +0000 Subject: [EXTERNAL] Re: Freenode and libera.chat In-Reply-To: <4bad82ed-7c4f-be29-0987-046ec3bdfc97@gmail.com> References: <4bad82ed-7c4f-be29-0987-046ec3bdfc97@gmail.com> Message-ID: <2b86a7e8bd8946be97921b9884453093@ncwmexgp009.CORP.CHARTERCOM.com> It appears that Andrew Lee is the same person who ruined snoonet. I don't believe anything he says, and vote for moving to another IRC platform. 
-----Original Message----- From: Jay Bryant Sent: Thursday, May 20, 2021 9:01 AM To: openstack-discuss at lists.openstack.org Subject: [EXTERNAL] Re: Freenode and libera.chat CAUTION: The e-mail below is from an external source. Please exercise caution before opening attachments, clicking links, or following guidance. On 5/20/2021 7:21 AM, Mohammed Naser wrote: > On Wed, May 19, 2021 at 9:23 AM Erno Kuvaja wrote: >> Hi all, >> >> For those of you who have not woken up to this sad day yet. Andrew Lee has taken his stance as owner of freenode ltd. and by the (one sided) story of the former volunteer staff members basically forced the whole community out. >> >> As there is history of LTM shutting down networks before (snoonet), it is appropriate to expect that the intentions here are not aligned with the communities and specially the users who's data he has access to via this administrative takeover. >> >> I think it's our time to take swift action and show our support to all the hard working volunteers who were behind freenode and move all our activities to irc.libera.chat. >> >> Please see https://twitter.com/freenodestaff and Christian's letter which links to the others as well https://fuchsnet.ch/freenode-resign-letter.txt >> >> Best, >> Erno 'jokke' Kuvaja > There is two sides to each story, this is the other one: > > https://freenode.net/news/freenode-is-foss > > I recommend that so long that we don't have any problems, we keep things as is. All, I feel better now that we have heard from the other side of the equation.  Agree with mnaser that it appears that we do not need to make any quick decisions and can keep an eye on things for the time being. Based on the other threads that this discussion has started it appears that members of the community would like to discuss other possible forms of communication.  Doing so is a healthy exercise, but I think it would be a mistake to make any hasty decisions driven by the current state of Freenode. Jay (jungleboyj) E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. From dms at danplanet.com Thu May 20 13:30:47 2021 From: dms at danplanet.com (Dan Smith) Date: Thu, 20 May 2021 06:30:47 -0700 Subject: Freenode and libera.chat In-Reply-To: (Mohammed Naser's message of "Thu, 20 May 2021 08:21:14 -0400") References: Message-ID: > I recommend that so long that we don't have any problems, we keep things as is. Agreed. --Dan From dtantsur at redhat.com Thu May 20 13:46:59 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 20 May 2021 15:46:59 +0200 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On Thu, May 20, 2021 at 2:24 PM Mohammed Naser wrote: > On Wed, May 19, 2021 at 9:23 AM Erno Kuvaja wrote: > > > > Hi all, > > > > For those of you who have not woken up to this sad day yet. Andrew Lee > has taken his stance as owner of freenode ltd. and by the (one sided) story > of the former volunteer staff members basically forced the whole community > out. 
> > > > As there is history of LTM shutting down networks before (snoonet), it > is appropriate to expect that the intentions here are not aligned with the > communities and specially the users who's data he has access to via this > administrative takeover. > > > > I think it's our time to take swift action and show our support to all > the hard working volunteers who were behind freenode and move all our > activities to irc.libera.chat. > > > > Please see https://twitter.com/freenodestaff and Christian's letter > which links to the others as well > https://fuchsnet.ch/freenode-resign-letter.txt > > > > Best, > > Erno 'jokke' Kuvaja > > There is two sides to each story, this is the other one: > > https://freenode.net/news/freenode-is-foss > > I recommend that so long that we don't have any problems, we keep things > as is. > Well, we already have a problem: ruined trust from the participants. We've had people explicitly voicing their undesire to use freenode any longer. Given that IRC is already not very inclusive (see my other emails), we may end up with several disjointed chats for different community members - exactly the thing we want to avoid. -1 to pretending that nothing has happened. Dmitry > > -- > Mohammed Naser > VEXXHOST, Inc. > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Thu May 20 13:53:42 2021 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 20 May 2021 22:53:42 +0900 Subject: [neutron][stadium][stable] Proposal to make stable/ocata and stable/pike branches EOL In-Reply-To: References: <15209060.0YdeOJI3E6@p1> <55ff12c8-1e9c-16b5-578b-834d1ccf2563@est.tech> <6767170.WaEBY35tY3@p1> <5d31de6f-162c-56c8-eee1-efeb212df05d@est.tech> <248c563c-e382-a986-f47a-6d3b3c31c559@est.tech> Message-ID: Thanks for raising this. I would vote to mark them as EOL together regardless of that they were released as part of the official OpenStack release. All of these repositories depend on neutron and they are expected to run with a same release of neutron. so there is no big reason not to mark them as EOL. ocata branch of some repositories are not part of neutron official releases as of ocata. For such branches, we need to handle them separately (i.e, we cannot mark it as EOL using the releases repository). As far as I checked, all ocata releases under the neutron governance are already marked as EOL. Here is the list: - networking-ovn - neutron-dynamic-routing - neutron-fwaas - neutron-lbaas - neutron-lib - neutron >> Now that neutron's stable/ocata is deleted some non-stadium project's stable periodic job also started to fail (on stable/ocata) [1]: Repositories pointed out by Előd are repositories which were not under the neutron governance as of ocata release. They need to be EOL'ed separately. They all are under the neutron governance, so I think we can discuss EOL of them as the neutron team and all changes will be approved by the neutron-stable-maint team. >> - networking-bagpipe >> - networking-bgpvpn >> - networking-midonet >> - neutron-vpnaas Thanks, Akihiro Motoki (irc: amotoki) On Thu, May 20, 2021 at 8:26 PM Lajos Katona wrote: > > Hi, > Thanks for bringing this up. > I would vote on the 1. 
option to transition to ocata-eol, but would do for all Neutron Stadium projects: > https://governance.openstack.org/tc/reference/projects/neutron.html > > networking-bagpipe > networking-bgpvpn > networking-midonet > networking-odl > networking-ovn > networking-sfc > neutron-fwaas > > neutron-fwaas and networking-midonet are not maintained but the process can work for the old branches I suppose. > > Lajos Katona (lajoskatona) > > > Előd Illés ezt írta (időpont: 2021. máj. 20., Cs, 11:58): >> >> Hi, >> >> Now that neutron's stable/ocata is deleted some non-stadium project's stable periodic job also started to fail (on stable/ocata) [1]: >> >> - networking-bagpipe >> - networking-bgpvpn >> - networking-midonet >> - neutron-vpnaas >> >> There are two options to fix this: >> >> 1. transition these projects also to ocata-eol >> 2. fix them by using neutron's ocata-eol tag >> >> (I guess the 1st option is the easiest, considering that these projects are not among the most active ones) >> >> Does neutron team have a plan how to continue with these? >> >> Thanks, >> >> Előd >> >> [1] http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22error:%20pathspec%20'stable%2Focata'%20did%20not%20match%20any%20file(s)%20known%20to%20git.%5C%22&from=86400s >> >> >> On 2021. 05. 14. 22:18, Előd Illés wrote: >> >> Hi, >> >> The patch was merged, so the ocata-eol tags were created for neutron projects. After the successful tagging I have executed the branch deletion, which has the following result: >> >> Branch stable/ocata successfully deleted from openstack/networking-ovn! >> Branch stable/ocata successfully deleted from openstack/neutron-dynamic-routing! >> Branch stable/ocata successfully deleted from openstack/neutron-fwaas! >> Branch stable/ocata successfully deleted from openstack/neutron-lbaas! >> Branch stable/ocata successfully deleted from openstack/neutron-lib! >> Branch stable/ocata successfully deleted from openstack/neutron! >> >> Thanks, >> >> Előd >> >> >> On 2021. 05. 12. 9:22, Slawek Kaplonski wrote: >> >> Hi, >> >> >> Dnia środa, 5 maja 2021 20:35:48 CEST Előd Illés pisze: >> >> > Hi, >> >> > >> >> > Ocata is unfortunately unmaintained for a long time as some general test >> >> > jobs are broken there, so as a stable-maint-core member I support to tag >> >> > neutron's stable/ocata as End of Life. After the branch is tagged, >> >> > please ping me and I can arrange the deletion of the branch. >> >> > >> >> > For Pike, I volunteered at the PTG in 2020 to help with reviews there, I >> >> > still keep that offer, however I am clearly not enough to keep it >> >> > maintained, besides backports are not arriving for stable/pike in >> >> > neutron. Anyway, if the gate is functional there, then I say we could >> >> > keep it open (but as far as I see how gate situation is worsen now, as >> >> > more and more things go wrong, I don't expect that will take long). If >> >> > not, then I only ask that let's do the EOL'ing first with Ocata and when >> >> > it is done, then continue with neutron's stable/pike. >> >> > >> >> > For the process please follow the steps here: >> >> > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life >> >> > (with the only exception, that in the last step, instead of infra team, >> >> > please turn to me/release team - patch for the documentation change is >> >> > on the way: >> >> > https://review.opendev.org/c/openstack/project-team-guide/+/789932 ) >> >> >> Thx. 
I just proposed patch https://review.opendev.org/c/openstack/releases/+/790904 to make ocata-eol in all neutron projects. >> >> >> > >> >> > Thanks, >> >> > >> >> > Előd >> >> > >> >> > On 2021. 05. 05. 16:13, Slawek Kaplonski wrote: >> >> > > Hi, >> >> > > >> >> > > >> >> > > I checked today that stable/ocata and stable/pike branches in both >> >> > > Neutron and neutron stadium projects are pretty inactive since long time. >> >> > > >> >> > > * according to [1], last patch merged patch in Neutron for stable/pike >> >> > > was in July 2020 and in ocata October 2019, >> >> > > >> >> > > * for stadium projects, according to [2] it was September 2020. >> >> > > >> >> > > >> >> > > According to [3] and [4] there are no opened patches for any of those >> >> > > branches for Neutron and any stadium project except neutron-lbaas. >> >> > > >> >> > > >> >> > > So based on that info I want to propose that we will close both those >> >> > > branches are EOL now and before doing that, I would like to know if >> >> > > anyone would like to keep those branches to be open still. >> >> > > >> >> > > >> >> > > [1] >> >> > > https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ocata+OR+branch:stable/pike)+status:merged >> >> > > >> >> > > >> >> > > [2] >> >> > > https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networki >> >> > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged >> >> > > > >> > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged> >> >> > > >> >> > > [3] >> >> > > https://review.opendev.org/q/project:%255Eopenstack/neutron+(branch:stable/ocata+OR+branch:stable/pike)+status:open >> >> > > >> >> > > >> >> > > [4] >> >> > > https://review.opendev.org/q/(project:openstack/ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/neutron-.*+OR+project:%255Eopenstack/networki >> >> > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open >> >> > > > >> > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open> >> >> > > >> >> > > >> >> > > -- >> >> > > >> >> > > Slawek Kaplonski >> >> > > >> >> > > Principal Software Engineer >> >> > > >> >> > > Red Hat >> >> >> >> -- >> >> Slawek Kaplonski >> >> Principal Software Engineer >> >> Red Hat From artem.goncharov at gmail.com Thu May 20 13:58:06 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Thu, 20 May 2021 15:58:06 +0200 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: <24803964-53EF-44EC-ACC5-C20DA23A7E5E@gmail.com> > On 20. May 2021, at 15:46, Dmitry Tantsur wrote: > > > > On Thu, May 20, 2021 at 2:24 PM Mohammed Naser > wrote: > On Wed, May 19, 2021 at 9:23 AM Erno Kuvaja > wrote: > > > > Hi all, > > > > For those of you who have not woken up to this sad day yet. Andrew Lee has taken his stance as owner of freenode ltd. and by the (one sided) story of the former volunteer staff members basically forced the whole community out. > > > > As there is history of LTM shutting down networks before (snoonet), it is appropriate to expect that the intentions here are not aligned with the communities and specially the users who's data he has access to via this administrative takeover. > > > > I think it's our time to take swift action and show our support to all the hard working volunteers who were behind freenode and move all our activities to irc.libera.chat. 
> > > > Please see https://twitter.com/freenodestaff and Christian's letter which links to the others as well https://fuchsnet.ch/freenode-resign-letter.txt > > > > Best, > > Erno 'jokke' Kuvaja > > There is two sides to each story, this is the other one: > > https://freenode.net/news/freenode-is-foss > > I recommend that so long that we don't have any problems, we keep things as is. > > Well, we already have a problem: ruined trust from the participants. We've had people explicitly voicing their undesire to use freenode any longer. Given that IRC is already not very inclusive (see my other emails), we may end up with several disjointed chats for different community members - exactly the thing we want to avoid. > > -1 to pretending that nothing has happened. +1 on -1 > > Dmitry > > > -- > Mohammed Naser > VEXXHOST, Inc. > > > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From andr.kurilin at gmail.com Thu May 20 14:05:48 2021 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Thu, 20 May 2021 17:05:48 +0300 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: чт, 20 мая 2021 г. в 15:32, Sean Mooney : > On Thu, 2021-05-20 at 09:30 +0200, Dmitry Tantsur wrote: > > On Wed, May 19, 2021 at 3:42 PM Erno Kuvaja wrote: > > > > > On Wed, May 19, 2021 at 2:34 PM Dmitry Tantsur > > > wrote: > > > > > > > > > > > > > > > On Wed, May 19, 2021 at 3:21 PM Erno Kuvaja > wrote: > > > > > > > > > Hi all, > > > > > > > > > > For those of you who have not woken up to this sad day yet. Andrew > Lee > > > > > has taken his stance as owner of freenode ltd. and by the (one > sided) story > > > > > of the former volunteer staff members basically forced the whole > community > > > > > out. > > > > > > > > > > As there is history of LTM shutting down networks before > (snoonet), it > > > > > is appropriate to expect that the intentions here are not aligned > with the > > > > > communities and specially the users who's data he has access to > via this > > > > > administrative takeover. > > > > > > > > > > I think it's our time to take swift action and show our support to > all > > > > > the hard working volunteers who were behind freenode and move all > our > > > > > activities to irc.libera.chat. > > > > > > > > > > > > > Probably not the best timing, but should we consider (again) running > a > > > > more advanced free software chat system? E.g. Outreachy uses Zulip, > Mozilla > > > > - Matrix, there are probably more. > > > > > > > > Yeah I'd say not the greatest timing to start such a conversation > under > > > pressure. > > > > > > FWIF I'd rather not. The signal to noise ratio tends to be very poor on > > > all of them with their stickers, gifs and all the active media crap > > > embedded in them. > > > > > > > FWIW we face the need to paste images very often (screenshots of booting > > bare metal), so having a native image pasting is a plus to me. > > > > Actually, text pasting as well. If you've ever encountered someone > pasting > > 50 lines of a traceback to IRC, you know why. > > > > > Other features I'm looking for include: > > 1) Native authentication > > 2) Chat history and offline messages > > 3) Editing and deleting messages > i actully think this ^ in partacalar is not something we want to have. 
> at least not for arbiary tiem because it can retoactivly cahnge the > context or tone of a > conversation so deleteion and modifcation should be avoid retoactivly. > I want to note that OpenStack is a huge community with a lot of not-native speakers(in terms of English). Having an opportunity to modify the message after sending it is a great feature. > > 4) Threads (or any other way of sub-division of channels) > im not sure about this but in limited cases it might be useful generally > thouhg spliting > a channel i think woudl be a disadvantage as generally you want to have > input form the channel as a whole > breakout rooms can be useful from tim eto time but that is not typically > the norm. > > 5) Moderation tools (better than #openstack-unregistered which is hostile > > to newcomers) > we have generel done well without needing to do active moderation. > #openstack-unregistered only was put in place to stop spam which is > slightly differnt the moderation > in terems of kicks and bans form a channel which i dont recall ever > needing to use upstream at least in the nova channel. > perhaps it has happened but that type of moderation largely has been > unneed and i hope it will remain that way. > > > > > Nice to have: > > 6) Mobile client > > 7) Non-trivial syntax > > > > And yes, I do know that all of these (except for #3) can be simulated > more > > or less with 3rd party tools. But this is not friendly to newcomers who > > don't have an own bouncer and familiarity with how things work in IRC > > (which does not match how things work in any other current chat - see > > #openstack-unregistered for an example). > > > > Dmitry > > > > P.S. > > I don't suggest the infra team maintains a matrix server. I do think that > > we, given how much money is made from OpenStack, can afford to do the > same > > thing as Mozilla: pay Element for a hosted Matrix instance (and let them > > bother with scaling). > > > > I find the current situation of putting the burden on Freenode > volunteers a > > bit unfair and vote against moving to Libera for this very reason. > > > > > > > > > > - jokke > > > > > > > > > > Dmitry > > > > > > > > > > > > > > > > > > Please see https://twitter.com/freenodestaff and Christian's > letter > > > > > which links to the others as well > > > > > https://fuchsnet.ch/freenode-resign-letter.txt > > > > > > > > > > Best, > > > > > Erno 'jokke' Kuvaja > > > > > > > > > > > > > > > > > -- > > > > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > > > > Commercial register: Amtsgericht Muenchen, HRB 153243, > > > > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, > Michael > > > > O'Neill > > > > > > > > > > > > > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From owalsh at redhat.com Thu May 20 14:25:46 2021 From: owalsh at redhat.com (Oliver Walsh) Date: Thu, 20 May 2021 15:25:46 +0100 Subject: [octavia][tripleo][kolla][stable][release] $series-eol delete problem In-Reply-To: References: Message-ID: On Mon, 17 May 2021 at 14:29, Marios Andreou wrote: > On Fri, May 14, 2021 at 11:41 PM Előd Illés wrote: > > > > Hi teams in $SUBJECT, > > > > during the deletion of $series-eol tagged branches it turned out that > > the below listed branches / repositories contains merged patches on top > > of $series-eol tag. 
The issue is with this that whenever the branch is > > deleted only the $series-eol (and other) tags can be checked out, so the > > changes that were merged after the eol tags, will be *lost*. > > > > There are two options now: > > > > 1. Create another tag (something like: "$series-eol-extra"), so that the > > extra patches will not be lost completely, because they can be checked > > out with the newly created tags > > > > 2. Delete the branch anyway and don't care about the lost patch(es) > > > > Here are the list of such branches, please consider which option is good > > for the team and reply to this mail: > > > > Hello Elod > > thank you for all your work on this and apologies for the commits > after the eol tag > > Personally I vote for the easiest path which I believe is option 2 > here just discard those commits and remove the branch. I think 2 of > the three flagged here are mine (:/ sorry I should know better ;)) so > ack from me but adding owalsh/dbengt into the cc as the other commit > is his. > > Some more comments/pointers inline thanks: > > > openstack/octavia > > * stable/stein has patches on top of the stein-eol tag > > * stable/queens has patches on top of the queens-eol tag > > > > openstack/kolla > > * stable/pike has patches on top of the pike-eol tag > > * stable/ocata has patches on top of the ocata-eol tag > > > > openstack/tripleo-common > > * stable/rocky has patches on top of the rocky-eol tag > > > > for tripleo-common this is the patch in question: > > * > https://github.com/openstack/tripleo-common/compare/rocky-eol...stable/rocky > * > https://github.com/openstack/tripleo-common/commit/77a0c827cbb02c3374d72f48973ba24d6c34d50c > * Ensure tripleo ansible inventory file update is atomic > https://review.opendev.org/q/Ifa41bfcb921496978f82aee4e67fdb419cf9ffc5 > (cherry picked from commit 8e082f4 * (cherry picked from commit > c1af9b7) * (squashing commits as the 1st patch is failing in the stein > gate without > the fix from the 2nd patch) > * https://review.opendev.org/c/openstack/tripleo-common/+/765502 > > so cc'ing owalsh to allow him to raise an objection will also reach > out on irc after I send this and point to it ;) > No objection from me. Also confirmed on IRC with Daniel as he proposed the stable/rocky backport. 
Thanks, Ollie > > > openstack/os-apply-config > > * stable/pike has patches on top of the pike-eol tag > > * > https://github.com/openstack/os-apply-config/compare/pike-eol...stable/pike > * > https://github.com/openstack/os-apply-config/commit/1fcccb880e30522d66238b205a48d553a050c562 > * Remove tripleo-multinode-container|baremetal-minimal from layout > * Change-Id: I6715edd673b45dad6fba7d1987eac8677f61eaa2 > * 2 zuul.d/layout.yaml > * > https://review.opendev.org/c/openstack/os-apply-config/+/777527 > > > * stable/ocata has patches on top of the ocata-eol tag > > > > * > https://github.com/openstack/os-apply-config/compare/ocata-eol...stable/ocata > * > https://github.com/openstack/os-apply-config/commit/31768f04a30023a0d54099c1aeb80134ffe5dd64 > * Remove tripleo-multinode-container|baremetal-minimal from layout > * Change-Id: Iefe3eed322f1102344c6b54531c61a41ce4d227b > * 9 zuul.d/layout.yaml > * > https://review.opendev.org/c/openstack/os-apply-config/+/777533 > > > both of those are mine so ack from me on nuking them > > thanks to you and all the release team for checking and for your work > on this and all the release things > > regards, marios > > > > > > openstack/os-cloud-config > > stable/ocata has patches on top of the ocata-eol tag > > > > Thanks, > > > > Előd > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu May 20 14:34:10 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 20 May 2021 09:34:10 -0500 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: <1798a32b8c7.bb921e9b77826.5500902096265172161@ghanshyammann.com> ---- On Thu, 20 May 2021 07:21:14 -0500 Mohammed Naser wrote ---- > On Wed, May 19, 2021 at 9:23 AM Erno Kuvaja wrote: > > > > Hi all, > > > > For those of you who have not woken up to this sad day yet. Andrew Lee has taken his stance as owner of freenode ltd. and by the (one sided) story of the former volunteer staff members basically forced the whole community out. > > > > As there is history of LTM shutting down networks before (snoonet), it is appropriate to expect that the intentions here are not aligned with the communities and specially the users who's data he has access to via this administrative takeover. > > > > I think it's our time to take swift action and show our support to all the hard working volunteers who were behind freenode and move all our activities to irc.libera.chat. > > > > Please see https://twitter.com/freenodestaff and Christian's letter which links to the others as well https://fuchsnet.ch/freenode-resign-letter.txt > > > > Best, > > Erno 'jokke' Kuvaja > > There is two sides to each story, this is the other one: > > https://freenode.net/news/freenode-is-foss > > I recommend that so long that we don't have any problems, we keep things as is. I agree on this and not to be in hurry to take any decision. Let's wait and monitor the situation. -gmann > > -- > Mohammed Naser > VEXXHOST, Inc. > > From smooney at redhat.com Thu May 20 14:38:44 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 20 May 2021 15:38:44 +0100 Subject: Freenode and libera.chat In-Reply-To: <24803964-53EF-44EC-ACC5-C20DA23A7E5E@gmail.com> References: <24803964-53EF-44EC-ACC5-C20DA23A7E5E@gmail.com> Message-ID: On Thu, 2021-05-20 at 15:58 +0200, Artem Goncharov wrote: > > > On 20. 
May 2021, at 15:46, Dmitry Tantsur wrote: > > > > > > > > On Thu, May 20, 2021 at 2:24 PM Mohammed Naser > wrote: > > On Wed, May 19, 2021 at 9:23 AM Erno Kuvaja > wrote: > > > > > > Hi all, > > > > > > For those of you who have not woken up to this sad day yet. Andrew Lee has taken his stance as owner of freenode ltd. and by the (one sided) story of the former volunteer staff members basically forced the whole community out. > > > > > > As there is history of LTM shutting down networks before (snoonet), it is appropriate to expect that the intentions here are not aligned with the communities and specially the users who's data he has access to via this administrative takeover. > > > > > > I think it's our time to take swift action and show our support to all the hard working volunteers who were behind freenode and move all our activities to irc.libera.chat. > > > > > > Please see https://twitter.com/freenodestaff and Christian's letter which links to the others as well https://fuchsnet.ch/freenode-resign-letter.txt > > > > > > Best, > > > Erno 'jokke' Kuvaja > > > > There is two sides to each story, this is the other one: > > > > https://freenode.net/news/freenode-is-foss > > > > I recommend that so long that we don't have any problems, we keep things as is. > > > > Well, we already have a problem: ruined trust from the participants. We've had people explicitly voicing their undesire to use freenode any longer. Given that IRC is already not very inclusive (see my other emails), we may end up with several disjointed chats for different community members - exactly the thing we want to avoid. > > > > -1 to pretending that nothing has happened. > > +1 on -1 well we already kind of have that with many of our chineese contiutors using wechat instead. i dont think may of the coparte chat service are more inclusive then irc. e.g. i woudl rate slack as less inclusive since it actively prevent bridging to other services and may or may not be avaiable in different gograpical regoins. https://www.travelchinacheaper.com/index-blocked-websites-in-china https://en.wikipedia.org/wiki/List_of_websites_blocked_in_mainland_China otehr distibuted/selfhosted comuntionation network like matrix can certenly work but wehn looking options we have to take that into account. not everyone can access every service or has the bandwith to do so and useing somehtin light wight like irc give us the possiblity to reach more peopel. for what its worth hermes which is no end of line unfortunetly was a greate mobile irc client https://github.com/numixproject/android-app-suite/tree/master/Hermes and https://play.google.com/store/apps/details?id=com.ruesga.rview&gl=IE is a pretty good mobile client for gerrit both of which i have used wehn traveling to ptgs in the past when i have needed to interact with our exstiing tools away form a laptop/pc. > > > > > Dmitry > >   > > > > > > > > > > -- > > Mohammed Naser > > VEXXHOST, Inc. > > > > > > > > -- > > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > > Commercial register: Amtsgericht Muenchen, HRB 153243, > > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill > From jay.faulkner at verizonmedia.com Thu May 20 14:42:40 2021 From: jay.faulkner at verizonmedia.com (Jay Faulkner) Date: Thu, 20 May 2021 07:42:40 -0700 Subject: [E] Re: Freenode and libera.chat In-Reply-To: References: Message-ID: > Well, we already have a problem: ruined trust from the participants. 
We've had people explicitly voicing their undesire to use freenode any longer. I am one of these people. OpenStack channels remain the only place I exist on Freenode, and I'm giving the community a short grace period to migrate, or else I'll be forced to stop using IRC as a communication method. As we have many folks in our community who are invested in free software and the communities around them, I can't imagine I'm the only person unwilling to continue usage of an IRC network that's been taken over in a hostile manner and wrested from the control of democratically elected volunteers. - Jay Faulkner On Thu, May 20, 2021 at 6:52 AM Dmitry Tantsur wrote: > > > On Thu, May 20, 2021 at 2:24 PM Mohammed Naser > wrote: > >> On Wed, May 19, 2021 at 9:23 AM Erno Kuvaja wrote: >> > >> > Hi all, >> > >> > For those of you who have not woken up to this sad day yet. Andrew Lee >> has taken his stance as owner of freenode ltd. and by the (one sided) story >> of the former volunteer staff members basically forced the whole community >> out. >> > >> > As there is history of LTM shutting down networks before (snoonet), it >> is appropriate to expect that the intentions here are not aligned with the >> communities and specially the users who's data he has access to via this >> administrative takeover. >> > >> > I think it's our time to take swift action and show our support to all >> the hard working volunteers who were behind freenode and move all our >> activities to irc.libera.chat. >> > >> > Please see https://twitter.com/freenodestaff >> >> and Christian's letter which links to the others as well >> https://fuchsnet.ch/freenode-resign-letter.txt >> >> > >> > Best, >> > Erno 'jokke' Kuvaja >> >> There is two sides to each story, this is the other one: >> >> https://freenode.net/news/freenode-is-foss >> >> >> I recommend that so long that we don't have any problems, we keep things >> as is. >> > > Well, we already have a problem: ruined trust from the participants. We've > had people explicitly voicing their undesire to use freenode any longer. > Given that IRC is already not very inclusive (see my other emails), we may > end up with several disjointed chats for different community members - > exactly the thing we want to avoid. > > -1 to pretending that nothing has happened. > > Dmitry > > >> >> -- >> Mohammed Naser >> VEXXHOST, Inc. >> >> > > -- > Red Hat GmbH, https://de.redhat.com/ > > , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu May 20 14:46:33 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 20 May 2021 15:46:33 +0100 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: <17febd88ddc8299fbee5772c5dca0e407d514868.camel@redhat.com> On Thu, 2021-05-20 at 17:05 +0300, Andrey Kurilin wrote: > чт, 20 мая 2021 г. в 15:32, Sean Mooney : > > > On Thu, 2021-05-20 at 09:30 +0200, Dmitry Tantsur wrote: > > > On Wed, May 19, 2021 at 3:42 PM Erno Kuvaja wrote: > > > > > > > On Wed, May 19, 2021 at 2:34 PM Dmitry Tantsur > > > > wrote: > > > > > > > > > > > > > > > > > > > On Wed, May 19, 2021 at 3:21 PM Erno Kuvaja > > wrote: > > > > > > > > > > > Hi all, > > > > > > > > > > > > For those of you who have not woken up to this sad day yet. Andrew > > Lee > > > > > > has taken his stance as owner of freenode ltd. 
and by the (one > > sided) story > > > > > > of the former volunteer staff members basically forced the whole > > community > > > > > > out. > > > > > > > > > > > > As there is history of LTM shutting down networks before > > (snoonet), it > > > > > > is appropriate to expect that the intentions here are not aligned > > with the > > > > > > communities and specially the users who's data he has access to > > via this > > > > > > administrative takeover. > > > > > > > > > > > > I think it's our time to take swift action and show our support to > > all > > > > > > the hard working volunteers who were behind freenode and move all > > our > > > > > > activities to irc.libera.chat. > > > > > > > > > > > > > > > > Probably not the best timing, but should we consider (again) running > > a > > > > > more advanced free software chat system? E.g. Outreachy uses Zulip, > > Mozilla > > > > > - Matrix, there are probably more. > > > > > > > > > > Yeah I'd say not the greatest timing to start such a conversation > > under > > > > pressure. > > > > > > > > FWIF I'd rather not. The signal to noise ratio tends to be very poor on > > > > all of them with their stickers, gifs and all the active media crap > > > > embedded in them. > > > > > > > > > > FWIW we face the need to paste images very often (screenshots of booting > > > bare metal), so having a native image pasting is a plus to me. > > > > > > Actually, text pasting as well. If you've ever encountered someone > > pasting > > > 50 lines of a traceback to IRC, you know why. > > > > > > > > Other features I'm looking for include: > > > 1) Native authentication > > > 2) Chat history and offline messages > > > 3) Editing and deleting messages > > i actully think this ^ in partacalar is not something we want to have. > > at least not for arbiary tiem because it can retoactivly cahnge the > > context or tone of a > > conversation so deleteion and modifcation should be avoid retoactivly. > > > > I want to note that OpenStack is a huge community with a lot of not-native > speakers(in terms of English). > Having an opportunity to modify the message after sending it is a great > feature. yes that is true and as a native english speaker with terrible spelling if i could edit some messages after the fact to make them more clear i might use that to a limited degree. realistically i would have to fix almost every message i send. i have already corrected this message several times. that is what i ment by depeing on the period of time. if its arbitary and can be changed at any time after its sent i dont think its a great featue but if it X seconds after teh message was sent and if it alowed you to see the orginal messagei could see it as a valueble thing. although i do feel liek we woudl be better havign this converation in a different thread/form then this specic email tread there are pros and cons to irc and all the other options. but we should adress discussign this and any cahnge we may or may not make independtly form the current events related to freenode. if we had the abilty to edit message have a log of the orginal and update meesage would still be valueable to have in parallel so that we can use it for documentation reasons wehn refrencing it in reviews extra and that is where my concern woudl be. maintian the archive usage of chat logs for later refrence in email and gerrit discussions. 
> > > > > 4) Threads (or any other way of sub-division of channels) > > im not sure about this but in limited cases it might be useful generally > > thouhg spliting > > a channel i think woudl be a disadvantage as generally you want to have > > input form the channel as a whole > > breakout rooms can be useful from tim eto time but that is not typically > > the norm. > > > 5) Moderation tools (better than #openstack-unregistered which is hostile > > > to newcomers) > > we have generel done well without needing to do active moderation. > > #openstack-unregistered only was put in place to stop spam which is > > slightly differnt the moderation > > in terems of kicks and bans form a channel which i dont recall ever > > needing to use upstream at least in the nova channel. > > perhaps it has happened but that type of moderation largely has been > > unneed and i hope it will remain that way. > > > > > > > > Nice to have: > > > 6) Mobile client > > > 7) Non-trivial syntax > > > > > > And yes, I do know that all of these (except for #3) can be simulated > > more > > > or less with 3rd party tools. But this is not friendly to newcomers who > > > don't have an own bouncer and familiarity with how things work in IRC > > > (which does not match how things work in any other current chat - see > > > #openstack-unregistered for an example). > > > > > > Dmitry > > > > > > P.S. > > > I don't suggest the infra team maintains a matrix server. I do think that > > > we, given how much money is made from OpenStack, can afford to do the > > same > > > thing as Mozilla: pay Element for a hosted Matrix instance (and let them > > > bother with scaling). > > > > > > I find the current situation of putting the burden on Freenode > > volunteers a > > > bit unfair and vote against moving to Libera for this very reason. > > > > > > > > > > > > > > - jokke > > > > > > > > > > > > > Dmitry > > > > > > > > > > > > > > > > > > > > > > Please see https://twitter.com/freenodestaff and Christian's > > letter > > > > > > which links to the others as well > > > > > > https://fuchsnet.ch/freenode-resign-letter.txt > > > > > > > > > > > > Best, > > > > > > Erno 'jokke' Kuvaja > > > > > > > > > > > > > > > > > > > > > -- > > > > > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > > > > > Commercial register: Amtsgericht Muenchen, HRB 153243, > > > > > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, > > Michael > > > > > O'Neill > > > > > > > > > > > > > > > > > > > > > From mnaser at vexxhost.com Thu May 20 14:55:17 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Thu, 20 May 2021 10:55:17 -0400 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On Wed, May 19, 2021 at 9:23 AM Erno Kuvaja wrote: > > Hi all, > > For those of you who have not woken up to this sad day yet. Andrew Lee has taken his stance as owner of freenode ltd. and by the (one sided) story of the former volunteer staff members basically forced the whole community out. > > As there is history of LTM shutting down networks before (snoonet), it is appropriate to expect that the intentions here are not aligned with the communities and specially the users who's data he has access to via this administrative takeover. > > I think it's our time to take swift action and show our support to all the hard working volunteers who were behind freenode and move all our activities to irc.libera.chat. 
> > Please see https://twitter.com/freenodestaff and Christian's letter which links to the others as well https://fuchsnet.ch/freenode-resign-letter.txt > > Best, > Erno 'jokke' Kuvaja I'd like to invite those who feel very strongly about this subject to add it to the OpenStack technical committee meeting agenda and bring it up to discussion. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee Given the next meeting is in 5 minutes, it might be a little late for this week, but good to push for next weeks. Thank you. -- Mohammed Naser VEXXHOST, Inc. From ruslanas at lpic.lt Thu May 20 15:00:04 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Thu, 20 May 2021 17:00:04 +0200 Subject: [ussuri][tripleo][ironic][tftp] add --blocksize to tftp during installation Message-ID: Hi all, I wanted to ask, is there an option in tripleO to deploy undercloud tftp server with additional option, I had to add it to /usr/share/openstack-tripleo-heat-templates/deployment/ironic/ironic-pxe-container-puppet.yaml, so my container start cmd looks like this now: tripleoussuri/centos-binary-ironic-pxe:current-tripleo /bin/bash -c BIND_HOST=$(hiera ironic::pxe::tftp_bind_host -c /etc/puppet/hiera.yaml); /usr/sbin/in.tftpd --foreground --user root --address $BIND_HOST:69 --blocksize 1399 --m ap-file /var/lib/ironic/tftpboot/map-file /var/lib/ironic/tftpboot ironic_pxe_tftp But I believe it should be possibility with ExtraConfigUndercloud? I know, our network better should be fixed, but it will take several decades :) and while it will be fixed I want to automate deployment and it should be upgrade resistant :) Thanks for the advice in advance. -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Thu May 20 15:01:24 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Thu, 20 May 2021 17:01:24 +0200 Subject: Freenode and libera.chat In-Reply-To: References: <24803964-53EF-44EC-ACC5-C20DA23A7E5E@gmail.com> Message-ID: <188407D4-46D5-416A-8529-CB856499860B@gmail.com> > On 20. May 2021, at 16:38, Sean Mooney wrote: > > On Thu, 2021-05-20 at 15:58 +0200, Artem Goncharov wrote: >> >>> On 20. May 2021, at 15:46, Dmitry Tantsur wrote: >>> >>> >>> >>> On Thu, May 20, 2021 at 2:24 PM Mohammed Naser > wrote: >>> On Wed, May 19, 2021 at 9:23 AM Erno Kuvaja > wrote: >>>> >>>> Hi all, >>>> >>>> For those of you who have not woken up to this sad day yet. Andrew Lee has taken his stance as owner of freenode ltd. and by the (one sided) story of the former volunteer staff members basically forced the whole community out. >>>> >>>> As there is history of LTM shutting down networks before (snoonet), it is appropriate to expect that the intentions here are not aligned with the communities and specially the users who's data he has access to via this administrative takeover. >>>> >>>> I think it's our time to take swift action and show our support to all the hard working volunteers who were behind freenode and move all our activities to irc.libera.chat. >>>> >>>> Please see https://twitter.com/freenodestaff and Christian's letter which links to the others as well https://fuchsnet.ch/freenode-resign-letter.txt >>>> >>>> Best, >>>> Erno 'jokke' Kuvaja >>> >>> There is two sides to each story, this is the other one: >>> >>> https://freenode.net/news/freenode-is-foss >>> >>> I recommend that so long that we don't have any problems, we keep things as is. 
>>> >>> Well, we already have a problem: ruined trust from the participants. We've had people explicitly voicing their undesire to use freenode any longer. Given that IRC is already not very inclusive (see my other emails), we may end up with several disjointed chats for different community members - exactly the thing we want to avoid. >>> >>> -1 to pretending that nothing has happened. >> >> +1 on -1 > > well we already kind of have that with many of our chineese contiutors using wechat > instead. i dont think may of the coparte chat service are more inclusive then irc. > e.g. i woudl rate slack as less inclusive since it actively prevent bridging to other services > and may or may not be avaiable in different gograpical regoins. So far I hear Slack is bad, Slack is bad. But nobody proposed we should switch to Slack (I am myself not fan of it). Instead we first need to admit there are issues with existing solution and start thinking how to deal with that (I feel exactly this ack is not what we all share). Feels like “IRC is a religion” and thus this arguing. > https://www.travelchinacheaper.com/index-blocked-websites-in-china > https://en.wikipedia.org/wiki/List_of_websites_blocked_in_mainland_China > otehr distibuted/selfhosted comuntionation network like matrix can certenly work but wehn > looking options we have to take that into account. +1 > > not everyone can access every service or has the bandwith to do so and useing somehtin light wight like irc > give us the possiblity to reach more peopel. > > for what its worth hermes which is no end of line unfortunetly was a greate mobile irc client > https://github.com/numixproject/android-app-suite/tree/master/Hermes > and https://play.google.com/store/apps/details?id=com.ruesga.rview&gl=IE is a pretty good mobile client for gerrit > both of which i have used wehn traveling to ptgs in the past when i have needed to interact with our exstiing tools away form a laptop/pc. And here there is some sort of contradiction (or at least me disagreeing): IRC due to its age and as a consequence absence of modern (and, importantly, maintained) tools and apps is not really helping to reach more people. I am personally struggling to find any working irc bouncer (I even already thrown away idea of having mobile app at all). My colleagues are not present in IRC at all due to that reason (it is simply not worth of invest). For the links you gave I think that Gerrit has reasonably good mobile WebUI support (last months I am fine with eventually reviewing/approving/rechecking changes from mobile browser). For IRC you can say there is nothing like that (due to the protocol specifics). >> >>> >>> Dmitry >>> >>> >>> >>> >>> >>> -- >>> Mohammed Naser >>> VEXXHOST, Inc. >>> >>> >>> >>> -- >>> Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, >>> Commercial register: Amtsgericht Muenchen, HRB 153243, >>> Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill >> > > From zigo at debian.org Thu May 20 15:47:56 2021 From: zigo at debian.org (Thomas Goirand) Date: Thu, 20 May 2021 17:47:56 +0200 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: <98d25fb5-9cd5-96bf-33a3-db17bb236cfd@debian.org> On 5/20/21 2:35 PM, Dmitry Tantsur wrote: > Well, it could be a norm for us. Pretty much every IRC meeting someone > interrupts with their question. If the meeting was in a thread, it > wouldn't be an issue. Interleaving communications also happen very often. 
Threads is the most horrible concept ever invented for chat. You get 100s of them, and when someone replies, you never know in which thread. Slack is really horrible for this... Cheers, Thomas Goirand (zigo) From zigo at debian.org Thu May 20 15:48:51 2021 From: zigo at debian.org (Thomas Goirand) Date: Thu, 20 May 2021 17:48:51 +0200 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On 5/20/21 4:05 PM, Andrey Kurilin wrote: > Having an opportunity to modify the message after sending it is a great > feature. What's wrong with correcting yourself later on, rather than editing? Thomas From martin.chlumsky at gmail.com Thu May 20 15:55:45 2021 From: martin.chlumsky at gmail.com (Martin Chlumsky) Date: Thu, 20 May 2021 11:55:45 -0400 Subject: [horizon] Cross-domain user/role/project management Message-ID: Hello, We run a cloud service with multiple keystone domains (Default domain and later we added a federated domain (AzureAD)). We are giving "federated domain users" roles on the projects in the Default domain which existed before the integration with AureAD. Is there a way to manage cross-domain role assignments in Horizon? A horizon setting I missed? I tried as admin and only users and projects of one domain at a time ever show up (I tried switching the domain context to Default and the federated domain). Thank you, Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Thu May 20 15:57:37 2021 From: zigo at debian.org (Thomas Goirand) Date: Thu, 20 May 2021 17:57:37 +0200 Subject: Freenode and libera.chat In-Reply-To: <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> Message-ID: <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> On 5/19/21 6:22 PM, Artem Goncharov wrote: > Yes, pool would be great. > > Please do not take this offensive, but just stating IRC survived till now and thus we should keep it is not really productive from my pov. What about: everything else than IRC is just plain crap? Seriously, that's plain truth... > Why is everything what OpenStack doing/using is so complex? (Please do not comment on the items below, I’m not really interested in any answers/explanations. This is a rhetorical question) > - gerrit. Yes it is great, yes it is fulfilling our needs. But how much we would lower the entry barrier for the contributions not using such complex setup that we have. > - irc. Yes it survived till now. Yes it does simple things the best way. When I am online - everything is perfect (except of often connection drops). But the fun starts when I am not online (one of the simplest things for the communication platform with normally 60% of the day duration). Why should anyone care of searching any reasonably maintained IRC bouncer (or grep through eavesdrop logs), would should anyone pay for a simple mobile client? > - issue tracker. You know yourself... Gerrit is just wonderful. What's hard isn't gerrit itself, is the way we are processing the auth, which is another problem. As for IRC bouncer, have you ever tried Quassel? It comes with: - a heavy client on all major platforms (Linux, Windows, Mac) - a mobile client (which is quite nice, really...) - an irc bouncer that's so easy to setup all of that integrated, in a single package. It's super super easy to setup and I love it. Please do not replace this wonder by Slack or one of its clones... 
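To give an idea of how little work that actually is, here is a minimal sketch on a Debian/Ubuntu box (assuming the stock packages and default settings, adjust for your distro):

  sudo apt install quassel-core   # the headless core, i.e. the bouncer part
  # the core listens on TCP 4242 by default; open that port to the
  # machines you will be connecting from

Then install quassel-client on your desktop (or Quasseldroid on Android), point it at your-server:4242 and create the core user on the first connect. The core keeps the backlog, so every client you attach afterwards sees the same history.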
Cheers, Thomas Goirand (zigo) From miguel at mlavalle.com Thu May 20 16:01:29 2021 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 20 May 2021 11:01:29 -0500 Subject: [neutron] Drivers meeting 21.05.2021 and 28.05.2021 cancelled In-Reply-To: <9430363.YgL4pZciUx@p1> References: <9430363.YgL4pZciUx@p1> Message-ID: Hi Slawek, I will be on PTO on June 4th and 11th. So I will see you again in the drivers meeting until June 18th Cheers On Thu, May 20, 2021 at 2:09 AM Slawek Kaplonski wrote: > Hi, > > We don't have any new RFEs to discuss [1] so let's cancel tomorrow's > drivers meeting. > > Next week (28.05.2021) we have Recharge Day in Red Hat so we will probably > not have quorum on the drivers meeting and I will also be offline so let's > cancel it too. > > See You on the drivers meeting in 2 weeks (4.06.2021). > > Have a great weekend :) > > [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Thu May 20 16:02:43 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 20 May 2021 18:02:43 +0200 Subject: Freenode and libera.chat In-Reply-To: <98d25fb5-9cd5-96bf-33a3-db17bb236cfd@debian.org> References: <98d25fb5-9cd5-96bf-33a3-db17bb236cfd@debian.org> Message-ID: On Thu, May 20, 2021 at 5:50 PM Thomas Goirand wrote: > On 5/20/21 2:35 PM, Dmitry Tantsur wrote: > > Well, it could be a norm for us. Pretty much every IRC meeting someone > > interrupts with their question. If the meeting was in a thread, it > > wouldn't be an issue. Interleaving communications also happen very often. > > Threads is the most horrible concept ever invented for chat. You get > 100s of them, and when someone replies, you never know in which thread. > Slack is really horrible for this... > Problems of Slack UI are not problems with threading. Slack is terrible, no disagreement here. Also, do I understand you right that when you have 3 conversations going on at the same time, you always have an easy time understanding which one a ping corresponds to? I doubt it. Threads make the situation strictly better, assuming people don't go overboard with them. Another counter-argument: not having threads forces newcomers to use private messages. I cannot count how many times I've heard "asking here because I don't want to disturb the conversation". > > Cheers, > > Thomas Goirand (zigo) > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Thu May 20 16:06:55 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 20 May 2021 18:06:55 +0200 Subject: Freenode and libera.chat In-Reply-To: <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> Message-ID: On Thu, May 20, 2021 at 6:01 PM Thomas Goirand wrote: > On 5/19/21 6:22 PM, Artem Goncharov wrote: > > Yes, pool would be great. > > > > Please do not take this offensive, but just stating IRC survived till > now and thus we should keep it is not really productive from my pov. > > What about: everything else than IRC is just plain crap? 
Seriously, > that's plain truth... > So is IRC. Seriously, I like bashing Slack as much as anyone, but this goes a bit overboard. You're disrespecting a decent number of FOSS projects that put good effort in making next generation communication platforms. > > > Why is everything what OpenStack doing/using is so complex? (Please do > not comment on the items below, I’m not really interested in any > answers/explanations. This is a rhetorical question) > > - gerrit. Yes it is great, yes it is fulfilling our needs. But how much > we would lower the entry barrier for the contributions not using such > complex setup that we have. > > - irc. Yes it survived till now. Yes it does simple things the best way. > When I am online - everything is perfect (except of often connection > drops). But the fun starts when I am not online (one of the simplest things > for the communication platform with normally 60% of the day duration). Why > should anyone care of searching any reasonably maintained IRC bouncer (or > grep through eavesdrop logs), would should anyone pay for a simple mobile > client? > > - issue tracker. You know yourself... > > Gerrit is just wonderful. What's hard isn't gerrit itself, is the way we > are processing the auth, which is another problem. > > As for IRC bouncer, have you ever tried Quassel? It comes with: > - a heavy client on all major platforms (Linux, Windows, Mac) > - a mobile client (which is quite nice, really...) > - an irc bouncer that's so easy to setup > Do I get it right that you're suggesting that YOU will maintain a free IRC bouncer for everyone who needs it for OpenStack business? Or do you suggest that everyone sets up their own one? Even outreachy interns, drive-by contributors and non-coding contributors? In other words, "just use an IRC bouncer" shifts the problem from us to those who want to talk to us. So much for inclusiveness. Dmitry > > all of that integrated, in a single package. It's super super easy to > setup and I love it. Please do not replace this wonder by Slack or one > of its clones... > > Cheers, > > Thomas Goirand (zigo) > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Thu May 20 16:34:41 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 20 May 2021 09:34:41 -0700 Subject: [sdk] R1.0 preparation work In-Reply-To: <2FD4717F-E42B-4C5D-8AC8-67B8D26FA4DB@gmail.com> References: <2FD4717F-E42B-4C5D-8AC8-67B8D26FA4DB@gmail.com> Message-ID: Hi Artem, My one concern about new features only landing in the feature branch is we have dependencies that may require new service project features to land in the main branch. I.e. The Octavia dashboard uses openstacksdk, so if we need to add a new feature we need to land it in SDK at the same time. Can we consider that it is ok for project related new features to land in both branches? Michael On Wed, May 19, 2021 at 3:40 AM Artem Goncharov wrote: > > Hey all, > > As we were discussing during the PTG preparation work for the openstacksdk R1.0 has been started. There is now feature branch “feature/r1” and a set of patches already in place ([1]). 
(While https://review.opendev.org/c/openstack/devstack/+/791541 is not merged there are no functional tests running, but that is not blocking from doing the main work) > > Things to be done: > > - get rid of all direct REST calls in the cloud layer. Instead reuse corresponding proxies [must] > - generalise tag, metadata, quota(set), limits [must] > - clean the code from some deprecated things and py2 remaining (if any) [must] > - review resource caching to be implemented on the resource layer [optional] > - introduction of read-only resource properties [optional] > - restructure documentation to make it more used friendly [optional] > > > Planned R1 interface changes > > Every cloud layer method (Connection.connection.***, and not i.e. Connection.connection.compute.***) will consistently return Resource objects. At the moment there is a mix of Munch and Resource types depending on the case. Resource class itself fulfil dict, Munch and attribute interface, so there should be no breaking changes for getting info out of it. > The only known limitation is that compared to bare dict/Munch it might be not allowed to modify some properties directly on the object. Ansible collection modules [2] would be modified to explicitly convert return of sdk into dict (Resource.to_dict()) to further operate on it. This means that in some cases older Ansible modules (2.7-2.9) will not properly work with newer SDK. Zuul jobs and everybody stuck without possibility to use newer Ansible collection [2] are potential victims here (depends on the real usage pattern). Sadly there is no way around it, since historically ansible modules operate whatever SDK returns and Ansible sometimes decides to alter objects (i.e. no_log case) what we might want to forbid (read-only or virtual properties). Sorry about that. > In some rare cases attribute naming (whatever returned by the cloud layer) might be affected (i.e. is_bootable vs bootable). We are going to strictly bring all names to the convention. > > > Due to the big amount of changes touching lot of files here I propose to stop adding new features into the master branch directly and instead put them into feature/r1 branch. I want to keep master during this time more like a stable branch for bugfixes and other important things. Once r1 branch feel ready we will merge it into master and most likely release something like RC to be able to continue integration work with Ansible. > > > Everybody interested in the future of sdk is welcome in doing reviews and code [1] to know what comes and to speed up the work. > > > Any concerns? Suggestions? > > > Regards, > Artem > > > [1] https://review.opendev.org/q/project:openstack%252Fopenstacksdk+branch:feature%252Fr1 > [2] https://opendev.org/openstack/ansible-collections-openstack > From artem.goncharov at gmail.com Thu May 20 16:36:02 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Thu, 20 May 2021 18:36:02 +0200 Subject: [sdk] R1.0 preparation work In-Reply-To: References: <2FD4717F-E42B-4C5D-8AC8-67B8D26FA4DB@gmail.com> Message-ID: <38E0FC89-AA56-41B7-BF9C-E3E54FA7B986@gmail.com> Yes, absolutely. I would prefer us to implement those in feature branch and cherry-picking those to master. > On 20. May 2021, at 18:34, Michael Johnson wrote: > > Hi Artem, > > My one concern about new features only landing in the feature branch > is we have dependencies that may require new service project features > to land in the main branch. > I.e. 
The Octavia dashboard uses openstacksdk, so if we need to add a > new feature we need to land it in SDK at the same time. > > Can we consider that it is ok for project related new features to land > in both branches? > > Michael > > On Wed, May 19, 2021 at 3:40 AM Artem Goncharov > wrote: >> >> Hey all, >> >> As we were discussing during the PTG preparation work for the openstacksdk R1.0 has been started. There is now feature branch “feature/r1” and a set of patches already in place ([1]). (While https://review.opendev.org/c/openstack/devstack/+/791541 is not merged there are no functional tests running, but that is not blocking from doing the main work) >> >> Things to be done: >> >> - get rid of all direct REST calls in the cloud layer. Instead reuse corresponding proxies [must] >> - generalise tag, metadata, quota(set), limits [must] >> - clean the code from some deprecated things and py2 remaining (if any) [must] >> - review resource caching to be implemented on the resource layer [optional] >> - introduction of read-only resource properties [optional] >> - restructure documentation to make it more used friendly [optional] >> >> >> Planned R1 interface changes >> >> Every cloud layer method (Connection.connection.***, and not i.e. Connection.connection.compute.***) will consistently return Resource objects. At the moment there is a mix of Munch and Resource types depending on the case. Resource class itself fulfil dict, Munch and attribute interface, so there should be no breaking changes for getting info out of it. >> The only known limitation is that compared to bare dict/Munch it might be not allowed to modify some properties directly on the object. Ansible collection modules [2] would be modified to explicitly convert return of sdk into dict (Resource.to_dict()) to further operate on it. This means that in some cases older Ansible modules (2.7-2.9) will not properly work with newer SDK. Zuul jobs and everybody stuck without possibility to use newer Ansible collection [2] are potential victims here (depends on the real usage pattern). Sadly there is no way around it, since historically ansible modules operate whatever SDK returns and Ansible sometimes decides to alter objects (i.e. no_log case) what we might want to forbid (read-only or virtual properties). Sorry about that. >> In some rare cases attribute naming (whatever returned by the cloud layer) might be affected (i.e. is_bootable vs bootable). We are going to strictly bring all names to the convention. >> >> >> Due to the big amount of changes touching lot of files here I propose to stop adding new features into the master branch directly and instead put them into feature/r1 branch. I want to keep master during this time more like a stable branch for bugfixes and other important things. Once r1 branch feel ready we will merge it into master and most likely release something like RC to be able to continue integration work with Ansible. >> >> >> Everybody interested in the future of sdk is welcome in doing reviews and code [1] to know what comes and to speed up the work. >> >> >> Any concerns? Suggestions? 
>> >> >> Regards, >> Artem >> >> >> [1] https://review.opendev.org/q/project:openstack%252Fopenstacksdk+branch:feature%252Fr1 >> [2] https://opendev.org/openstack/ansible-collections-openstack >> From johnsomor at gmail.com Thu May 20 16:37:08 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Thu, 20 May 2021 09:37:08 -0700 Subject: [sdk] R1.0 preparation work In-Reply-To: <38E0FC89-AA56-41B7-BF9C-E3E54FA7B986@gmail.com> References: <2FD4717F-E42B-4C5D-8AC8-67B8D26FA4DB@gmail.com> <38E0FC89-AA56-41B7-BF9C-E3E54FA7B986@gmail.com> Message-ID: Great, I will try to make sure anyone that needs to add a feature for Octavia/Designate will follow that process. Michael On Thu, May 20, 2021 at 9:36 AM Artem Goncharov wrote: > > Yes, absolutely. I would prefer us to implement those in feature branch and cherry-picking those to master. > > > > On 20. May 2021, at 18:34, Michael Johnson wrote: > > > > Hi Artem, > > > > My one concern about new features only landing in the feature branch > > is we have dependencies that may require new service project features > > to land in the main branch. > > I.e. The Octavia dashboard uses openstacksdk, so if we need to add a > > new feature we need to land it in SDK at the same time. > > > > Can we consider that it is ok for project related new features to land > > in both branches? > > > > Michael > > > > On Wed, May 19, 2021 at 3:40 AM Artem Goncharov > > wrote: > >> > >> Hey all, > >> > >> As we were discussing during the PTG preparation work for the openstacksdk R1.0 has been started. There is now feature branch “feature/r1” and a set of patches already in place ([1]). (While https://review.opendev.org/c/openstack/devstack/+/791541 is not merged there are no functional tests running, but that is not blocking from doing the main work) > >> > >> Things to be done: > >> > >> - get rid of all direct REST calls in the cloud layer. Instead reuse corresponding proxies [must] > >> - generalise tag, metadata, quota(set), limits [must] > >> - clean the code from some deprecated things and py2 remaining (if any) [must] > >> - review resource caching to be implemented on the resource layer [optional] > >> - introduction of read-only resource properties [optional] > >> - restructure documentation to make it more used friendly [optional] > >> > >> > >> Planned R1 interface changes > >> > >> Every cloud layer method (Connection.connection.***, and not i.e. Connection.connection.compute.***) will consistently return Resource objects. At the moment there is a mix of Munch and Resource types depending on the case. Resource class itself fulfil dict, Munch and attribute interface, so there should be no breaking changes for getting info out of it. > >> The only known limitation is that compared to bare dict/Munch it might be not allowed to modify some properties directly on the object. Ansible collection modules [2] would be modified to explicitly convert return of sdk into dict (Resource.to_dict()) to further operate on it. This means that in some cases older Ansible modules (2.7-2.9) will not properly work with newer SDK. Zuul jobs and everybody stuck without possibility to use newer Ansible collection [2] are potential victims here (depends on the real usage pattern). Sadly there is no way around it, since historically ansible modules operate whatever SDK returns and Ansible sometimes decides to alter objects (i.e. no_log case) what we might want to forbid (read-only or virtual properties). Sorry about that. 
> >> In some rare cases attribute naming (whatever returned by the cloud layer) might be affected (i.e. is_bootable vs bootable). We are going to strictly bring all names to the convention. > >> > >> > >> Due to the big amount of changes touching lot of files here I propose to stop adding new features into the master branch directly and instead put them into feature/r1 branch. I want to keep master during this time more like a stable branch for bugfixes and other important things. Once r1 branch feel ready we will merge it into master and most likely release something like RC to be able to continue integration work with Ansible. > >> > >> > >> Everybody interested in the future of sdk is welcome in doing reviews and code [1] to know what comes and to speed up the work. > >> > >> > >> Any concerns? Suggestions? > >> > >> > >> Regards, > >> Artem > >> > >> > >> [1] https://review.opendev.org/q/project:openstack%252Fopenstacksdk+branch:feature%252Fr1 > >> [2] https://opendev.org/openstack/ansible-collections-openstack > >> > From artem.goncharov at gmail.com Thu May 20 16:38:54 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Thu, 20 May 2021 18:38:54 +0200 Subject: [sdk] R1.0 preparation work In-Reply-To: References: <2FD4717F-E42B-4C5D-8AC8-67B8D26FA4DB@gmail.com> Message-ID: Awesome. My rough plan is to have feature branch merged back to master by end of summer (but you know - all depends on reviews speed). Artem > On 20. May 2021, at 18:34, Michael Johnson wrote: > > Hi Artem, > > My one concern about new features only landing in the feature branch > is we have dependencies that may require new service project features > to land in the main branch. > I.e. The Octavia dashboard uses openstacksdk, so if we need to add a > new feature we need to land it in SDK at the same time. > > Can we consider that it is ok for project related new features to land > in both branches? > > Michael > > On Wed, May 19, 2021 at 3:40 AM Artem Goncharov > wrote: >> >> Hey all, >> >> As we were discussing during the PTG preparation work for the openstacksdk R1.0 has been started. There is now feature branch “feature/r1” and a set of patches already in place ([1]). (While https://review.opendev.org/c/openstack/devstack/+/791541 is not merged there are no functional tests running, but that is not blocking from doing the main work) >> >> Things to be done: >> >> - get rid of all direct REST calls in the cloud layer. Instead reuse corresponding proxies [must] >> - generalise tag, metadata, quota(set), limits [must] >> - clean the code from some deprecated things and py2 remaining (if any) [must] >> - review resource caching to be implemented on the resource layer [optional] >> - introduction of read-only resource properties [optional] >> - restructure documentation to make it more used friendly [optional] >> >> >> Planned R1 interface changes >> >> Every cloud layer method (Connection.connection.***, and not i.e. Connection.connection.compute.***) will consistently return Resource objects. At the moment there is a mix of Munch and Resource types depending on the case. Resource class itself fulfil dict, Munch and attribute interface, so there should be no breaking changes for getting info out of it. >> The only known limitation is that compared to bare dict/Munch it might be not allowed to modify some properties directly on the object. Ansible collection modules [2] would be modified to explicitly convert return of sdk into dict (Resource.to_dict()) to further operate on it. 
This means that in some cases older Ansible modules (2.7-2.9) will not properly work with newer SDK. Zuul jobs and everybody stuck without possibility to use newer Ansible collection [2] are potential victims here (depends on the real usage pattern). Sadly there is no way around it, since historically ansible modules operate whatever SDK returns and Ansible sometimes decides to alter objects (i.e. no_log case) what we might want to forbid (read-only or virtual properties). Sorry about that. >> In some rare cases attribute naming (whatever returned by the cloud layer) might be affected (i.e. is_bootable vs bootable). We are going to strictly bring all names to the convention. >> >> >> Due to the big amount of changes touching lot of files here I propose to stop adding new features into the master branch directly and instead put them into feature/r1 branch. I want to keep master during this time more like a stable branch for bugfixes and other important things. Once r1 branch feel ready we will merge it into master and most likely release something like RC to be able to continue integration work with Ansible. >> >> >> Everybody interested in the future of sdk is welcome in doing reviews and code [1] to know what comes and to speed up the work. >> >> >> Any concerns? Suggestions? >> >> >> Regards, >> Artem >> >> >> [1] https://review.opendev.org/q/project:openstack%252Fopenstacksdk+branch:feature%252Fr1 >> [2] https://opendev.org/openstack/ansible-collections-openstack >> From marios at redhat.com Thu May 20 16:44:25 2021 From: marios at redhat.com (Marios Andreou) Date: Thu, 20 May 2021 19:44:25 +0300 Subject: [TripleO] next irc meeting Tuesday 25 May @ 1400 UTC in #tripleo Message-ID: Reminder that the next TripleO irc meeting is: ** Tuesday 25 May 1400 UTC in freenode irc channel: #tripleo ** ** https://wiki.openstack.org/wiki/Meetings/TripleO ** ** https://etherpad.opendev.org/p/tripleo-meeting-items ** Please add anything you want to highlight at https://etherpad.opendev.org/p/tripleo-meeting-items This can be recently completed things, ongoing review requests, blocking issues, or anything else tripleo you want to share. Our last meeting was on May 11 - you can find the logs there http://eavesdrop.openstack.org/meetings/tripleo/2021/tripleo.2021-05-11-14.00.html Hope you can make it on Tuesday, regards, marios From gouthampravi at gmail.com Thu May 20 17:00:46 2021 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 20 May 2021 10:00:46 -0700 Subject: [sdk] R1.0 preparation work In-Reply-To: <38E0FC89-AA56-41B7-BF9C-E3E54FA7B986@gmail.com> References: <2FD4717F-E42B-4C5D-8AC8-67B8D26FA4DB@gmail.com> <38E0FC89-AA56-41B7-BF9C-E3E54FA7B986@gmail.com> Message-ID: On Thu, May 20, 2021 at 9:39 AM Artem Goncharov wrote: > Yes, absolutely. I would prefer us to implement those in feature branch > and cherry-picking those to master. > +1 Awesome. We have a number of manila patches in-flight that I wish to refresh, i'll retarget them to the r1 branch and cherry-pick into master when they merge. We don't yet use openstacksdk in manila-ui or python-manilaclient, or have our own ansible collections modules yet - so while we chase features, this may be a delayed concern for the manila folks. > > > > On 20. May 2021, at 18:34, Michael Johnson wrote: > > > > Hi Artem, > > > > My one concern about new features only landing in the feature branch > > is we have dependencies that may require new service project features > > to land in the main branch. > > I.e. 
The Octavia dashboard uses openstacksdk, so if we need to add a > > new feature we need to land it in SDK at the same time. > > > > Can we consider that it is ok for project related new features to land > > in both branches? > > > > Michael > > > > On Wed, May 19, 2021 at 3:40 AM Artem Goncharov > > wrote: > >> > >> Hey all, > >> > >> As we were discussing during the PTG preparation work for the > openstacksdk R1.0 has been started. There is now feature branch > “feature/r1” and a set of patches already in place ([1]). (While > https://review.opendev.org/c/openstack/devstack/+/791541 is not merged > there are no functional tests running, but that is not blocking from doing > the main work) > >> > >> Things to be done: > >> > >> - get rid of all direct REST calls in the cloud layer. Instead reuse > corresponding proxies [must] > >> - generalise tag, metadata, quota(set), limits [must] > >> - clean the code from some deprecated things and py2 remaining (if any) > [must] > >> - review resource caching to be implemented on the resource layer > [optional] > >> - introduction of read-only resource properties [optional] > >> - restructure documentation to make it more used friendly [optional] > >> > >> > >> Planned R1 interface changes > >> > >> Every cloud layer method (Connection.connection.***, and not i.e. > Connection.connection.compute.***) will consistently return Resource > objects. At the moment there is a mix of Munch and Resource types depending > on the case. Resource class itself fulfil dict, Munch and attribute > interface, so there should be no breaking changes for getting info out of > it. > >> The only known limitation is that compared to bare dict/Munch it might > be not allowed to modify some properties directly on the object. Ansible > collection modules [2] would be modified to explicitly convert return of > sdk into dict (Resource.to_dict()) to further operate on it. This means > that in some cases older Ansible modules (2.7-2.9) will not properly work > with newer SDK. Zuul jobs and everybody stuck without possibility to use > newer Ansible collection [2] are potential victims here (depends on the > real usage pattern). Sadly there is no way around it, since historically > ansible modules operate whatever SDK returns and Ansible sometimes decides > to alter objects (i.e. no_log case) what we might want to forbid (read-only > or virtual properties). Sorry about that. > >> In some rare cases attribute naming (whatever returned by the cloud > layer) might be affected (i.e. is_bootable vs bootable). We are going to > strictly bring all names to the convention. > >> > >> > >> Due to the big amount of changes touching lot of files here I propose > to stop adding new features into the master branch directly and instead put > them into feature/r1 branch. I want to keep master during this time more > like a stable branch for bugfixes and other important things. Once r1 > branch feel ready we will merge it into master and most likely release > something like RC to be able to continue integration work with Ansible. > >> > >> > >> Everybody interested in the future of sdk is welcome in doing reviews > and code [1] to know what comes and to speed up the work. > >> > >> > >> Any concerns? Suggestions? 
> >> > >> > >> Regards, > >> Artem > >> > >> > >> [1] > https://review.opendev.org/q/project:openstack%252Fopenstacksdk+branch:feature%252Fr1 > >> [2] https://opendev.org/openstack/ansible-collections-openstack > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu May 20 17:20:04 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 20 May 2021 18:20:04 +0100 Subject: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> Message-ID: <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> On Thu, 2021-05-20 at 18:06 +0200, Dmitry Tantsur wrote: > On Thu, May 20, 2021 at 6:01 PM Thomas Goirand wrote: > > > On 5/19/21 6:22 PM, Artem Goncharov wrote: > > > Yes, pool would be great. > > > > > > Please do not take this offensive, but just stating IRC survived till > > now and thus we should keep it is not really productive from my pov. > > > > What about: everything else than IRC is just plain crap? Seriously, > > that's plain truth... > > > > So is IRC. Seriously, I like bashing Slack as much as anyone, but this goes > a bit overboard. You're disrespecting a decent number of FOSS projects that > put good effort in making next generation communication platforms. its not about slack bashing but there are types of applciation that are centralised and comercailed that have usage paridimes that many it seams in the opentack comunity today do not like. i actully had assume if we were to ever move away form irc we would move to a decentrialed plathform like matrix or similar but i think the thread has show that wile many fine the limitation of irc frustrating there are also many that find its simplicty uesful. if we did not have gerrit, email, pastbin and etherpads to use in addtion to irc i definetly would want somethign more and again im not agaisnt something like matix or other protocol now but im also not convice those solution are more inclusive. > > > > > > Why is everything what OpenStack doing/using is so complex? (Please do > > not comment on the items below, I’m not really interested in any > > answers/explanations. This is a rhetorical question) > > > - gerrit. Yes it is great, yes it is fulfilling our needs. But how much > > we would lower the entry barrier for the contributions not using such > > complex setup that we have. > > > - irc. Yes it survived till now. Yes it does simple things the best way. > > When I am online - everything is perfect (except of often connection > > drops). But the fun starts when I am not online (one of the simplest things > > for the communication platform with normally 60% of the day duration). Why > > should anyone care of searching any reasonably maintained IRC bouncer (or > > grep through eavesdrop logs), would should anyone pay for a simple mobile > > client? > > > - issue tracker. You know yourself... > > > > Gerrit is just wonderful. What's hard isn't gerrit itself, is the way we > > are processing the auth, which is another problem. > > > > As for IRC bouncer, have you ever tried Quassel? It comes with: > > - a heavy client on all major platforms (Linux, Windows, Mac) > > - a mobile client (which is quite nice, really...) > > - an irc bouncer that's so easy to setup > > > > Do I get it right that you're suggesting that YOU will maintain a free IRC > bouncer for everyone who needs it for OpenStack business? 
Or do you suggest > that everyone sets up their own one? Even outreachy interns, drive-by > contributors and non-coding contributors? > > In other words, "just use an IRC bouncer" shifts the problem from us to > those who want to talk to us. So much for inclusiveness. for what its worth i have work on openstack for 8 year or so now and i do not and never have used an irc bouncer. granted i tend to leave my laptop connected to irc most of the time but even when i dont i dont think being conected 24/7 is required or even nessalry a good thing. again im sure my usage patterns differ form use an i get most of the benifti of a bounce by just leaving weechat open in a window 24/7 but i do take your point that ne contibuter may not have irc, they also may not have pathform X if we were to choose a different one. this is now well off the orginal topic but what woudl you propose we use if we were to replace irc? > > Dmitry > > > > > > all of that integrated, in a single package. It's super super easy to > > setup and I love it. Please do not replace this wonder by Slack or one > > of its clones... > > > > Cheers, > > > > Thomas Goirand (zigo) > > > > > From zaitcev at redhat.com Thu May 20 18:33:57 2021 From: zaitcev at redhat.com (Pete Zaitcev) Date: Thu, 20 May 2021 13:33:57 -0500 Subject: Freenode and libera.chat In-Reply-To: References: <98d25fb5-9cd5-96bf-33a3-db17bb236cfd@debian.org> Message-ID: <20210520133357.07c18cb4@suzdal.zaitcev.lan> On Thu, 20 May 2021 18:02:43 +0200 Dmitry Tantsur wrote: > On Thu, May 20, 2021 at 5:50 PM Thomas Goirand wrote: > > Threads is the most horrible concept ever invented for chat. You get > > 100s of them, and when someone replies, you never know in which thread. > Also, do I understand you right that when you have 3 conversations going on > at the same time, you always have an easy time understanding which one a > ping corresponds to? I doubt it. Threads make the situation strictly > better, assuming people don't go overboard with them. We have Google Chat at work at Red Hat and it's an absolutele shitshow with threads. Replies are routinely lost and missed unless you ping the recipient _anyway_. And, it's impossible to see what's going on in a channel. So it's not just Slack. Google tries to mitigate it by rotating the thread with the latest reply to the bottom. But even that only works if you're watching them like a hawk and do nothing productive. Works great for chat junkies or perhaps Chat Power Users, I suppose. -- Pete From zigo at debian.org Thu May 20 18:37:02 2021 From: zigo at debian.org (Thomas Goirand) Date: Thu, 20 May 2021 20:37:02 +0200 Subject: Freenode and libera.chat In-Reply-To: References: <98d25fb5-9cd5-96bf-33a3-db17bb236cfd@debian.org> Message-ID: <3ed32203-e812-bebf-201d-f99cde0d0755@debian.org> On 5/20/21 6:02 PM, Dmitry Tantsur wrote: > > > On Thu, May 20, 2021 at 5:50 PM Thomas Goirand > wrote: > > On 5/20/21 2:35 PM, Dmitry Tantsur wrote: > > Well, it could be a norm for us. Pretty much every IRC meeting someone > > interrupts with their question. If the meeting was in a thread, it > > wouldn't be an issue. Interleaving communications also happen very > often. > > Threads is the most horrible concept ever invented for chat. You get > 100s of them, and when someone replies, you never know in which thread. > Slack is really horrible for this... > > > Problems of Slack UI are not problems with threading. Slack is terrible, > no disagreement here. 
> > Also, do I understand you right that when you have 3 conversations going > on at the same time, you always have an easy time understanding which > one a ping corresponds to? I doubt it. Threads make the situation > strictly better, assuming people don't go overboard with them. Today, I just had to ask my colleague from which thread it was, then he replied once more in the thread, and I could click fast enough in the notification bubble. If you don't do that then, here's the UI disaster... : Typically, in a single day, many threads starts. Then someone reply to one of the early threads. I get the notification, but I have no idea from which thread it comes from. Then I waste a lot of time searching for it. Yes, there's the "threads" entry on top left, but it takes a long time to use too (at least 3 clicks, each of them opening a new screen). Compare this to IRC: someone highlights my name, I just click in the notification area, and Quassel opens on the correct channel, with the line with my name highlighted. Cheers, Thomas Goirand (zigo) From zigo at debian.org Thu May 20 18:52:30 2021 From: zigo at debian.org (Thomas Goirand) Date: Thu, 20 May 2021 20:52:30 +0200 Subject: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> Message-ID: <5ac09dfd-ed26-f4e8-8543-cabb8164c643@debian.org> On 5/20/21 6:06 PM, Dmitry Tantsur wrote: > Do I get it right that you're suggesting that YOU will maintain a free > IRC bouncer for everyone who needs it for OpenStack business? Or do you > suggest that everyone sets up their own one? Even outreachy interns, > drive-by contributors and non-coding contributors? Of course not. A bouncer has never been a requirement, it's a convenience. You can also run a text based version of IRC on a screen in any Unix. If price is a problem, AWS is providing 1/2 CPU for free, if I'm not mistaking. That's enough for a bouncer or a screen session. > In other words, "just use an IRC bouncer" shifts the problem from us to > those who want to talk to us. So much for inclusiveness. "just use an IRC bouncer" is just the answer to "I'm not always connected, and I don't have the logs when I'm not there". The answer to it could also be: well, when you sleep, you can't answer anyway, can you? It's supposed to be instant messaging, what's the point then? For async stuff, there's email... Anyway, that's not the main point. The main point is that I very much prefer to have the chat logs under *MY* hands than on the one of *ANY* operator. I'm happy that the IRC bouncer that *I* own does the logging, and that logging isn't a feature of the server side. No, I do not want that anyone keeps any logs of the things on IRC. I don't like it. I would prefer if the Foundation was not keeping any IRC log using some bots. If that's useful, then please erase these logs after a month. Otherwise, it's like putting a microphone in my office, and listening to what I say to my colleagues (at least, it's the same feeling). 
Cheers, Thomas Goirand (zigo) From dtantsur at redhat.com Thu May 20 19:28:26 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 20 May 2021 21:28:26 +0200 Subject: Freenode and libera.chat In-Reply-To: <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> Message-ID: On Thu, May 20, 2021 at 7:20 PM Sean Mooney wrote: > On Thu, 2021-05-20 at 18:06 +0200, Dmitry Tantsur wrote: > > On Thu, May 20, 2021 at 6:01 PM Thomas Goirand wrote: > > > > > On 5/19/21 6:22 PM, Artem Goncharov wrote: > > > > Yes, pool would be great. > > > > > > > > Please do not take this offensive, but just stating IRC survived till > > > now and thus we should keep it is not really productive from my pov. > > > > > > What about: everything else than IRC is just plain crap? Seriously, > > > that's plain truth... > > > > > > > So is IRC. Seriously, I like bashing Slack as much as anyone, but this > goes > > a bit overboard. You're disrespecting a decent number of FOSS projects > that > > put good effort in making next generation communication platforms. > its not about slack bashing but there are types of applciation that are > centralised > and comercailed that have usage paridimes that many it seams in the > opentack comunity today > do not like. i actully had assume if we were to ever move away form irc we > would move to > a decentrialed plathform like matrix or similar but i think the thread has > show that wile > many fine the limitation of irc frustrating there are also many that find > its simplicty uesful. > It's simplicity from the hacker's perspective, not from the user's. Yes, I'm also fascinated how simple and reliable the technology is, but that's not the point. > > if we did not have gerrit, email, pastbin and etherpads to use in addtion > to irc i definetly would > want somethign more and again im not agaisnt something like matix or other > protocol now but im also not convice > those solution are more inclusive. > Same comment as above. Look at this problem from the perspective of a person who doesn't have a single clue how IRC is different from ICQ and what a bouncer is. > > > > > > > > > > Why is everything what OpenStack doing/using is so complex? (Please > do > > > not comment on the items below, I’m not really interested in any > > > answers/explanations. This is a rhetorical question) > > > > - gerrit. Yes it is great, yes it is fulfilling our needs. But how > much > > > we would lower the entry barrier for the contributions not using such > > > complex setup that we have. > > > > - irc. Yes it survived till now. Yes it does simple things the best > way. > > > When I am online - everything is perfect (except of often connection > > > drops). But the fun starts when I am not online (one of the simplest > things > > > for the communication platform with normally 60% of the day duration). > Why > > > should anyone care of searching any reasonably maintained IRC bouncer > (or > > > grep through eavesdrop logs), would should anyone pay for a simple > mobile > > > client? > > > > - issue tracker. You know yourself... > > > > > > Gerrit is just wonderful. What's hard isn't gerrit itself, is the way > we > > > are processing the auth, which is another problem. > > > > > > As for IRC bouncer, have you ever tried Quassel? 
It comes with: > > > - a heavy client on all major platforms (Linux, Windows, Mac) > > > - a mobile client (which is quite nice, really...) > > > - an irc bouncer that's so easy to setup > > > > > > > Do I get it right that you're suggesting that YOU will maintain a free > IRC > > bouncer for everyone who needs it for OpenStack business? Or do you > suggest > > that everyone sets up their own one? Even outreachy interns, drive-by > > contributors and non-coding contributors? > > > > In other words, "just use an IRC bouncer" shifts the problem from us to > > those who want to talk to us. So much for inclusiveness. > for what its worth i have work on openstack for 8 year or so now and i do > not > and never have used an irc bouncer. granted i tend to leave my laptop > connected to irc most of the time > but even when i dont i dont think being conected 24/7 is required or even > nessalry a good thing. > Well, if somebody asks a question on #openstack-ironic at 5am my time, the only chance they'll get an answer is if they stay online. Or realize they need to come online at a different, potentially inconvenient time. (Or use email, but sigh. People nowadays use emails as the last resort :( ) And I only learn about their question because *I* have a bouncer. If I did not, I would have to go through the channel logs (I even forget to check the logs from our meetings, soo.. unlikely). > > again im sure my usage patterns differ form use an i get most of the > benifti of a bounce by just leaving weechat > open in a window 24/7 but i do take your point that ne contibuter may not > have irc, they also may not have pathform > X if we were to choose a different one. > Most platforms have a web client and server-side history, so this is not an issue. There is IRCCloud, but the free plan seems to offer only 2 hours of "offline" time. > > this is now well off the orginal topic but what woudl you propose we use > if we were to replace irc? > This is, indeed, not quite on-topic, but I'm advocating for Matrix, mostly because that's where Mozilla went and because it seems to check all the boxes. I definitely do *not* suggest Slack, at least because it's proprietary. Dmitry > > > > Dmitry > > > > > > > > > > all of that integrated, in a single package. It's super super easy to > > > setup and I love it. Please do not replace this wonder by Slack or one > > > of its clones... > > > > > > Cheers, > > > > > > Thomas Goirand (zigo) > > > > > > > > > > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu May 20 19:35:01 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 20 May 2021 19:35:01 +0000 Subject: Freenode and libera.chat In-Reply-To: <5ac09dfd-ed26-f4e8-8543-cabb8164c643@debian.org> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <5ac09dfd-ed26-f4e8-8543-cabb8164c643@debian.org> Message-ID: <20210520193500.hc6csrge3oj2laf3@yuggoth.org> On 2021-05-20 20:52:30 +0200 (+0200), Thomas Goirand wrote: [...] > No, I do not want that anyone keeps any logs of the things on IRC. I > don't like it. I would prefer if the Foundation was not keeping any IRC > log using some bots. If that's useful, then please erase these logs > after a month. 
Otherwise, it's like putting a microphone in my office, > and listening to what I say to my colleagues (at least, it's the same > feeling). "The foundation" isn't doing anything in particular related to IRC. The OpenDev Collaboratory on the other hand runs an IRC bot which channels can opt into for long-term logging of discussions. Those logs are published here: http://eavesdrop.openstack.org/irclogs/ Some, like #openstack-dev, have (quite literally) a decade of historical channel logs. I see it as an important archival record of the project's open development and design, almost all the way back to its very origins. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu May 20 19:39:06 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 20 May 2021 19:39:06 +0000 Subject: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> Message-ID: <20210520193905.ocmmestgdi22lx7l@yuggoth.org> On 2021-05-20 21:28:26 +0200 (+0200), Dmitry Tantsur wrote: [...] > This is, indeed, not quite on-topic, but I'm advocating for > Matrix, mostly because that's where Mozilla went and because it > seems to check all the boxes. [...] It seems, from what little I've read, that Matrix servers can integrate with IRC server networks and bridge channels fairly seamlessly. If there were a Matrix bridge providing access to the same channels as people were participating in via IRC (wherever that happened to be), would that address your concerns? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dtantsur at redhat.com Thu May 20 19:58:52 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 20 May 2021 21:58:52 +0200 Subject: Freenode and libera.chat In-Reply-To: <20210520193905.ocmmestgdi22lx7l@yuggoth.org> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> Message-ID: On Thu, May 20, 2021 at 9:41 PM Jeremy Stanley wrote: > On 2021-05-20 21:28:26 +0200 (+0200), Dmitry Tantsur wrote: > [...] > > This is, indeed, not quite on-topic, but I'm advocating for > > Matrix, mostly because that's where Mozilla went and because it > > seems to check all the boxes. > [...] > > It seems, from what little I've read, that Matrix servers can > integrate with IRC server networks and bridge channels fairly > seamlessly. If there were a Matrix bridge providing access to the > same channels as people were participating in via IRC (wherever that > happened to be), would that address your concerns? > Not quite. It would probably solve the issues *for me*, but I've already solved most of my issues. If we still say "our official communication channel is IRC", that's what people will (try to) use. If we do get an easy "hey, press this button to talk to ironic folks", it may change my mind though. 
Right now it seems to be IRCCloud, but I'm a bit uneasy about recommending everyone to go through some private service, especially about putting something like that in our contributor's guide. Dmitry > -- > Jeremy Stanley > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Thu May 20 20:04:27 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Thu, 20 May 2021 22:04:27 +0200 Subject: Freenode and libera.chat In-Reply-To: <20210520193905.ocmmestgdi22lx7l@yuggoth.org> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> Message-ID: On Thu, May 20, 2021, 21:42 Jeremy Stanley wrote: > On 2021-05-20 21:28:26 +0200 (+0200), Dmitry Tantsur wrote: > [...] > > This is, indeed, not quite on-topic, but I'm advocating for > > Matrix, mostly because that's where Mozilla went and because it > > seems to check all the boxes. > [...] > > It seems, from what little I've read, that Matrix servers can > integrate with IRC server networks and bridge channels fairly > seamlessly. If there were a Matrix bridge providing access to the > same channels as people were participating in via IRC (wherever that > happened to be), would that address your concerns? > Same can Zulip do (our company runs own instance). But generally yes, this would address at least my concerns. Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu May 20 20:09:37 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 20 May 2021 20:09:37 +0000 Subject: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> Message-ID: <20210520200937.5eyxs4r7a3qchja7@yuggoth.org> On 2021-05-20 21:58:52 +0200 (+0200), Dmitry Tantsur wrote: > On Thu, May 20, 2021 at 9:41 PM Jeremy Stanley wrote: [...] > > It seems, from what little I've read, that Matrix servers can > > integrate with IRC server networks and bridge channels fairly > > seamlessly. If there were a Matrix bridge providing access to the > > same channels as people were participating in via IRC (wherever that > > happened to be), would that address your concerns? > > Not quite. It would probably solve the issues *for me*, but I've already > solved most of my issues. If we still say "our official communication > channel is IRC", that's what people will (try to) use. > > If we do get an easy "hey, press this button to talk to ironic folks", it > may change my mind though. Right now it seems to be IRCCloud, but I'm a bit > uneasy about recommending everyone to go through some private service, > especially about putting something like that in our contributor's guide. 
Well, I guess what I'm asking is, if there were a Matrix bridge for the #openstack-ironic IRC channel, would that be sufficient for Ironic to be able to update its documentation with an "easy button" pointed at Matrix instead of IRC, and connect new users on Matrix with traditional IRC users for discussion? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu May 20 20:11:18 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 20 May 2021 20:11:18 +0000 Subject: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> Message-ID: <20210520201118.a2zu7pfzzzypa5zp@yuggoth.org> On 2021-05-20 22:04:27 +0200 (+0200), Artem Goncharov wrote: > On Thu, May 20, 2021, 21:42 Jeremy Stanley wrote: > > > On 2021-05-20 21:28:26 +0200 (+0200), Dmitry Tantsur wrote: > > [...] > > > This is, indeed, not quite on-topic, but I'm advocating for > > > Matrix, mostly because that's where Mozilla went and because it > > > seems to check all the boxes. > > [...] > > > > It seems, from what little I've read, that Matrix servers can > > integrate with IRC server networks and bridge channels fairly > > seamlessly. If there were a Matrix bridge providing access to the > > same channels as people were participating in via IRC (wherever that > > happened to be), would that address your concerns? > > > > Same can Zulip do (our company runs own instance). But generally yes, this > would address at least my concerns. Similarly, it seems like projects could then add a Zulip/IRC bridge for their channels if they wanted (and maybe that would even three-way bridge IRC, Matrix, and Zulip users?). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From smooney at redhat.com Thu May 20 20:26:58 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 20 May 2021 21:26:58 +0100 Subject: Freenode and libera.chat In-Reply-To: <20210520201118.a2zu7pfzzzypa5zp@yuggoth.org> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> <20210520201118.a2zu7pfzzzypa5zp@yuggoth.org> Message-ID: <43115e78b4105df93203b20ded9415cac5139d76.camel@redhat.com> On Thu, 2021-05-20 at 20:11 +0000, Jeremy Stanley wrote: > On 2021-05-20 22:04:27 +0200 (+0200), Artem Goncharov wrote: > > On Thu, May 20, 2021, 21:42 Jeremy Stanley wrote: > > > > > On 2021-05-20 21:28:26 +0200 (+0200), Dmitry Tantsur wrote: > > > [...] > > > > This is, indeed, not quite on-topic, but I'm advocating for > > > > Matrix, mostly because that's where Mozilla went and because it > > > > seems to check all the boxes. > > > [...] > > > > > > It seems, from what little I've read, that Matrix servers can > > > integrate with IRC server networks and bridge channels fairly > > > seamlessly. 
If there were a Matrix bridge providing access to the > > > same channels as people were participating in via IRC (wherever that > > > happened to be), would that address your concerns? > > > > > > > Same can Zulip do (our company runs own instance). But generally yes, this > > would address at least my concerns. > > Similarly, it seems like projects could then add a Zulip/IRC bridge > for their channels if they wanted (and maybe that would even > three-way bridge IRC, Matrix, and Zulip users?). Matrix has a number of clients (https://matrix.org/clients/), including what look like two web-based ones: https://element.io/get-started and https://fluffychat.im/ Not sure how well they work in practice, but it sounds like a Matrix bridge plus a hosted version of, say, https://matrix.org/docs/projects/client/element (which is Apache 2 licensed) would solve that use case, since Ironic could say "go to webchat.o.o and connect to the ironic channel" or something like that, kind of like how https://webchat.freenode.net/ works as a web interface to IRC. If you click the "try now" button on https://matrix.org/ it sends you to https://element.io/get-started where you can open it in the browser or download the mobile or desktop apps, so it looks like that is endorsed by Matrix as the way new users can get started quickly. You could probably even use their hosted version at https://app.element.io/?pk_vid=162154222536360a#/welcome Matrix already has bridges to freenode and OFTC: https://github.com/matrix-org/matrix-appservice-irc/wiki/Bridged-IRC-networks Not really sure about Zulip, but there is a bridge bot: https://matrix.org/docs/projects/bridge/matrix-zulip-bridgebot From artem.goncharov at gmail.com Thu May 20 20:43:13 2021 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Thu, 20 May 2021 22:43:13 +0200 Subject: Freenode and libera.chat In-Reply-To: <43115e78b4105df93203b20ded9415cac5139d76.camel@redhat.com> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> <20210520201118.a2zu7pfzzzypa5zp@yuggoth.org> <43115e78b4105df93203b20ded9415cac5139d76.camel@redhat.com> Message-ID: On Thu, May 20, 2021, 22:30 Sean Mooney wrote: > On Thu, 2021-05-20 at 20:11 +0000, Jeremy Stanley wrote: > > On 2021-05-20 22:04:27 +0200 (+0200), Artem Goncharov wrote: > > > On Thu, May 20, 2021, 21:42 Jeremy Stanley wrote: > > > > > > > On 2021-05-20 21:28:26 +0200 (+0200), Dmitry Tantsur wrote: > > > > [...] > > > > > This is, indeed, not quite on-topic, but I'm advocating for > > > > > Matrix, mostly because that's where Mozilla went and because it > > > > > seems to check all the boxes. > > > > [...] > > > > > > > > It seems, from what little I've read, that Matrix servers can > > > > integrate with IRC server networks and bridge channels fairly > > > > seamlessly. If there were a Matrix bridge providing access to the > > > > same channels as people were participating in via IRC (wherever that > > > > happened to be), would that address your concerns? > > > > > > > > > > Same can Zulip do (our company runs own instance). But generally yes, > this > > > would address at least my concerns.
> > matix has a number of clinets > https://matrix.org/clients/ including what looks like two web based ones > https://element.io/get-started and https://fluffychat.im/ > not sure how well they work in practic it kind of sound like a matix > bridge and > a hosted version fo say https://matrix.org/docs/projects/client/element > whic is apache 2 liscents > woudl sovle that use case since ironci could say go to webchat.o.o and > connect to the ironci channel > or soemthing like that kindo fo like how https://webchat.freenode.net/ > works as web interface to irc. > > if you click the try now button on https://matrix.org/ it send you to > https://element.io/get-started where you can > open in bowser or down lond the moble or desktop aps so it looks liek its > endorced by matix as how new user can get started quickly. > > you could proably even user there hsoted version at > https://app.element.io/?pk_vid=162154222536360a#/welcome > > matrix have bridge to freenod and oftc already > > https://github.com/matrix-org/matrix-appservice-irc/wiki/Bridged-IRC-networks > > > not really sure about zulip but there is a brdige bot > https://matrix.org/docs/projects/bridge/matrix-zulip-bridgebot Thanks guys, finally we reached constructive discussion and not the religious fight. As long as bridge is repeating messages in both directions I think it can address most concerns. This feels cool, feels democratic, but not an unified way (not saying it's bad). Agree with Dmitriy that it feels like each team using some own stuff, and not as a single community. But giving teams flexibility is also great. -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu May 20 21:11:04 2021 From: smooney at redhat.com (Sean Mooney) Date: Thu, 20 May 2021 22:11:04 +0100 Subject: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> <20210520201118.a2zu7pfzzzypa5zp@yuggoth.org> <43115e78b4105df93203b20ded9415cac5139d76.camel@redhat.com> Message-ID: <7ae76e8e1abaf450c2d2dc4b4e50cdd55ba8546f.camel@redhat.com> On Thu, 2021-05-20 at 22:43 +0200, Artem Goncharov wrote: > On Thu, May 20, 2021, 22:30 Sean Mooney wrote: > > > On Thu, 2021-05-20 at 20:11 +0000, Jeremy Stanley wrote: > > > On 2021-05-20 22:04:27 +0200 (+0200), Artem Goncharov wrote: > > > > On Thu, May 20, 2021, 21:42 Jeremy Stanley wrote: > > > > > > > > > On 2021-05-20 21:28:26 +0200 (+0200), Dmitry Tantsur wrote: > > > > > [...] > > > > > > This is, indeed, not quite on-topic, but I'm advocating for > > > > > > Matrix, mostly because that's where Mozilla went and because it > > > > > > seems to check all the boxes. > > > > > [...] > > > > > > > > > > It seems, from what little I've read, that Matrix servers can > > > > > integrate with IRC server networks and bridge channels fairly > > > > > seamlessly. If there were a Matrix bridge providing access to the > > > > > same channels as people were participating in via IRC (wherever that > > > > > happened to be), would that address your concerns? > > > > > > > > > > > > > Same can Zulip do (our company runs own instance). But generally yes, > > this > > > > would address at least my concerns. 
> > > > > > Similarly, it seems like projects could then add a Zulip/IRC bridge > > > for their channels if they wanted (and maybe that would even > > > three-way bridge IRC, Matrix, and Zulip users?). > > > > matix has a number of clinets > > https://matrix.org/clients/ including what looks like two web based ones > > https://element.io/get-started and https://fluffychat.im/ > > not sure how well they work in practic it kind of sound like a matix > > bridge and > > a hosted version fo say https://matrix.org/docs/projects/client/element > > whic is apache 2 liscents > > woudl sovle that use case since ironci could say go to webchat.o.o and > > connect to the ironci channel > > or soemthing like that kindo fo like how https://webchat.freenode.net/ > > works as web interface to irc. > > > > if you click the try now button on https://matrix.org/ it send you to > > https://element.io/get-started where you can > > open in bowser or down lond the moble or desktop aps so it looks liek its > > endorced by matix as how new user can get started quickly. > > > > you could proably even user there hsoted version at > > https://app.element.io/?pk_vid=162154222536360a#/welcome > > > > matrix have bridge to freenod and oftc already > > > > https://github.com/matrix-org/matrix-appservice-irc/wiki/Bridged-IRC-networks > > > > > > not really sure about zulip but there is a brdige bot > > https://matrix.org/docs/projects/bridge/matrix-zulip-bridgebot Just to keep people in the loop: the IRC bridge in Matrix works pretty well, other than needing to auth to NickServ because we require registered nicks, which is documented here https://github.com/matrix-org/matrix-appservice-irc/wiki/Guide:-How-to-use-Matrix-to-participate-in-IRC-rooms I was able to connect via https://app.element.io/ and talk in the infra channel with an old Matrix account I have: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/latest.log.html#t2021-05-20T20:35:01 The sign-in and account creation process is also pretty trivial since you can use GitHub or an existing Google account, among several others. So for those looking to use Matrix, you can do that today. I previously used this on my phone with the Riot client in the past but just went for direct IRC more recently, not that I use either on mobile frequently. Just an FYI for those that were interested. > > > > Thanks guys, finally we reached constructive discussion and not the > religious fight. > > As long as bridge is repeating messages in both directions I think it can > address most concerns. This feels cool, feels democratic, but not an > unified way (not saying it's bad). Agree with Dmitriy that it feels like > each team using some own stuff, and not as a single community. But giving > teams flexibility is also great. From renat.akhmerov at gmail.com Fri May 21 04:31:21 2021 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Fri, 21 May 2021 11:31:21 +0700 Subject: [mistral] glance image upload In-Reply-To: References: Message-ID: <9b6daaf0-aa3b-4e8f-bc3c-a693fa7202ed@Spark> Hi, Essentially the “glance.images_upload” action is the same command available on the Python client for Glance and it should take the same parameters. So just go ahead and try. The only problem may be that the action may have gone out of sync with the latest version of Python Glance client since I believe nobody has updated it for a while.
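If you want to double-check what the action expects in your deployment before writing the workflow, the registered action definitions (including their input parameters) can be listed with the Mistral client; something along these lines should do it, though I am recalling the exact invocation from memory, so treat it as a sketch:

openstack action definition list | grep glance.images

The "Input" column of that listing should show the parameter names that glance.images_upload and glance.images_data accept, which mirror the corresponding python-glanceclient methods.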
Thanks Renat Akhmerov On 20 May 2021, 16:34 +0700, Arnaud Morin , wrote: > Hey team, > > Is there any way to upload/download an image to/from glance using a mistral workflow? > > I am seeing the glance.images_upload and glance.images_data, but I cant > figure out how it works, I was thinking that we could provide some sort > of swift URL to download/upload from. > > Is there any example available somewhere or maybe it's not yet > implemented? > > Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Fri May 21 05:02:11 2021 From: satish.txt at gmail.com (Satish Patel) Date: Fri, 21 May 2021 01:02:11 -0400 Subject: OVN ovsdb clustering question Message-ID: Folks, I have 3 controller nodes and am trying to setup clustering for OVN deployment, but lack enough documentation and have some issues and questions regarding it. I found this document but again its little confusing and very old - https://mail.openvswitch.org/pipermail/ovs-discuss/2018-March/046470.html # controller 1 /usr/share/ovn/scripts/ovn-ctl --db-nb-addr=172.30.40.93 \ --db-nb-create-insecure-remote=yes \ --db-sb-addr=172.30.40.93 \ --db-sb-create-insecure-remote=yes \ --db-nb-cluster-local-addr=172.30.40.93 \ --db-sb-cluster-local-addr=172.30.40.93 \ --ovn-northd-nb-db=tcp:172.30.40.93:6641,tcp:172.30.40.25:6641,tcp:172.30.40.177:6641 \ --ovn-northd-sb-db=tcp:172.30.40.93:6642,tcp:172.30.40.25:6642,tcp:172.30.40.177:6642 \ start_northd # controller 2 /usr/share/ovn/scripts/ovn-ctl --db-nb-addr=172.30.40.25 \ --db-nb-create-insecure-remote=yes \ --db-sb-addr=172.30.40.25 \ --db-sb-create-insecure-remote=yes \ --db-nb-cluster-local-addr=172.30.40.25 \ --db-sb-cluster-local-addr=172.30.40.25 \ --db-nb-cluster-remote-addr=172.30.40.93 \ --db-sb-cluster-remote-addr=172.30.40.93 \ --ovn-northd-nb-db=tcp:172.30.40.93:6641,tcp:172.30.40.25:6641,tcp:172.30.40.177:6641 \ --ovn-northd-sb-db=tcp:172.30.40.93:6642,tcp:172.30.40.25:6642,tcp:172.30.40.177:6642 \ start_northd # controller 3 /usr/share/ovn/scripts/ovn-ctl --db-nb-addr=172.30.40.177 \ --db-nb-create-insecure-remote=yes \ --db-nb-cluster-local-addr=172.30.40.177 \ --db-sb-addr=172.30.40.177 \ --db-sb-create-insecure-remote=yes \ --db-sb-cluster-local-addr=172.30.40.177 \ --db-nb-cluster-remote-addr=172.30.40.93 \ --db-sb-cluster-remote-addr=172.30.40.93 \ --ovn-northd-nb-db=tcp:172.30.40.93:6641,tcp:172.30.40.25:6641,tcp:172.30.40.177:6641 \ --ovn-northd-sb-db=tcp:172.30.40.93:6642,tcp:172.30.40.25:6642,tcp:172.30.40.177:6642 \ start_northd ## Validation steps controller-2# export\ remote="tcp:172.30.40.93:6641,tcp:172.30.40.25:6641,tcp:172.30.40.177:6641" controller-2# ovn-nbctl --db=$remote show controller-2# In the above command i am seeing output only when it hit controller-1 node, but for node-2 and note-3 giving me empty output that means data replication doesn't work. what is the command to verify synchronization working between all 3 nodes? Do I need to restart any other services? From skaplons at redhat.com Fri May 21 06:37:12 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 21 May 2021 08:37:12 +0200 Subject: [neutron][stadium][stable] Proposal to make stable/ocata and stable/pike branches EOL In-Reply-To: References: <15209060.0YdeOJI3E6@p1> Message-ID: <70518669.DcBL264VIf@p1> Hi, Dnia czwartek, 20 maja 2021 15:53:42 CEST Akihiro Motoki pisze: > Thanks for raising this. 
> > I would vote to mark them as EOL together regardless of that they were > released as part of the official OpenStack release. > All of these repositories depend on neutron and they are expected to > run with a same release of neutron. > so there is no big reason not to mark them as EOL. +1 to make all of them EOL. I think it's good approach for those projects. > > ocata branch of some repositories are not part of neutron official > releases as of ocata. > For such branches, we need to handle them separately (i.e, we cannot > mark it as EOL using the releases repository). > > As far as I checked, all ocata releases under the neutron governance > are already marked as EOL. > Here is the list: > > - networking-ovn > - neutron-dynamic-routing > - neutron-fwaas > - neutron-lbaas > - neutron-lib > - neutron > > >> Now that neutron's stable/ocata is deleted some non-stadium project's stable periodic job also started to fail (on stable/ocata) [1]: > Repositories pointed out by Előd are repositories which were not under > the neutron governance as of ocata release. > They need to be EOL'ed separately. They all are under the neutron > governance, so I think we can discuss EOL > of them as the neutron team and all changes will be approved by the > neutron-stable-maint team. > > >> - networking-bagpipe > >> - networking-bgpvpn > >> - networking-midonet > >> - neutron-vpnaas > > Thanks, > Akihiro Motoki (irc: amotoki) > > On Thu, May 20, 2021 at 8:26 PM Lajos Katona wrote: > > Hi, > > Thanks for bringing this up. > > I would vote on the 1. option to transition to ocata-eol, but would do for all Neutron Stadium projects: > > https://governance.openstack.org/tc/reference/projects/neutron.html > > > > networking-bagpipe > > networking-bgpvpn > > networking-midonet > > networking-odl > > networking-ovn > > networking-sfc > > neutron-fwaas > > > > neutron-fwaas and networking-midonet are not maintained but the process can work for the old branches I suppose. > > > > Lajos Katona (lajoskatona) > > > > Előd Illés ezt írta (időpont: 2021. máj. 20., Cs, 11:58): > >> Hi, > >> > >> Now that neutron's stable/ocata is deleted some non-stadium project's stable periodic job also started to fail (on stable/ocata) [1]: > >> > >> - networking-bagpipe > >> - networking-bgpvpn > >> - networking-midonet > >> - neutron-vpnaas > >> > >> There are two options to fix this: > >> > >> 1. transition these projects also to ocata-eol > >> 2. fix them by using neutron's ocata-eol tag > >> > >> (I guess the 1st option is the easiest, considering that these projects are not among the most active ones) > >> > >> Does neutron team have a plan how to continue with these? > >> > >> Thanks, > >> > >> Előd > >> > >> [1] > >> http://logstash.openstack.org/#/dashboard/file/logstash.json? query=message:%5C%22error: %20pathspec%20'stable%2Focata'%20did%20not%20match%20any%20file(s)% > >> 20known%20to%20git.%5C%22&from=86400s > >> > >> > >> On 2021. 05. 14. 22:18, Előd Illés wrote: > >> > >> Hi, > >> > >> The patch was merged, so the ocata-eol tags were created for neutron projects. After the successful tagging I have executed the branch deletion, which has > >> the following result: > >> > >> Branch stable/ocata successfully deleted from openstack/networking-ovn! > >> Branch stable/ocata successfully deleted from openstack/neutron-dynamic- routing! > >> Branch stable/ocata successfully deleted from openstack/neutron-fwaas! > >> Branch stable/ocata successfully deleted from openstack/neutron-lbaas! 
> >> Branch stable/ocata successfully deleted from openstack/neutron-lib! > >> Branch stable/ocata successfully deleted from openstack/neutron! > >> > >> Thanks, > >> > >> Előd > >> > >> > >> On 2021. 05. 12. 9:22, Slawek Kaplonski wrote: > >> > >> Hi, > >> > >> Dnia środa, 5 maja 2021 20:35:48 CEST Előd Illés pisze: > >> > Hi, > >> > > >> > > >> > > >> > Ocata is unfortunately unmaintained for a long time as some general test > >> > > >> > jobs are broken there, so as a stable-maint-core member I support to tag > >> > > >> > neutron's stable/ocata as End of Life. After the branch is tagged, > >> > > >> > please ping me and I can arrange the deletion of the branch. > >> > > >> > > >> > > >> > For Pike, I volunteered at the PTG in 2020 to help with reviews there, I > >> > > >> > still keep that offer, however I am clearly not enough to keep it > >> > > >> > maintained, besides backports are not arriving for stable/pike in > >> > > >> > neutron. Anyway, if the gate is functional there, then I say we could > >> > > >> > keep it open (but as far as I see how gate situation is worsen now, as > >> > > >> > more and more things go wrong, I don't expect that will take long). If > >> > > >> > not, then I only ask that let's do the EOL'ing first with Ocata and when > >> > > >> > it is done, then continue with neutron's stable/pike. > >> > > >> > > >> > > >> > For the process please follow the steps here: > >> > > >> > https://docs.openstack.org/project-team-guide/stable-branches.html#end-of-life > >> > > >> > (with the only exception, that in the last step, instead of infra team, > >> > > >> > please turn to me/release team - patch for the documentation change is > >> > > >> > on the way: > >> > > >> > https://review.opendev.org/c/openstack/project-team-guide/+/789932 ) > >> > >> Thx. I just proposed patch https://review.opendev.org/c/openstack/ releases/+/790904 to make ocata-eol in all neutron projects. > >> > >> > Thanks, > >> > > >> > > >> > > >> > Előd > >> > > >> > On 2021. 05. 05. 16:13, Slawek Kaplonski wrote: > >> > > Hi, > >> > > > >> > > > >> > > > >> > > > >> > > > >> > > I checked today that stable/ocata and stable/pike branches in both > >> > > > >> > > Neutron and neutron stadium projects are pretty inactive since long time. > >> > > > >> > > > >> > > > >> > > * according to [1], last patch merged patch in Neutron for stable/ pike > >> > > > >> > > was in July 2020 and in ocata October 2019, > >> > > > >> > > > >> > > > >> > > * for stadium projects, according to [2] it was September 2020. > >> > > > >> > > > >> > > > >> > > > >> > > > >> > > According to [3] and [4] there are no opened patches for any of those > >> > > > >> > > branches for Neutron and any stadium project except neutron-lbaas. > >> > > > >> > > > >> > > > >> > > > >> > > > >> > > So based on that info I want to propose that we will close both those > >> > > > >> > > branches are EOL now and before doing that, I would like to know if > >> > > > >> > > anyone would like to keep those branches to be open still. 
> >> > > > >> > > > >> > > > >> > > > >> > > > >> > > [1] > >> > > > >> > > https://review.opendev.org/q/project:%255Eopenstack/neutron+ (branch:stable/ocata+OR+branch:stable/pike)+status:merged > >> > > > >> > > > >> > > > >> > > > >> > > > >> > > [2] > >> > > > >> > > https://review.opendev.org/q/(project:openstack/ ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ neutron-.*+OR+project:%255Eopenstack/net > >> > > worki > >> > > > >> > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged > >> > > > >> > > >> > > twor > >> > > > >> > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:merged> > >> > > > >> > > > >> > > > >> > > [3] > >> > > > >> > > https://review.opendev.org/q/project:%255Eopenstack/neutron+ (branch:stable/ocata+OR+branch:stable/pike)+status:open > >> > > > >> > > > >> > > > >> > > > >> > > > >> > > [4] > >> > > > >> > > https://review.opendev.org/q/(project:openstack/ ovsdbapp+OR+project:openstack/os-ken+OR+project:%255Eopenstack/ neutron-.*+OR+project:%255Eopenstack/net > >> > > worki > >> > > > >> > > ng-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open > >> > > > >> > > >> > > twor > >> > > > >> > > king-.*)+(branch:stable/ocata+OR+branch:stable/pike)+status:open> > >> > > > >> > > > >> > > > >> > > > >> > > > >> > > -- > >> > > > >> > > > >> > > > >> > > Slawek Kaplonski > >> > > > >> > > > >> > > > >> > > Principal Software Engineer > >> > > > >> > > > >> > > > >> > > Red Hat > >> > >> -- > >> > >> Slawek Kaplonski > >> > >> Principal Software Engineer > >> > >> Red Hat -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From syedammad83 at gmail.com Fri May 21 07:20:47 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Fri, 21 May 2021 12:20:47 +0500 Subject: [wallaby][trove] Instance Volume Resize In-Reply-To: References: Message-ID: Thanks Lingxian, it worked fine. I have resize the volume of mysql datastore from 17GB to 18GB. In guest agent logs it said that command has executed successfully. 2021-05-21 07:11:03.527 1062 INFO trove.guestagent.datastore.manager [-] Resizing the filesystem at /var/lib/mysql, online: True 2021-05-21 07:11:03.528 1062 DEBUG trove.guestagent.volume [-] Checking if /dev/sdb exists. _check_device_exists /home/ubuntu/trove/trove/guestagent/volume.py:217 2021-05-21 07:11:03.528 1062 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo blockdev --getsize64 /dev/sdb execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:384 2021-05-21 07:11:03.545 1062 DEBUG oslo_concurrency.processutils [-] CMD "sudo blockdev --getsize64 /dev/sdb" returned: 0 in 0.016s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:423 2021-05-21 07:11:03.546 1062 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sudo resize2fs /dev/sdb execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:384 2021-05-21 07:11:03.577 1062 DEBUG oslo_concurrency.processutils [-] CMD "sudo resize2fs /dev/sdb" returned: 0 in 0.031s execute /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:423 But the /var/lib/mysql still shows 17GB. I have manually executed resize2fs on /dev/sdb. After manual execution, /var/lib/mysql has updated to 18GB. 
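For reference, this is roughly how I am comparing the block device and the mounted filesystem inside the guest (reconstructing the commands from memory, so treat the exact invocations as approximate):

sudo blockdev --getsize64 /dev/sdb   # size of the attached volume as the guest sees it
df -h /var/lib/mysql                 # size of the mounted filesystem

This is how I can see that the underlying device has grown while the filesystem has not, until resize2fs is run again by hand.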
Not sure if I am missing something. - Ammad On Thu, May 20, 2021 at 4:39 PM Lingxian Kong wrote: > Modify trove service config file: > > [DEFAULT] > max_accepted_volume_size = > > 10 is the default value if the config option is not specified. > > --- > Lingxian Kong > Senior Cloud Engineer (Catalyst Cloud) > Trove PTL (OpenStack) > OpenStack Cloud Provider Co-Lead (Kubernetes) > > > On Wed, May 19, 2021 at 9:56 PM Ammad Syed wrote: > >> Hi, >> >> I am using wallaby / trove on ubuntu 20.04. I am trying to extend volume >> of database instance. Its having trouble that instance cannot exceed volume >> size of 10GB. >> >> My flavor has 2vcpus 4GB RAM and 10GB disk. I created a database instance >> with 5GB database size and mysql datastore. The deployment has created 10GB >> root and 5GB /var/lib/mysql. I have tried to extend volume to 11GB, it >> failed with error that "Volume 'size' cannot exceed maximum of 10 GB, 11 >> cannot be accepted". >> >> I want to keep root disk size to 10GB and only want to extend >> /var/lib/mysql keeping the same flavor. Is it possible or should I need to >> upgrade flavor as well ? >> >> -- >> Regards, >> >> >> Syed Ammad Ali >> > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From giacomo.lanciano at sns.it Fri May 21 08:51:56 2021 From: giacomo.lanciano at sns.it (Giacomo Lanciano) Date: Fri, 21 May 2021 10:51:56 +0200 Subject: [Senlin][Octavia] Health policy - LB_STATUS_POLLING detection mode In-Reply-To: References: <9815228e-07b2-540b-761b-62e26d0c2c45@sns.it> Message-ID: <64023500-08a9-d7ff-d88b-513ad1209300@sns.it> Hi Duc, many thanks for your feedback. On 17/05/2021 18:08, Duc Truong wrote: > H Giacomo, > > The patch set that you linked is 4 years old, so I don't remember what > the issue was with implementing the LB poll in the Senlin health > check. I don't see a problem with implementing that feature so feel > free to check the contribution guide and submit some code. A few > things to think about are: > > - Would the LB_STATUS_POLLING require the user to use the Senlin Load > Balancer as well? Yes, I am assuming that this feature could only work when using the LB policy as well. > - How would Senlin know which health monitor to check for a given Senlin node? > > A common use case for Senlin is autoscaling, so nodes will be added or > deleted on the fly. Most likely that requires that the > LB_STATUS_POLLING needs to be tied to the Senlin LB policy that is > responsible for adding/removing LB pools and LB health monitors. If > we go down that route, it is easy to retrieve the LB health monitor ID > because that is stored in the Senlin node properties. It looks like the LB health monitor ID is not actually stored in the node properties. 
At least, it doesn't show up when looking at node details via CLI: ``` $ openstack cluster node show node-uHgcqJz9 +---------------+--------------------------------------------------------------- | Field         | Value +---------------+--------------------------------------------------------------- | cluster_id    | 6cd9973c-0387-4fdf-a20c-4471ec905c67 | created_at    | 2021-05-13T11:28:46Z | data          | { |               |   "internal_ports": [ |               | { |               |       "fixed_ips": [ |               | { |               |           "ip_address": "10.0.0.175", |               |           "subnet_id": "2867ceb3-240d-4694-b177-57d521deec4d" |               | } |               | ], |               |       "id": "1ab76390-3301-4caa-b4da-7a0f2947a40a", |               |       "network_id": "cf177807-7da1-4312-9f41-3ba7de0d5cc7", |               |       "remove": true, |               |       "security_group_ids": [ |               | "699b230a-2ddd-4d28-9cb2-95c83ca3e2a8" |               | ] |               | } |               | ], |               |   "lb_member": "a11c5118-4d85-4c5f-b4ee-47bf32482f19", |               |   "placement": { |               |     "zone": "nova" |               | } |               | } [...] ``` However, it should be feasible to get nodes' health status, as detected by the LB, in this way: 1) Fetch the LB policy attached to the cluster: ``` $ openstack cluster policy binding list 6cd9973c --filter "policy_type=senlin.policy.loadbalance-1.3" +-----------+--------------------------------------+-------------------------------+------------+ | policy_id | policy_name                          | policy_type                   | is_enabled | +-----------+--------------------------------------+-------------------------------+------------+ | 70b95b84  | senlin-lb_policy-bifia6bjfn4h        | senlin.policy.loadbalance-1.3 | True       | +-----------+--------------------------------------+-------------------------------+------------+ ``` 2) Get the LB ID from the policy spec: ``` $ openstack cluster policy show 70b95b84 +------------+------------------------------------------------------------- | Field      | Value +------------+------------------------------------------------------------- [...] | spec       | { |            |   "properties": { |            |     "health_monitor": { |            |       "id": "db7c9243-1bd7-4ba1-b7ff-0be1d410c0de", |            |       "type": "TCP" |            | }, |            |     "lb_status_timeout": 300, |            |     "loadbalancer": "d228e578-a413-44d7-96f5-09663402e40a", |            |     "pool": { |            |       "id": "a2d3023d-956f-4ee8-af5e-ab852769675f", |            |       "lb_method": "ROUND_ROBIN", |            |       "protocol": "TCP", |            |       "protocol_port": 5001, |            |       "subnet": "self-serv-subnet-1" |            | }, |            |     "vip": { |            |       "protocol": "TCP", |            |       "protocol_port": 5001, |            |       "subnet": "provider-subnet" |            | } |            | }, |            |   "type": "senlin.policy.loadbalance", |            |   "version": "1.3" |            | } [...] ``` 3) For each LB member, get the value of "operating_status" property: ``` $ openstack loadbalancer status show d228e578-a413-44d7-96f5-09663402e40a {     "loadbalancer": {         [...]         "listeners": [             {                 [...]                 "pools": [                     {                         [...]                         
"health_monitor": {                             [...]                         },                         "members": [                             {                                 "id": "a11c5118-4d85-4c5f-b4ee-47bf32482f19",                                 "name": "",                                 "operating_status": "ONLINE",                                 "provisioning_status": "ACTIVE",                                 "address": "10.0.0.175",                                 "protocol_port": 5001                             },                             {                                 "id": "0da3d20b-9fb1-42cd-814d-29812c6c2783",                                 "name": "",                                 "operating_status": "ONLINE",                                 "provisioning_status": "ACTIVE",                                 "address": "10.0.0.222",                                 "protocol_port": 5001                             }                         ]                     }                 ]             }         ]     } } ``` 4) Associate each LB member status to the corresponding node in the cluster, by leveraging on the "lb_member" property included in node properties. Such an approach seems sound to you? Kind regards. Giacomo > > Anyways, just some things to think about. > > Duc > > On Fri, May 14, 2021 at 2:51 AM Giacomo Lanciano > wrote: >> Hi folks, >> >> I'd like to know what is the status of making the LB_STATUS_POLLING >> detection mode available for the Senlin Health Policy [1]. According to >> the docs and this patch [2], the implementation of this feature is >> blocked by some issue on LBaaS/Octavia side, but I could not find any >> details on what this issue really is. >> >> As the docs state, it would be really useful to have this detection mode >> available, as it is much more reliable than the others at evaluating the >> status of an application. If I can be of any help, I would be willing to >> contribute. >> >> Thanks in advance. >> >> Kind regards. >> >> Giacomo >> >> [1] >> https://docs.openstack.org/senlin/latest/contributor/policies/health_v1.html#failure-detection >> [2] https://review.opendev.org/c/openstack/senlin/+/423012 -- Giacomo Lanciano Ph.D. Student in Data Science Scuola Normale Superiore, Pisa, Italy https://www.linkedin.com/in/giacomolanciano From ekuvaja at redhat.com Fri May 21 11:24:12 2021 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Fri, 21 May 2021 12:24:12 +0100 Subject: [mistral] glance image upload In-Reply-To: <9b6daaf0-aa3b-4e8f-bc3c-a693fa7202ed@Spark> References: <9b6daaf0-aa3b-4e8f-bc3c-a693fa7202ed@Spark> Message-ID: On Fri, May 21, 2021 at 5:38 AM Renat Akhmerov wrote: > Hi, > > Essentially the “glance.images_upload” action is the same command > available on the Python client for Glance and it should take the same > parameters. So just go ahead and try. The only problem may be that the > action may have gone out of sync with the latest version of Python Glance > client since I believe nobody has updated it for a while. > AFAIK we have not broken anything on the client since moving to Images API v2 so even older integrations should work just fine. (So sorry if we did, please let us know the details so we can be more careful in the future.) Obviously might be lacking some features. 'web-download' is part of the Interoperable Image Import flow so getting an image from an external source requires the 'web-download' plugin being configured on the service side and using the import call on the client. 
I have no idea if there is Mistral integration for that. - jokke > > Thanks > > Renat Akhmerov > On 20 May 2021, 16:34 +0700, Arnaud Morin , wrote: > > Hey team, > > Is there any way to upload/download an image to/from glance using a > mistral workflow? > > I am seeing the glance.images_upload and glance.images_data, but I cant > figure out how it works, I was thinking that we could provide some sort > of swift URL to download/upload from. > > Is there any example available somewhere or maybe it's not yet > implemented? > > Thanks > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Fri May 21 11:31:37 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 21 May 2021 13:31:37 +0200 Subject: Freenode and libera.chat In-Reply-To: <20210520200937.5eyxs4r7a3qchja7@yuggoth.org> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> <20210520200937.5eyxs4r7a3qchja7@yuggoth.org> Message-ID: On Thu, May 20, 2021 at 10:11 PM Jeremy Stanley wrote: > On 2021-05-20 21:58:52 +0200 (+0200), Dmitry Tantsur wrote: > > On Thu, May 20, 2021 at 9:41 PM Jeremy Stanley > wrote: > [...] > > > It seems, from what little I've read, that Matrix servers can > > > integrate with IRC server networks and bridge channels fairly > > > seamlessly. If there were a Matrix bridge providing access to the > > > same channels as people were participating in via IRC (wherever that > > > happened to be), would that address your concerns? > > > > Not quite. It would probably solve the issues *for me*, but I've already > > solved most of my issues. If we still say "our official communication > > channel is IRC", that's what people will (try to) use. > > > > If we do get an easy "hey, press this button to talk to ironic folks", it > > may change my mind though. Right now it seems to be IRCCloud, but I'm a > bit > > uneasy about recommending everyone to go through some private service, > > especially about putting something like that in our contributor's guide. > > Well, I guess what I'm asking is, if there were a Matrix bridge for > the #openstack-ironic IRC channel, would that be sufficient for > Ironic to be able to update its documentation with an "easy button" > pointed at Matrix instead of IRC, and connect new users on Matrix > with traditional IRC users for discussion? > Probably yes, assuming we can integrate Matrix with NickServ or drop the NickServ requirement. Dmitry > -- > Jeremy Stanley > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekuvaja at redhat.com Fri May 21 11:36:29 2021 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Fri, 21 May 2021 12:36:29 +0100 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On Thu, May 20, 2021 at 1:21 PM Mohammed Naser wrote: > On Wed, May 19, 2021 at 9:23 AM Erno Kuvaja wrote: > > > > Hi all, > > > > For those of you who have not woken up to this sad day yet. Andrew Lee > has taken his stance as owner of freenode ltd. and by the (one sided) story > of the former volunteer staff members basically forced the whole community > out. 
> > > > As there is history of LTM shutting down networks before (snoonet), it > is appropriate to expect that the intentions here are not aligned with the > communities and specially the users who's data he has access to via this > administrative takeover. > > > > I think it's our time to take swift action and show our support to all > the hard working volunteers who were behind freenode and move all our > activities to irc.libera.chat. > > > > Please see https://twitter.com/freenodestaff and Christian's letter > which links to the others as well > https://fuchsnet.ch/freenode-resign-letter.txt > > > > Best, > > Erno 'jokke' Kuvaja > > There is two sides to each story, this is the other one: > > https://freenode.net/news/freenode-is-foss > > I recommend that so long that we don't have any problems, we keep things > as is. > > -- > Mohammed Naser > VEXXHOST, Inc. > > I'm clearly not the only one concerned about this approach. Just to reiterate that Mr. Lee has history of these things (snoonet for example) and after the volunteer staff walked out and sponsors moved their servers, this is effectively a new network with an old domain name and like he points out in that release, he has no intentions to keep things as they were. Looking at the movement over the past day, it seems like we're the only hesitant party here. Rest of the communities have either moved to libera.chat or OFTC. I'd strongly advise us to do the same before things turn sour. - jokke -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekuvaja at redhat.com Fri May 21 11:43:31 2021 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Fri, 21 May 2021 12:43:31 +0100 Subject: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> <20210520200937.5eyxs4r7a3qchja7@yuggoth.org> Message-ID: On Fri, May 21, 2021 at 12:35 PM Dmitry Tantsur wrote: > > > On Thu, May 20, 2021 at 10:11 PM Jeremy Stanley wrote: > >> On 2021-05-20 21:58:52 +0200 (+0200), Dmitry Tantsur wrote: >> > On Thu, May 20, 2021 at 9:41 PM Jeremy Stanley >> wrote: >> [...] >> > > It seems, from what little I've read, that Matrix servers can >> > > integrate with IRC server networks and bridge channels fairly >> > > seamlessly. If there were a Matrix bridge providing access to the >> > > same channels as people were participating in via IRC (wherever that >> > > happened to be), would that address your concerns? >> > >> > Not quite. It would probably solve the issues *for me*, but I've already >> > solved most of my issues. If we still say "our official communication >> > channel is IRC", that's what people will (try to) use. >> > >> > If we do get an easy "hey, press this button to talk to ironic folks", >> it >> > may change my mind though. Right now it seems to be IRCCloud, but I'm a >> bit >> > uneasy about recommending everyone to go through some private service, >> > especially about putting something like that in our contributor's guide. >> >> Well, I guess what I'm asking is, if there were a Matrix bridge for >> the #openstack-ironic IRC channel, would that be sufficient for >> Ironic to be able to update its documentation with an "easy button" >> pointed at Matrix instead of IRC, and connect new users on Matrix >> with traditional IRC users for discussion? 
>> > > Probably yes, assuming we can integrate Matrix with NickServ or drop the > NickServ requirement. > > Great, so now when we have established that Ironic can point to matrix, slack or whatever the current IM buzzword is (oh wait, that's right slack killed their IRC bridge after reaching the critical mass, fortunately that can't happen to any of the others 'cause how confusing that would be for the new contributors). Can we please focus on the issue at the hand and do something about the network change? I'd be still pro for libera but looks like OFTC would be just fine too (expect some nick changes 'though). - jokke > Dmitry > > >> -- >> Jeremy Stanley >> > > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Fri May 21 11:53:45 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 21 May 2021 13:53:45 +0200 Subject: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> <20210520200937.5eyxs4r7a3qchja7@yuggoth.org> Message-ID: On Fri, May 21, 2021 at 1:43 PM Erno Kuvaja wrote: > On Fri, May 21, 2021 at 12:35 PM Dmitry Tantsur > wrote: > >> >> >> On Thu, May 20, 2021 at 10:11 PM Jeremy Stanley >> wrote: >> >>> On 2021-05-20 21:58:52 +0200 (+0200), Dmitry Tantsur wrote: >>> > On Thu, May 20, 2021 at 9:41 PM Jeremy Stanley >>> wrote: >>> [...] >>> > > It seems, from what little I've read, that Matrix servers can >>> > > integrate with IRC server networks and bridge channels fairly >>> > > seamlessly. If there were a Matrix bridge providing access to the >>> > > same channels as people were participating in via IRC (wherever that >>> > > happened to be), would that address your concerns? >>> > >>> > Not quite. It would probably solve the issues *for me*, but I've >>> already >>> > solved most of my issues. If we still say "our official communication >>> > channel is IRC", that's what people will (try to) use. >>> > >>> > If we do get an easy "hey, press this button to talk to ironic folks", >>> it >>> > may change my mind though. Right now it seems to be IRCCloud, but I'm >>> a bit >>> > uneasy about recommending everyone to go through some private service, >>> > especially about putting something like that in our contributor's >>> guide. >>> >>> Well, I guess what I'm asking is, if there were a Matrix bridge for >>> the #openstack-ironic IRC channel, would that be sufficient for >>> Ironic to be able to update its documentation with an "easy button" >>> pointed at Matrix instead of IRC, and connect new users on Matrix >>> with traditional IRC users for discussion? >>> >> >> Probably yes, assuming we can integrate Matrix with NickServ or drop the >> NickServ requirement. >> >> Great, so now when we have established that Ironic can point to matrix, > slack or whatever the current IM buzzword is (oh wait, that's right slack > killed their IRC bridge after reaching the critical mass, fortunately that > can't happen to any of the others 'cause how confusing that would be for > the new contributors). 
Can we please focus on the issue at the hand and do > something about the network change? I'd be still pro for libera but looks > like OFTC would be just fine too (expect some nick changes 'though). > That bloody newcomers with their buzzwords, should we just forget about them? Maybe have a bot that asks a tricky git question on join and bans whoever cannot respond to it? Seriously, the attitude in this thread is disturbing. FOSS is no longer an realm of bearded dudes with messy hair (hey, I only have messy hair!). People matter. People who cannot install an IRC bouncer matter. People who are too busy to play with our favourite toys matter. Such a small thing as "how easy it is to talk to them" can be a deciding factor between us and not-us. I'm fine with focusing on the issue at hand, but I'm not fine pretending that we're not explicitly exclusive with our choice of tools. Necessarily so in case of gerrit, purely by choice in case of IRC. Dmitry > > - jokke > > >> Dmitry >> >> >>> -- >>> Jeremy Stanley >>> >> >> >> -- >> Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, >> Commercial register: Amtsgericht Muenchen, HRB 153243, >> Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael >> O'Neill >> > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Fri May 21 12:08:25 2021 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 21 May 2021 14:08:25 +0200 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: On Thu, May 20, 2021 at 08:21:14AM -0400, Mohammed Naser wrote: > On Wed, May 19, 2021 at 9:23 AM Erno Kuvaja wrote: [...] > There is two sides to each story, this is the other one: > > https://freenode.net/news/freenode-is-foss There may be two sides; but in this specific case, the said person's erratic behaviour is clearly and utterly unreliable. > I recommend that so long that we don't have any problems, we keep > things as is. I doubt if it's sensible to "keep things as-is". I think we should go with the low-friction options (the Infra team is small and are already overloaded) that were already discussed: (a) move to Libera.Chat (this also shows solidarity with the former Freenode staff, now at Libera, who were doing a lot of unthankful, but important work); or (b) move to OFTC network Obviously, there's no rush to implement this. And with *either* of these options, if there's time and energy, also have the Matrix bridge available to accomodate those who don't prefer IRC. But this is a really-nice-to-have. (And let's not forget: _if_ we do move, there's other grunt work like updating all public documentation, clear communication, etc.) * * * For comparison, this is Fedora's on-going proposal: https://pagure.io/Fedora-Council/tickets/issue/372 https://pagure.io/Fedora-Council/tickets/issue/371 -- /kashyap From xxxcloudlearner at gmail.com Fri May 21 13:08:35 2021 From: xxxcloudlearner at gmail.com (cloud learner) Date: Fri, 21 May 2021 18:38:35 +0530 Subject: unable to get spice console Message-ID: Dear all, Unable to get spice console on victoria single node. Kindly help. Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andr.kurilin at gmail.com Fri May 21 13:30:40 2021 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Fri, 21 May 2021 16:30:40 +0300 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: пт, 21 мая 2021 г. в 15:16, Kashyap Chamarthy : > On Thu, May 20, 2021 at 08:21:14AM -0400, Mohammed Naser wrote: > > On Wed, May 19, 2021 at 9:23 AM Erno Kuvaja wrote: > > [...] > > > There is two sides to each story, this is the other one: > > > > https://freenode.net/news/freenode-is-foss > > There may be two sides; but in this specific case, the said person's > erratic behaviour is clearly and utterly unreliable. > > > I recommend that so long that we don't have any problems, we keep > > things as is. > > I doubt if it's sensible to "keep things as-is". I think we should go > with the low-friction options (the Infra team is small and are already > overloaded) that were already discussed: > > (a) move to Libera.Chat (this also shows solidarity with the former > Freenode staff, now at Libera, who were doing a lot of unthankful, > but important work); or > > (b) move to OFTC network > > Obviously, there's no rush to implement this. And with *either* of > these options, if there's time and energy, also have the Matrix bridge > available to accomodate those who don't prefer IRC. > who don't prefer IRC Why everyone points to third-party solutions for those who don't like IRC? Why the modern chat-platform can be used as a main solution and those who want IRC should look for third-party bridges to make it work in the good old way? My small experience: Long time ago (4 years ago?!), I moved Rally community from IRC to Gitter. I don't regret it. There was a bot for synchronizing messages between Gitter and IRC, so no one was offended or ignored. Gitter (as like many modern chat-platforms that are mentioned in this thread) provides web, mobile and native clients. You just install or open browser tab it and it works. No need to think about installing bouncer, configuring IRC client to do not send disconnect signal to bouncer, and so on. From the beginning, most newcomers & users that don't care much about openstack community workflows started writing at Gitter, because it was much simpler (several clicks and you can ask for help) and the trend persisted. I'm not saying that we need to use Gitter, it is going to die at some point, but I would like to raise one more time an idea that IRC is a good technology(as like ADSL) that will be alive for long long years, but there are a lot of interesting powerful solutions (i.e fiber networks). But this is a > really-nice-to-have. (And let's not forget: _if_ we do move, there's > other grunt work like updating all public documentation, clear > communication, etc.) > > * * * > > For comparison, this is Fedora's on-going proposal: > https://pagure.io/Fedora-Council/tickets/issue/372 > https://pagure.io/Fedora-Council/tickets/issue/371 > > > -- > /kashyap > > > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri May 21 13:39:32 2021 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 21 May 2021 15:39:32 +0200 Subject: Freenode and libera.chat In-Reply-To: <3ed32203-e812-bebf-201d-f99cde0d0755@debian.org> References: <98d25fb5-9cd5-96bf-33a3-db17bb236cfd@debian.org> <3ed32203-e812-bebf-201d-f99cde0d0755@debian.org> Message-ID: <9edec9c9-1343-757e-bd05-7e9532cdcc90@openstack.org> Thomas Goirand wrote: > [...] 
> Typically, in a single day, many threads starts. Then someone reply to > one of the early threads. I get the notification, but I have no idea > from which thread it comes from. Then I waste a lot of time searching > for it. Yes, there's the "threads" entry on top left, but it takes a > long time to use too (at least 3 clicks, each of them opening a new screen). > > Compare this to IRC: someone highlights my name, I just click in the > notification area, and Quassel opens on the correct channel, with the > line with my name highlighted. Threads are not universally bad. There are thread-first chat systems (like Zulip[1]) which are pretty good at delivering a threaded chat experience, but they tend to be confusing to people who are used to channel-based chat systems. [1] https://zulip.com/why-zulip/ -- Thierry Carrez (ttx) From ltoscano at redhat.com Fri May 21 13:47:38 2021 From: ltoscano at redhat.com (Luigi Toscano) Date: Fri, 21 May 2021 15:47:38 +0200 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: <109641848.nniJfEyVGO@whitebase.usersys.redhat.com> On Friday, 21 May 2021 15:30:40 CEST Andrey Kurilin wrote: > пт, 21 мая 2021 г. в 15:16, Kashyap Chamarthy : > > > > > > who don't prefer IRC > > Why everyone points to third-party solutions for those who don't like IRC? > Why the modern chat-platform can be used as a main solution and those who > want IRC should look for third-party bridges to make it work in the good > old way? On the technical side, as pointed out by several people, this could be done with matrix.org and its bridges, without having to leave IRC behind. > > My small experience: > Long time ago (4 years ago?!), I moved Rally community from IRC to Gitter. I > don't regret it. There was a bot for synchronizing messages between Gitter > and IRC, so no one was offended or ignored. Gitter (as like many modern > chat-platforms that are mentioned in this thread) provides web, mobile and > native clients. You just install or open browser tab it and it works. No > need to think about installing bouncer, configuring IRC client to do not > send disconnect signal to bouncer, and so on. From the beginning, most > newcomers & users that don't care much about openstack community workflows > started writing at Gitter, because it was much simpler (several clicks and > you can ask for help) and the trend persisted. > > I'm not saying that we need to use Gitter, it is going to die at some > point, but I would like to raise one more time an idea that IRC is a > good technology(as > like ADSL) that will be alive for long long years, but there are a lot of > interesting powerful solutions (i.e fiber networks). You may know already, but gitter is now based on matrix.org: https://matrix.org/blog/2020/12/07/gitter-now-speaks-matrix -- Luigi From james.slagle at gmail.com Fri May 21 13:57:19 2021 From: james.slagle at gmail.com (James Slagle) Date: Fri, 21 May 2021 09:57:19 -0400 Subject: [TripleO] Opting out of global-requirements.txt In-Reply-To: References: <124843068.26462828.1621006405455.JavaMail.zimbra@redhat.com> Message-ID: On Wed, May 19, 2021 at 4:07 AM Jiri Podivin wrote: > > Which brings me to my point. The openstack/requirements does provide one > rather essential service for us. In the form of upper-constraints for our > pip builds. > While we are mostly installing software through rpm, many CI jobs use pip > in some fashion. Without upper constraints, pip pulls aggressively the > newest version available and compatible with other packages. 
> Which causes several issues, noted by even pip people. > > There is also a question of security. There is a possibility of a bad > actor introducing a package with an extremely high version number. > Such a package would get precedence over the legitimate releases. In fact, > just this sort of attack was spotted in the wild.[1] > It should be noted however that upper-constraints.txt only applies in CI. If you "pip install python-tripleoclient" in a fresh virtualenv, you get latest releases, assuming they satisfy other dependencies. > Now, nothing is preventing us from using upper requirements, without being > in the openstack/requirements projects. > On the other hand, if we remove ourselves from the covenant, nothing is > stopping the openstack/requirements people from changing versions of the > accepted packages > without considering the impact it could have on our projects. > This would mean you could potentially have issues pip installing with the rest of OpenStack that have accepted the requirements contract. It goes back to my original point that I don't think we care. Overall, I don't get the sense there is broad agreement about this change, and it is not completely understood, myself included here. We should likely hold off on making any decisions until time allows for a more thorough deep dive into the implications. In the meantime however, I do think tripleo-validations needs the check-requirements job added since it depends on tripleo-common (in g-r) and tripleo-validations is itself in projects.txt. -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri May 21 14:10:14 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 21 May 2021 14:10:14 +0000 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: <20210521141013.nk2caw72ixkn3nhd@yuggoth.org> On 2021-05-21 16:30:40 +0300 (+0300), Andrey Kurilin wrote: [...] > Why everyone points to third-party solutions for those who don't > like IRC? Why the modern chat-platform can be used as a main > solution and those who want IRC should look for third-party > bridges to make it work in the good old way? [...] It's all a matter of perspective, and you're paying attention to how it's phrased by people who are already using IRC (the bulk of our current community). You could just as easily phrase it as "some projects are moving to Matrix, but taking advantage of the available Matrix/IRC bridge so that users of the old IRC channels aren't left behind." Technically the solution is the same one as "let's recommend a Matrix/IRC bridge to anyone who wants to talk with people on IRC without using IRC." The main difference is in how it's documented and communicated. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri May 21 14:14:30 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 21 May 2021 14:14:30 +0000 Subject: Freenode and libera.chat In-Reply-To: References: Message-ID: <20210521141430.u73tc552nvwbzpjh@yuggoth.org> On 2021-05-21 12:36:29 +0100 (+0100), Erno Kuvaja wrote: [...] > Looking at the movement over the past day, it seems like we're the > only hesitant party here. Rest of the communities have either > moved to libera.chat or OFTC. I'd strongly advise us to do the > same before things turn sour. 
OpenStack isn't the only community taking a careful and measured approach to the decision. Ansible deferred deciding what to do about their IRC channels until Wednesday of this coming week: https://github.com/ansible-community/community-topics/issues/19 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From andr.kurilin at gmail.com Fri May 21 14:17:29 2021 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Fri, 21 May 2021 17:17:29 +0300 Subject: Freenode and libera.chat In-Reply-To: <109641848.nniJfEyVGO@whitebase.usersys.redhat.com> References: <109641848.nniJfEyVGO@whitebase.usersys.redhat.com> Message-ID: пт, 21 мая 2021 г. в 16:47, Luigi Toscano : > On Friday, 21 May 2021 15:30:40 CEST Andrey Kurilin wrote: > > пт, 21 мая 2021 г. в 15:16, Kashyap Chamarthy : > > > > > > > > > > who don't prefer IRC > > > > Why everyone points to third-party solutions for those who don't like > IRC? > > Why the modern chat-platform can be used as a main solution and those who > > want IRC should look for third-party bridges to make it work in the good > > old way? > > On the technical side, as pointed out by several people, this could be > done > with matrix.org and its bridges, without having to leave IRC behind. > > Yes, but as far as I understand most speakers suggest using matrix to workaround missing features of IRC, but not to make a new chat-platform to be used with IRC clients. > > > > My small experience: > > Long time ago (4 years ago?!), I moved Rally community from IRC to > Gitter. I > > don't regret it. There was a bot for synchronizing messages between > Gitter > > and IRC, so no one was offended or ignored. Gitter (as like many modern > > chat-platforms that are mentioned in this thread) provides web, mobile > and > > native clients. You just install or open browser tab it and it works. No > > need to think about installing bouncer, configuring IRC client to do not > > send disconnect signal to bouncer, and so on. From the beginning, most > > newcomers & users that don't care much about openstack community > workflows > > started writing at Gitter, because it was much simpler (several clicks > and > > you can ask for help) and the trend persisted. > > > > I'm not saying that we need to use Gitter, it is going to die at some > > point, but I would like to raise one more time an idea that IRC is a > > good technology(as > > like ADSL) that will be alive for long long years, but there are a lot of > > interesting powerful solutions (i.e fiber networks). > > You may know already, but gitter is now based on matrix.org: > https://matrix.org/blog/2020/12/07/gitter-now-speaks-matrix > That is why I mentioned that I do not suggest Gitter and it is going to die at some point. ;) > -- > Luigi > > > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From franck.vedel at univ-grenoble-alpes.fr Fri May 21 14:24:13 2021 From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL) Date: Fri, 21 May 2021 16:24:13 +0200 Subject: [kolla][kolla-absible][cinder][iscsi] Message-ID: <098E9026-3E35-4294-99D5-97A99CC9AC13@univ-grenoble-alpes.fr> Hello. First, sorry…poor english… so it’s a google translation. I hope you could understand my problem. 
Following the installation (with kolla-ansible) of an OpenStack (Wallaby) on a single physical server (under CentOS 8), we were able to see all the possibilities OpenStack offers for our teaching (I work in a French university). For various reasons, we need to extend this setup. We will have 3 nodes (Dell R740) and a Dell storage array (Compellent). After having built a first test with 3 servers (T340), with the storage (LVM) on node 3 (so without the array), to check whether we had understood certain things correctly (in particular the contents of the « multinode » inventory file and of « globals.yml »), we now want to test with the Compellent array. My question is as follows: knowing that the 3 nodes are three identical servers, how should [storage] be configured in the multinode file… should one of the 3 servers be listed there, with the iscsid container added… or should the IP of the array be put there? I admit that this aspect is problematic for me. What about the « enable_iscsid » and « enable_cinder_backend_lvm » parameters? In a configuration like mine, would you use one "controller", or 2, or 3? Should we instead put an LVM on one of the servers and, with iSCSI, have this LVM point to the array? Maybe these are silly questions, I'm not sure. This use of the array is new to me and I don't know how best to do it. Between cinder, LVM, iSCSI, the array, the multinode file and the options in globals.yml, it is still not easy at first. Franck VEDEL -------------- next part -------------- An HTML attachment was scrubbed... URL: From andr.kurilin at gmail.com Fri May 21 14:28:38 2021 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Fri, 21 May 2021 17:28:38 +0300 Subject: Freenode and libera.chat In-Reply-To: <20210521141013.nk2caw72ixkn3nhd@yuggoth.org> References: <20210521141013.nk2caw72ixkn3nhd@yuggoth.org> Message-ID: пт, 21 мая 2021 г. в 17:17, Jeremy Stanley : > On 2021-05-21 16:30:40 +0300 (+0300), Andrey Kurilin wrote: > [...] > > Why everyone points to third-party solutions for those who don't > > like IRC? Why the modern chat-platform can be used as a main > > solution and those who want IRC should look for third-party > > bridges to make it work in the good old way? > [...] > > It's all a matter of perspective, and you're paying attention to how > it's phrased by people who are already using IRC (the bulk of our > current community). You could just as easily phrase it as "some > projects are moving to Matrix, but taking advantage of the available > Matrix/IRC bridge so that users of the old IRC channels aren't left > behind." Technically the solution is the same one as "let's > recommend a Matrix/IRC bridge to anyone who wants to talk with > people on IRC without using IRC." The main difference is in how it's > documented and communicated. > -- > Jeremy Stanley > > It's all a matter of perspective It is True without context. I may be wrong, but I do not remember any big change in OpenStack community (maybe only the 4 opens and nova-net -> neutron, but it's earlier days). If something was used/developed/decided 10 years ago, we will live with that forever. That is why I read all suggestions of using matrix as "if you don't like the chosen way, we are very sorry, but please find a way to leave with it. this is the way." :) -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From balazs.gibizer at est.tech Fri May 21 14:48:05 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Fri, 21 May 2021 16:48:05 +0200 Subject: [nova] Weekly meeting moved to Tuesday 16:00 UTC Message-ID: <54PGTQ.EIEK3EPXHG3R2@est.tech> Hi, Based on the poll[1] and the discussion on the weekly meeting[2] we agreed to move the nova weekly IRC meeting to Tuesday 16:00 UTC from now on. Cheers, gibi [1]http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022440.html [2]http://eavesdrop.openstack.org/meetings/nova/2021/nova.2021-05-20-16.00.log.html#l-113 From hberaud at redhat.com Fri May 21 15:01:27 2021 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 21 May 2021 17:01:27 +0200 Subject: [release] Release countdown for week R-19, May 24 - May 28 Message-ID: Development Focus ----------------- The Xena-1 milestone is next week, on 27 May, 2021! Project team plans for the Xena cycle should now be solidified. General Information ------------------- Libraries need to be released at least once per milestone period. Next week, the release team will propose releases for any library which had changes but has not been otherwise released since the Wallaby release. PTL's or release liaisons, please watch for these and give a +1 to acknowledge them. If there is some reason to hold off on a release, let us know that as well, by posting a -1. If we do not hear anything at all by the end of the week, we will assume things are OK to proceed. NB: If one of your libraries is still releasing 0.x versions, start thinking about when it will be appropriate to do a 1.0 version. The version number does signal the state, real or perceived, of the library, so we strongly encourage going to a full major version once things are in a good and usable state. Upcoming Deadlines & Dates -------------------------- Xena-1 milestone: 27 May, 2021 -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpodivin at redhat.com Fri May 21 15:04:37 2021 From: jpodivin at redhat.com (Jiri Podivin) Date: Fri, 21 May 2021 17:04:37 +0200 Subject: [TripleO] Opting out of global-requirements.txt In-Reply-To: References: <124843068.26462828.1621006405455.JavaMail.zimbra@redhat.com> Message-ID: > In the meantime however, I do think tripleo-validations needs the check-requirements job added since it depends on tripleo-common (in g-r) and tripleo-validations is itself in projects.txt. Good point. I'm going to submit it today. On Fri, May 21, 2021 at 3:57 PM James Slagle wrote: > > > On Wed, May 19, 2021 at 4:07 AM Jiri Podivin wrote: > >> >> Which brings me to my point. 
The openstack/requirements does provide one >> rather essential service for us. In the form of upper-constraints for our >> pip builds. >> While we are mostly installing software through rpm, many CI jobs use pip >> in some fashion. Without upper constraints, pip pulls aggressively the >> newest version available and compatible with other packages. >> Which causes several issues, noted by even pip people. >> >> There is also a question of security. There is a possibility of a bad >> actor introducing a package with an extremely high version number. >> Such a package would get precedence over the legitimate releases. In >> fact, just this sort of attack was spotted in the wild.[1] >> > > It should be noted however that upper-constraints.txt only applies in CI. > If you "pip install python-tripleoclient" in a fresh virtualenv, you get > latest releases, assuming they satisfy other dependencies. > > >> Now, nothing is preventing us from using upper requirements, without >> being in the openstack/requirements projects. >> On the other hand, if we remove ourselves from the covenant, nothing is >> stopping the openstack/requirements people from changing versions of the >> accepted packages >> without considering the impact it could have on our projects. >> > > This would mean you could potentially have issues pip installing with the > rest of OpenStack that have accepted the requirements contract. It goes > back to my original point that I don't think we care. > > Overall, I don't get the sense there is broad agreement about this change, > and it is not completely understood, myself included here. We should likely > hold off on making any decisions until time allows for a more thorough deep > dive into the implications. > > In the meantime however, I do think tripleo-validations needs the > check-requirements job added since it depends on tripleo-common (in g-r) > and tripleo-validations is itself in projects.txt. > > -- > -- James Slagle > -- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Fri May 21 15:40:58 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 21 May 2021 08:40:58 -0700 Subject: Freenode and libera.chat In-Reply-To: References: <20210521141013.nk2caw72ixkn3nhd@yuggoth.org> Message-ID: <0ff538e5-689f-4d3f-a67d-35971a0cc652@www.fastmail.com> On Fri, May 21, 2021, at 7:28 AM, Andrey Kurilin wrote: > > > пт, 21 мая 2021 г. в 17:17, Jeremy Stanley : > > On 2021-05-21 16:30:40 +0300 (+0300), Andrey Kurilin wrote: > > [...] > > > Why everyone points to third-party solutions for those who don't > > > like IRC? Why the modern chat-platform can be used as a main > > > solution and those who want IRC should look for third-party > > > bridges to make it work in the good old way? > > [...] > > > > It's all a matter of perspective, and you're paying attention to how > > it's phrased by people who are already using IRC (the bulk of our > > current community). You could just as easily phrase it as "some > > projects are moving to Matrix, but taking advantage of the available > > Matrix/IRC bridge so that users of the old IRC channels aren't left > > behind." Technically the solution is the same one as "let's > > recommend a Matrix/IRC bridge to anyone who wants to talk with > > people on IRC without using IRC." The main difference is in how it's > > documented and communicated. > > -- > > Jeremy Stanley > > > It's all a matter of perspective > > It is True without context. 
> I may be wrong, but I do not remember any big change in OpenStack > community (maybe only the 4 opens and nova-net -> neutron, but it's > earlier days). If something was used/developed/decided 10 years ago, we > will live with that forever. I feel like a big part of this is lots of people have very grand ideas, but no time and willingness to invest in them. We have done a number of large changes since nova-net -> neutron including an entire Zuul v3 rewrite, the massive Gerrit upgrade last year, deployment and use of global-requirements and later constraints and their later modifications, fully automated most details of the OpenStack release process, and so on. I am sure there are many more, but I've got a bias due to the things I'm exposed to. A key detail with all of those is they found champions who worked through them, got necessary consensus and implemented the changes. > That is why I read all suggestions of using matrix as "if you don't > like the chosen way, we are very sorry, but please find a way to leave > with it. this is the way." :) I would characterize it more as "if you don't like the chosen way and have no willingness to help change things then it is unlikely that anything will change". From my (again biased) perspective it seems more and more that when people show up with ideas there is an assumption that someone else (often me) will simply whip something together for them and when that doesn't happen it is because the idea is rejected upfront rather than needing investment. Matrix as an IRC alternative has been brought up a number of times in the past, but it has always lacked someone or a group of someones that are able to PoC it, determine what would be necessary to switch, make the necessary changes, then guide the project through a transition if the decision is made to move. This isn't as simple as registering on the service and joining channels either. You'll need ops/moderators, channel management, updates to existing bots that people want to keep, privacy policies may need to be considered, etc. The suggestion to use the matrix IRC bridge is a good way to simplify all of this though. For this reason I think it would be useful to shift the conversation back to whether or not Freenode is viable going forward. If the consensus for that is "yes" then we start a completely separate conversation on whether or not we want to move to an alternative protocol and take our time. If the answer is "no" then it is probably best to make an "easy" move using consistent tooling for now, then start a conversation on whether or not a move to another set of tools longer term makes sense separately. But again all of these options require effort and effort requires humans. Let's try to address the immediate problem first without conflating issues which only causes confusion and will make it more difficult to solve the problem in front of us. Then once that is behind us, bring up the other discussions in a productive manner (this includes acknowledging the other side might have an opinion worth listening to and that the other side doesn't make choices simply because they have grown long beards). Note: I've addressed some of the other ideas in the larger thread in this response, but they aren't necessarily the views of those I am directly responding to. > > -- > Best regards, > Andrey Kurilin. 
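For readers following the constraints mechanism mentioned in the list of past changes above (and debated at length in the [TripleO] global-requirements thread earlier in this digest), a minimal sketch of the difference upper-constraints make to a pip install. The constraints URL is the one published from openstack/requirements; the client package is only an example, not something prescribed by either thread:

    # unconstrained: pip resolves the newest compatible releases it can find
    pip install python-tripleoclient

    # constrained (what CI does): every transitive dependency is pinned to
    # the tested versions listed in upper-constraints.txt
    pip install -c https://releases.openstack.org/constraints/upper/master \
        python-tripleoclient
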
From andr.kurilin at gmail.com Fri May 21 17:28:30 2021 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Fri, 21 May 2021 20:28:30 +0300 Subject: Freenode and libera.chat In-Reply-To: <0ff538e5-689f-4d3f-a67d-35971a0cc652@www.fastmail.com> References: <20210521141013.nk2caw72ixkn3nhd@yuggoth.org> <0ff538e5-689f-4d3f-a67d-35971a0cc652@www.fastmail.com> Message-ID: пт, 21 мая 2021 г. в 18:48, Clark Boylan : > On Fri, May 21, 2021, at 7:28 AM, Andrey Kurilin wrote: > > > > > > пт, 21 мая 2021 г. в 17:17, Jeremy Stanley : > > > On 2021-05-21 16:30:40 +0300 (+0300), Andrey Kurilin wrote: > > > [...] > > > > Why everyone points to third-party solutions for those who don't > > > > like IRC? Why the modern chat-platform can be used as a main > > > > solution and those who want IRC should look for third-party > > > > bridges to make it work in the good old way? > > > [...] > > > > > > It's all a matter of perspective, and you're paying attention to how > > > it's phrased by people who are already using IRC (the bulk of our > > > current community). You could just as easily phrase it as "some > > > projects are moving to Matrix, but taking advantage of the available > > > Matrix/IRC bridge so that users of the old IRC channels aren't left > > > behind." Technically the solution is the same one as "let's > > > recommend a Matrix/IRC bridge to anyone who wants to talk with > > > people on IRC without using IRC." The main difference is in how it's > > > documented and communicated. > > > -- > > > Jeremy Stanley > > > > > It's all a matter of perspective > > > > It is True without context. > > I may be wrong, but I do not remember any big change in OpenStack > > community (maybe only the 4 opens and nova-net -> neutron, but it's > > earlier days). If something was used/developed/decided 10 years ago, we > > will live with that forever. > > I feel like a big part of this is lots of people have very grand ideas, > but no time and willingness to invest in them. We have done a number of > large changes since nova-net -> neutron including an entire Zuul v3 > rewrite, yes, Zuul is another thing that occurred to me as soon as I sent an email, but it was late to change anything. > the massive Gerrit upgrade last year, deployment and use of > global-requirements and later constraints and their later modifications, > fully automated most details of the OpenStack release process, and so on. I > am sure there are many more, but I've got a bias due to the things I'm > exposed to. > Most of the listed things are just required things to do. It doesn't simplify them or decreases greatness. > A key detail with all of those is they found champions who worked through > them, got necessary consensus and implemented the changes. > Sure thing. I do not want to hurt anyone, I know that there are a lot of people in our community that implemented a bunch of great, necessary and important tasks. And thanks to everyone for that. The problem is in the over-complexity of doing such changes. > > That is why I read all suggestions of using matrix as "if you don't > > like the chosen way, we are very sorry, but please find a way to leave > > with it. this is the way." :) > > I would characterize it more as "if you don't like the chosen way and have > no willingness to help change things then it is unlikely that anything will > change". 
From my (again biased) perspective it seems more and more that > when people show up with ideas there is an assumption that someone else > (often me) will simply whip something together for them and when that > doesn't happen it is because the idea is rejected upfront rather than > needing investment. > I understand what you are talking about, but only one or two emails mentioned the complexity of implementation and load of infra-team. Most emails of the topic cover only theoretical point of view "whether we want one or another solution". > Matrix as an IRC alternative has been brought up a number of times in the > past, but it has always lacked someone or a group of someones that are able > to PoC it, determine what would be necessary to switch, make the necessary > changes, then guide the project through a transition if the decision is > made to move. This isn't as simple as registering on the service and > joining channels either. You'll need ops/moderators, channel management, > updates to existing bots that people want to keep, privacy policies may > need to be considered, etc. The suggestion to use the matrix IRC bridge is > a good way to simplify all of this though. > > For this reason I think it would be useful to shift the conversation back > to whether or not Freenode is viable going forward. If the consensus for > that is "yes" then we start a completely separate conversation on whether > or not we want to move to an alternative protocol and take our time. If the > answer is "no" then it is probably best to make an "easy" move using > consistent tooling for now, then start a conversation on whether or not a > move to another set of tools longer term makes sense separately. > I dislike questions with binary answers. Such questions are too limited. For example, if you ask me for a binary answer for the topic - I would just ignore answering because I use IRC rarely and only for openstack purpose. Should we move from Freenode to another IRC network? I don't care much, that is my answer... But again all of these options require effort and effort requires humans. > Let's try to address the immediate problem first without conflating issues > which only causes confusion and will make it more difficult to solve the > problem in front of us. Then once that is behind us, bring up the other > discussions in a productive manner (this includes acknowledging the other > side might have an opinion worth listening to and that the other side > doesn't make choices simply because they have grown long beards). > > Note: I've addressed some of the other ideas in the larger thread in this > response, but they aren't necessarily the views of those I am directly > responding to. > > > > > -- > > Best regards, > > Andrey Kurilin. > > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Fri May 21 17:51:07 2021 From: mihalis68 at gmail.com (Chris Morgan) Date: Fri, 21 May 2021 13:51:07 -0400 Subject: Freenode and libera.chat In-Reply-To: References: <20210521141013.nk2caw72ixkn3nhd@yuggoth.org> <0ff538e5-689f-4d3f-a67d-35971a0cc652@www.fastmail.com> Message-ID: I agree that openstack chat should get off freenode. I don't claim to understand what happened there, but clearly Mr Lee is attempting to appear like the "owner" of IRC in general (see https://www.irc.com/lets-take-irc-further) something I find repellant. I'm not qualified to comment on libera chat vs. 
oftc, except that Jeremy has already prepared technically for a possible move to OFTC and established with the community that it was a suitable replacement. On that basis it seems like the best next step to me +1 [I found getting into IRC hard myself - it's resolutely banned where I work, but IRCcloud is good enough. I'm by no means an IRC old-timer.] Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri May 21 19:42:38 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 21 May 2021 19:42:38 +0000 Subject: [all] OpenDev Collaboratory IRC services Message-ID: <20210521194238.javvmpoo3o3egqlt@yuggoth.org> I know lots of people are discussing the recent Freenode IRC upheaval and what it means for their projects. On behalf of the OpenDev sysadmins I've started a thread on the service-discuss at lists.opendev.org mailing list to get feedback from communities currently utilizing the OpenDev Collaboratory's IRC bot services in their channels on Freenode (meeting minutes/logging, Gerrit change events, OpenDev service status information, channel operator assistance): http://lists.opendev.org/pipermail/service-discuss/2021-May/000236.html Please follow up there with your thoughts, to assist in our decision making, so we can take the needs of your project into account. Thanks! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Arkady.Kanevsky at dell.com Fri May 21 19:45:25 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 21 May 2021 19:45:25 +0000 Subject: [Interop] Next 2 meetings on May 28 and June 4 are CANCELLED Message-ID: Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri May 21 19:52:05 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 21 May 2021 19:52:05 +0000 Subject: [Interop] new meetup pointer Message-ID: Team, The meetings will be held going forward at https://meetpad.opendev.org/interop at 4pm UTC every Friday Convert time to your timezone: https://mytime.io/16pm/UTC Thanks, Arkady Kanevsky, Ph.D. SP Chief Technologist & DE Dell Technologies office of CTO Dell Inc. One Dell Way, MS PS2-91 Round Rock, TX 78682, USA Phone: 512 7204955 -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Fri May 21 20:26:55 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 21 May 2021 22:26:55 +0200 Subject: [all][stable] Delete $series-eol tagged branches In-Reply-To: References: <96e1cb5a-99c5-7a05-01c0-1635743d9c1d@est.tech> Message-ID: Hi, Thanks for cleaning up the open patches. All the remaining *-eol tagged branches have been deleted, see the list: http://paste.openstack.org/show/805576/ * except one: stable/ocata of openstack/os-cloud-config, as it seems the tag is missing there Thanks, Előd On 2021. 05. 17. 
17:59, Marios Andreou wrote: > On Fri, May 14, 2021 at 11:11 PM Előd Illés wrote: >> Hi, >> >> As I wrote previously [1] the long-waited deletion of $series-eol tagged >> branches started with the removal of ocata-eol tagged ones first (for >> the list of deleted branches, see: [2]) >> Then I also sent out a warning [3] about the next step, to delete >> pike-eol tagged branches, which finally happened today (for the list of >> deleted branches, see: [4]). >> >> So now I'm sending out a *warning* again, that as a 3rd step, the >> deletion of already tagged but still open branches will continue in 1 or >> 2 weeks time frame. If everything works as expected, then branches with >> *queens-eol*, *rocky-eol* and *stein-eol* tags can be processed in one >> batch. >> >> Also I would like to ask the teams who have $series-eol tagged branches >> to abandon all open patches on those branches, otherwise the branch >> cannot be deleted. > Thank you for this Elod, > > I did a quick survey on our tripleo repos for rocky (tagged eol > waiting for deletion) > https://releases.openstack.org/teams/tripleo.html#rocky > > I found a few patches in progress and commented there > > * https://review.opendev.org/c/openstack/paunch/+/764901/4#message-6ad7eb01c314c908164cab541318388eb460121d > * https://review.opendev.org/c/openstack/python-tripleoclient/+/723634/2#message-dacb09ed164e1c44bdea86b56a42d08c892fd4df > * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/601598/12#message-dcaf8c787e4c2da8bc70c0a234efd811e1c2d484 > * https://review.opendev.org/c/openstack/tripleo-heat-templates/+/724148/4#message-0d019bb76d384ff32b1a5ea3b3bc2f34f9d29bae > > let's hope the authors will respond in time - I will follow up and try > to reach out to them again if they don't > > regards, marios > > > >> Thanks, >> >> Előd >> >> [1] >> http://lists.openstack.org/pipermail/openstack-discuss/2021-April/021949.html >> [2] http://paste.openstack.org/show/804953/ >> [3] >> http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022173.html >> [4] http://paste.openstack.org/show/805404/ >> >> >> >> From elod.illes at est.tech Fri May 21 20:29:09 2021 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 21 May 2021 22:29:09 +0200 Subject: [octavia][tripleo][kolla][stable][release] $series-eol delete problem In-Reply-To: References: Message-ID: <9a4205b1-9d1f-c026-d467-f0586f3d335a@est.tech> Thanks for all the replies, the branches have been deleted. Előd On 2021. 05. 19. 18:29, Gregory Thiemonge wrote: > > > On Fri, May 14, 2021 at 10:41 PM Előd Illés wrote: > > Hi teams in $SUBJECT, > > during the deletion of $series-eol tagged branches it turned out that > the below listed branches / repositories contains merged patches > on top > of $series-eol tag. The issue is with this that whenever the > branch is > deleted only the $series-eol (and other) tags can be checked out, > so the > changes that were merged after the eol tags, will be *lost*. > > There are two options now: > > 1. Create another tag (something like: "$series-eol-extra"), so > that the > extra patches will not be lost completely, because they can be > checked > out with the newly created tags > > 2. 
Delete the branch anyway and don't care about the lost patch(es) > > Here are the list of such branches, please consider which option > is good > for the team and reply to this mail: > > openstack/octavia > * stable/stein has patches on top of the stein-eol tag > * stable/queens has patches on top of the queens-eol tag > > > As discussed during today's Octavia weekly meeting, we agreed to > delete those branches. > Those patches shouldn't have been backported here. > > Thanks, > > openstack/kolla > * stable/pike has patches on top of the pike-eol tag > * stable/ocata has patches on top of the ocata-eol tag > > openstack/tripleo-common > * stable/rocky has patches on top of the rocky-eol tag > > openstack/os-apply-config > * stable/pike has patches on top of the pike-eol tag > * stable/ocata has patches on top of the ocata-eol tag > > openstack/os-cloud-config > stable/ocata has patches on top of the ocata-eol tag > > Thanks, > > Előd > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Shih at obspm.fr Fri May 21 21:49:53 2021 From: Albert.Shih at obspm.fr (Albert Shih) Date: Fri, 21 May 2021 23:49:53 +0200 Subject: [victoria] Reboot problem Message-ID: Hi I'm running (trying) a openstack (version victoria) with a Dell Unity storage unit. When I'm create a instance, everything work fine, the block volume are created on the Unity, the mount (with iscsi) work on the compute and the instance boot normaly. But if I shutdown a instance and try to restart it, it's failed. It's just like the block volume cannot be mount. I check on the unity eveyrthing are ok In the log of the compute /var/log/nova/nova-compute.log I can see : Any idea ? Regards 2021-05-21 21:39:15.849 2663 WARNING os_brick.initiator.connectors.iscsi [req-42f8147b-b006-4af7-8dfb-31c39b46b728 868dde297576ce232570ea549928d02d58f543007825d199eb48f30a10139c96 416ac2de9ed74316920e9cbe3d376bb3 - 223c116cac324ae19fd74877a6c06d27 223c116cac324ae19fd74877a6c06d27] LUN 5 on iSCSI portal 10.15.23.252:3260 not found on sysfs after logging in. 2021-05-21 21:39:15.952 2663 WARNING os_brick.initiator.connectors.iscsi [req-42f8147b-b006-4af7-8dfb-31c39b46b728 868dde297576ce232570ea549928d02d58f543007825d199eb48f30a10139c96 416ac2de9ed74316920e9cbe3d376bb3 - 223c116cac324ae19fd74877a6c06d27 223c116cac324ae19fd74877a6c06d27] Couldn't find iSCSI nodes because iscsiadm err: iscsiadm: No records found : os_brick.exception.VolumeDeviceNotFound: Volume device not found at . 2021-05-21 21:39:15.960 2663 WARNING os_brick.initiator.connectors.iscsi [req-42f8147b-b006-4af7-8dfb-31c39b46b728 868dde297576ce232570ea549928d02d58f543007825d199eb48f30a10139c96 416ac2de9ed74316920e9cbe3d376bb3 - 223c116cac324ae19fd74877a6c06d27 223c116cac324ae19fd74877a6c06d27] iscsiadm stderr output when getting sessions: iscsiadm: No active sessions. : os_brick.exception.VolumeDeviceNotFound: Volume device not found at . 2021-05-21 21:39:16.231 2663 INFO nova.compute.manager [req-42f8147b-b006-4af7-8dfb-31c39b46b728 868dde297576ce232570ea549928d02d58f543007825d199eb48f30a10139c96 416ac2de9ed74316920e9cbe3d376bb3 - 223c116cac324ae19fd74877a6c06d27 223c116cac324ae19fd74877a6c06d27] [instance: ec79c0d6-259d-466f-94da-def4ce318d3a] Successfully reverted task state from powering-on on failure for instance. 
2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server [req-42f8147b-b006-4af7-8dfb-31c39b46b728 868dde297576ce232570ea549928d02d58f543007825d199eb48f30a10139c96 416ac2de9ed74316920e9cbe3d376bb3 - 223c116cac324ae19fd74877a6c06d27 223c116cac324ae19fd74877a6c06d27] Exception during message handling: os_brick.exception.VolumeDeviceNotFound: Volume device not found at . 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 276, in dispatch 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 196, in _do_dispatch 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/exception_wrapper.py", line 77, in wrapped 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server _emit_exception_notification( 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server self.force_reraise() 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise value 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/exception_wrapper.py", line 69, in wrapped 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return f(self, context, *args, **kw) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 188, in decorated_function 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server LOG.warning("Failed to revert task state for instance. 
" 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server self.force_reraise() 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise value 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 159, in decorated_function 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/utils.py", line 1456, in decorated_function 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 216, in decorated_function 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server compute_utils.add_instance_fault_from_exc(context, 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server self.force_reraise() 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise value 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 205, in decorated_function 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return function(self, context, *args, **kwargs) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 3152, in start_instance 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server self._power_on(context, instance) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 3120, in _power_on 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server self.driver.power_on(context, instance, 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 3429, in power_on 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server self._hard_reboot(context, instance, network_info, block_device_info, 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 3294, in _hard_reboot 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server xml = self._get_guest_xml(context, instance, network_info, disk_info, 2021-05-21 21:39:16.237 2663 ERROR 
oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6364, in _get_guest_xml 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server conf = self._get_guest_config(instance, network_info, image_meta, 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6006, in _get_guest_config 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server storage_configs = self._get_guest_storage_config(context, 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 4750, in _get_guest_storage_config 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server self._connect_volume(context, connection_info, instance) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 1623, in _connect_volume 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server vol_driver.connect_volume(connection_info, instance) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/nova/virt/libvirt/volume/iscsi.py", line 64, in connect_volume 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server device_info = self.connector.connect_volume(connection_info['data']) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/os_brick/utils.py", line 137, in trace_logging_wrapper 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return f(*args, **kwargs) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py", line 359, in inner 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return f(*args, **kwargs) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/os_brick/initiator/connectors/iscsi.py", line 519, in connect_volume 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server self._cleanup_connection(connection_properties, force=True) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__ 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server self.force_reraise() 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server six.reraise(self.type_, self.value, self.tb) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise value 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/os_brick/initiator/connectors/iscsi.py", line 513, in connect_volume 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return self._connect_single_volume(connection_properties) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/os_brick/utils.py", line 61, in _wrapper 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return r.call(f, *args, **kwargs) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/retrying.py", line 212, in call 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise 
attempt.get() 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/retrying.py", line 247, in get 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server six.reraise(self.value[0], self.value[1], self.value[2]) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise value 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/retrying.py", line 200, in call 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server attempt = Attempt(fn(*args, **kwargs), attempt_number, False) 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File "/usr/lib/python3/dist-packages/os_brick/initiator/connectors/iscsi.py", line 591, in _connect_single_volume 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise exception.VolumeDeviceNotFound(device='') 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server os_brick.exception.VolumeDeviceNotFound: Volume device not found at . 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server 2021-05-21 21:43:45.556 2663 WARNING nova.compute.manager [req-6d91e265-b6b2-4522-826d-176e0d24a2d3 - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor. 2021-05-21 21:43:45.598 2663 INFO nova.compute.manager [-] [instance: ec79c0d6-259d-466f-94da-def4ce318d3a] During _sync_instance_power_state the DB power_state (4) does not match the vm_power_state from the hypervisor (0). Updating power_state in the DB to match the hypervisor. -- Albert SHIH Observatoire de Paris France xmpp: jas at obspm.fr Heure local/Local time: Fri May 21 11:43:30 PM CEST 2021 From dvd at redhat.com Sat May 22 02:07:52 2021 From: dvd at redhat.com (David Vallee Delisle) Date: Fri, 21 May 2021 22:07:52 -0400 Subject: Freenode and libera.chat In-Reply-To: References: <20210521141013.nk2caw72ixkn3nhd@yuggoth.org> <0ff538e5-689f-4d3f-a67d-35971a0cc652@www.fastmail.com> Message-ID: Considering how the freenode takeover was done, I don't believe the statuquo is a viable option here. This is below the belt and I don't think they deserve our userbase. Maybe this is a grayzone and this is borderline legal, as a FOSS community, we have the responsibility to take a stance against this kind of behavior by dropping our support to the new freenode.That's the least we can do. I understand that moving away from current freenode to either libera, or whatever other medium the community will choose is going to be a hassle to everyone. We're going to waste quite a lot of time to move everything and transition. It shouldn't be rushed if we want to change medium. We need to evaluate options, maybe we should vote, and/or deploy stable infrastructure if necessary. I still believe that moving to libera should be quite trivial for most of us and should probably be done ASAP and from there we can decide where we want to go next. DVD On Fri, May 21, 2021 at 1:55 PM Chris Morgan wrote: > I agree that openstack chat should get off freenode. I don't claim to > understand what happened there, but clearly Mr Lee is attempting to appear > like the "owner" of IRC in general (see > https://www.irc.com/lets-take-irc-further) something I find repellant. > I'm not qualified to comment on libera chat vs. 
oftc, except that Jeremy > has already prepared technically for a possible move to OFTC and > established with the community that it was a suitable replacement. On that > basis it seems like the best next step to me +1 > > [I found getting into IRC hard myself - it's resolutely banned where I > work, but IRCcloud is good enough. I'm by no means an IRC old-timer.] > > Chris > > -- > Chris Morgan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Sat May 22 14:27:52 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sat, 22 May 2021 10:27:52 -0400 Subject: [victoria] Reboot problem In-Reply-To: References: Message-ID: I would recommend turning up the debug logs for nova on the compute and trying again. It could be something where the ISCSI session is not properly closed when you shutdown the VM and it creates issues when the VM is started again. On Fri, May 21, 2021 at 5:57 PM Albert Shih wrote: > Hi > > > I'm running (trying) a openstack (version victoria) with a Dell Unity > storage unit. > > When I'm create a instance, everything work fine, the block volume are > created on the Unity, the mount (with iscsi) work on the compute and the > instance boot normaly. > > But if I shutdown a instance and try to restart it, it's failed. > > It's just like the block volume cannot be mount. I check on the unity > eveyrthing are ok > > In the log of the compute > > /var/log/nova/nova-compute.log > > I can see : > > Any idea ? > > Regards > > > > 2021-05-21 21:39:15.849 2663 WARNING os_brick.initiator.connectors.iscsi > [req-42f8147b-b006-4af7-8dfb-31c39b46b728 > 868dde297576ce232570ea549928d02d58f543007825d199eb48f30a10139c96 > 416ac2de9ed74316920e9cbe3d376bb3 - 223c116cac324ae19fd74877a6c06d27 > 223c116cac324ae19fd74877a6c06d27] LUN 5 on iSCSI portal 10.15.23.252:3260 > not found on sysfs after logging in. > 2021-05-21 21:39:15.952 2663 WARNING os_brick.initiator.connectors.iscsi > [req-42f8147b-b006-4af7-8dfb-31c39b46b728 > 868dde297576ce232570ea549928d02d58f543007825d199eb48f30a10139c96 > 416ac2de9ed74316920e9cbe3d376bb3 - 223c116cac324ae19fd74877a6c06d27 > 223c116cac324ae19fd74877a6c06d27] Couldn't find iSCSI nodes because > iscsiadm err: iscsiadm: No records found > : os_brick.exception.VolumeDeviceNotFound: Volume device not found at . > 2021-05-21 21:39:15.960 2663 WARNING os_brick.initiator.connectors.iscsi > [req-42f8147b-b006-4af7-8dfb-31c39b46b728 > 868dde297576ce232570ea549928d02d58f543007825d199eb48f30a10139c96 > 416ac2de9ed74316920e9cbe3d376bb3 - 223c116cac324ae19fd74877a6c06d27 > 223c116cac324ae19fd74877a6c06d27] iscsiadm stderr output when getting > sessions: iscsiadm: No active sessions. > : os_brick.exception.VolumeDeviceNotFound: Volume device not found at . > 2021-05-21 21:39:16.231 2663 INFO nova.compute.manager > [req-42f8147b-b006-4af7-8dfb-31c39b46b728 > 868dde297576ce232570ea549928d02d58f543007825d199eb48f30a10139c96 > 416ac2de9ed74316920e9cbe3d376bb3 - 223c116cac324ae19fd74877a6c06d27 > 223c116cac324ae19fd74877a6c06d27] [instance: > ec79c0d6-259d-466f-94da-def4ce318d3a] Successfully reverted task state from > powering-on on failure for instance. 
> 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > [req-42f8147b-b006-4af7-8dfb-31c39b46b728 > 868dde297576ce232570ea549928d02d58f543007825d199eb48f30a10139c96 > 416ac2de9ed74316920e9cbe3d376bb3 - 223c116cac324ae19fd74877a6c06d27 > 223c116cac324ae19fd74877a6c06d27] Exception during message handling: > os_brick.exception.VolumeDeviceNotFound: Volume device not found at . > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server Traceback > (most recent call last): > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 165, in > _process_incoming > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server res = > self.dispatcher.dispatch(message) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line > 276, in dispatch > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return > self._do_dispatch(endpoint, method, ctxt, args) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line > 196, in _do_dispatch > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server result = > func(ctxt, **new_args) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/exception_wrapper.py", line 77, in > wrapped > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > _emit_exception_notification( > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in > __exit__ > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in > force_reraise > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > six.reraise(self.type_, self.value, self.tb) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/six.py", line 703, in reraise > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise > value > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/exception_wrapper.py", line 69, in > wrapped > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return > f(self, context, *args, **kw) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 188, in > decorated_function > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > LOG.warning("Failed to revert task state for instance. 
" > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in > __exit__ > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in > force_reraise > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > six.reraise(self.type_, self.value, self.tb) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/six.py", line 703, in reraise > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise > value > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 159, in > decorated_function > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return > function(self, context, *args, **kwargs) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/compute/utils.py", line 1456, in > decorated_function > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return > function(self, context, *args, **kwargs) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 216, in > decorated_function > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > compute_utils.add_instance_fault_from_exc(context, > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in > __exit__ > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in > force_reraise > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > six.reraise(self.type_, self.value, self.tb) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/six.py", line 703, in reraise > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise > value > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 205, in > decorated_function > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return > function(self, context, *args, **kwargs) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 3152, in > start_instance > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > self._power_on(context, instance) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 3120, in > _power_on > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > self.driver.power_on(context, instance, > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 3429, in > power_on > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > self._hard_reboot(context, instance, network_info, block_device_info, > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 3294, in > _hard_reboot > 2021-05-21 21:39:16.237 2663 ERROR 
oslo_messaging.rpc.server xml = > self._get_guest_xml(context, instance, network_info, disk_info, > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6364, in > _get_guest_xml > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server conf = > self._get_guest_config(instance, network_info, image_meta, > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 6006, in > _get_guest_config > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > storage_configs = self._get_guest_storage_config(context, > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 4750, in > _get_guest_storage_config > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > self._connect_volume(context, connection_info, instance) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 1623, in > _connect_volume > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > vol_driver.connect_volume(connection_info, instance) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/nova/virt/libvirt/volume/iscsi.py", line > 64, in connect_volume > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > device_info = self.connector.connect_volume(connection_info['data']) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/os_brick/utils.py", line 137, in > trace_logging_wrapper > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return > f(*args, **kwargs) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py", line 359, > in inner > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return > f(*args, **kwargs) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/os_brick/initiator/connectors/iscsi.py", > line 519, in connect_volume > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > self._cleanup_connection(connection_properties, force=True) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in > __exit__ > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > self.force_reraise() > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in > force_reraise > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > six.reraise(self.type_, self.value, self.tb) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/six.py", line 703, in reraise > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise > value > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/os_brick/initiator/connectors/iscsi.py", > line 513, in connect_volume > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server return > self._connect_single_volume(connection_properties) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/os_brick/utils.py", line 61, in _wrapper > 2021-05-21 21:39:16.237 2663 ERROR 
oslo_messaging.rpc.server return > r.call(f, *args, **kwargs) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/retrying.py", line 212, in call > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise > attempt.get() > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/retrying.py", line 247, in get > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > six.reraise(self.value[0], self.value[1], self.value[2]) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/six.py", line 703, in reraise > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise > value > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/retrying.py", line 200, in call > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server attempt = > Attempt(fn(*args, **kwargs), attempt_number, False) > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server File > "/usr/lib/python3/dist-packages/os_brick/initiator/connectors/iscsi.py", > line 591, in _connect_single_volume > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server raise > exception.VolumeDeviceNotFound(device='') > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > os_brick.exception.VolumeDeviceNotFound: Volume device not found at . > 2021-05-21 21:39:16.237 2663 ERROR oslo_messaging.rpc.server > 2021-05-21 21:43:45.556 2663 WARNING nova.compute.manager > [req-6d91e265-b6b2-4522-826d-176e0d24a2d3 - - - - -] While synchronizing > instance power states, found 1 instances in the database and 0 instances on > the hypervisor. > 2021-05-21 21:43:45.598 2663 INFO nova.compute.manager [-] [instance: > ec79c0d6-259d-466f-94da-def4ce318d3a] During _sync_instance_power_state the > DB power_state (4) does not match the vm_power_state from the hypervisor > (0). Updating power_state in the DB to match the hypervisor. > > -- > Albert SHIH > Observatoire de Paris > France > xmpp: jas at obspm.fr > Heure local/Local time: > Fri May 21 11:43:30 PM CEST 2021 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Sat May 22 14:29:42 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sat, 22 May 2021 10:29:42 -0400 Subject: [largescale-sig][neutron] What driver are you using? In-Reply-To: References: Message-ID: We are "lucky" that external connectivity needs are limited. We have between 50-100 IP per L2 usually. We do not have huge pools of public IPs which are harder to handle/scale as with a public cloud. On Mon, May 17, 2021 at 10:59 AM Arnaud Morin wrote: > Hi Laurent, > > Thanks for your reply! > I agree that it depends on the scale usage. > About the VLAN you are using for external networks, do you have/want to > share the number of public IP you have in this L2 for a region? > > Cheers, > > On 11.05.21 - 19:21, Laurent Dumont wrote: > > I feel like it depends a lot on the scale/target usage (public vs private > > cloud). > > > > But at $dayjob, we are leveraging > > > > - vlans for external networking (linux-bridge + OVS) > > - vxlans for internal Openstack networks. > > > > We like the simplicity of vxlan with minimal overlay configuration. There > > are some scaling/performance issues with stuff like l2 population. > > > > VLANs are okay but it's hard to predict the next 5 years of growth. 
> > > > On Mon, May 10, 2021 at 8:34 AM Arnaud Morin > wrote: > > > > > Hey large-scalers, > > > > > > We had a discusion in my company (OVH) about neutron drivers. > > > We are using a custom driver based on BGP for public networking, and > > > another custom driver for private networking (based on vlan). > > > > > > Benefits from this are obvious: > > > - we maintain the code > > > - we do what we want, not more, not less > > > - it fits perfectly to the network layer our company is using > > > - we have full control of the networking stack > > > > > > But it also have some downsides: > > > - we have to maintain the code... (rebasing, etc.) > > > - we introduce bugs that are not upstream (more code, more bugs) > > > - a change in code is taking longer, we have few people working on this > > > (compared to a community based) > > > - this is not upstream (so not opensource) > > > - we are not sharing (bad) > > > > > > So, we were wondering which drivers are used upstream in large scale > > > environment (not sure a vlan driver can be used with more than 500 > > > hypervisors / I dont know about vxlan or any other solution). > > > > > > Is there anyone willing to share this info? > > > > > > Thanks in advance! > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sun May 23 00:40:29 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 22 May 2021 19:40:29 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 21th May, 21: Reading: 5 min Message-ID: <17996aa8b8a.daef89e586281.5262030218675308325@ghanshyammann.com> Hello Everyone, Here is last week's summary of the Technical Committee activities. 1. What we completed this week: ========================= Project updates: ------------------- * None for this week. Other updates: ------------------ * None for this week. 2. TC Meetings: ============ * TC held this week meeting on Thursday; you can find the full meeting logs in the below link: - http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-05-20-15.00.log.html * We will have next week's meeting on May 27th, Thursday 15:00 UTC[1]. 3. Activities In progress: ================== TC Tracker for Xena cycle ------------------------------ TC is using the etherpad[2] for Xena cycle working item. We will be checking and updating the status biweekly in the same etherpad. Open Reviews ----------------- * Four open reviews for ongoing activities[3]. Starting the 'Y' release naming process --------------------------------------------- * Y release naming process is started[4]. Nomination is open until June 10th feel free to propose names in below wiki ** https://wiki.openstack.org/wiki/Release_Naming/Y_Proposals Replacing ATC terminology with AC (Active Contributors) ------------------------------------------------------------------- * As UC is merged into TC, this is to include the AUC into ATC so that they can be eligible for TC election voting. We are having a good amount of discussion on Gerrit[5], feel free to review the patch if you have any points regarding this. * In the last TC meeting, we will write a TC resolution to map the ATC with the new term AC from Bylaws' perspective. Retiring sushy-cli -------------------- * Ironic project is retiring the sushy-cli[6] Discussion on moving from Freenode -------------------------------------------- * As you know there are changes in Freenode organization/governance strategy and policies, we are having a lot of discussion over ML on this[7][8]. 
* In TC, many members think we should keep monitoring the situation and not make any quick decision in a hurry. We also need to decide which platform is better for the long term. Until then we will continue on Freenode.
* As Naser mentioned in the ML thread, if anyone feels strongly that this should be discussed as a priority, feel free to add it to next week's TC meeting agenda.

4. How to contact the TC:
====================
If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways:

1. Email: you can send an email with the tag [tc] on the openstack-discuss ML[9].
2. Weekly meeting: The Technical Committee conducts a weekly meeting every Thursday at 15 UTC [10]
3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [11]
4. Ping us using the 'tc-members' nickname in the #openstack-tc IRC channel.

[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[2] https://etherpad.opendev.org/p/tc-xena-tracker
[3] https://review.opendev.org/q/project:openstack/governance+status:open
[4] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022383.html
[5] https://review.opendev.org/c/openstack/governance/+/790092
[6] https://review.opendev.org/c/openstack/governance/+/792348
[7] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022468.html
[8] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022539.html
[9] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[10] http://eavesdrop.openstack.org/#Technical_Committee_Meeting
[11] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours

-gmann

From fungi at yuggoth.org Sun May 23 14:05:52 2021
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Sun, 23 May 2021 14:05:52 +0000
Subject: Freenode and libera.chat
In-Reply-To: <20210521141430.u73tc552nvwbzpjh@yuggoth.org>
References: <20210521141430.u73tc552nvwbzpjh@yuggoth.org>
Message-ID: <20210523140551.q2p3s3e7lzeiqs7q@yuggoth.org>

On 2021-05-21 14:14:30 +0000 (+0000), Jeremy Stanley wrote:
> On 2021-05-21 12:36:29 +0100 (+0100), Erno Kuvaja wrote:
> [...]
> > Looking at the movement over the past day, it seems like we're the
> > only hesitant party here. Rest of the communities have either
> > moved to libera.chat or OFTC. I'd strongly advise us to do the
> > same before things turn sour.
>
> OpenStack isn't the only community taking a careful and measured
> approach to the decision. Ansible deferred deciding what to do about
> their IRC channels until Wednesday of this coming week:
>
>     https://github.com/ansible-community/community-topics/issues/19

In a similar vein, I've noticed that #python, #python-dev, #pypa and
so on haven't moved off Freenode yet, though the topic in #python
suggests there's some ongoing discussion to determine whether they
should. Unfortunately it doesn't say where that's being discussed
though, maybe on the Python community mailing lists or Discourse,
however cursory searches I've performed turn up nothing.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From anlin.kong at gmail.com Sun May 23 21:23:09 2021 From: anlin.kong at gmail.com (Lingxian Kong) Date: Mon, 24 May 2021 09:23:09 +1200 Subject: [wallaby][trove] Instance Volume Resize In-Reply-To: References: Message-ID: Hi Ammad, Do you mind reporting the issue to trove storyboard https://storyboard.openstack.org/#!/project/openstack/trove with the detailed steps to reproduce? It'd be great if you could provide the openstack service versions you are using as well. Thanks. --- Lingxian Kong Senior Cloud Engineer (Catalyst Cloud) Trove PTL (OpenStack) OpenStack Cloud Provider Co-Lead (Kubernetes) On Fri, May 21, 2021 at 7:19 PM Ammad Syed wrote: > Thanks Lingxian, it worked fine. > > I have resize the volume of mysql datastore from 17GB to 18GB. In guest > agent logs it said that command has executed successfully. > > 2021-05-21 07:11:03.527 1062 INFO trove.guestagent.datastore.manager [-] > Resizing the filesystem at /var/lib/mysql, online: True > 2021-05-21 07:11:03.528 1062 DEBUG trove.guestagent.volume [-] Checking if > /dev/sdb exists. _check_device_exists > /home/ubuntu/trove/trove/guestagent/volume.py:217 > 2021-05-21 07:11:03.528 1062 DEBUG oslo_concurrency.processutils [-] > Running cmd (subprocess): sudo blockdev --getsize64 /dev/sdb execute > /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:384 > 2021-05-21 07:11:03.545 1062 DEBUG oslo_concurrency.processutils [-] CMD > "sudo blockdev --getsize64 /dev/sdb" returned: 0 in 0.016s execute > /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:423 > 2021-05-21 07:11:03.546 1062 DEBUG oslo_concurrency.processutils [-] > Running cmd (subprocess): sudo resize2fs /dev/sdb execute > /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:384 > 2021-05-21 07:11:03.577 1062 DEBUG oslo_concurrency.processutils [-] CMD > "sudo resize2fs /dev/sdb" returned: 0 in 0.031s execute > /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:423 > > But the /var/lib/mysql still shows 17GB. I have manually executed > resize2fs on /dev/sdb. After manual execution, /var/lib/mysql has updated > to 18GB. Not sure if I am missing something. > > - Ammad > > On Thu, May 20, 2021 at 4:39 PM Lingxian Kong > wrote: > >> Modify trove service config file: >> >> [DEFAULT] >> max_accepted_volume_size = >> >> 10 is the default value if the config option is not specified. >> >> --- >> Lingxian Kong >> Senior Cloud Engineer (Catalyst Cloud) >> Trove PTL (OpenStack) >> OpenStack Cloud Provider Co-Lead (Kubernetes) >> >> >> On Wed, May 19, 2021 at 9:56 PM Ammad Syed wrote: >> >>> Hi, >>> >>> I am using wallaby / trove on ubuntu 20.04. I am trying to extend volume >>> of database instance. Its having trouble that instance cannot exceed volume >>> size of 10GB. >>> >>> My flavor has 2vcpus 4GB RAM and 10GB disk. I created a database >>> instance with 5GB database size and mysql datastore. The deployment has >>> created 10GB root and 5GB /var/lib/mysql. I have tried to extend volume to >>> 11GB, it failed with error that "Volume 'size' cannot exceed maximum of 10 >>> GB, 11 cannot be accepted". >>> >>> I want to keep root disk size to 10GB and only want to extend >>> /var/lib/mysql keeping the same flavor. Is it possible or should I need to >>> upgrade flavor as well ? 
>>> >>> -- >>> Regards, >>> >>> >>> Syed Ammad Ali >>> >> > > -- > Regards, > > > Syed Ammad Ali > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon May 24 08:30:08 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 24 May 2021 09:30:08 +0100 Subject: [kolla] Reorganization of kolla-ansible documentation In-Reply-To: References: Message-ID: On Fri, 14 May 2021 at 21:27, Klemen Pogacnik wrote: > > Hello! Hi Klemen, Thank you for your evaluation of the documentation. I think a lot of it aligns with the discussions we had in the Kolla Kalls [1] some time ago. I'll add notes inline. It's worth looking at other similar projects for inspiration, e.g. OSA [2] and TripleO [3]. [1] https://etherpad.opendev.org/p/kollakall [2] https://docs.openstack.org/openstack-ansible/latest/ [3] https://docs.openstack.org/tripleo-docs/latest/ Mark > > I promised to prepare my view as a user of kolla-ansible on its documentation. In my opinion the division between admin guides and user guides is artificial, as the user of kolla-ansible is actually the cloud administrator. Absolutely agreed. > > Maybe it would be good to think about reorganizing the structure of documentation. Many good chapters are already written, they only have to be positioned in the right place to be found more easily. Agreed also. We now have redirect support [4] in place to keep old links working, assuming only whole pages are moved. [4] doc/source/_extra/.htaccess > > So here is my proposal of kolla-ansible doc's structure: > > 1. Introduction > 1.1. mission > 1.2. benefits > 1.3. support matrix How about a 'getting started' page, similar to [5]? [5] https://docs.openstack.org/kayobe/latest/getting-started.html > 2. Architecture > 2.1. basic architecture > 2.2. HA architecture > 2.3. network architecture > 2.4. storage architecture > 3. Workflows > 3.1. preparing the surroundings (networking, docker registry, ...) > 3.2. preparing servers (packages installation) Installation of kolla-ansible should go here. > 3.3. configuration (of kolla-ansible and description of basic logic for configuration of Openstack modules) > 3.4. 1st day procedures (bootstrap, deploy, destroy) > 3.5. 2nd day procedures (reconfigure, upgrade, add, remove nodes ...) > 3.6. multiple regions > 3.7. multiple cloud > 3.8. security > 3.9. troubleshooting (how to check, if cloud works, what to do, if it doesn't) > 4. Use Cases > 4.1. all-in-one > 4.2. basic vm multinode > 4.3. some production use cases What do these pages contain? Something like the current quickstart? > 5. Reference guide > Mostly the same structure as already is. Except it would be desirable that description of each module has: > - purpose of the module > - configuration of the module > - how to use it with links to module docs > - basic troubleshooting > 6. Contributor guide > > > The documentation also needs figures, pictures, diagrams to be more understandable. So at least in the first chapters some of them shall be added. This is a common request from users. We have lots of reference documentation, but need more high level architectural information and diagrams. Unfortunately this type of documentation is quite hard to create, but we would welcome improvements. > > > I'm also thinking about convergence of documentation of kayobe, kolla and kolla-ansible projects. It's true that there's no strict connection between kayobe and other two and kolla containers can be used without kolla-ansible playbooks. 
But the real benefit the user can get is to use all three projects together. But let's leave that for the second phase. > I'm not so sure about converging them into one set of docs. They are each fairly separate tools. We added a short section [6] to each covering related projects. Perhaps we should make this a dedicated page, and provide more information about the Kolla ecosystem? [6] https://docs.openstack.org/kolla/latest/#related-projects > > > So please comment on this proposal. Do you think it's going in the right direction? If yes, I can refine it. > > From syedammad83 at gmail.com Mon May 24 08:39:25 2021 From: syedammad83 at gmail.com (Ammad Syed) Date: Mon, 24 May 2021 13:39:25 +0500 Subject: [wallaby][trove] Instance Volume Resize In-Reply-To: References: Message-ID: Hi, I have created the issue to storyboard. https://storyboard.openstack.org/#!/story/2008916 - Ammad On Mon, May 24, 2021 at 2:23 AM Lingxian Kong wrote: > Hi Ammad, > > Do you mind reporting the issue to trove storyboard > https://storyboard.openstack.org/#!/project/openstack/trove with the > detailed steps to reproduce? It'd be great if you could provide the > openstack service versions you are using as well. Thanks. > > --- > Lingxian Kong > Senior Cloud Engineer (Catalyst Cloud) > Trove PTL (OpenStack) > OpenStack Cloud Provider Co-Lead (Kubernetes) > > > On Fri, May 21, 2021 at 7:19 PM Ammad Syed wrote: > >> Thanks Lingxian, it worked fine. >> >> I have resize the volume of mysql datastore from 17GB to 18GB. In guest >> agent logs it said that command has executed successfully. >> >> 2021-05-21 07:11:03.527 1062 INFO trove.guestagent.datastore.manager [-] >> Resizing the filesystem at /var/lib/mysql, online: True >> 2021-05-21 07:11:03.528 1062 DEBUG trove.guestagent.volume [-] Checking >> if /dev/sdb exists. _check_device_exists >> /home/ubuntu/trove/trove/guestagent/volume.py:217 >> 2021-05-21 07:11:03.528 1062 DEBUG oslo_concurrency.processutils [-] >> Running cmd (subprocess): sudo blockdev --getsize64 /dev/sdb execute >> /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:384 >> 2021-05-21 07:11:03.545 1062 DEBUG oslo_concurrency.processutils [-] CMD >> "sudo blockdev --getsize64 /dev/sdb" returned: 0 in 0.016s execute >> /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:423 >> 2021-05-21 07:11:03.546 1062 DEBUG oslo_concurrency.processutils [-] >> Running cmd (subprocess): sudo resize2fs /dev/sdb execute >> /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:384 >> 2021-05-21 07:11:03.577 1062 DEBUG oslo_concurrency.processutils [-] CMD >> "sudo resize2fs /dev/sdb" returned: 0 in 0.031s execute >> /opt/guest-agent-venv/lib/python3.6/site-packages/oslo_concurrency/processutils.py:423 >> >> But the /var/lib/mysql still shows 17GB. I have manually executed >> resize2fs on /dev/sdb. After manual execution, /var/lib/mysql has updated >> to 18GB. Not sure if I am missing something. >> >> - Ammad >> >> On Thu, May 20, 2021 at 4:39 PM Lingxian Kong >> wrote: >> >>> Modify trove service config file: >>> >>> [DEFAULT] >>> max_accepted_volume_size = >>> >>> 10 is the default value if the config option is not specified. >>> >>> --- >>> Lingxian Kong >>> Senior Cloud Engineer (Catalyst Cloud) >>> Trove PTL (OpenStack) >>> OpenStack Cloud Provider Co-Lead (Kubernetes) >>> >>> >>> On Wed, May 19, 2021 at 9:56 PM Ammad Syed >>> wrote: >>> >>>> Hi, >>>> >>>> I am using wallaby / trove on ubuntu 20.04. 
I am trying to extend >>>> volume of database instance. Its having trouble that instance cannot exceed >>>> volume size of 10GB. >>>> >>>> My flavor has 2vcpus 4GB RAM and 10GB disk. I created a database >>>> instance with 5GB database size and mysql datastore. The deployment has >>>> created 10GB root and 5GB /var/lib/mysql. I have tried to extend volume to >>>> 11GB, it failed with error that "Volume 'size' cannot exceed maximum of 10 >>>> GB, 11 cannot be accepted". >>>> >>>> I want to keep root disk size to 10GB and only want to extend >>>> /var/lib/mysql keeping the same flavor. Is it possible or should I need to >>>> upgrade flavor as well ? >>>> >>>> -- >>>> Regards, >>>> >>>> >>>> Syed Ammad Ali >>>> >>> >> >> -- >> Regards, >> >> >> Syed Ammad Ali >> > -- Regards, Syed Ammad Ali -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon May 24 11:15:13 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 24 May 2021 12:15:13 +0100 Subject: [kolla] master branch open for development Message-ID: Hi, Apologies, this email should have been sent some time ago. I wanted to notify the community that the master branch of Kolla projects is open for development. Note that review focus will still be on patches relevant to Wallaby until it is released. Thanks, Mark From mark at stackhpc.com Mon May 24 11:22:56 2021 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 24 May 2021 12:22:56 +0100 Subject: [kolla] Draft release process Message-ID: Hi, At the Xena PTG [1] we discussed changing the early phases of our release process to provide more stability while developing features for Kolla projects. We also agreed to firm up timing and improve documentation around some of the steps of the release process nearer to the release. I have drafted a new release process/schedule [2] based on these ideas. Please share your comments, thoughts and concerns either here or in the Etherpad. Thanks, Mark [1] https://etherpad.opendev.org/p/kolla-xena-ptg [2] https://etherpad.opendev.org/p/kolla-release-process-draft From juliaashleykreger at gmail.com Mon May 24 13:28:16 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 24 May 2021 06:28:16 -0700 Subject: Freenode and libera.chat In-Reply-To: <20210523140551.q2p3s3e7lzeiqs7q@yuggoth.org> References: <20210521141430.u73tc552nvwbzpjh@yuggoth.org> <20210523140551.q2p3s3e7lzeiqs7q@yuggoth.org> Message-ID: On Sun, May 23, 2021 at 7:09 AM Jeremy Stanley wrote: > > On 2021-05-21 14:14:30 +0000 (+0000), Jeremy Stanley wrote: > > On 2021-05-21 12:36:29 +0100 (+0100), Erno Kuvaja wrote: > > [...] > > > Looking at the movement over the past day, it seems like we're the > > > only hesitant party here. Rest of the communities have either > > > moved to libera.chat or OFTC. I'd strongly advise us to do the > > > same before things turn sour. > > > > OpenStack isn't the only community taking a careful and measured > > approach to the decision. Ansible deferred deciding what to do about > > their IRC channels until Wednesday of this coming week: > > > > https://github.com/ansible-community/community-topics/issues/19 > > In a similar vein, I've noticed that #python, #python-dev, #pypa and > so on haven't moved off Freenode yet, though the topic in #python > suggests there's some ongoing discussion to determine whether they > should. 
Unfortunately it doesn't say where that's being discussed > though, maybe on the Python community mailing lists or Discourse, > however cursory searches I've performed turn up nothing. > -- > Jeremy Stanley I suspect they, like everyone else who has seen some of the latest rules changes[0] and a report[1] of abuses of power, are looking at treading lightly in order to do the best thing for their community. I suspect we're going to see a mass migration to other platforms or tools, regardless at this point in time. The rules changes are just going to make it more difficult to keep the existing channels as redirects. I firmly believe this is no longer a matter of should, but we now have an imperative to ensure community communication continuity. If the higher level project doesn't wish to come to quick consensus, then I believe individual projects will make their own decisions and we'll end up fragmenting the communication channels until things settle down. [0]: https://github.com/freenode/web-7.0/pull/513/commits/2037126831a84c57f978268f090fc663cf43ed7a#diff-0e382b024f696a3b7a0ff3bce24ae3166cc6f383d059c7cc61e0a3ccdeed522c [1]: https://www.devever.net/~hl/freenode_abuse From skaplons at redhat.com Mon May 24 14:32:07 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 24 May 2021 16:32:07 +0200 Subject: [neutron][interop][refstack] New tests and capabilities to track in interop In-Reply-To: References: Message-ID: <6595086.PSTg7GmUaj@p1> Hi, Dnia poniedziałek, 26 kwietnia 2021 17:48:08 CEST Martin Kopec pisze: > Hi everyone, > > I would like to further discuss the topics we covered with the neutron team > during > the PTG [1]. > > * adding address_group API capability > It's tested by tests in neutron-tempest-plugin. First question is if tests > which are > not directly in tempest can be a part of a non-add-on marketing program? > It's possible to move them to tempest though, by the time we do so, could > they be > marked as advisory? > > * Shall we include QoS tempest tests since we don't know what share of > vendors > enable QoS? Could it be an add-on? > These tests are also in neutron-tempest-plugin, I assume we're talking about > neutron_tempest_plugin.api.test_qos tests. > If we want to include these tests, which program should they belong to? Do > we wanna > create a new one? > > [1] https://etherpad.opendev.org/p/neutron-xena-ptg > > Thanks, > -- > Martin Kopec > Senior Software Quality Engineer > Red Hat EMEA First of all, sorry that it took so long for me but I finally looked into Neutron related tests and capabilities and I think we can possibly add few things there: - For "networks-security-groups-CRUD" we can add "address_groups" API. It is now supported by ML2 plugin [1]. In the neutron-tempest-plugin we just have some scenario test [2] but we would probably need also API tests for that, correct? - For networks-l3-CRUD we can optionally add port_forwarding API. This can be added by service plugin [3] so it may not be enabled in all deployments. But maybe there is some "optional feature" category in the RefStack, and if so, this could be included there. Tests for that are in neutron-tempest-plugin [4] and [5]. - There are also 2 other service plugins, which I think could be included as "optional feature" in the RefStack, but IMO don't fit exactly in any of the existing groups. Those are QoS [6] and Trunks [7]. Tests for both are in the neutron-tempest-plugin as well: Qos: [8] and [9], Trunk [10], [11] and [12]. 
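Since port forwarding, QoS and trunks are all optional service plugins, whoever wires this into a guideline will probably also want a quick way to check whether a given cloud actually exposes those APIs before running the tests. A minimal sketch based on the networking extension list (the exact alias strings below are my assumption and should be double-checked against the neutron-lib definitions):

  # List the enabled networking API extensions and filter for the features
  # discussed above; the alias names in the grep are assumptions and may
  # need adjusting to whatever aliases the cloud actually reports.
  openstack extension list --network -c Alias -f value \
    | grep -E 'address-group|floating-ip-port-forwarding|qos|trunk'

If an alias is missing from the output, the corresponding service plugin is most likely not loaded in that deployment, which is exactly the "optional feature" case described above.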
Please let me know what You think about it, whether that would be OK, and whether You want me to propose some patches for it or whether You would rather propose them.

[1] https://review.opendev.org/c/openstack/neutron-lib/+/741784
[2] https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/777833
[3] https://github.com/openstack/neutron/blob/master/neutron/services/portforwarding/pf_plugin.py
[4] https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_port_forwardings.py
[5] https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_port_forwarding_negative.py
[6] https://github.com/openstack/neutron/blob/master/neutron/services/qos/qos_plugin.py
[7] https://github.com/openstack/neutron/blob/master/neutron/services/trunk/plugin.py
[8] https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_qos.py
[9] https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_qos_negative.py
[10] https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_trunk.py
[11] https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_trunk_details.py
[12] https://github.com/openstack/neutron-tempest-plugin/blob/master/neutron_tempest_plugin/api/test_trunk_negative.py

-- 
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: 

From marios at redhat.com Mon May 24 16:06:31 2021
From: marios at redhat.com (Marios Andreou)
Date: Mon, 24 May 2021 19:06:31 +0300
Subject: [TripleO] Opting out of global-requirements.txt
In-Reply-To: 
References: <124843068.26462828.1621006405455.JavaMail.zimbra@redhat.com>
Message-ID: 

On Fri, May 21, 2021 at 6:04 PM Jiri Podivin wrote:
>
> >
> > In the meantime however, I do think tripleo-validations needs the check-requirements job added since it depends on tripleo-common (in g-r) and tripleo-validations is itself in projects.txt.
> > Good point. I'm going to submit it today. > > On Fri, May 21, 2021 at 3:57 PM James Slagle wrote: >> >> >> >> On Wed, May 19, 2021 at 4:07 AM Jiri Podivin wrote: >>> >>> >>> Which brings me to my point. The openstack/requirements does provide one rather essential service for us. In the form of upper-constraints for our pip builds. >>> While we are mostly installing software through rpm, many CI jobs use pip in some fashion. Without upper constraints, pip pulls aggressively the newest version available and compatible with other packages. >>> Which causes several issues, noted by even pip people. >>> >>> There is also a question of security. There is a possibility of a bad actor introducing a package with an extremely high version number. >>> Such a package would get precedence over the legitimate releases. In fact, just this sort of attack was spotted in the wild.[1] >> >> >> It should be noted however that upper-constraints.txt only applies in CI. If you "pip install python-tripleoclient" in a fresh virtualenv, you get latest releases, assuming they satisfy other dependencies. >> >>> >>> Now, nothing is preventing us from using upper requirements, without being in the openstack/requirements projects. >>> On the other hand, if we remove ourselves from the covenant, nothing is stopping the openstack/requirements people from changing versions of the accepted packages >>> without considering the impact it could have on our projects. >> >> >> This would mean you could potentially have issues pip installing with the rest of OpenStack that have accepted the requirements contract. It goes back to my original point that I don't think we care. >> >> Overall, I don't get the sense there is broad agreement about this change, and it is not completely understood, myself included here. We should likely hold off on making any decisions until time allows for a more thorough deep dive into the implications. >> >> In the meantime however, I do think tripleo-validations needs the check-requirements job added since it depends on tripleo-common (in g-r) and tripleo-validations is itself in projects.txt yeah I brought this up in my earlier reply if we're not removing (and seems like we aren't for now) then we need to add coverage in tripleo-validations and tripleo-ansible which is posted there https://review.opendev.org/c/openstack/tripleo-ansible/+/792830 regards, marios >> >> -- >> -- James Slagle >> -- From jimmy at openstack.org Mon May 24 16:53:50 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 24 May 2021 11:53:50 -0500 Subject: [interop] Interop Testing Guidelines Message-ID: <59BD4681-3D7C-4B45-8C2B-82445B042443@getmailspring.com> Hi all - I noticed that the most recent testing guidelines [1] are from 2016. Is there a plan to update those to the 2020 guidelines? They still mention Chris Hoge, who is no longer in the community, and I'm assuming the guidelines there are outdated as well. Cheers, Jimmy https://opendev.org/osf/interop/src/branch/master/2016.08/procedure.rst -------------- next part -------------- An HTML attachment was scrubbed... URL: From bcafarel at redhat.com Mon May 24 18:06:20 2021 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 24 May 2021 20:06:20 +0200 Subject: [neutron] Bug deputy report (week starting on 2021-05-17) Message-ID: Hey neutrinos, we are starting a new bug deputy rotation (new dates available as usual at [0]), here are the bugs that were reported last week. 
Skipping invalid/duplicate bugs, this was relatively quiet, most bugs have fixes or assignees. OVN eyes are welcome on https://bugs.launchpad.net/bugs/1929197 (unconfirmed section). In details: Critical * Fullstack test TestUninterruptedConnectivityOnL2AgentRestart failing often with LB agent - https://bugs.launchpad.net/bugs/1928764 Causing FT failures, lajos looking into it in https://review.opendev.org/c/openstack/neutron/+/792507 * Devstack "fixup_ubuntu" method removed - https://bugs.launchpad.net/bugs/1928805 Fixed with https://review.opendev.org/c/openstack/neutron/+/791983 * Docs job is broken in neutron - https://bugs.launchpad.net/bugs/1928913 pyroute2 issue with newer versions, https://review.opendev.org/c/openstack/neutron/+/792180 fixes it for older branches, and https://review.opendev.org/c/openstack/neutron/+/792077 in master (dependency will be added to pyroute2 directly soon) Medium * "test_reserve_provider_segment_without_physical_network" failing randomly - https://bugs.launchpad.net/bugs/1929190 Proposed test fix https://review.opendev.org/c/openstack/neutron/+/792559 Low * Missing packages in openSuse installation steps - https://bugs.launchpad.net/bugs/1929012 Small suggested doc fix on packages to install for L3 agent Unconfirmed * [OVN] SB connection unreliable - https://bugs.launchpad.net/bugs/1929197 Needs some OVN eyes, error on "AttributeError: 'NoneType' object has no attribute 'chassis_exists'" Incomplete * [stein][neutron] l3 agent error - https://bugs.launchpad.net/bugs/1928675 After a migration from queens, controllers go OOM with HA routers "Respawning ip_monitor" logs Mentioned env was cleaned, we may get more information on next env update [0] https://wiki.openstack.org/wiki/Network/Meetings#Bug_deputy -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon May 24 18:29:16 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 24 May 2021 13:29:16 -0500 Subject: [ptl][tc] Project's feedback on Freenode situation Message-ID: <1799fa365be.b18b8ae0158537.8334892981210962275@ghanshyammann.com> Hello PTLs/Release Liaisons, As you know, there is a lot of discussion going on for Freenode situation, If you are not aware of that, these are the ML thread to read[1][2]. Most TC members (I mentioned in my weekly summary email also[3]) think to wait for more time and monitor the situation to make any decision. But in today's discussion on openstack-tc, a few projects (Ironic, Kolla) are in favour of making the decision soon instead of waiting. To proceed further, TC would like to get feedback from each project. I request PTLs to discuss this in your team and write your feedback in below etherpad: - https://etherpad.opendev.org/p/feedback-on-freenode [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022468.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022539.html [3] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022663.html -gmann From whayutin at redhat.com Mon May 24 20:26:10 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 24 May 2021 14:26:10 -0600 Subject: [tripleo][ci] mirror issues failing jobs Message-ID: https://bugs.launchpad.net/tripleo/+bug/1929461 something is going on w/ either centos or the infra mirrors... I suspect centos atm. Mirror monitoring is here: http://cacti.openstack.org/cacti/graph_view.php -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cboylan at sapwetik.org Mon May 24 20:34:46 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 24 May 2021 13:34:46 -0700 Subject: [tripleo][ci] mirror issues failing jobs In-Reply-To: References: Message-ID: <17c2d152-ed15-4165-8171-3db0cee8ea6d@www.fastmail.com> On Mon, May 24, 2021, at 1:26 PM, Wesley Hayutin wrote: > https://bugs.launchpad.net/tripleo/+bug/1929461 > > something is going on w/ either centos or the infra mirrors... I > suspect centos atm. > Mirror monitoring is here: > http://cacti.openstack.org/cacti/graph_view.php > Looks like https://mirror.ord.rax.opendev.org/centos/8-stream/AppStream/x86_64/os/repodata/repomd.xml updated to point at repodata/d0435a46fff272cacc6c2d5433e8e2b0b2d70d57141116c8b9fa7624aaf01aaf-filelists.xml.gz but that filelist wasn't present in the mirror we sync from. Looking at the location we sync from they appear to be suffering the same issue: http://mirror.dal10.us.leaseweb.net/centos/8-stream/AppStream/x86_64/os/repodata/ To fix this we can wait for our upstreams to get in sync or switch to an upstream that is in sync (if one exists). Clark From luke.camilleri at zylacomputing.com Mon May 24 20:44:59 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Mon, 24 May 2021 22:44:59 +0200 Subject: [victoria][magnum][octavia] in amphora Message-ID: I have configured magnum with a service type of LoadBalancer and it is successfully deploying an external LoadBalancer via Octavia. The problem is that I cannot reach the public IP address but can see the entries in the haproxy.log on the amphora instance and the log shows 0 SC-- at the end of each entry when it is being accessed (via a browser for example). So the Octavia part seems to be fine, the config shows the correct LB --> listener --> pool --> member and the nodeports that the service should be listening on (I am assuming that the same nodeports are also used for the healthchecks) The haproxy.cfg in the amphora instance shows the below in the pool members section:    server 43834d2f-4e22-4065-b448-ddf0713f2ced 192.168.1.191:31765 weight 1 check inter 60s fall 3 rise 3    server 3a733c48-24dd-426e-8394-699a908121ee 192.168.1.36:31765 weight 1 check inter 60s fall 3 rise 3    server 8d093783-79c9-4094-b3a2-8d31b1c4567f 192.168.1.99:31765 weight 1 check inter 60s fall 3 rise 3 and the status of the pool's members (# openstack loadbalancer member list) is as follows: +------------------------------------------------------------+---------------------------+-----------------+-----------------------+---------------------------+ | id     | provisioning_status | address       |     protocol_port |     operating_status | 8d093783-79c9-4094-b3a2-8d31b1c4567f      ACTIVE     192.168.1.99      31765                 ERROR 43834d2f-4e22-4065-b448-ddf0713f2ced        ACTIVE     192.168.1.191    31765                 ERROR 3a733c48-24dd-426e-8394-699a908121ee     ACTIVE     192.168.1.36      31765                 ERROR From the below loadbalancer healthmonitor show command I can see that the health checks are being done via TCP on the same port and can confirm that the security group allows the nodeports range (ALLOW IPv4 30000-32767/tcp from 0.0.0.0/0) +---------------------+--------------------------------------+ | Field                             | Value +---------------------+--------------------------------------+ | provisioning_status     | ACTIVE | type                            | TCP | id                                | 86604638-27db-47d2-ad9c-0594564a44be | 
operating_status        | ONLINE +---------------------+--------------------------------------+ $ kubectl describe service nginx-service Name:                     nginx-service Namespace:                default Labels:                   Annotations:              Selector:                 app=nginx Type:                     LoadBalancer IP Families:              IP:                       10.254.255.232 IPs:                      LoadBalancer Ingress:     185.89.239.217 Port:                       80/TCP TargetPort:               8080/TCP NodePort:                   31765/TCP Endpoints: 10.100.4.11:8080,10.100.4.12:8080,10.100.5.10:8080 Session Affinity:         None External Traffic Policy:  Cluster Events:   Type    Reason                Age   From                Message   ----    ------                ----  ----                -------   Normal  EnsuringLoadBalancer  29m   service-controller  Ensuring load balancer   Normal  EnsuredLoadBalancer   28m   service-controller  Ensured load balancer The Kubernetes deployment file: apiVersion: apps/v1 kind: Deployment metadata:   name: nginx-deployment   labels:     app: nginx spec:   replicas: 3   selector:     matchLabels:       app: nginx   template:     metadata:       labels:         app: nginx     spec:       containers:       - name: nginx         image: nginx:latest         ports:         - containerPort: 8080 --- apiVersion: v1 kind: Service metadata:   name: nginx-service spec:   selector:     app: nginx   type: LoadBalancer   ports:   - protocol: TCP     port: 80     targetPort: 8080 Does ayone have any pointers as to why the amphora is not able to reach the nodeports of the kubernetes workers? Thanks in advance From fungi at yuggoth.org Mon May 24 22:36:28 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 24 May 2021 22:36:28 +0000 Subject: [automation-sig][freezer][i18n-sig][powervmstacker-sig][public-cloud-sig] IRC channel cleanup In-Reply-To: <20210519232914.t2im6yugufgcyg7y@yuggoth.org> References: <20210519232914.t2im6yugufgcyg7y@yuggoth.org> Message-ID: <20210524223627.wifytwitoraxol2d@yuggoth.org> On 2021-05-19 23:29:14 +0000 (+0000), Jeremy Stanley wrote: > I've proposed a pair of changes removing the OpenDev Collaboratory > IRC bots (including logging) from inactive channels. Some of these > channels correspond to current groups within the OpenStack > community, so it's only fair to give them a heads up and let them > know. The following channels have had no human discussion at all (no > comments from anything other than our bots) for all of 2021 so far: > > * #openstack-auto-scaling > * #openstack-fr > * #openstack-freezer > * #openstack-powervm > * #openstack-publiccloud > * #openstack-self-healing [...] > https://review.opendev.org/792301 > https://review.opendev.org/792302 > > If you're concerned and would like to resume using one or more of > these channels, feel free to follow up here on the mailing list or > with review comments some time in the next few days. If there are no > objections I plan to merge these no later than Monday, May 24. Of > course it's quite easy to reinstate services on a channel at any > time, so if you don't see this until the changes have already > merged, please follow up anyway and we can restore the bots as > needed. The indicated changes have now merged. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon May 24 22:42:11 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 24 May 2021 22:42:11 +0000 Subject: [all][infra] Topic change request for the retired projects IRC channel In-Reply-To: <179873b6816.10baeb36a28384.6822408342638897985@ghanshyammann.com> References: <179873b6816.10baeb36a28384.6822408342638897985@ghanshyammann.com> Message-ID: <20210524224211.wabz72xi52qamp7p@yuggoth.org> On 2021-05-19 19:44:48 -0500 (-0500), Ghanshyam Mann wrote: > We have retired the few projects in OpenStack and their IRC > channel are still there. I chatted with fungi about it on TC > channel and he suggested that unregistering these channels is not > a good idea or recommendation. > > In that case, can we change the Topic of these channels to > something saying that "XYZ project is retired and this channel is > also not active, contact openstack-discuss ML for any query" > > Below is the list of retired projects channel: > > #openstack-karbor > #openstack-searchlight > #openstack-qinling > #openstack-tricircle > #congress I've done this just now. Note that the old Tricircle channel was just #tricircle and not #openstack-tricircle. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From johnsomor at gmail.com Tue May 25 00:12:17 2021 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 24 May 2021 17:12:17 -0700 Subject: [victoria][magnum][octavia] in amphora In-Reply-To: References: Message-ID: Hi Luke, >From the snippet you provided, it looks like the load balancer is healthy and working as expected, but the health check of the member endpoint is failing. I also see that the health monitor is configured for a type of TCP and the members are configured for port 31765. This assumes that the members do not have "monitor_address" or "monitor_port" configured to override the health check endpoint. One thing to check is the subnet_id that was configured on the members in Octavia. The members (192.168.1.99:31765, etc.) must be reachable from that subnet_id. If no subnet_id was used when creating the members, it will use the VIP subnet of the load balancer. Sometimes users forget to specify the subnet_id, that a member should be reachable from, when creating the member on the load balancer. Michael On Mon, May 24, 2021 at 1:48 PM Luke Camilleri wrote: > > I have configured magnum with a service type of LoadBalancer and it is > successfully deploying an external LoadBalancer via Octavia. The problem > is that I cannot reach the public IP address but can see the entries in > the haproxy.log on the amphora instance and the log shows 0 SC-- > at the end of each entry when it is being accessed (via a browser for > example). 
> > So the Octavia part seems to be fine, the config shows the correct LB > --> listener --> pool --> member and the nodeports that the service > should be listening on (I am assuming that the same nodeports are also > used for the healthchecks) > > The haproxy.cfg in the amphora instance shows the below in the pool > members section: > > server 43834d2f-4e22-4065-b448-ddf0713f2ced 192.168.1.191:31765 > weight 1 check inter 60s fall 3 rise 3 > server 3a733c48-24dd-426e-8394-699a908121ee 192.168.1.36:31765 > weight 1 check inter 60s fall 3 rise 3 > server 8d093783-79c9-4094-b3a2-8d31b1c4567f 192.168.1.99:31765 > weight 1 check inter 60s fall 3 rise 3 > > and the status of the pool's members (# openstack loadbalancer member > list) is as follows: > > +------------------------------------------------------------+---------------------------+-----------------+-----------------------+---------------------------+ > | id | provisioning_status | address | protocol_port | > operating_status | > > 8d093783-79c9-4094-b3a2-8d31b1c4567f ACTIVE 192.168.1.99 > 31765 ERROR > 43834d2f-4e22-4065-b448-ddf0713f2ced ACTIVE 192.168.1.191 > 31765 ERROR > 3a733c48-24dd-426e-8394-699a908121ee ACTIVE 192.168.1.36 > 31765 ERROR > > From the below loadbalancer healthmonitor show command I can see that > the health checks are being done via TCP on the same port and can > confirm that the security group allows the nodeports range (ALLOW IPv4 > 30000-32767/tcp from 0.0.0.0/0) > > +---------------------+--------------------------------------+ > | Field | Value > +---------------------+--------------------------------------+ > | provisioning_status | ACTIVE > | type | TCP > | id | 86604638-27db-47d2-ad9c-0594564a44be > | operating_status | ONLINE > +---------------------+--------------------------------------+ > > $ kubectl describe service nginx-service > Name: nginx-service > Namespace: default > Labels: > Annotations: > Selector: app=nginx > Type: LoadBalancer > IP Families: > IP: 10.254.255.232 > IPs: > LoadBalancer Ingress: 185.89.239.217 > Port: 80/TCP > TargetPort: 8080/TCP > NodePort: 31765/TCP > Endpoints: 10.100.4.11:8080,10.100.4.12:8080,10.100.5.10:8080 > Session Affinity: None > External Traffic Policy: Cluster > Events: > Type Reason Age From Message > ---- ------ ---- ---- ------- > Normal EnsuringLoadBalancer 29m service-controller Ensuring load > balancer > Normal EnsuredLoadBalancer 28m service-controller Ensured load > balancer > > The Kubernetes deployment file: > > apiVersion: apps/v1 > kind: Deployment > metadata: > name: nginx-deployment > labels: > app: nginx > spec: > replicas: 3 > selector: > matchLabels: > app: nginx > template: > metadata: > labels: > app: nginx > spec: > containers: > - name: nginx > image: nginx:latest > ports: > - containerPort: 8080 > --- > apiVersion: v1 > kind: Service > metadata: > name: nginx-service > spec: > selector: > app: nginx > type: LoadBalancer > ports: > - protocol: TCP > port: 80 > targetPort: 8080 > > Does ayone have any pointers as to why the amphora is not able to reach > the nodeports of the kubernetes workers? > > Thanks in advance > From iwienand at redhat.com Tue May 25 03:22:41 2021 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 25 May 2021 13:22:41 +1000 Subject: [tripleo][ci] mirror issues failing jobs In-Reply-To: References: Message-ID: On Mon, May 24, 2021 at 02:26:10PM -0600, Wesley Hayutin wrote: 65;6401;1c> something is going on w/ either centos or the infra mirrors... I suspect > centos atm. 
Just FYI, logs of all mirroring processes are available at https://static.opendev.org/mirror/logs/ (centos is under "rsync-mirrors"). This is all driven by https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/mirror-update If anything ever seems out of sync, these are the places to start. -i From mrunge at matthias-runge.de Tue May 25 06:18:39 2021 From: mrunge at matthias-runge.de (Matthias Runge) Date: Tue, 25 May 2021 08:18:39 +0200 Subject: [telemetry] Retire panko In-Reply-To: References: <035722a2-7860-6469-be83-240aa4a72ff3@matthias-runge.de> Message-ID: On Mon, May 10, 2021 at 09:33:33AM +0200, Matthias Runge wrote: > On Tue, Apr 27, 2021 at 11:29:37AM +0200, Matthias Runge wrote: > > Hi there, > > > > over the past couple of cycles, we have seen decreasing interest on panko. > > Also it has some debts, which were just carried over from the early days. > > > > We discussed over at the PTG and didn't really found a reason to keep it > > alive or included under OpenStack. We are tracking the effort of deprecating panko https://review.opendev.org/q/topic:%22retire_panko%22+(status:open%20OR%20status:merged) Matthias -- Matthias Runge From suzhengwei at inspur.com Tue May 25 07:10:53 2021 From: suzhengwei at inspur.com (=?utf-8?B?U2FtIFN1ICjoi4/mraPkvJ8p?=) Date: Tue, 25 May 2021 07:10:53 +0000 Subject: =?utf-8?B?562U5aSNOiBbTm92YV0gTWVldGluZyB0aW1lIHBvbGw=?= In-Reply-To: References: Message-ID: <56c234f49e7c41bb86a86af824f41187@inspur.com> Hi, gibi: I'm very sorry for respone later. A meeting around 8:00 UTC seems very appropriate to us. It is afternoon work time in East Asian when 8:00 UTC. Now my colleague, have some work on Cyborg across with Nova, passthroug device, TPM and so on. If they can join the irc meeting talking with the community , it will be much helpful. -----邮件原件----- 发件人: Balazs Gibizer [mailto:balazs.gibizer at est.tech] 发送时间: 2021年5月14日 14:12 收件人: Sam Su (苏正伟) 抄送: alifshit at redhat.com; openstack-discuss at lists.openstack.org 主题: Re: [Nova] Meeting time poll On Fri, May 14, 2021 at 01:23, Sam Su (苏正伟) wrote: > From: Sam Su (苏正伟) > Sent: Friday, May 14, 2021 03:23 > To: alifshit at redhat.com > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [Nova] Meeting time poll > > Hi, Nova team: Hi Sam! > There are many asian developers for Openstack community. I > found the current IRC time of Nova is not friendly to them, especially > to East Asian. > If they > can take part in the IRC meeting, the Nova may have more developers. > Of > cource, Central Europe and NA West Coast is firstly considerable. If > the team could schedule the meeting once per month, time suitable for > asians, more people would participate in the meeting discussion. You have a point. In the past Nova had alternating meeting time slots one for EU+NA and one for the NA+Asia timezones. Our experience was that the NA+Asia meeting time slot was mostly lacking participants. So we merged the two slots. But I can imagine that the situation has changed since and there might be need for an alternating meeting again. We can try what you suggest and do an Asia friendly meeting once a month. The next question is what time you would like to have that meeting. Or more specifically which part of the nova team you would like to meet more? * Do a meeting around 8:00 UTC to meet Nova devs from the EU * Do a meeting around 0:00 UTC to meet Nova devs from North America If we go for the 0:00 UTC time slot then I need somebody to chair that meeting as I'm from the EU. 
Alternatively to having a formal meeting I can offer to hold a free style office hour each Thursday 8:00 UTC in #openstack-nova. I made the same offer when we moved the nova meeting to be a non alternating one. But honestly I don't remember ever having discussion happening specifically due to that office hour in #openstack-nova. Cheers, gibi p.s.: the smime in your mail is not really mailing list friendly. Your mail does not appear properly in the archive. -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3606 bytes Desc: not available URL: From Albert.Shih at obspm.fr Tue May 25 09:43:27 2021 From: Albert.Shih at obspm.fr (Albert Shih) Date: Tue, 25 May 2021 11:43:27 +0200 Subject: [victoria] Reboot problem In-Reply-To: References: Message-ID: Le 22/05/2021 à 10:27:52-0400, Laurent Dumont a écrit Hi, > I would recommend turning up the debug logs for nova on the compute and trying > again. It could be something where the ISCSI session is not properly closed > when you shutdown the VM and it creates issues when the VM is started again. Thanks for your answer. I find the problem, and «a» solution. Not sure it's the good one but it's work. It seem each time nova shutdown the instance and try to restart the instance it's unable to find the volume. So I put in /etc/multipath.conf defaults { user_friendly_names no } event I don't use multipath. But with that. It's working fine. Regards -- Albert SHIH Observatoire de Paris xmpp: jas at obspm.fr Heure local/Local time: Tue May 25 11:41:00 AM CEST 2021 From whayutin at redhat.com Tue May 25 12:57:28 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 25 May 2021 06:57:28 -0600 Subject: [tripleo][ci] mirror issues failing jobs In-Reply-To: References: Message-ID: On Mon, May 24, 2021 at 9:22 PM Ian Wienand wrote: > On Mon, May 24, 2021 at 02:26:10PM -0600, Wesley Hayutin wrote: > 65;6401;1c> something is going on w/ either centos or the infra mirrors... > I suspect > > centos atm. > > Just FYI, logs of all mirroring processes are available at > > https://static.opendev.org/mirror/logs/ > > (centos is under "rsync-mirrors"). > > This is all driven by > > > https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/mirror-update > > If anything ever seems out of sync, these are the places to start. > > -i > > Thanks Clark and Ian! I took some notes on those links. Things are recovering but it's still a little choppy atm, but improving. TripleO folks.. probably want to keep their +2 / wf to a minimum today until we're in the clear. Thanks all! -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue May 25 13:26:49 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 25 May 2021 08:26:49 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 27th at 1500 UTC Message-ID: <179a3b4db35.de0a1f7d210013.1168818962286123773@ghanshyammann.com> Hello Everyone, Technical Committee's next weekly meeting is scheduled for May 27th at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, May 26th, at 2100 UTC. 
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From mkopec at redhat.com Tue May 25 13:56:03 2021 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 25 May 2021 15:56:03 +0200 Subject: [interop] Interop Testing Guidelines In-Reply-To: <59BD4681-3D7C-4B45-8C2B-82445B042443@getmailspring.com> References: <59BD4681-3D7C-4B45-8C2B-82445B042443@getmailspring.com> Message-ID: Hi Jimmy, there are newer guidelines, see: * the latest one: https://opendev.org/osf/interop/src/branch/master/2020.11.json * the one before: https://opendev.org/osf/interop/src/branch/master/2020.06.json and etc ... They aren't in a directory as they were until 2016. I can't tell you why the format change happened, I wasn't around at that time. Anyway, new guidelines are still created approx twice a year. Add-ons guidelines have been recently added as well, which refstack server presents too (see OpenStack Marketing Programs list): https://refstack.openstack.org/#/ Currently there is an ongoing effort to make sure that we track all the relevant tests. We have also reached out to the teams during the Xena PTG and were asking if there are any new tests worth being included in the next guideline. On Mon, 24 May 2021 at 19:01, Jimmy McArthur wrote: > Hi all - > > I noticed that the most recent testing guidelines [1] are from 2016. Is > there a plan to update those to the 2020 guidelines? They still mention > Chris Hoge, who is no longer in the community, and I'm assuming the > guidelines there are outdated as well. > > Cheers, > Jimmy > > https://opendev.org/osf/interop/src/branch/master/2016.08/procedure.rst > -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Tue May 25 14:16:35 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 25 May 2021 10:16:35 -0400 Subject: [cinder] reminder: this week's meeting in video+IRC Message-ID: Quick reminder that this week's Cinder team meeting on Wednesday 26 May, being the final meeting of the month, will be held in both videoconference and IRC at the regularly scheduled time of 1400 UTC. These are the video meeting rules we've agreed to: * Everyone will keep IRC open during the meeting. * We'll take notes in IRC to leave a record similar to what we have for our regular IRC meetings. * Some people are more comfortable communicating in written English. So at any point, any attendee may request that the discussion of the current topic be conducted entirely in IRC. * The meeting will be recorded. connection info: https://bluejeans.com/3228528973 meeting agenda: https://etherpad.opendev.org/p/cinder-xena-meetings cheers, brian From gagehugo at gmail.com Tue May 25 14:32:44 2021 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 25 May 2021 09:32:44 -0500 Subject: [openstack-helm] Meeting Cancelled Message-ID: Hey team, Since there are no agenda items [0] for the IRC meeting today, the meeting is cancelled. Our next meeting will be June 1st. Thanks [0] https://etherpad.opendev.org/p/openstack-helm-weekly-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Tue May 25 15:20:28 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 25 May 2021 11:20:28 -0400 Subject: [cinder] priority reviews for milestone -1 Message-ID: <320f22c3-dca0-14bc-66f1-b4b96521edc5@gmail.com> Xena milestone-1 is this week. 
One of the Cinder project priorities for M-1 is the long-awaited removal of the Block Storage API v2 and its support in the python-cinderclient. Patches are available for your reviewing pleasure: https://review.opendev.org/q/topic:%22drop-v2%22+status:open cheers, brian From jimmy at openstack.org Tue May 25 16:21:47 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 25 May 2021 11:21:47 -0500 Subject: [interop] Interop Testing Guidelines In-Reply-To: References: Message-ID: Hey Martin, Apologies my message was confusing. I'm aware of the latest guidelines, but I was referring to the Procedures around the Guidelines that haven't been updated: https://opendev.org/osf/interop/src/branch/master/2016.08/procedure.rst My understanding is the instructions differ there from the latest tests and it's where we point people from openstack.org/interop. Is there a different place we should be pointing? Thank you, Jimmy On May 25 2021, at 8:56 am, Martin Kopec wrote: > Hi Jimmy, > > there are newer guidelines, see: > * the latest one: https://opendev.org/osf/interop/src/branch/master/2020.11.json > * the one before: https://opendev.org/osf/interop/src/branch/master/2020.06.json > and etc ... > > > They aren't in a directory as they were until 2016. I can't tell you why the format change happened, I wasn't around at that time. > > Anyway, new guidelines are still created approx twice a year. Add-ons guidelines have been recently added as well, which refstack server presents too (see OpenStack Marketing Programs list): > https://refstack.openstack.org/#/ > > Currently there is an ongoing effort to make sure that we track all the relevant tests. We have also reached out to the teams during the Xena PTG and were asking if there are any new tests worth being included in the next guideline. > > On Mon, 24 May 2021 at 19:01, Jimmy McArthur wrote: > > Hi all - > > > > I noticed that the most recent testing guidelines [1] are from 2016. Is there a plan to update those to the 2020 guidelines? They still mention Chris Hoge, who is no longer in the community, and I'm assuming the guidelines there are outdated as well. > > Cheers, > > Jimmy > > > > https://opendev.org/osf/interop/src/branch/master/2016.08/procedure.rst > > -- > Martin Kopec > Senior Software Quality Engineer > Red Hat EMEA > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Tue May 25 16:23:39 2021 From: geguileo at redhat.com (Gorka Eguileor) Date: Tue, 25 May 2021 18:23:39 +0200 Subject: [kolla][kolla-absible][cinder][iscsi] In-Reply-To: <098E9026-3E35-4294-99D5-97A99CC9AC13@univ-grenoble-alpes.fr> References: <098E9026-3E35-4294-99D5-97A99CC9AC13@univ-grenoble-alpes.fr> Message-ID: <20210525162339.decbc3grknpr36lr@localhost> On 21/05, Franck VEDEL wrote: > Hello. > First, sorry…poor english… so it’s a google translation. I hope you could understand my problem. > > Following the installation (with kolla ansiible) of an openstack (wallaby) on a physical server (under centos8), we were able to see all the possibilities offered in our teaching by Openstack. (i’m working in a french university). > > For various reasons, we need to extend this manipulation. 
We will have 3 nodes (dell R740) and a Dell bay for storage.(dell compellent) > > After having mounted a first test with 3 servers (T340), and put the storage (LVM) on node 3 (without the bay therefore) to test whether we had understood certain things correctly (in particular the parameter of the « multinode" file and that of « globals.xml"), we want to test with the Compellent bay. > Hi, I have never used kolla ansible, so I can only offer general Cinder pointers. > My question is as follows: knowing that the 3 nodes are three identical servers, in the multinode file, how to configure [storage]… should one of the 3 servers be put, add the iscsid docker… or else put the IP of the bay . I admit that this aspect is problematic for me. If your storage is iSCSI, then you will need iscsid on ALL your nodes. Compute nodes need it to attach volumes to instances, and controller nodes where Cinder is running will need it for create volume from image and some migration/retype operations. > what about « enable_iscsii » and « enable_backends_lvm » parameters ? > It's important to realize that your 2 scenarios (LVM & Dell) are quite different. LVM storage is local to the host where cinder is working (no HA option is possible), but in the Dell case storage is external, so you can do HA. According to the docs [1] for the Dell storage you'll need to use "enable_cinder_backend_iscsi", but for LVM you should use "enable_cinder_backend_lvm" instead. > In a configuration like mine, would you put a "controller" or 2 or 3? It depends on what you want to do, if those 3 nodes are only going to be used for the controllers, then I would recommend Cinder to be deployed on the 3 nodes for the Dell case, and only on 1 for the LVM case. If the 3 nodes are also going to be used for computing, then you have to decide what is best, having more resources available for your VMs (only 1 cinder node) or have better resiliency (> 1 cinder node). > Should we instead put an LVM on one of the servers and with iscsi make this LVM point the bay? That's more limiting, so I would only go that way if the storage bay was not supported by Cinder. > > Maybe these are silly questions, I'm not sure. This use of the bay is new and I don't know how best to do it. Between cinder, lvm, iscsi, the bay, the multinode file and the options of globals.xml, it is still not easy at first > > > Franck VEDEL > I don't know if your storage array has multiple interfaces, but if it does you'll want to enable multipathing, with "enable_multipathd". Cheers, Gorka. [1]: https://github.com/openstack/kolla-ansible/blob/stable/wallaby/doc/source/reference/storage/cinder-guide.rst#cinder-backend-with-external-iscsi-storage From gmann at ghanshyammann.com Tue May 25 19:04:55 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 25 May 2021 14:04:55 -0500 Subject: [all][infra] Topic change request for the retired projects IRC channel In-Reply-To: <20210524224211.wabz72xi52qamp7p@yuggoth.org> References: <179873b6816.10baeb36a28384.6822408342638897985@ghanshyammann.com> <20210524224211.wabz72xi52qamp7p@yuggoth.org> Message-ID: <179a4ea67aa.d25b0f3a12610.2545883089920632556@ghanshyammann.com> ---- On Mon, 24 May 2021 17:42:11 -0500 Jeremy Stanley wrote ---- > On 2021-05-19 19:44:48 -0500 (-0500), Ghanshyam Mann wrote: > > We have retired the few projects in OpenStack and their IRC > > channel are still there. I chatted with fungi about it on TC > > channel and he suggested that unregistering these channels is not > > a good idea or recommendation. 
> > > > In that case, can we change the Topic of these channels to > > something saying that "XYZ project is retired and this channel is > > also not active, contact openstack-discuss ML for any query" > > > > Below is the list of retired project channels: > > > > #openstack-karbor > > #openstack-searchlight > > #openstack-qinling > > #openstack-tricircle > > #congress > > I've done this just now. Note that the old Tricircle channel was > just #tricircle and not #openstack-tricircle. Thanks for that, it is fine now. For tricircle, it was #openstack-tricircle only, but I saw you already cleaned that up last year, so all good here too - http://lists.openstack.org/pipermail/openstack-discuss/2020-March/013257.html -gmann > -- > Jeremy Stanley >
From fungi at yuggoth.org Tue May 25 19:56:28 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 25 May 2021 19:56:28 +0000 Subject: [all][infra] Topic change request for the retired projects IRC channel In-Reply-To: <179a4ea67aa.d25b0f3a12610.2545883089920632556@ghanshyammann.com> References: <179873b6816.10baeb36a28384.6822408342638897985@ghanshyammann.com> <20210524224211.wabz72xi52qamp7p@yuggoth.org> <179a4ea67aa.d25b0f3a12610.2545883089920632556@ghanshyammann.com> Message-ID: <20210525195627.pbli5bo64kez5d6h@yuggoth.org> On 2021-05-25 14:04:55 -0500 (-0500), Ghanshyam Mann wrote: [...] > For tricircle, it was #openstack-tricircle only, but I saw you already > cleaned that up last year, so all good here too [...] We might have been logging it at one point in time, but it was never registered so there was no topic to change. However #tricircle was registered and had a persistent topic set, so I updated that. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: 
From luke.camilleri at zylacomputing.com Tue May 25 20:11:40 2021 From: luke.camilleri at zylacomputing.com (Luke Camilleri) Date: Tue, 25 May 2021 22:11:40 +0200 Subject: [victoria][magnum][octavia] in amphora In-Reply-To: References: Message-ID: <40169e6b-e861-2ae7-5328-2af5a3f71983@zylacomputing.com> Hi Michael, and thanks for your reply.
Below please find my answers: Yes the load balancer is healthy and the health check of the member endpoint in failing Yes, none of the members have monitor_address or monitor_port configured to override the health check endpoint (below details of one of the members) | Field               | Value +---------------------+-------------------------------------------------------------------------------------------------------------------+ | address             | 192.168.1.99 | admin_state_up      | True | created_at          | 2021-05-24T20:07:56 | id                  | 8d093783-79c9-4094-b3a2-8d31b1c4567f | name                | member_0_k8s-c1-final-yrlns2q7qo2w-node-1_kube_service_f48f5572-0011-4b66-b9ab-63190d0c00b3_default_nginx-service | operating_status    | ERROR | project_id          | 35a0fa65de1741619709485c5f6d989b | protocol_port       | 31765 | provisioning_status | ACTIVE | subnet_id           | 19d73049-4bf7-455c-b278-3cba451102fb | updated_at          | 2021-05-24T20:08:51 | weight              | 1 | monitor_port        | None | monitor_address     | None | backup              | False The subnet ID shown in the above details for the member node are the same and the subnet ID assigned to this project as shown below: | Field                | Value                                | +----------------------+--------------------------------------+ | allocation_pools     | 192.168.1.10-192.168.1.254 | cidr                 | 192.168.1.0/24 | created_at           | 2021-03-24T20:14:03Z | description          | | dns_nameservers      | 8.8.4.4, 8.8.8.8 | dns_publish_fixed_ip | None | enable_dhcp          | True | gateway_ip           | 192.168.1.1 | host_routes          | | id                   | 19d73049-4bf7-455c-b278-3cba451102fb | ip_version           | 4 | ipv6_address_mode    | None | ipv6_ra_mode         | None | name                 | red-subnet-2 | network_id           | 9d2e17df-a93f-4709-941d-a6e8f98f5556 | prefix_length        | None | project_id           | 35a0fa65de1741619709485c5f6d989b | revision_number      | 1 | segment_id           | None | service_types        | | subnetpool_id        | None | tags | updated_at           | 2021-03-26T13:59:07Z I can SSH on an instance unrelated to the issue and from within SSH into all 3 members without any issues. The members are part of a Kubernetes cluster (created using Magnum) and both subnet and network have been specified during the cluster creation. the issue seems to be coming from the nodeport which is created by default and that should proxy the LoadBalancer requests to the clusterIP as I do not get any reply when querying the port with curl and since the health manager uses that port it is also failing I do not believe that the issue is actually from Octavia but from Magnum from what I can see and the nodeport functionality that gets created to have the amphora instance reach the nodeports Thanks in advance On 25/05/2021 02:12, Michael Johnson wrote: > Hi Luke, > > From the snippet you provided, it looks like the load balancer is > healthy and working as expected, but the health check of the member > endpoint is failing. > > I also see that the health monitor is configured for a type of TCP and > the members are configured for port 31765. This assumes that the > members do not have "monitor_address" or "monitor_port" configured to > override the health check endpoint. > > One thing to check is the subnet_id that was configured on the members > in Octavia. The members (192.168.1.99:31765, etc.) 
must be reachable > from that subnet_id. If no subnet_id was used when creating the > members, it will use the VIP subnet of the load balancer. > > Sometimes users forget to specify the subnet_id, that a member should > be reachable from, when creating the member on the load balancer. > > Michael > > On Mon, May 24, 2021 at 1:48 PM Luke Camilleri > wrote: >> I have configured magnum with a service type of LoadBalancer and it is >> successfully deploying an external LoadBalancer via Octavia. The problem >> is that I cannot reach the public IP address but can see the entries in >> the haproxy.log on the amphora instance and the log shows 0 SC-- >> at the end of each entry when it is being accessed (via a browser for >> example). >> >> So the Octavia part seems to be fine, the config shows the correct LB >> --> listener --> pool --> member and the nodeports that the service >> should be listening on (I am assuming that the same nodeports are also >> used for the healthchecks) >> >> The haproxy.cfg in the amphora instance shows the below in the pool >> members section: >> >> server 43834d2f-4e22-4065-b448-ddf0713f2ced 192.168.1.191:31765 >> weight 1 check inter 60s fall 3 rise 3 >> server 3a733c48-24dd-426e-8394-699a908121ee 192.168.1.36:31765 >> weight 1 check inter 60s fall 3 rise 3 >> server 8d093783-79c9-4094-b3a2-8d31b1c4567f 192.168.1.99:31765 >> weight 1 check inter 60s fall 3 rise 3 >> >> and the status of the pool's members (# openstack loadbalancer member >> list) is as follows: >> >> +------------------------------------------------------------+---------------------------+-----------------+-----------------------+---------------------------+ >> | id | provisioning_status | address | protocol_port | >> operating_status | >> >> 8d093783-79c9-4094-b3a2-8d31b1c4567f ACTIVE 192.168.1.99 >> 31765 ERROR >> 43834d2f-4e22-4065-b448-ddf0713f2ced ACTIVE 192.168.1.191 >> 31765 ERROR >> 3a733c48-24dd-426e-8394-699a908121ee ACTIVE 192.168.1.36 >> 31765 ERROR >> >> From the below loadbalancer healthmonitor show command I can see that >> the health checks are being done via TCP on the same port and can >> confirm that the security group allows the nodeports range (ALLOW IPv4 >> 30000-32767/tcp from 0.0.0.0/0) >> >> +---------------------+--------------------------------------+ >> | Field | Value >> +---------------------+--------------------------------------+ >> | provisioning_status | ACTIVE >> | type | TCP >> | id | 86604638-27db-47d2-ad9c-0594564a44be >> | operating_status | ONLINE >> +---------------------+--------------------------------------+ >> >> $ kubectl describe service nginx-service >> Name: nginx-service >> Namespace: default >> Labels: >> Annotations: >> Selector: app=nginx >> Type: LoadBalancer >> IP Families: >> IP: 10.254.255.232 >> IPs: >> LoadBalancer Ingress: 185.89.239.217 >> Port: 80/TCP >> TargetPort: 8080/TCP >> NodePort: 31765/TCP >> Endpoints: 10.100.4.11:8080,10.100.4.12:8080,10.100.5.10:8080 >> Session Affinity: None >> External Traffic Policy: Cluster >> Events: >> Type Reason Age From Message >> ---- ------ ---- ---- ------- >> Normal EnsuringLoadBalancer 29m service-controller Ensuring load >> balancer >> Normal EnsuredLoadBalancer 28m service-controller Ensured load >> balancer >> >> The Kubernetes deployment file: >> >> apiVersion: apps/v1 >> kind: Deployment >> metadata: >> name: nginx-deployment >> labels: >> app: nginx >> spec: >> replicas: 3 >> selector: >> matchLabels: >> app: nginx >> template: >> metadata: >> labels: >> app: nginx >> spec: >> containers: >> 
- name: nginx >> image: nginx:latest >> ports: >> - containerPort: 8080 >> --- >> apiVersion: v1 >> kind: Service >> metadata: >> name: nginx-service >> spec: >> selector: >> app: nginx >> type: LoadBalancer >> ports: >> - protocol: TCP >> port: 80 >> targetPort: 8080 >> >> Does ayone have any pointers as to why the amphora is not able to reach >> the nodeports of the kubernetes workers? >> >> Thanks in advance >> From whayutin at redhat.com Wed May 26 03:36:24 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 25 May 2021 21:36:24 -0600 Subject: [tripleo][ci] centos-stream-8 libvirt issue Message-ID: Greetings, This may be our first stream outage. \0/ Hooray for hitting things first upstream.. lolz FYI.. not 100% sure as it's late here, but I think I've captured what is happening. https://bugs.launchpad.net/tripleo/+bug/1929634 https://bugzilla.redhat.com/show_bug.cgi?id=1961558 Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Wed May 26 06:34:41 2021 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Wed, 26 May 2021 08:34:41 +0200 Subject: threat to freenode (where openstack irc hangs out) In-Reply-To: References: Message-ID: Hello all, I think we shouldn't let this one die in silence. Apparently, new policies are being enforcing within Freenode and, if by mistake, a channel promotes another IRC network in their topic, they will be taken by freenode staff - meaning loss of rights, loss of ownership, and topic will be changed. that's not what we can call "foss supporting" anymore imho. (Though, apparently, this was due to some miscommunication - so long for the transparency, stability and so on.) Reading the «explanation» on both sides[1][2], as well as latest freenode communication[3], I'd rather go for the OFTC thing, since it's (really) independent of the whole thing. True freenode is (was?) a central player for FOSS communities. But with all the screaming around, it might get harder to stick with them. Though, right now, the current privacy policy[4] doesn't reflect any weird use of our data (that is, as of 2021-05-26, 08:30am CET). Maybe a false flag? We'll probably never really know, but checking that privacy policy and related contents evolution might provide a good hint. Some of the other communities I'm following have implemented a bridge bot, basically copy-pasting content from one network to another. That way, ppl could be on OFTC (or anywhere else where we have a foot on), while primary thing would stick to freenode (and thus allowing a clean migration)? My 2c ;). Cheers, C. [1] https://freenode.net/news/freenode-is-foss [2] https://libera.chat/news/welcome-to-libera-chat [3] https://freenode.net/news/for-foss [4] https://freenode.net/policies On 5/14/21 5:05 PM, Chris Morgan wrote: > https://twitter.com/dmsimard/status/1393203159770804225?s=20 > > https://p.haavard.me/407 > > I have no independent validation of this. > > Chris > > -- > Chris Morgan > -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From patryk.jakuszew at gmail.com Wed May 26 07:21:38 2021 From: patryk.jakuszew at gmail.com (Patryk Jakuszew) Date: Wed, 26 May 2021 09:21:38 +0200 Subject: threat to freenode (where openstack irc hangs out) In-Reply-To: References: Message-ID: On Wed, 26 May 2021, 08:45 Cédric Jeanneret wrote: > Hello all, > > I think we shouldn't let this one die in silence. Apparently, new > policies are being enforcing within Freenode and, if by mistake, a > channel promotes another IRC network in their topic, they will be taken > by freenode staff - meaning loss of rights, loss of ownership, and topic > will be changed. > To be more specific, around 700 channels were just purged and moved to offtopic (double hash) namespace *just for having "libera" mentioned in its title.* It makes the case rather clear: Freenode is now indeed operated by hostile entity treating the once-popular communications platform as their own playground. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From patryk.jakuszew at gmail.com Wed May 26 07:38:07 2021 From: patryk.jakuszew at gmail.com (Patryk Jakuszew) Date: Wed, 26 May 2021 09:38:07 +0200 Subject: threat to freenode (where openstack irc hangs out) In-Reply-To: References: Message-ID: On Wed, 26 May 2021, 09:21 Patryk Jakuszew wrote: > On Wed, 26 May 2021, 08:45 Cédric Jeanneret wrote: > >> Hello all, >> >> I think we shouldn't let this one die in silence. Apparently, new >> policies are being enforcing within Freenode and, if by mistake, a >> channel promotes another IRC network in their topic, they will be taken >> by freenode staff - meaning loss of rights, loss of ownership, and topic >> will be changed. >> > > To be more specific, around 700 channels were just purged and moved to > offtopic (double hash) namespace *just for having "libera" mentioned in its > title.* > Sorry, "purged" is an incorrect term. What I actually meant is that a bot entered each channel containing "libera" in its title, changed the topic to something like this: 05:02:55 freenodecom set the topic: This channel has moved to ##channelname. The topic is in violation of freenode policy: https://freenode.net/policies and then created a new ##-prefixed channel. -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Wed May 26 07:50:42 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Wed, 26 May 2021 09:50:42 +0200 Subject: =?UTF-8?B?562U5aSNOg==?= [Nova] Meeting time poll In-Reply-To: <56c234f49e7c41bb86a86af824f41187@inspur.com> References: <56c234f49e7c41bb86a86af824f41187@inspur.com> Message-ID: Hi, On Tue, May 25, 2021 at 07:10, Sam Su (苏正伟) wrote: > Hi, gibi: > I'm very sorry for respone later. > A meeting around 8:00 UTC seems very appropriate to us. It is > afternoon work time in East Asian when 8:00 UTC. > Now my colleague, have some work on Cyborg across with Nova, > passthroug device, TPM and so on. If they can join the irc meeting > talking with the community , it will be much helpful. > @Sam: We discussed your request yesterday[1] and it seems that the team is not objecting against a monthly office hour in #openstack-nova around UTC 8 or UTC 9. But we did not agreed which day we should have it so I set up a poll[2]. @Team: As we discussed yesterday I opened a poll to agree on the day of the week and the exact start time for the Asia friendly office hours slot. 
Please vote in the poll[2] before next Tuesday. Cheers, gibi [1] http://eavesdrop.openstack.org/meetings/nova/2021/nova.2021-05-25-16.00.log.html#l-100 [2] https://doodle.com/poll/svrnmrtn6nnknzqp > > -----邮件原件----- > 发件人: Balazs Gibizer [mailto:balazs.gibizer at est.tech] > 发送时间: 2021年5月14日 14:12 > 收件人: Sam Su (苏正伟) > 抄送: alifshit at redhat.com; openstack-discuss at lists.openstack.org > 主题: Re: [Nova] Meeting time poll > > > > On Fri, May 14, 2021 at 01:23, Sam Su (苏正伟) > wrote: >> From: Sam Su (苏正伟) >> Sent: Friday, May 14, 2021 03:23 >> To: alifshit at redhat.com >> Cc: openstack-discuss at lists.openstack.org >> Subject: Re: [Nova] Meeting time poll >> >> Hi, Nova team: > > Hi Sam! > >> There are many asian developers for Openstack community. I >> found the current IRC time of Nova is not friendly to them, >> especially >> to East Asian. >> If they >> can take part in the IRC meeting, the Nova may have more developers. >> Of >> cource, Central Europe and NA West Coast is firstly considerable. If >> the team could schedule the meeting once per month, time suitable >> for >> asians, more people would participate in the meeting discussion. > > You have a point. In the past Nova had alternating meeting time slots > one for EU+NA and one for the NA+Asia timezones. Our experience was > that the NA+Asia meeting time slot was mostly lacking participants. > So we merged the two slots. But I can imagine that the situation has > changed since and there might be need for an alternating meeting > again. > > We can try what you suggest and do an Asia friendly meeting once a > month. The next question is what time you would like to have that > meeting. Or more specifically which part of the nova team you would > like to meet more? > > * Do a meeting around 8:00 UTC to meet Nova devs from the EU > > * Do a meeting around 0:00 UTC to meet Nova devs from North America > > If we go for the 0:00 UTC time slot then I need somebody to chair > that meeting as I'm from the EU. > > Alternatively to having a formal meeting I can offer to hold a free > style office hour each Thursday 8:00 UTC in #openstack-nova. I made > the same offer when we moved the nova meeting to be a non alternating > one. > But honestly I don't remember ever having discussion happening > specifically due to that office hour in #openstack-nova. > > Cheers, > gibi > > p.s.: the smime in your mail is not really mailing list friendly. > Your mail does not appear properly in the archive. > > > > From dtantsur at redhat.com Wed May 26 11:15:21 2021 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 26 May 2021 13:15:21 +0200 Subject: threat to freenode (where openstack irc hangs out) In-Reply-To: References: Message-ID: On Wed, May 26, 2021 at 8:44 AM Cédric Jeanneret wrote: > Hello all, > > I think we shouldn't let this one die in silence. Apparently, new > policies are being enforcing within Freenode and, if by mistake, a > channel promotes another IRC network in their topic, they will be taken > by freenode staff - meaning loss of rights, loss of ownership, and topic > will be changed. > FYI #rdo fell victim to this and is now on Libera. Dmitry > > that's not what we can call "foss supporting" anymore imho. (Though, > apparently, this was due to some miscommunication - so long for the > transparency, stability and so on.) > > Reading the «explanation» on both sides[1][2], as well as latest > freenode communication[3], I'd rather go for the OFTC thing, since it's > (really) independent of the whole thing. > > True freenode is (was?) 
a central player for FOSS communities. But with > all the screaming around, it might get harder to stick with them. > > Though, right now, the current privacy policy[4] doesn't reflect any > weird use of our data (that is, as of 2021-05-26, 08:30am CET). Maybe a > false flag? We'll probably never really know, but checking that privacy > policy and related contents evolution might provide a good hint. > > Some of the other communities I'm following have implemented a bridge > bot, basically copy-pasting content from one network to another. That > way, ppl could be on OFTC (or anywhere else where we have a foot on), > while primary thing would stick to freenode (and thus allowing a clean > migration)? > > My 2c ;). > > Cheers, > > C. > > [1] https://freenode.net/news/freenode-is-foss > [2] https://libera.chat/news/welcome-to-libera-chat > [3] https://freenode.net/news/for-foss > [4] https://freenode.net/policies > > On 5/14/21 5:05 PM, Chris Morgan wrote: > > https://twitter.com/dmsimard/status/1393203159770804225?s=20 > > > > https://p.haavard.me/407 > > > > I have no independent validation of this. > > > > Chris > > > > -- > > Chris Morgan > > > -- > Cédric Jeanneret (He/Him/His) > Sr. Software Engineer - OpenStack Platform > Deployment Framework TC > Red Hat EMEA > https://www.redhat.com/ > > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed May 26 11:34:04 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 26 May 2021 07:34:04 -0400 Subject: threat to freenode (where openstack irc hangs out) In-Reply-To: References: Message-ID: Yup. I think it’s time to get out. We said we’d move if we’re running into a problem or changes happen. This is the case now. I’ve setup a small Google Meet for today to discuss things because quite honestly, I don’t think the IRC is viable for this discussion in terms of how urgent this needs to be done and the potential for interruption. The meeting is in 3 hours exactly. OpenStack network migration Wednesday, May 26 · 10:30–11 AM EST Google Meet joining info Video call link: https://meet.google.com/xqw-zkhw-wak Or dial: +1 226-213-8281 PIN: 5116612309845 More phone numbers: https://tel.meet/xqw-zkhw-wak?pin=5116612309845 I would appreciate if our community joined so we can get an actionable plan as quickly as possible. Thanks for everyone’s comments and I hope that we’re going to have a lot of hands to help us, because there will be a non trivial amount of work and a few decisions to be made. I’ll add an etherpad for this discussion shortly. On Wed, May 26, 2021 at 7:20 AM Dmitry Tantsur wrote: > > > On Wed, May 26, 2021 at 8:44 AM Cédric Jeanneret > wrote: > >> Hello all, >> >> I think we shouldn't let this one die in silence. Apparently, new >> policies are being enforcing within Freenode and, if by mistake, a >> channel promotes another IRC network in their topic, they will be taken >> by freenode staff - meaning loss of rights, loss of ownership, and topic >> will be changed. >> > > FYI #rdo fell victim to this and is now on Libera. > > Dmitry > > >> >> that's not what we can call "foss supporting" anymore imho. (Though, >> apparently, this was due to some miscommunication - so long for the >> transparency, stability and so on.) 
>> >> Reading the «explanation» on both sides[1][2], as well as latest >> freenode communication[3], I'd rather go for the OFTC thing, since it's >> (really) independent of the whole thing. >> >> True freenode is (was?) a central player for FOSS communities. But with >> all the screaming around, it might get harder to stick with them. >> >> Though, right now, the current privacy policy[4] doesn't reflect any >> weird use of our data (that is, as of 2021-05-26, 08:30am CET). Maybe a >> false flag? We'll probably never really know, but checking that privacy >> policy and related contents evolution might provide a good hint. >> >> Some of the other communities I'm following have implemented a bridge >> bot, basically copy-pasting content from one network to another. That >> way, ppl could be on OFTC (or anywhere else where we have a foot on), >> while primary thing would stick to freenode (and thus allowing a clean >> migration)? >> >> My 2c ;). >> >> Cheers, >> >> C. >> >> [1] https://freenode.net/news/freenode-is-foss >> [2] https://libera.chat/news/welcome-to-libera-chat >> [3] https://freenode.net/news/for-foss >> [4] https://freenode.net/policies >> >> On 5/14/21 5:05 PM, Chris Morgan wrote: >> > https://twitter.com/dmsimard/status/1393203159770804225?s=20 >> > >> > https://p.haavard.me/407 >> > >> > I have no independent validation of this. >> > >> > Chris >> > >> > -- >> > Chris Morgan > >> >> -- >> Cédric Jeanneret (He/Him/His) >> Sr. Software Engineer - OpenStack Platform >> Deployment Framework TC >> Red Hat EMEA >> https://www.redhat.com/ >> >> > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -- Mohammed Naser VEXXHOST, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed May 26 11:45:20 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 26 May 2021 13:45:20 +0200 Subject: threat to freenode (where openstack irc hangs out) In-Reply-To: References: Message-ID: On Wed, May 26, 2021 at 1:37 PM Mohammed Naser wrote: > > Yup. I think it’s time to get out. We said we’d move if we’re running into a problem or changes happen. This is the case now. Thank you. Finally some backup from another TC member. > I’ve setup a small Google Meet for today to discuss things because quite honestly, I don’t think the IRC is viable for this discussion in terms of how urgent this needs to be done and the potential for interruption. The meeting is in 3 hours exactly. > > OpenStack network migration > Wednesday, May 26 · 10:30–11 AM EST > Google Meet joining info > Video call link: https://meet.google.com/xqw-zkhw-wak > Or dial: +1 226-213-8281 PIN: 5116612309845 > More phone numbers: https://tel.meet/xqw-zkhw-wak?pin=5116612309845 I'm available. BTW, this is 14:30-15 UTC -yoctozepto From mnaser at vexxhost.com Wed May 26 12:19:24 2021 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 26 May 2021 08:19:24 -0400 Subject: [all] OpenStack IRC Message-ID: Hi everyone, We'd like to invite the parties of the community who are interested in the current IRC situation to participate in the upcoming (emergency) meeting: https://etherpad.opendev.org/p/openstack-irc Thank you, Mohammed -- Mohammed Naser VEXXHOST, Inc. 
From balazs.gibizer at est.tech Wed May 26 13:45:33 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Wed, 26 May 2021 15:45:33 +0200 Subject: [nova] spec review day In-Reply-To: References: Message-ID: I saw high amount of great feedback on the open nova specs during the review day. Thank you! Cheers, gibi On Thu, May 13, 2021 at 18:32, Balazs Gibizer wrote: > Hi, > > During the today's meeting it came up that we should have a spec > review day around M1 (and later one more before M2). As M1 is 27th of > May, I propose to do a spec review day on 25th of May, which is a > Tuesday. Let me know if the timing does not good for you. > > The rules are the usual. Let's use this day to focus on open specs, > trying to reach agreement on as many thing as possible with close > cooperation during the day. > > Cheers, > gibi > > > From skaplons at redhat.com Wed May 26 13:52:22 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 26 May 2021 15:52:22 +0200 Subject: [neutron] Update of the neutron-lib-commit group in the Gerrit Message-ID: <2468976.ZXH9Hed9jz@p1> Hi, In Gerrit we have neutron-lib-core group and members of that group have permission to approve neutron-lib patches in master branch. Until today members of the neutron-drivers-core group were members of that group but I added also neutron-core group there. So now every neutron-core team member can approve neutron-lib patches as well. I think that in current state of our teams and current state of the neutron- lib project, it's perfectly fine and we can trust all our neutron cores and let them approve neutron-lib patches as well. But if You have anything against that, please let me know so we can discuss it on ML or on IRC :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From senrique at redhat.com Wed May 26 14:00:08 2021 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 26 May 2021 11:00:08 -0300 Subject: [cinder] Bug deputy report for week of 2021-05-26 Message-ID: Hello, This is a bug report from 2021-05-19 to 2021-05-26. You're welcome to join the next Cinder Bug Meeting later today. Weekly on Wednesday at 1500 UTC on #openstack-cinder Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Critical:- High: - https://bugs.launchpad.net/cinder/+bug/1929128 "Can't migrate plain encrypted volume - keymanager error". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1928953 " Volume uuid of deleted volume is visible in volume list while restoring volume backup in horizon". Unassigned. Medium: - https://bugs.launchpad.net/cinder/+bug/1929606 "Create x Update Volume Metadata in Cinder Backend". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1928948 "Attach volumes from different SP to a single vm, the zone ports will be replaced". Unassigned. Low: - https://bugs.launchpad.net/cinder/+bug/1929220 "doc: Volume encryption supported by the key manager in cinder" Incomplete: - https://bugs.launchpad.net/cinder/+bug/1929429 "PowerMax - check for SG is not case insensitive". Unassigned. - https://bugs.launchpad.net/cinder/+bug/1929354 "Cinder service is disabled while backup creation is still successful". Unassigned. Cheers, Sofia -- L. 
Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack.org at sodarock.com Wed May 26 14:50:53 2021 From: openstack.org at sodarock.com (John Villalovos) Date: Wed, 26 May 2021 07:50:53 -0700 Subject: threat to freenode (where openstack irc hangs out) In-Reply-To: References: Message-ID: Related article on LWN: https://news.ycombinator.com/item?id=27289071 Discussion on HN: https://news.ycombinator.com/item?id=27289071 On Fri, May 14, 2021 at 8:12 AM Chris Morgan wrote: > https://twitter.com/dmsimard/status/1393203159770804225?s=20 > https://p.haavard.me/407 > > I have no independent validation of this. > > Chris > > -- > Chris Morgan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Wed May 26 15:20:13 2021 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 26 May 2021 11:20:13 -0400 Subject: threat to freenode (where openstack irc hangs out) In-Reply-To: References: Message-ID: "I think we shouldn't let this one die in silence." Just for completeness, note that this discussion came back to the openstack-discuss mailing list with the subject "Freenode and libera.chat" with a lot of discussion and then a few other email threads, so the issue was not being ignored, it just sadly fragmented across multiple threads. Chris On Wed, May 26, 2021 at 10:52 AM John Villalovos wrote: > Related article on LWN: https://news.ycombinator.com/item?id=27289071 > > Discussion on HN: https://news.ycombinator.com/item?id=27289071 > > On Fri, May 14, 2021 at 8:12 AM Chris Morgan wrote: > >> https://twitter.com/dmsimard/status/1393203159770804225?s=20 >> https://p.haavard.me/407 >> >> I have no independent validation of this. >> >> Chris >> >> -- >> Chris Morgan >> > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Wed May 26 15:41:48 2021 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 26 May 2021 11:41:48 -0400 Subject: Freenode and libera.chat In-Reply-To: References: <20210521141430.u73tc552nvwbzpjh@yuggoth.org> <20210523140551.q2p3s3e7lzeiqs7q@yuggoth.org> Message-ID: I looked over libera chat's tweet stream ( https://twitter.com/liberachat/with_replies), from the retweets there seems to be a clear pattern of groups who dislike what's happening over at freenode jumping predominantly to libera. Perhaps that's the right thing for openstack too (I had previously said let's go to OFTC since Jeremy has done some prep work), but let's go somewhere other than fn stat On Mon, May 24, 2021 at 9:29 AM Julia Kreger wrote: > On Sun, May 23, 2021 at 7:09 AM Jeremy Stanley wrote: > > > > On 2021-05-21 14:14:30 +0000 (+0000), Jeremy Stanley wrote: > > > On 2021-05-21 12:36:29 +0100 (+0100), Erno Kuvaja wrote: > > > [...] > > > > Looking at the movement over the past day, it seems like we're the > > > > only hesitant party here. Rest of the communities have either > > > > moved to libera.chat or OFTC. I'd strongly advise us to do the > > > > same before things turn sour. > > > > > > OpenStack isn't the only community taking a careful and measured > > > approach to the decision. 
Ansible deferred deciding what to do about > > > their IRC channels until Wednesday of this coming week: > > > > > > https://github.com/ansible-community/community-topics/issues/19 > > > > In a similar vein, I've noticed that #python, #python-dev, #pypa and > > so on haven't moved off Freenode yet, though the topic in #python > > suggests there's some ongoing discussion to determine whether they > > should. Unfortunately it doesn't say where that's being discussed > > though, maybe on the Python community mailing lists or Discourse, > > however cursory searches I've performed turn up nothing. > > -- > > Jeremy Stanley > > I suspect they, like everyone else who has seen some of the latest > rules changes[0] and a report[1] of abuses of power, are looking at > treading lightly in order to do the best thing for their community. I > suspect we're going to see a mass migration to other platforms or > tools, regardless at this point in time. The rules changes are just > going to make it more difficult to keep the existing channels as > redirects. > > I firmly believe this is no longer a matter of should, but we now have > an imperative to ensure community communication continuity. If the > higher level project doesn't wish to come to quick consensus, then I > believe individual projects will make their own decisions and we'll > end up fragmenting the communication channels until things settle > down. > > [0]: > https://github.com/freenode/web-7.0/pull/513/commits/2037126831a84c57f978268f090fc663cf43ed7a#diff-0e382b024f696a3b7a0ff3bce24ae3166cc6f383d059c7cc61e0a3ccdeed522c > [1]: https://www.devever.net/~hl/freenode_abuse > > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpeacock at redhat.com Wed May 26 15:57:10 2021 From: dpeacock at redhat.com (David Peacock) Date: Wed, 26 May 2021 11:57:10 -0400 Subject: Freenode and libera.chat In-Reply-To: References: <20210521141430.u73tc552nvwbzpjh@yuggoth.org> <20210523140551.q2p3s3e7lzeiqs7q@yuggoth.org> Message-ID: It's worth highlighting that OFTC doesn't support SASL, and also many major FOSS projects have bet the farm on Libera. Do we want to be OFTC just because, or do we want to go where all the major projects have gone? David On Wed, May 26, 2021 at 11:47 AM Chris Morgan wrote: > I looked over libera chat's tweet stream ( > https://twitter.com/liberachat/with_replies), from the retweets there > seems to be a clear pattern of groups who dislike what's happening over at > freenode jumping predominantly to libera. Perhaps that's the right thing > for openstack too (I had previously said let's go to OFTC since Jeremy has > done some prep work), but let's go somewhere other than fn stat > > On Mon, May 24, 2021 at 9:29 AM Julia Kreger > wrote: > >> On Sun, May 23, 2021 at 7:09 AM Jeremy Stanley wrote: >> > >> > On 2021-05-21 14:14:30 +0000 (+0000), Jeremy Stanley wrote: >> > > On 2021-05-21 12:36:29 +0100 (+0100), Erno Kuvaja wrote: >> > > [...] >> > > > Looking at the movement over the past day, it seems like we're the >> > > > only hesitant party here. Rest of the communities have either >> > > > moved to libera.chat or OFTC. I'd strongly advise us to do the >> > > > same before things turn sour. >> > > >> > > OpenStack isn't the only community taking a careful and measured >> > > approach to the decision. 
Ansible deferred deciding what to do about >> > > their IRC channels until Wednesday of this coming week: >> > > >> > > https://github.com/ansible-community/community-topics/issues/19 >> > >> > In a similar vein, I've noticed that #python, #python-dev, #pypa and >> > so on haven't moved off Freenode yet, though the topic in #python >> > suggests there's some ongoing discussion to determine whether they >> > should. Unfortunately it doesn't say where that's being discussed >> > though, maybe on the Python community mailing lists or Discourse, >> > however cursory searches I've performed turn up nothing. >> > -- >> > Jeremy Stanley >> >> I suspect they, like everyone else who has seen some of the latest >> rules changes[0] and a report[1] of abuses of power, are looking at >> treading lightly in order to do the best thing for their community. I >> suspect we're going to see a mass migration to other platforms or >> tools, regardless at this point in time. The rules changes are just >> going to make it more difficult to keep the existing channels as >> redirects. >> >> I firmly believe this is no longer a matter of should, but we now have >> an imperative to ensure community communication continuity. If the >> higher level project doesn't wish to come to quick consensus, then I >> believe individual projects will make their own decisions and we'll >> end up fragmenting the communication channels until things settle >> down. >> >> [0]: >> https://github.com/freenode/web-7.0/pull/513/commits/2037126831a84c57f978268f090fc663cf43ed7a#diff-0e382b024f696a3b7a0ff3bce24ae3166cc6f383d059c7cc61e0a3ccdeed522c >> [1]: https://www.devever.net/~hl/freenode_abuse >> >> > > -- > Chris Morgan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed May 26 16:06:14 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 26 May 2021 16:06:14 +0000 Subject: Freenode and libera.chat In-Reply-To: References: <20210521141430.u73tc552nvwbzpjh@yuggoth.org> <20210523140551.q2p3s3e7lzeiqs7q@yuggoth.org> Message-ID: <20210526160613.sk5ddp7sxhkxrzla@yuggoth.org> On 2021-05-26 11:57:10 -0400 (-0400), David Peacock wrote: > It's worth highlighting that OFTC doesn't support SASL, and also > many major FOSS projects have bet the farm on Libera. Do we want > to be OFTC just because, or do we want to go where all the major > projects have gone? "All" is a stretch, there were plenty of free/libre open source projects on OFTC long before this latest episode of Freenode drama. Also OFTC isn't being suggested "just because." We've been making provisions to relocate our services to OFTC for over 7 years in case something ever happened to sufficiently offset the pain of moving (getting people to update their client configs, editing all our contributor docs...), with general acknowledgement from the community that it would have been a better choice for us from the beginning had we actively selected a network. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From raubvogel at gmail.com Wed May 26 16:09:13 2021 From: raubvogel at gmail.com (Mauricio Tavares) Date: Wed, 26 May 2021 12:09:13 -0400 Subject: Freenode and libera.chat In-Reply-To: References: <20210521141430.u73tc552nvwbzpjh@yuggoth.org> <20210523140551.q2p3s3e7lzeiqs7q@yuggoth.org> Message-ID: On Wed, May 26, 2021 at 12:00 PM David Peacock wrote: > > It's worth highlighting that OFTC doesn't support SASL, and also many major FOSS projects have bet the farm on Libera. Do we want to be OFTC just because, or do we want to go where all the major projects have gone? > > David > Correct me if I am wrong but from https://etherpad.opendev.org/p/openstack-irc the conscensus was to use OFTC > > On Wed, May 26, 2021 at 11:47 AM Chris Morgan wrote: >> >> I looked over libera chat's tweet stream (https://twitter.com/liberachat/with_replies), from the retweets there seems to be a clear pattern of groups who dislike what's happening over at freenode jumping predominantly to libera. Perhaps that's the right thing for openstack too (I had previously said let's go to OFTC since Jeremy has done some prep work), but let's go somewhere other than fn stat >> >> On Mon, May 24, 2021 at 9:29 AM Julia Kreger wrote: >>> >>> On Sun, May 23, 2021 at 7:09 AM Jeremy Stanley wrote: >>> > >>> > On 2021-05-21 14:14:30 +0000 (+0000), Jeremy Stanley wrote: >>> > > On 2021-05-21 12:36:29 +0100 (+0100), Erno Kuvaja wrote: >>> > > [...] >>> > > > Looking at the movement over the past day, it seems like we're the >>> > > > only hesitant party here. Rest of the communities have either >>> > > > moved to libera.chat or OFTC. I'd strongly advise us to do the >>> > > > same before things turn sour. >>> > > >>> > > OpenStack isn't the only community taking a careful and measured >>> > > approach to the decision. Ansible deferred deciding what to do about >>> > > their IRC channels until Wednesday of this coming week: >>> > > >>> > > https://github.com/ansible-community/community-topics/issues/19 >>> > >>> > In a similar vein, I've noticed that #python, #python-dev, #pypa and >>> > so on haven't moved off Freenode yet, though the topic in #python >>> > suggests there's some ongoing discussion to determine whether they >>> > should. Unfortunately it doesn't say where that's being discussed >>> > though, maybe on the Python community mailing lists or Discourse, >>> > however cursory searches I've performed turn up nothing. >>> > -- >>> > Jeremy Stanley >>> >>> I suspect they, like everyone else who has seen some of the latest >>> rules changes[0] and a report[1] of abuses of power, are looking at >>> treading lightly in order to do the best thing for their community. I >>> suspect we're going to see a mass migration to other platforms or >>> tools, regardless at this point in time. The rules changes are just >>> going to make it more difficult to keep the existing channels as >>> redirects. >>> >>> I firmly believe this is no longer a matter of should, but we now have >>> an imperative to ensure community communication continuity. If the >>> higher level project doesn't wish to come to quick consensus, then I >>> believe individual projects will make their own decisions and we'll >>> end up fragmenting the communication channels until things settle >>> down. 
>>> >>> [0]: https://github.com/freenode/web-7.0/pull/513/commits/2037126831a84c57f978268f090fc663cf43ed7a#diff-0e382b024f696a3b7a0ff3bce24ae3166cc6f383d059c7cc61e0a3ccdeed522c >>> [1]: https://www.devever.net/~hl/freenode_abuse >>> >> >> >> -- >> Chris Morgan From amy at demarco.com Wed May 26 16:11:17 2021 From: amy at demarco.com (Amy Marrich) Date: Wed, 26 May 2021 11:11:17 -0500 Subject: RDO IRC Information Message-ID: Dear RDO Community Members, As many of you know, there have been changes in Freenode which affect the RDO channel, OpenInfra channels, and the channels of many other communities we work closely with. After careful consideration, the OpenDev team has chosen to move the Open Infrastructure Foundation IRC related infrastructure to OFTC[0]. As of Monday, May 31st, OFTC will become the primary home of RDO (#rdo) including where we host our meeting. The Fedora and CentOS Communities have chosen to move to Libera[1] and due to our close relationship with these projects RDO will have a presence on this system as well also in #rdo. Our channel is already created on both systems and are ready for you to join them and it is recommended you register your Nick[2][3] on both these systems. While not a total solution, we would like to mention Matrix[4] which we hope in the future will be able to bridge OFTC and Libera.chat into one location. Thanks, Amy Marrich (spotz) 0 - https://www.oftc.net/ 1 - https://libera.chat/ 2 - https://www.oftc.net/Services/#register-your-account 3 - https://libera.chat/guides/registration 4 - https://matrix.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed May 26 16:29:00 2021 From: smooney at redhat.com (Sean Mooney) Date: Wed, 26 May 2021 17:29:00 +0100 Subject: Freenode and libera.chat In-Reply-To: References: <20210521141430.u73tc552nvwbzpjh@yuggoth.org> <20210523140551.q2p3s3e7lzeiqs7q@yuggoth.org> Message-ID: <39c2363703eea3f620f35124b4de066f988c80e7.camel@redhat.com> On Wed, 2021-05-26 at 12:09 -0400, Mauricio Tavares wrote: > On Wed, May 26, 2021 at 12:00 PM David Peacock wrote: > > > > It's worth highlighting that OFTC doesn't support SASL, and also many major FOSS projects have bet the farm on Libera. Do we want to be OFTC just because, or do we want to go where all the major projects have gone? > > > > David > > > Correct me if I am wrong but from > https://etherpad.opendev.org/p/openstack-irc the conscensus was to use > OFTC yes form the meeting we had it was and one of the action items form that meeting was to send out an email to the list to update everyone that is currently being worked on i belive > > > > On Wed, May 26, 2021 at 11:47 AM Chris Morgan wrote: > > > > > > I looked over libera chat's tweet stream (https://twitter.com/liberachat/with_replies), from the retweets there seems to be a clear pattern of groups who dislike what's happening over at freenode jumping predominantly to libera. Perhaps that's the right thing for openstack too (I had previously said let's go to OFTC since Jeremy has done some prep work), but let's go somewhere other than fn stat > > > > > > On Mon, May 24, 2021 at 9:29 AM Julia Kreger wrote: > > > > > > > > On Sun, May 23, 2021 at 7:09 AM Jeremy Stanley wrote: > > > > > > > > > > On 2021-05-21 14:14:30 +0000 (+0000), Jeremy Stanley wrote: > > > > > > On 2021-05-21 12:36:29 +0100 (+0100), Erno Kuvaja wrote: > > > > > > [...] > > > > > > > Looking at the movement over the past day, it seems like we're the > > > > > > > only hesitant party here. 
Rest of the communities have either > > > > > > > moved to libera.chat or OFTC. I'd strongly advise us to do the > > > > > > > same before things turn sour. > > > > > > > > > > > > OpenStack isn't the only community taking a careful and measured > > > > > > approach to the decision. Ansible deferred deciding what to do about > > > > > > their IRC channels until Wednesday of this coming week: > > > > > > > > > > > > https://github.com/ansible-community/community-topics/issues/19 > > > > > > > > > > In a similar vein, I've noticed that #python, #python-dev, #pypa and > > > > > so on haven't moved off Freenode yet, though the topic in #python > > > > > suggests there's some ongoing discussion to determine whether they > > > > > should. Unfortunately it doesn't say where that's being discussed > > > > > though, maybe on the Python community mailing lists or Discourse, > > > > > however cursory searches I've performed turn up nothing. > > > > > -- > > > > > Jeremy Stanley > > > > > > > > I suspect they, like everyone else who has seen some of the latest > > > > rules changes[0] and a report[1] of abuses of power, are looking at > > > > treading lightly in order to do the best thing for their community. I > > > > suspect we're going to see a mass migration to other platforms or > > > > tools, regardless at this point in time. The rules changes are just > > > > going to make it more difficult to keep the existing channels as > > > > redirects. > > > > > > > > I firmly believe this is no longer a matter of should, but we now have > > > > an imperative to ensure community communication continuity. If the > > > > higher level project doesn't wish to come to quick consensus, then I > > > > believe individual projects will make their own decisions and we'll > > > > end up fragmenting the communication channels until things settle > > > > down. > > > > > > > > [0]: https://github.com/freenode/web-7.0/pull/513/commits/2037126831a84c57f978268f090fc663cf43ed7a#diff-0e382b024f696a3b7a0ff3bce24ae3166cc6f383d059c7cc61e0a3ccdeed522c > > > > [1]: https://www.devever.net/~hl/freenode_abuse > > > > > > > > > > > > > -- > > > Chris Morgan > From vikash.kumarprasad at siemens.com Wed May 26 14:12:12 2021 From: vikash.kumarprasad at siemens.com (Kumar Prasad, Vikash) Date: Wed, 26 May 2021 14:12:12 +0000 Subject: PNDriver on openstack VM is not able to communicate to ET200SP device connected to my physical router Message-ID: Dear Community, I have installed openstack on Centos 7 on Virutalbox VM on windows 10. Now I am running an application PNDriver on openstack VM(VNF), which is supposed to communicate with a hardware ET200SP, which is connected to my physical home router. Now my PNDriver is not able to communicate to ET200SP hardware device. PNDriver minimum requirement to run on an interface is using ethtool it should list the speed, duplex, and port properties, but by default speed , duplex, and port values it is Showing "unknown". I tried setting these values using ethtool and somehow I was able to set duplex, speed values but port value when I am trying to set it is throwing error. My question is how we can set the port value of openstack VM(Vnf) using ethtool? Second question is that suppose if we create a VM on virtualbox, then virtualbox provides a provision for bridged type on network setting, can I not configure openstack vm (vnf) like a virtualbox VM so that my vnf can also get broadcast messages broadcasted by the connected hardware devices in my home router? 
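For concreteness, the ethtool checks and settings described in the message above are along the following lines; the interface name and values here are only illustrative placeholders, not taken from the report:

    ethtool eth0                            # list speed, duplex and port (all reported as "Unknown" here)
    ethtool -s eth0 speed 1000 duplex full  # the settings that reportedly could be changed
    ethtool -s eth0 port tp                 # the setting that throws an error

Whether such properties can be set at all depends on the virtual NIC driver; paravirtualized interfaces such as virtio generally do not model a physical port, which may be why the port value cannot be changed on the instance.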
Thanks Vikash kumar prasad -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed May 26 17:19:26 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 26 May 2021 12:19:26 -0500 Subject: [all] CRITICAL: Upcoming changes to the OpenStack Community IRC this weekend Message-ID: <179a9b02f78.112177f7423117.4125651508104406943@ghanshyammann.com> Greetings contributors & community members! With recent events, the Technical Committee held an emergency meeting today (Wednesday, May 26th, 2021) regarding Freenode IRC and what our decision would be [1]. Earlier in the week, the consensus amongst the TC was to gather more information from the individual projects, and make a decision from there[2]. With #rdo, #ubuntu, and #wikipedia having been hijacked, the consensus amongst the TC and the community members who were able to attend the meeting was to move away from Freenode as soon as possible. The TC agreed that this move away from Freenode needs to be a community-wide move to the same, new IRC network for all projects to avoid splintering of the community. As has been long-planned in the event of a contingency, we will be moving to OFTC. We recognize this is a contentious topic, and ultimately we seek to ensure community continuity before evolution to something beyond IRC, as many have expressed interest in doing via Mailing List discussions. At this point, we had to make a decision to solve the immediate problem in the simplest and most expedient way possible, so this is that announcement. We welcome continued discussion about future alternatives on the other threads. With this in mind, we suggest the following steps. Everyone: ======= 1. Do NOT change any channel topics to represent this change. This is likely to result in the channel being taken over by Freenode and will disrupt communications within our community. 2. Register your nicknames on OFTC [3][4] 3. Be *prepared* to join your channels on OFTC[4]. The OpenStack community channels have already been registered on OFTC and await you. 4. Continue to use Freenode for OpenStack discussions until the bots have been moved and the official cut-over takes place this coming weekend. We anticipate using OFTC starting Monday, May 31st. Projects/Project Leaders: ==================== 1. Projects should work to get a few volunteers to staff their project channels on Freenode, for the near future to help redirect people to OFTC. This should occur via private messages to avoid a ban. 2. Continue to hold project meetings on Freenode until the bots are enabled on OFTC. 3. Update project wikis/documentation with the new IRC network information. We ask that you consider referring to the central contributor guide[5]. 4. The TC is asking that projects take advantage of this time of change to consider moving project meetings from the #openstack-meeting* channels to their project channel. 5. Please avoid discussing the move to OFTC in Freenode channels as this may also trigger a takeover of the channel. We are working on getting our bots over to OFTC, and they will be moved over the weekend. Starting Monday May 31, the bots will be on OFTC. Communication regarding this migration will take place on OFTC[4] in #openstack-dev, and we're working on updating the contributor guide[5] to reflect this migration. Sincerely, The OpenStack TC and community leaders who came together to agree on a path forward. 
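As a concrete illustration of step 2 above, registering and then identifying a nick with OFTC's NickServ from a typical IRC client looks roughly like the following; the nick, password and email address are placeholders, and [3] below has the authoritative instructions:

    /nick mynick
    /msg NickServ REGISTER mysecretpassword myaddress@example.com

and on later connections:

    /msg NickServ IDENTIFY mysecretpassword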
[1]: https://etherpad.opendev.org/p/openstack-irc [2]: https://etherpad.opendev.org/p/feedback-on-freenode [3]: https://www.oftc.net/Services/#register-your-account [4]: https://www.oftc.net/ [5]: https://docs.openstack.org/contributors/common/irc.html From juliaashleykreger at gmail.com Wed May 26 17:22:46 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 26 May 2021 10:22:46 -0700 Subject: [docs] Request to clean up reviewers on openstack-doc-core and openstack-contributor-guide-core Message-ID: Greetings, I went to go hunt down a reviewer for the contributors guide this morning and found that at least one of the reviewers no longer works on OpenStack. In this case pkovar, but just glancing at the group lists, there appears to be others. If someone with appropriate privileges could update the group memberships to better reflect reality, it would be helpful to the rest of the community. Thanks! -Julia From iurygregory at gmail.com Wed May 26 18:01:48 2021 From: iurygregory at gmail.com (Iury Gregory) Date: Wed, 26 May 2021 20:01:48 +0200 Subject: [ironic] Upstream Meeting - May 31st Message-ID: Hello Ironicers! Our upstream meeting on May 31st will be held on OFTC server instead of Freenode, please check the email in [1] for more details. We have updated our Wiki [2] to mention the correct IRC server, and we are updating our docs. [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html [2] https://wiki.openstack.org/wiki/Meetings/Ironic Thank you! -- *Att[]'sIury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Part of the ironic-core and puppet-manager-core team in OpenStack* *Software Engineer at Red Hat Czech* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Wed May 26 18:51:46 2021 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 26 May 2021 14:51:46 -0400 Subject: [ops] ops meetups team resuming meetings **NEW LOCATION** Message-ID: We'll meet briefly next Tuesday 2021-6-6 on irc.oftc.net at 10am EST in #openstack-operators Agenda https://etherpad.opendev.org/p/ops-meetups-team Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From vhariria at redhat.com Wed May 26 19:12:01 2021 From: vhariria at redhat.com (Vida Haririan) Date: Wed, 26 May 2021 15:12:01 -0400 Subject: [Manila ] Upcoming Bug Squash starting June 7th through June 11th 2021 Message-ID: Hi everyone, As discussed, a new Bug Squash event is around the corner! The event will be held from 7th to 11th June, 2021, providing an extended contribution window. There will be a synchronous call held simultaneously on IRC, Thursday June 10th, 2021 at 15:00 UTC and we will use this Jitsi bridge [1]. A list of selected bugs will be shared here [2]. Please feel free to add any additional bugs you would like to address during the event. Thanks for your participation in advance. Vida [1] https://meetpad.opendev.org/ManilaX-ReleaseBugSquash [2] https://ethercalc.openstack.org/i3vwocrkk776 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From whayutin at redhat.com Wed May 26 19:19:34 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 26 May 2021 13:19:34 -0600 Subject: [tripleo][ci] mirror issues failing jobs In-Reply-To: References: Message-ID: On Tue, May 25, 2021 at 6:57 AM Wesley Hayutin wrote: > > > On Mon, May 24, 2021 at 9:22 PM Ian Wienand wrote: > >> On Mon, May 24, 2021 at 02:26:10PM -0600, Wesley Hayutin wrote: >> 65;6401;1c> something is going on w/ either centos or the infra >> mirrors... I suspect >> > centos atm. >> >> Just FYI, logs of all mirroring processes are available at >> >> https://static.opendev.org/mirror/logs/ >> >> (centos is under "rsync-mirrors"). >> >> This is all driven by >> >> >> https://opendev.org/opendev/system-config/src/branch/master/playbooks/roles/mirror-update >> >> If anything ever seems out of sync, these are the places to start. >> >> -i >> >> > Thanks Clark and Ian! I took some notes on those links. > Things are recovering but it's still a little choppy atm, but improving. > > TripleO folks.. probably want to keep their +2 / wf to a minimum today > until we're in the clear. > > Thanks all! > The mirror misses came back today, looks like it's a limestone issue http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22All%20mirrors%20were%20tried%5C%22%20AND%20(tags:%5C%22console%5C%22)%20AND%20voting:1&from=864000.0s -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed May 26 19:36:47 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 26 May 2021 19:36:47 +0000 Subject: OpenDev IRC services are moving to OFTC this weekend Message-ID: <20210526193647.rves6vgyl7lhsafo@yuggoth.org> As a majority of our constituent projects have voiced a preference for enacting our long-standing evacuation plan, the OpenDev Collaboratory's IRC service bots will be switching from Freenode to the OFTC network this weekend (May 29-30, 2021). We understand this is short notice, but multiple projects have requested that we act quickly. Please expect some gaps in channel logging and notifications from our various bots over the course of the weekend. I have provided a much more detailed writeup to the service-discuss mailing list, and encourage anyone with questions to read it and follow up there if needed. Subsequent updates will be sent only to service-discuss, in order to limit noise for individual project lists and keep further discussion focused in one place as much as possible: http://lists.opendev.org/pipermail/service-discuss/2021-May/000249.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zbitter at redhat.com Wed May 26 20:29:12 2021 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 26 May 2021 16:29:12 -0400 Subject: Freenode and libera.chat In-Reply-To: <20210520193905.ocmmestgdi22lx7l@yuggoth.org> References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> Message-ID: On 20/05/21 3:39 pm, Jeremy Stanley wrote: > On 2021-05-20 21:28:26 +0200 (+0200), Dmitry Tantsur wrote: > [...] >> This is, indeed, not quite on-topic, but I'm advocating for >> Matrix, mostly because that's where Mozilla went and because it >> seems to check all the boxes. 
> [...] > > It seems, from what little I've read, that Matrix servers can > integrate with IRC server networks and bridge channels fairly > seamlessly. If there were a Matrix bridge providing access to the > same channels as people were participating in via IRC (wherever that > happened to be), would that address your concerns? Here's the problem: IRC authentication is a disaster. Matrix authentication is decent by all accounts. If you bridge a system with decent authentication to a system where authentication is a disaster, you get a disaster (see also: Schopenhauer's Law of Entropy). IRC works fine for me and you because we made it fine by doing stuff that has the effect of excluding casual users and new contributors. Although I think OFTC was always a better choice than Freenode ever was, the reality is that by moving there we will continue to exclude new people in this way, plus we'll lose a few of the old people along the way (and maybe even manage to completely fragment the community, judging by some of the comments in this subthread). IMHO by refusing to consider Matrix we are missing an opportunity to make the community more open while only paying the cost of moving once. Instead it appears we are going to pay the (community) cost without getting any of the benefits. cheers, Zane. From rosmaita.fossdev at gmail.com Wed May 26 20:34:04 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 26 May 2021 16:34:04 -0400 Subject: [cinder] xena R-18 virtual mid-cycle on 2 june Message-ID: As mentioned at today's weekly meeting, the Cinder Xena R-18 virtual mid-cycle will be held: DATE: Wednesday 2 June 2021 TIME: 1400-1600 UTC LOCATION: https://bluejeans.com/3228528973 The meeting will be recorded. Please add topics to the mid-cycle etherpad: https://etherpad.opendev.org/p/cinder-xena-mid-cycles cheers, brian From mihalis68 at gmail.com Wed May 26 21:13:48 2021 From: mihalis68 at gmail.com (Chris Morgan) Date: Wed, 26 May 2021 17:13:48 -0400 Subject: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> Message-ID: I don't think it's THAT bad. I never in my life used IRC until I had to for the openstack-operators meetings. IRCCloud was easy enough. Then the mandatory nick registration thing happened and indeed I struggled there, but I got over it - people were helpful, there are many options and many people with experience of IRC in the openstack community willing to help. Maybe Matrix would be better, but I don't think a complete jump from IRC to Matrix could have been done in time (see Clark Boylan's message about this further up this thread). On May 14th when I sent one of the early reports of this issue to this list it was far off and confusing-seeming. By today the move was arguably already very late. 12 days in and there's worry that the openstack channels will be taken over by shadowy forces. As Joel Spolsky once quipped "delivery is a feature, your product should have it" Chris On Wed, May 26, 2021 at 4:30 PM Zane Bitter wrote: > On 20/05/21 3:39 pm, Jeremy Stanley wrote: > > On 2021-05-20 21:28:26 +0200 (+0200), Dmitry Tantsur wrote: > > [...] > >> This is, indeed, not quite on-topic, but I'm advocating for > >> Matrix, mostly because that's where Mozilla went and because it > >> seems to check all the boxes. > > [...] 
> > > > It seems, from what little I've read, that Matrix servers can > > integrate with IRC server networks and bridge channels fairly > > seamlessly. If there were a Matrix bridge providing access to the > > same channels as people were participating in via IRC (wherever that > > happened to be), would that address your concerns? > > Here's the problem: IRC authentication is a disaster. > > Matrix authentication is decent by all accounts. > > If you bridge a system with decent authentication to a system where > authentication is a disaster, you get a disaster (see also: > Schopenhauer's Law of Entropy). > > > IRC works fine for me and you because we made it fine by doing stuff > that has the effect of excluding casual users and new contributors. > > Although I think OFTC was always a better choice than Freenode ever was, > the reality is that by moving there we will continue to exclude new > people in this way, plus we'll lose a few of the old people along the > way (and maybe even manage to completely fragment the community, judging > by some of the comments in this subthread). > > IMHO by refusing to consider Matrix we are missing an opportunity to > make the community more open while only paying the cost of moving once. > Instead it appears we are going to pay the (community) cost without > getting any of the benefits. > > cheers, > Zane. > > > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed May 26 21:19:09 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 26 May 2021 21:19:09 +0000 Subject: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> Message-ID: <20210526211909.6hmmbi3vsaxf7c5s@yuggoth.org> On 2021-05-26 16:29:12 -0400 (-0400), Zane Bitter wrote: [...] > Here's the problem: IRC authentication is a disaster. > > Matrix authentication is decent by all accounts. > > If you bridge a system with decent authentication to a system > where authentication is a disaster, you get a disaster (see also: > Schopenhauer's Law of Entropy). Yep, I get that. Or more precisely, the IRC protocol was not designed for authentication so everyone's rolled their own competing solutions. If you're primarily relying on IRC for things where authentication is irrelevant then it's no big deal, but there are certainly times when you do want to trust that someone on IRC is who they claim to be, and that's when things get harder. > IRC works fine for me and you because we made it fine by doing > stuff that has the effect of excluding casual users and new > contributors. I think you sell newcomers short by assuming they're mentally incompetent to the point they're incapable of rational thought. > Although I think OFTC was always a better choice than Freenode > ever was, the reality is that by moving there we will continue to > exclude new people in this way, plus we'll lose a few of the old > people along the way (and maybe even manage to completely fragment > the community, judging by some of the comments in this subthread). > > IMHO by refusing to consider Matrix we are missing an opportunity > to make the community more open while only paying the cost of > moving once. Instead it appears we are going to pay the > (community) cost without getting any of the benefits. 
So, here's the thing. What we have (from OpenDev's perspective) is some IRC bots connected to Freenode. We can leave them there, point them to a different IRC network, or turn them off. If someone comes along with replacement code which talks native Matrix protocol we can look at running that too, but it's not something we have now nor something we'll reasonably be able to come up with in the short span of time people feel evacuating Freenode warrants. Also feel free to talk about OpenStack topics on Matrix-only channels; people already talk about OpenStack in lots of places which aren't IRC. If enough people prefer to do that, then it's probably not that hard to get others to join there. Once all the interesting discussions are happening on those channels then I expect IRC to fall into disuse on its own anyway. The IRC client I use already has a Matrix plug-in, so I can connect to Matrix-only channels with it just as if they were on yet another IRC network, and am personally happy to do so if and when the need arises. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Wed May 26 23:24:05 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 26 May 2021 18:24:05 -0500 Subject: [docs] Request to clean up reviewers on openstack-doc-core and openstack-contributor-guide-core In-Reply-To: References: Message-ID: <179aafe068a.cb912cf831091.4079191485834420119@ghanshyammann.com> ---- On Wed, 26 May 2021 12:22:46 -0500 Julia Kreger wrote ---- > Greetings, > > I went to go hunt down a reviewer for the contributors guide this > morning and found that at least one of the reviewers no longer works > on OpenStack. In this case pkovar, but just glancing at the group > lists, there appears to be others. If someone with appropriate > privileges could update the group memberships to better reflect > reality, it would be helpful to the rest of the community. Yes, even many of them are not active in OpenStack. We should call for more volunteers to maintain these repos. The openstack/contributor-guide repo is under the 'Technical Writing' SIG. I would like to request Stephen Finucane (stephenfin), the current Chair of this SIG, to do the cleanup. Also, we can call for more volunteers here to help out this repo/SIG. I am happy to help in the openstack/contributor-guide repo (as I am doing as part of the upstream institute training activity). -gmann > > Thanks! > > -Julia > > From gmann at ghanshyammann.com Wed May 26 23:26:59 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 26 May 2021 18:26:59 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 27th at 1500 UTC In-Reply-To: <179a3b4db35.de0a1f7d210013.1168818962286123773@ghanshyammann.com> References: <179a3b4db35.de0a1f7d210013.1168818962286123773@ghanshyammann.com> Message-ID: <179ab00b110.b65b898331118.5377175908360349178@ghanshyammann.com> Hello Everyone, Below is the agenda for tomorrow's TC meeting scheduled on May 27th at 1500 UTC in #openstack-tc IRC channel.
-https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check (dansmith/yoctozepto) ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ * Planning for TC + PTL interaction (gmann) ** https://etherpad.opendev.org/p/tc-ptl-interaction * Xena cycle tracker status check ** https://etherpad.opendev.org/p/tc-xena-tracker * Discussion on 'Freenode' Situation (gmann) ** http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html *** https://etherpad.opendev.org/p/openstack-irc ** Recording it in TC resolution. * Open Reviews ** https://review.opendev.org/q/project:openstack/governance+is:open -gmann ---- On Tue, 25 May 2021 08:26:49 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Technical Committee's next weekly meeting is scheduled for May 27th at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, May 26th, at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From gmann at ghanshyammann.com Wed May 26 23:30:27 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 26 May 2021 18:30:27 -0500 Subject: [ptl][tc] Project's feedback on Freenode situation In-Reply-To: <1799fa365be.b18b8ae0158537.8334892981210962275@ghanshyammann.com> References: <1799fa365be.b18b8ae0158537.8334892981210962275@ghanshyammann.com> Message-ID: <179ab03dae6.10e26a09e31148.5923457297774786975@ghanshyammann.com> Hello Everyone, Latest updates on this feedback in case you missed the TC emergency meeting or ML notification. First of all thanks for the feedback. Now I am closing this feedback as we decided on the next step. TC and community leaders had a meeting at 14:30 UTC today and discussed the next step. We decided to move to the OFTC network. Please see the below ML and etherpad for detailed discussion: - http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html - https://etherpad.opendev.org/p/openstack-irc -gmann ---- On Mon, 24 May 2021 13:29:16 -0500 Ghanshyam Mann wrote ---- > Hello PTLs/Release Liaisons, > > As you know, there is a lot of discussion going on for Freenode situation, If you are not aware > of that, these are the ML thread to read[1][2]. > > Most TC members (I mentioned in my weekly summary email also[3]) think to wait for more > time and monitor the situation to make any decision. > > But in today's discussion on openstack-tc, a few projects (Ironic, Kolla) are in favour of making the > decision soon instead of waiting. To proceed further, TC would like to get feedback from each project. 
> > I request PTLs to discuss this in your team and write your feedback in below etherpad: > > - https://etherpad.opendev.org/p/feedback-on-freenode > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022468.html > [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022539.html > [3] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022663.html > > -gmann > > From juliaashleykreger at gmail.com Thu May 27 00:02:34 2021 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 26 May 2021 17:02:34 -0700 Subject: Freenode and libera.chat In-Reply-To: References: <20210519134933.ndgfcvtdnvvip2ko@yuggoth.org> <4BA8EEBA-84ED-444D-85F0-86A435A0B0AA@gmail.com> <0a5be801-244b-6e7c-e6f4-0a7dea26df93@debian.org> <30db08f46a92ebe8bf62cb4f28b82476deef4ada.camel@redhat.com> <20210520193905.ocmmestgdi22lx7l@yuggoth.org> Message-ID: On Wed, May 26, 2021 at 1:35 PM Zane Bitter wrote: > > On 20/05/21 3:39 pm, Jeremy Stanley wrote: > > On 2021-05-20 21:28:26 +0200 (+0200), Dmitry Tantsur wrote: > > [...] > >> This is, indeed, not quite on-topic, but I'm advocating for > >> Matrix, mostly because that's where Mozilla went and because it > >> seems to check all the boxes. > > [...] > > > > It seems, from what little I've read, that Matrix servers can > > integrate with IRC server networks and bridge channels fairly > > seamlessly. If there were a Matrix bridge providing access to the > > same channels as people were participating in via IRC (wherever that > > happened to be), would that address your concerns? > > Here's the problem: IRC authentication is a disaster. > > Matrix authentication is decent by all accounts. > > If you bridge a system with decent authentication to a system where > authentication is a disaster, you get a disaster (see also: > Schopenhauer's Law of Entropy). > > > IRC works fine for me and you because we made it fine by doing stuff > that has the effect of excluding casual users and new contributors. > > Although I think OFTC was always a better choice than Freenode ever was, > the reality is that by moving there we will continue to exclude new > people in this way, plus we'll lose a few of the old people along the > way (and maybe even manage to completely fragment the community, judging > by some of the comments in this subthread). > > IMHO by refusing to consider Matrix we are missing an opportunity to > make the community more open while only paying the cost of moving once. > Instead it appears we are going to pay the (community) cost without > getting any of the benefits. > > cheers, > Zane. > I want to take one moment to kind of circle back to this. Nobody on the call where the TC, and numerous community leaders discussed this whole situation earlier today, ever ruled this out. The consensus and focus at this time was the short term continuity of being able to communicate and maintain community culture/tooling while freenode... frankly... grows into a larger, more impressive tire fire as each day passes. Many of the community leaders I've spoken with agree, we need to do better or we need to do more. But we also still need to get work done while we figure out the new things and ensure we have some level of community continuity. That was the consensus and driver to make the decision which was made this morning as the grim reality set with regards to the state and path Freenode is on. Reality is that on some level, we will always be bridged across multiple networks, channels, tools, and ultimately protocols. 
There is no one solution to make everyone happy, so the immediate focus is on short term continuity as we evolve. So the next step is ultimately choosing the *next* places to evolve and communicate and somehow providing the cross reference, since we all face the grim reality that multiple networks always have and will continue to be a thing regardless of rooms on matrix, discord, or Frank the plushy Shark's fancy new webchat app. From rosmaita.fossdev at gmail.com Thu May 27 00:16:13 2021 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 26 May 2021 20:16:13 -0400 Subject: [cinder][third-party][CI] revision needed to your gerrit comments Message-ID: <3e402e63-6ed2-204c-2889-2f5b1dc83d4e@gmail.com> Hello Cinder Third-Party CI maintainers, tl;dr - add an 'autogenerated' *tag* to your Gerrit comments [0] The recent Gerrit update has brought some changes--many good ones, but also some problems. The good news is that it is within your control to fix one of the issues we're seeing. The issue is that we lost the "Toggle CI" button in the old Gerrit, whose function was to hide all the CI comments on a review so that only the human reviewers' comments were displayed. It's a problem for reviewers because other human reviewers' comments are easily missed. So we'd like to be able to easily hide the CI comments in the new interface. (Don't worry--your CI's results are nicely displayed within the "Zuul Summary" tab on the interface, so hiding the comments aren't going to make your CI results more difficult to find. They are already clearly displayed in their own table.) In the new Gerrit interface, when you have the "Files" tab active, below the list of changed files, there is a "Change Log" tab. Right underneath it, there is a "Only Comments" toggle. When it's activated, currently all the comments from OpenStack Zuul are hidden. But the comments from other CIs are not. Your CI's comments will be hidden if you include an 'autogenerated' tag in your review input. If you are using zuul, include the tag: autogenerated:zuul Other CI systems should use the tag: autogenerated:yourCIname If your CI system uses ssh to communicate with Gerrit, you can find info about adding a tag here: https://review.opendev.org/Documentation/cmd-review.html (you specify a --tag option) If your CI system uses http to communicate with Gerrit, you can find info about adding a tag here: https://review.opendev.org/Documentation/rest-api-changes.html#set-review (you specify it in the JSON request body) The Cinder team will really appreciate your quick attention to this matter! thanks, brian [0] https://review.opendev.org/Documentation/rest-api-changes.html#review-input From tkajinam at redhat.com Thu May 27 00:43:50 2021 From: tkajinam at redhat.com (Takashi Kajinami) Date: Thu, 27 May 2021 09:43:50 +0900 Subject: [puppet][tripleo] Inviting tripleo CI cores to maintain tripleo jobs ? In-Reply-To: References: Message-ID: Because we haven't heard any objections for one week, I invited the three people I mentioned to the puppet-manager-core group. On Tue, May 18, 2021 at 11:42 PM Takashi Kajinami wrote: > Thank you, Marios and the team for your time in the meeting. > > Based on our discussion, I'll nominate the following three volunteers from > tripleo core team > to the puppet-openstack core team. > - Marios Andreou > - Ronelle Landy > - Wes Hayutin > > Their scope of +2 will be limited to tripleo job definitions (which are > written in .zuul.yaml or zuul.d/*.yaml) at this moment. 
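Coming back to the 'autogenerated' tag request in the cinder third-party CI mail above: for a CI account that votes over ssh, a tagged review can look roughly like the sketch below, where the account name, change/patchset numbers and label are placeholders and the full option list is in the cmd-review documentation linked in that mail:

    ssh -p 29418 myci@review.opendev.org gerrit review \
        --message "'Build succeeded.'" \
        --tag autogenerated:myci \
        --label Verified=+1 12345,6

A CI posting over HTTP would instead carry the tag in the ReviewInput JSON body of the set-review call, along these lines:

    {
      "message": "Build succeeded.",
      "tag": "autogenerated:myci",
      "labels": {"Verified": 1}
    }

Both are only sketches of where the tag goes, not the exact payload any particular CI sends.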
> > I've not received any objections so far (Thank you Tobias for sharing your > thoughts !) but will wait for one week > to be open for any feedback from the other cores or people around. > > My current plan is to add a specific hashtag so that these reviewers can > easily find the related changes like [1] > but please let me know if anybody has preference. > [1] > https://review.opendev.org/q/hashtag:%22puppet-tripleo-job%22+(status:open%20OR%20status:merged) > > P.S. > I received some interest about maintaining puppet modules (especially our > own integration jobs), > so will have some people involved in that part as well. > > > On Fri, May 14, 2021 at 8:57 PM Marios Andreou wrote: > >> On Fri, May 14, 2021 at 2:46 PM Takashi Kajinami >> wrote: >> > >> > Hi Marios, >> > >> > On Fri, May 14, 2021 at 8:10 PM Marios Andreou >> wrote: >> >> >> >> On Fri, May 14, 2021 at 8:40 AM Takashi Kajinami >> wrote: >> >> > >> >> > Hi team, >> >> > >> >> >> >> Hi Takashi >> >> >> >> >> >> > As you know, we currently have TripleO jobs in some of the puppet >> repos >> >> > to ensure a change in puppet side doesn't break TripleO which >> consumes >> >> > some of the modules. >> >> >> >> in case it isn't clear and for anyone else reading, you are referring >> >> to things like [1]. >> > >> > This is a nitfixing but puppet-pacemaker is a repo under the TripleO >> project. >> > I intend a job like >> > >> https://zuul.opendev.org/t/openstack/builds?job_name=puppet-nova-tripleo-standalone&project=openstack/puppet-nova >> > which is maintained under puppet repos. >> > >> >> ack thanks for the clarification ;) makes more sense now >> >> >> >> >> >> >> > >> >> > Because these jobs hugely depend on the job definitions in TripleO >> repos, >> >> > I'm wondering whether we can invite a few cores from the TripleO CI >> team >> >> > to the puppet-openstack core group to maintain these jobs. >> >> > I expect the scope here is very limited to tripleo job definitions >> and doesn't >> >> > expect any +2 for other parts. >> >> > >> >> > I'd be nice if I can hear any thoughts on this topic. >> >> >> >> Main question is what kind of maintenance do you have in mind? Is it >> >> that these jobs are breaking often and they need fixes in the >> >> puppet-repos themselves so we need more cores there? (though... I >> >> would expect the fixes to be needed in tripleo-ci where the job >> >> definitions are, unless the repos are overriding those definitions)? >> > >> > >> > We define our own base tripleo-puppet-ci-centos-8-standalone job[4] and >> > each puppet module defines their own tripleo job[5] by overriding the >> base job, >> > so that we can define some basic items like irellevant files or voting >> status >> > for all puppet modules in a single place. >> > >> > [4] >> https://github.com/openstack/puppet-openstack-integration/blob/master/zuul.d/tripleo.yaml >> > [5] https://github.com/openstack/puppet-nova/blob/master/.zuul.yaml >> > >> > >> >> >> >> Or is it that you don't have enough folks to get fixes merged so this >> >> is mostly about growing the pool of reviewers? >> > >> > >> > Yes. My main intention is to have more reviewers so that we can fix our >> CI jobs timely. >> > >> > Actually the proposal came to my mind when I was implementing the >> following changes >> > to solve very frequent job timeouts which we currently observe in >> puppet-nova wallaby. >> > IMO these changes need more attention from TripleO's perspective rather >> than puppet's >> > perspective. 
>> > https://review.opendev.org/q/topic:%22tripleo-tempest%22+(status:open) >> > >> > In the past when we introduced content provider jobs, we ended up with >> a bunch of patches >> > submitted to both tripleo jobs and puppet jobs. Having some people from >> TripleO team >> > would help moving forward such a transition more smoothly. >> > >> > In the past we have had three people (Alex, Emilien and I) involved in >> both TripleO and puppet >> > but since Emilien has shifted this focus, we have now 2 activities left. >> > Additional one or two people would help us move patches forward more >> efficiently. >> > (Since I can't approve my own patch.) >> > >> >> I think limiting the scope to just the contents of zuul.d/ or >> >> .zuul.yaml can work; we already have a trust based system in TripleO >> >> with some cores only expected to exercise their voting rights in >> >> particular repos even though they have full voting rights across all >> >> tripleo repos). >> >> >> >> Are you able to join our next tripleo-ci community call? It is on >> >> Tuesday 1330 UTC @ [2] and we use [3] for the agenda. If you can join, >> >> perhaps we can work something out depending on what you need. >> >> Otherwise no problem let's continue to discuss here >> > >> > >> > Sure. I can join and bring up this topic. >> > I'll keep this thread to hear some opinions from the puppet side as >> well. >> > >> > >> >> ok thanks look forward to discussing on Tuesday then, >> >> regards, marios >> >> >> >> >> >> >> >> regards, marios >> >> >> >> [1] >> https://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ci-centos-8-scenario004-standalone&project=openstack/puppet-pacemaker >> >> [2] https://meet.google.com/bqx-xwht-wky >> >> [3] https://hackmd.io/MMg4WDbYSqOQUhU2Kj8zNg?both >> >> >> >> >> >> >> >> > >> >> > Thank you, >> >> > Takashi >> >> > >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu May 27 02:06:40 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 26 May 2021 21:06:40 -0500 Subject: [all][tc] Technical Committee next weekly meeting on May 27th at 1500 UTC In-Reply-To: <179ab00b110.b65b898331118.5377175908360349178@ghanshyammann.com> References: <179a3b4db35.de0a1f7d210013.1168818962286123773@ghanshyammann.com> <179ab00b110.b65b898331118.5377175908360349178@ghanshyammann.com> Message-ID: <179ab92e2c7.124afd11631838.972138577554180223@ghanshyammann.com> Hello Everyone, As we already decided on Freenode today, I renamed this topic to "Migration plan for 'Freenode' to 'OFTC''' and we will discuss the migration plan and work needed here. -gmann ---- On Wed, 26 May 2021 18:26:59 -0500 Ghanshyam Mann wrote ---- > Hello Everyone, > > Below is the agenda for tomorrow's TC meeting schedule on May 20th at 1500 UTC in #openstack-tc IRC channel. > -https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > == Agenda for tomorrow's TC meeting == > > * Roll call > > * Follow up on past action items > > * Gate health check (dansmith/yoctozepto) > ** http://paste.openstack.org/show/jD6kAP9tHk7PZr2nhv8h/ > > * Planning for TC + PTL interaction (gmann) > ** https://etherpad.opendev.org/p/tc-ptl-interaction > > * Xena cycle tracker status check > ** https://etherpad.opendev.org/p/tc-xena-tracker > > * Discussion on 'Freenode' Situation (gmann) > ** http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html > *** https://etherpad.opendev.org/p/openstack-irc > ** Recording it in TC resolution. 
> > * Open Reviews > ** https://review.opendev.org/q/project:openstack/governance+is:open > > -gmann > > > ---- On Tue, 25 May 2021 08:26:49 -0500 Ghanshyam Mann wrote ---- > > Hello Everyone, > > > > Technical Committee's next weekly meeting is scheduled for May 27th at 1500 UTC. > > > > If you would like to add topics for discussion, please add them to the below wiki page by > > Wednesday, May 26th, at 2100 UTC. > > > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > > > -gmann > > > > > > From whayutin at redhat.com Thu May 27 02:08:38 2021 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 26 May 2021 20:08:38 -0600 Subject: [puppet][tripleo] Inviting tripleo CI cores to maintain tripleo jobs ? In-Reply-To: References: Message-ID: Thanks Takashi! On Wed, May 26, 2021 at 6:44 PM Takashi Kajinami wrote: > Because we haven't heard any objections for one week, I invited the three > people > I mentioned to the puppet-manager-core group. > > On Tue, May 18, 2021 at 11:42 PM Takashi Kajinami > wrote: > >> Thank you, Marios and the team for your time in the meeting. >> >> Based on our discussion, I'll nominate the following three volunteers >> from tripleo core team >> to the puppet-openstack core team. >> - Marios Andreou >> - Ronelle Landy >> - Wes Hayutin >> >> Their scope of +2 will be limited to tripleo job definitions (which are >> written in .zuul.yaml or zuul.d/*.yaml) at this moment. >> >> I've not received any objections so far (Thank you Tobias for sharing >> your thoughts !) but will wait for one week >> to be open for any feedback from the other cores or people around. >> >> My current plan is to add a specific hashtag so that these reviewers can >> easily find the related changes like [1] >> but please let me know if anybody has preference. >> [1] >> https://review.opendev.org/q/hashtag:%22puppet-tripleo-job%22+(status:open%20OR%20status:merged) >> >> P.S. >> I received some interest about maintaining puppet modules (especially our >> own integration jobs), >> so will have some people involved in that part as well. >> >> >> On Fri, May 14, 2021 at 8:57 PM Marios Andreou wrote: >> >>> On Fri, May 14, 2021 at 2:46 PM Takashi Kajinami >>> wrote: >>> > >>> > Hi Marios, >>> > >>> > On Fri, May 14, 2021 at 8:10 PM Marios Andreou >>> wrote: >>> >> >>> >> On Fri, May 14, 2021 at 8:40 AM Takashi Kajinami >>> wrote: >>> >> > >>> >> > Hi team, >>> >> > >>> >> >>> >> Hi Takashi >>> >> >>> >> >>> >> > As you know, we currently have TripleO jobs in some of the puppet >>> repos >>> >> > to ensure a change in puppet side doesn't break TripleO which >>> consumes >>> >> > some of the modules. >>> >> >>> >> in case it isn't clear and for anyone else reading, you are referring >>> >> to things like [1]. >>> > >>> > This is a nitfixing but puppet-pacemaker is a repo under the TripleO >>> project. >>> > I intend a job like >>> > >>> https://zuul.opendev.org/t/openstack/builds?job_name=puppet-nova-tripleo-standalone&project=openstack/puppet-nova >>> > which is maintained under puppet repos. >>> > >>> >>> ack thanks for the clarification ;) makes more sense now >>> >>> >> >>> >> >>> >> > >>> >> > Because these jobs hugely depend on the job definitions in TripleO >>> repos, >>> >> > I'm wondering whether we can invite a few cores from the TripleO CI >>> team >>> >> > to the puppet-openstack core group to maintain these jobs. >>> >> > I expect the scope here is very limited to tripleo job definitions >>> and doesn't >>> >> > expect any +2 for other parts. 
>>> >> > >>> >> > I'd be nice if I can hear any thoughts on this topic. >>> >> >>> >> Main question is what kind of maintenance do you have in mind? Is it >>> >> that these jobs are breaking often and they need fixes in the >>> >> puppet-repos themselves so we need more cores there? (though... I >>> >> would expect the fixes to be needed in tripleo-ci where the job >>> >> definitions are, unless the repos are overriding those definitions)? >>> > >>> > >>> > We define our own base tripleo-puppet-ci-centos-8-standalone job[4] and >>> > each puppet module defines their own tripleo job[5] by overriding the >>> base job, >>> > so that we can define some basic items like irellevant files or voting >>> status >>> > for all puppet modules in a single place. >>> > >>> > [4] >>> https://github.com/openstack/puppet-openstack-integration/blob/master/zuul.d/tripleo.yaml >>> > [5] https://github.com/openstack/puppet-nova/blob/master/.zuul.yaml >>> > >>> > >>> >> >>> >> Or is it that you don't have enough folks to get fixes merged so this >>> >> is mostly about growing the pool of reviewers? >>> > >>> > >>> > Yes. My main intention is to have more reviewers so that we can fix >>> our CI jobs timely. >>> > >>> > Actually the proposal came to my mind when I was implementing the >>> following changes >>> > to solve very frequent job timeouts which we currently observe in >>> puppet-nova wallaby. >>> > IMO these changes need more attention from TripleO's perspective >>> rather than puppet's >>> > perspective. >>> > >>> https://review.opendev.org/q/topic:%22tripleo-tempest%22+(status:open) >>> > >>> > In the past when we introduced content provider jobs, we ended up with >>> a bunch of patches >>> > submitted to both tripleo jobs and puppet jobs. Having some people >>> from TripleO team >>> > would help moving forward such a transition more smoothly. >>> > >>> > In the past we have had three people (Alex, Emilien and I) involved in >>> both TripleO and puppet >>> > but since Emilien has shifted this focus, we have now 2 activities >>> left. >>> > Additional one or two people would help us move patches forward more >>> efficiently. >>> > (Since I can't approve my own patch.) >>> > >>> >> I think limiting the scope to just the contents of zuul.d/ or >>> >> .zuul.yaml can work; we already have a trust based system in TripleO >>> >> with some cores only expected to exercise their voting rights in >>> >> particular repos even though they have full voting rights across all >>> >> tripleo repos). >>> >> >>> >> Are you able to join our next tripleo-ci community call? It is on >>> >> Tuesday 1330 UTC @ [2] and we use [3] for the agenda. If you can join, >>> >> perhaps we can work something out depending on what you need. >>> >> Otherwise no problem let's continue to discuss here >>> > >>> > >>> > Sure. I can join and bring up this topic. >>> > I'll keep this thread to hear some opinions from the puppet side as >>> well. >>> > >>> > >>> >>> ok thanks look forward to discussing on Tuesday then, >>> >>> regards, marios >>> >>> >>> >> >>> >> >>> >> regards, marios >>> >> >>> >> [1] >>> https://zuul.opendev.org/t/openstack/builds?job_name=tripleo-ci-centos-8-scenario004-standalone&project=openstack/puppet-pacemaker >>> >> [2] https://meet.google.com/bqx-xwht-wky >>> >> [3] https://hackmd.io/MMg4WDbYSqOQUhU2Kj8zNg?both >>> >> >>> >> >>> >> >>> >> > >>> >> > Thank you, >>> >> > Takashi >>> >> > >>> >> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hberaud at redhat.com Thu May 27 08:47:29 2021 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 27 May 2021 10:47:29 +0200 Subject: [release] IRC meeting updates Message-ID: Hello team, As you know we face issues with freenode and we need to address them ASAP. This email aims to socialize the plan that we will follow as the release team. The TC provided good guidelines to follow for the next few days/weeks, so we will apply them from A to Z [1]: 1. Do NOT change any channel topics to represent this change. This is likely to result in the channel being taken over by Freenode and will disrupt communications within our community. 2. Register your nicknames on OFTC [2][3] 3. Be *prepared* to join your channels on OFTC. The OpenStack community channels have already been registered on OFTC and await you. [3] 4. Continue to use Freenode for OpenStack discussions until the bots have been moved and the official cut-over takes place this coming weekend. We anticipate using OFTC starting Monday, May 31st. 5. Projects should work to get a few volunteers to staff their project channels on Freenode, for the near future to help redirect people to OFTC. This should occur via private messages to avoid a ban. 6. Continue to hold project meetings on Freenode until the bots are enabled on OFTC. 7. Update project wikis/documentation with the new IRC network information. We ask that you consider referring to the central contributor guide. 8. The TC is asking that projects take advantage of this time of change to consider moving project meetings from the #openstack-meeting* channels to their project channel. 9. Please avoid discussing the move to OFTC in Freenode channels as this may also trigger a takeover of the channel. So, to be clear, our next meeting will be on freenode (as usual) and it will remain there as long as our bots won't be transferred to OFTC. We should note that the TC explicitly asked us to avoid discussing the freenode/OFTC topic on our freenode channel, so don't be surprised if I don't speak about that during our next meeting. If you want to bring related feedback, then, please do it by replying to this ML thread. I'll transfer it to the TC through their etherpad [4]. If not yet done, please, start registering your nicknames on OFTC and be ready to join it ASAP. To finish, please carefully read the TC's original thread [1]. Thanks for your attention. 
[1] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html [2] https://etherpad.opendev.org/p/feedback-on-freenode [3] https://www.oftc.net/Services/#register-your-account [4] https://etherpad.opendev.org/p/openstack-irc -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruslanas at lpic.lt Thu May 27 10:15:52 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Thu, 27 May 2021 12:15:52 +0200 Subject: [ussuri][tripleo] Message-ID: Hi all, I am running deployment with TripleO. receiving error: Stack rsw/88b97419-3db1-4dac-b104-44544a044a59 UPDATE_FAILED rsw.Compute.0.Compute: resource_type: OS::TripleO::ComputeServer physical_resource_id: 613f907a-450a-48b5-995f-aa33f46d2092 status: CREATE_FAILED status_reason: | ResourceInError: resources.Compute: Went to status ERROR due to "Message: No valid host was found. , Code: 500" real 26m6.790s user 4m9.877s sys 0m32.829s meanwhile, while it is running, I see that it is booting up, loading pxe image successfully, later undercloud initiates communication on 9999 port to ironic agent listening on compute. after that it just fails deployment. I have checked containers/nova/nova-conductor.log and found interesting line, which I have preformated to see error: http://paste.openstack.org/show/idc7RcrUuMuNMkvoopxF/ which refers me to image building... Now I am building images with: http://paste.openstack.org/show/805786/ previously, I was facing this error with misconfigured elements of image, but now it should be set according to default files. does anyone have an idea, where I could try looking at to fix this issue? thank you -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... URL: From raubvogel at gmail.com Thu May 27 14:44:11 2021 From: raubvogel at gmail.com (Mauricio Tavares) Date: Thu, 27 May 2021 10:44:11 -0400 Subject: server/instance and ssh keys Message-ID: Is there a way to query openstack about which ssh pubkey was used with a given server? 
From laurentfdumont at gmail.com Thu May 27 17:14:37 2021 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Thu, 27 May 2021 13:14:37 -0400 Subject: [simplification] Making ask.openstack.org read-only In-Reply-To: <20210129083032.Horde.KW5YMuhDeFlHVc-jT2nByCR@webmail.nde.ag> References: <648c6ac3-0ab8-e442-ed9b-fbbfbbea16f7@gmail.com> <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> <20210128115413.Horde.6wkizpG5gTeTBCA3HAvDUOG@webmail.nde.ag> <2dd637ab-82ff-c481-c3aa-bbc202532c3d@openstack.org> <74bb166a-9490-9da4-fcb6-5dadb65bf1f6@openstack.org> <20210129083032.Horde.KW5YMuhDeFlHVc-jT2nByCR@webmail.nde.ag> Message-ID: I apologize for reviving this but I just wanted to figure out if there was something I missed. The site was previously up but into read-only which was good enough to get from Google --> ask.openstack. It seems that it was completely shutdown in May. Was this the expected path after the read-only setup? On Fri, Jan 29, 2021 at 3:40 AM Eugen Block wrote: > Yeah I just tried that, awesome! :-) Thank you very much for the effort! > > > Zitat von Radosław Piliszek : > > > On Thu, Jan 28, 2021 at 8:35 PM Radosław Piliszek > > wrote: > >> > >> On Thu, Jan 28, 2021 at 6:09 PM Thierry Carrez > >> wrote: > >> > The read-only message is set here: > >> > > >> > > >> > https://opendev.org/opendev/system-config/src/branch/master/modules/openstack_project/templates/askbot/settings.py.erb#L370 > >> > > >> > Thanks! > >> > >> Thanks, Thierry. Seems my searchitsu failed. > >> > >> I found one other issue so at least we will fix the rendering issue > >> (as the syntax error has only unknown consequences). > >> > >> See [1]. > >> > >> [1] https://review.opendev.org/c/opendev/system-config/+/772937 > > > > And hooray: it fixed the 'see more comments' and made the message > > display when trying to add new. > > > > -yoctozepto > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu May 27 17:22:04 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 27 May 2021 17:22:04 +0000 Subject: [simplification][infra] Making ask.openstack.org read-only In-Reply-To: References: <20210127133815.Horde.TIyNRHf_SoItCCL0gTsqCKe@webmail.nde.ag> <00c3cada-0219-966a-32c3-2bf49b93b872@openstack.org> <20210128115413.Horde.6wkizpG5gTeTBCA3HAvDUOG@webmail.nde.ag> <2dd637ab-82ff-c481-c3aa-bbc202532c3d@openstack.org> <74bb166a-9490-9da4-fcb6-5dadb65bf1f6@openstack.org> <20210129083032.Horde.KW5YMuhDeFlHVc-jT2nByCR@webmail.nde.ag> Message-ID: <20210527172204.ab64bq27zg4wxnnm@yuggoth.org> On 2021-05-27 13:14:37 -0400 (-0400), Laurent Dumont wrote: > I apologize for reviving this but I just wanted to figure out if there was > something I missed. > > The site was previously up but into read-only which was good enough to get > from Google --> ask.openstack. It seems that it was completely shutdown in > May. Was this the expected path after the read-only setup? [...] That was always communicated as a temporary measure. The operating system on which it was running reached end of life in April, so we no longer had security updates for it. The static site being served in its place now directs to alternative places to ask questions, and the Internet Archive for anyone looking for a copy of the old content: https://web.archive.org/web/20210506093006/https://ask.openstack.org/en/questions/ -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From melwittt at gmail.com Thu May 27 17:44:52 2021 From: melwittt at gmail.com (melanie witt) Date: Thu, 27 May 2021 10:44:52 -0700 Subject: server/instance and ssh keys In-Reply-To: References: Message-ID: <189e50dd-6756-e79e-cc64-0183dcaf77fd@gmail.com> On 5/27/21 07:44, Mauricio Tavares wrote: > Is there a way to query openstack about which ssh pubkey was used with > a given server? Yes, you show the details for the server (openstack server show) [1][2] to get the name of the keypair and then you show the details for the keypair (openstack keypair show) [3][4]. By default, you have to be admin to show the details of keypairs. HTH, -melanie [1] https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html#server-show [2] https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail#show-server-details [3] https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/keypair.html#keypair-show [4] https://docs.openstack.org/api-ref/compute/?expanded=show-keypair-details-detail#show-keypair-details From raubvogel at gmail.com Thu May 27 19:07:42 2021 From: raubvogel at gmail.com (Mauricio Tavares) Date: Thu, 27 May 2021 15:07:42 -0400 Subject: server/instance and ssh keys In-Reply-To: <189e50dd-6756-e79e-cc64-0183dcaf77fd@gmail.com> References: <189e50dd-6756-e79e-cc64-0183dcaf77fd@gmail.com> Message-ID: Funny you mentioned [1]; that is what I have been trying to use and could not find an option to get the name of the keypair. On Thu, May 27, 2021 at 1:44 PM melanie witt wrote: > > On 5/27/21 07:44, Mauricio Tavares wrote: > > Is there a way to query openstack about which ssh pubkey was used with > > a given server? > > Yes, you show the details for the server (openstack server show) [1][2] > to get the name of the keypair and then you show the details for the > keypair (openstack keypair show) [3][4]. By default, you have to be > admin to show the details of keypairs. > > HTH, > -melanie > > [1] > https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html#server-show > [2] > https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail#show-server-details > [3] > https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/keypair.html#keypair-show > [4] > https://docs.openstack.org/api-ref/compute/?expanded=show-keypair-details-detail#show-keypair-details > > From raubvogel at gmail.com Thu May 27 19:11:27 2021 From: raubvogel at gmail.com (Mauricio Tavares) Date: Thu, 27 May 2021 15:11:27 -0400 Subject: server/instance and ssh keys In-Reply-To: References: <189e50dd-6756-e79e-cc64-0183dcaf77fd@gmail.com> Message-ID: It is key_name! No recognized column names in ['keypair']. Recognized columns are ('OS-DCF:diskConfig', 'OS-EXT-AZ:availability_zone', 'OS-EXT-SRV-ATTR:host', 'OS-EXT-SRV-ATTR:hypervisor_hostname', 'OS-EXT-SRV-ATTR:instance_name', 'OS-EXT-STS:power_state', 'OS-EXT-STS:task_state', 'OS-EXT-STS:vm_state', 'OS-SRV-USG:launched_at', 'OS-SRV-USG:terminated_at', 'accessIPv4', 'accessIPv6', 'addresses', 'config_drive', 'created', 'flavor', 'hostId', 'id', 'image', 'key_name', 'name', 'progress', 'project_id', 'properties', 'security_groups', 'status', 'updated', 'user_id', 'volumes_attached'). 
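Putting the replies above together, the lookup ends up being roughly the following; the server and keypair names are placeholders:

    # show only the keypair name recorded for the server
    openstack server show myserver -c key_name

    # then display that keypair, optionally only its public key
    openstack keypair show --public-key mykeypair

This is just a sketch based on the commands and API references cited above; the exact columns and required privileges depend on the client and microversion in use.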
On Thu, May 27, 2021 at 3:07 PM Mauricio Tavares wrote: > > Funny you mentioned [1]; that is what I have been trying to use and > could not find an option to get the name of the keypair. > > On Thu, May 27, 2021 at 1:44 PM melanie witt wrote: > > > > On 5/27/21 07:44, Mauricio Tavares wrote: > > > Is there a way to query openstack about which ssh pubkey was used with > > > a given server? > > > > Yes, you show the details for the server (openstack server show) [1][2] > > to get the name of the keypair and then you show the details for the > > keypair (openstack keypair show) [3][4]. By default, you have to be > > admin to show the details of keypairs. > > > > HTH, > > -melanie > > > > [1] > > https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html#server-show > > [2] > > https://docs.openstack.org/api-ref/compute/?expanded=show-server-details-detail#show-server-details > > [3] > > https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/keypair.html#keypair-show > > [4] > > https://docs.openstack.org/api-ref/compute/?expanded=show-keypair-details-detail#show-keypair-details > > > > From levonmelikbekjan at yahoo.de Wed May 26 20:46:20 2021 From: levonmelikbekjan at yahoo.de (Levon Melikbekjan) Date: Wed, 26 May 2021 22:46:20 +0200 Subject: Customization of nova-scheduler References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> Message-ID: <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> Hello Openstack team, is it possible to customize the nova-scheduler via Python? If yes, how? Best regards Levon From Arkady.Kanevsky at dell.com Thu May 27 21:13:35 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 27 May 2021 21:13:35 +0000 Subject: [interop] Interop Testing Guidelines In-Reply-To: References: Message-ID: We are reorganizing where guidelines are see https://review.opendev.org/c/osf/interop/+/792883. And also we will update https://wiki.openstack.org/wiki/Governance/InteropWG so it points to current guideline and directory for all previous ones. From: Jimmy McArthur Sent: Tuesday, May 25, 2021 11:22 AM To: Martin Kopec Cc: openstack-discuss Subject: Re: [interop] Interop Testing Guidelines [EXTERNAL EMAIL] Hey Martin, Apologies my message was confusing. I'm aware of the latest guidelines, but I was referring to the Procedures around the Guidelines that haven't been updated: https://opendev.org/osf/interop/src/branch/master/2016.08/procedure.rst [opendev.org] My understanding is the instructions differ there from the latest tests and it's where we point people from openstack.org/interop. Is there a different place we should be pointing? Thank you, Jimmy On May 25 2021, at 8:56 am, Martin Kopec > wrote: Hi Jimmy, there are newer guidelines, see: * the latest one: https://opendev.org/osf/interop/src/branch/master/2020.11.json [opendev.org] * the one before: https://opendev.org/osf/interop/src/branch/master/2020.06.json [opendev.org] and etc ... They aren't in a directory as they were until 2016. I can't tell you why the format change happened, I wasn't around at that time. Anyway, new guidelines are still created approx twice a year. Add-ons guidelines have been recently added as well, which refstack server presents too (see OpenStack Marketing Programs list): https://refstack.openstack.org/#/ [refstack.openstack.org] Currently there is an ongoing effort to make sure that we track all the relevant tests. 
We have also reached out to the teams during the Xena PTG and were asking if there are any new tests worth being included in the next guideline. On Mon, 24 May 2021 at 19:01, Jimmy McArthur > wrote: Hi all - I noticed that the most recent testing guidelines [1] are from 2016. Is there a plan to update those to the 2020 guidelines? They still mention Chris Hoge, who is no longer in the community, and I'm assuming the guidelines there are outdated as well. Cheers, Jimmy https://opendev.org/osf/interop/src/branch/master/2016.08/procedure.rst [opendev.org] -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Thu May 27 21:17:48 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Thu, 27 May 2021 21:17:48 +0000 Subject: [interop] Interop Testing Guidelines In-Reply-To: References: Message-ID: Jimmy, Maybe you are looking at old location. We had moved repo to opendev with all others. Thanks, Arkady From: Kanevsky, Arkady Sent: Thursday, May 27, 2021 4:14 PM To: Jimmy McArthur; Martin Kopec Cc: openstack-discuss Subject: RE: [interop] Interop Testing Guidelines We are reorganizing where guidelines are see https://review.opendev.org/c/osf/interop/+/792883. And also we will update https://wiki.openstack.org/wiki/Governance/InteropWG so it points to current guideline and directory for all previous ones. From: Jimmy McArthur > Sent: Tuesday, May 25, 2021 11:22 AM To: Martin Kopec Cc: openstack-discuss Subject: Re: [interop] Interop Testing Guidelines [EXTERNAL EMAIL] Hey Martin, Apologies my message was confusing. I'm aware of the latest guidelines, but I was referring to the Procedures around the Guidelines that haven't been updated: https://opendev.org/osf/interop/src/branch/master/2016.08/procedure.rst [opendev.org] My understanding is the instructions differ there from the latest tests and it's where we point people from openstack.org/interop. Is there a different place we should be pointing? Thank you, Jimmy On May 25 2021, at 8:56 am, Martin Kopec > wrote: Hi Jimmy, there are newer guidelines, see: * the latest one: https://opendev.org/osf/interop/src/branch/master/2020.11.json [opendev.org] * the one before: https://opendev.org/osf/interop/src/branch/master/2020.06.json [opendev.org] and etc ... They aren't in a directory as they were until 2016. I can't tell you why the format change happened, I wasn't around at that time. Anyway, new guidelines are still created approx twice a year. Add-ons guidelines have been recently added as well, which refstack server presents too (see OpenStack Marketing Programs list): https://refstack.openstack.org/#/ [refstack.openstack.org] Currently there is an ongoing effort to make sure that we track all the relevant tests. We have also reached out to the teams during the Xena PTG and were asking if there are any new tests worth being included in the next guideline. On Mon, 24 May 2021 at 19:01, Jimmy McArthur > wrote: Hi all - I noticed that the most recent testing guidelines [1] are from 2016. Is there a plan to update those to the 2020 guidelines? They still mention Chris Hoge, who is no longer in the community, and I'm assuming the guidelines there are outdated as well. Cheers, Jimmy https://opendev.org/osf/interop/src/branch/master/2016.08/procedure.rst [opendev.org] -- Martin Kopec Senior Software Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Thu May 27 21:27:42 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 27 May 2021 21:27:42 +0000 Subject: [interop] Interop Testing Guidelines In-Reply-To: References: Message-ID: <20210527212741.cbb6nyj5j2gumwqm@yuggoth.org> On 2021-05-27 21:17:48 +0000 (+0000), Kanevsky, Arkady wrote: > Maybe you are looking at old location. We had moved repo to > opendev with all others. [...] That's where he linked to in his message, if you read it carefully. What he's pointing out, and I can easily confirm as well, is that there's no procedure.rst file for later guideline versions. Perhaps that's intentional? Is the old procedure.rst in the 2016.08 directory meant to apply to later versions of the guidelines as well? If so, that's not obvious. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Ken.Germann at L3Harris.com Thu May 27 22:04:48 2021 From: Ken.Germann at L3Harris.com (Ken.Germann at L3Harris.com) Date: Thu, 27 May 2021 22:04:48 +0000 Subject: Pack Stack Install Issue on CentOS 7. Message-ID: I removed mariadb-server, mariadb-client, mariadb-compat and mariadb-common. I run packstack -answer-file=answer.conf I get this error message: ERROR : Error appeared during Puppet run: 192.245.72.101_controller.pp Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install mariadb-server-galera' returned 1: Package 3:mariadb-server-10.3.20-3.el7.0.0.rdo1.x86_64 is obsoleted by MariaDB-server-10.5.10-1.el7.centos.x86_64 which is already installed You will find full trace in log /var/tmp/packstack/20210527-170042-RiVtzU/manifests/192.245.72.101_controller.pp.log Any ideas how I work around this issue? Thank You, Ken Germann UNAP IRAD System Administrator Lead Aviation Systems / l3HARRIS Technologies Office: +1-954-732-0391 / Mobile: +1-954-732-0391 l3harris.com / kgermann at l3harris.com MS F-11A / 1025 W. Nasa Dr. / Melbourne, FL 32919 / USA [rsz_1harriswebpage_rotator] CONFIDENTIALITY NOTICE: This email and any attachments are for the sole use of the intended recipient and may contain material that is proprietary, confidential, privileged or otherwise legally protected or restricted under applicable government laws. Any review, disclosure, distributing or other use without expressed permission of the sender is strictly prohibited. If you are not the intended recipient, please contact the sender and delete all copies without reading, printing, or saving. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 4772 bytes Desc: image001.jpg URL: From mthode at mthode.org Fri May 28 02:01:28 2021 From: mthode at mthode.org (Matthew Thode) Date: Thu, 27 May 2021 21:01:28 -0500 Subject: [requirements][docs] sphinx and docutils major version update Message-ID: <20210528020128.earj2i2v5nxjnlu3@mthode.org> Looks like a major version update came along and broke things. I'd appreciate if some docs people could take a look at https://review.opendev.org/793022 Thanks, -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mthode at mthode.org Fri May 28 02:05:19 2021 From: mthode at mthode.org (Matthew Thode) Date: Thu, 27 May 2021 21:05:19 -0500 Subject: [requirements][keystone] Werkzeug major update has gate implications Message-ID: <20210528020519.glnrkrkpgdl4lzr7@mthode.org> Werkzeug had a major update that seems to break keystone gating. I'd appreciate it if someone from the keystone team could take a look at https://review.opendev.org/793022 Thanks, -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mthode at mthode.org Fri May 28 02:08:32 2021 From: mthode at mthode.org (Matthew Thode) Date: Thu, 27 May 2021 21:08:32 -0500 Subject: [requirements][keystone] Werkzeug and Flask major update has gate implications In-Reply-To: <20210528020519.glnrkrkpgdl4lzr7@mthode.org> References: <20210528020519.glnrkrkpgdl4lzr7@mthode.org> Message-ID: <20210528020832.tj6j6p33hop66mxc@mthode.org> On 21-05-27 21:05:19, Matthew Thode wrote: > Werkzeug had a major update that seems to break keystone gating. I'd > appreciate it if someone from the keystone team could take a look at > https://review.opendev.org/793022 > Looks like the flask update needs werkzeug, so I merged flask into the above linked review. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mthode at mthode.org Fri May 28 02:11:34 2021 From: mthode at mthode.org (Matthew Thode) Date: Thu, 27 May 2021 21:11:34 -0500 Subject: [requirements][neutron][os-vif][octavia] pyroute2 and eventlet update breaking gate Message-ID: <20210528021134.irn2pudyw2bb2t5x@mthode.org> looks like more updates to bother you about. pyroute2 review is at https://review.opendev.org/793020 eventlet review is at https://review.opendev.org/793021 It looks like this is mainly impacting neutron related projects -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mthode at mthode.org Fri May 28 02:12:50 2021 From: mthode at mthode.org (Matthew Thode) Date: Thu, 27 May 2021 21:12:50 -0500 Subject: [requirements][keystone] sqlalchemy-1.4 update seems to impact keystone gate Message-ID: <20210528021250.xcxjtcmcuavz77ol@mthode.org> If someone from the keystone team can take a look that'd be helpful, thanks :D review is at https://review.opendev.org/788339 -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mark.kirkwood at catalyst.net.nz Fri May 28 04:58:10 2021 From: mark.kirkwood at catalyst.net.nz (Mark Kirkwood) Date: Fri, 28 May 2021 16:58:10 +1200 Subject: [Swift] Object replication failures on newly upgraded servers Message-ID: HI, I'm in the process of upgrading a Swift cluster from 2.7/Mitaka to 2.23/Train. 
While in general it seems to be going well, I'm noticing non-zero object replication failures on the upgraded nodes only, e.g: $ curl http://localhost:6000/recon/replication/object {"replication_last": 1622156911.019487, "replication_stats": {"rsync": 40580, "success": 4141229, "attempted": 2081856, "remove": 4083, "suffix_count": 14960481, "failure": 26550, "hashmatch": 4127197, "failure_nodes": {"10.11.18.67": {"obj08": 2348, "obj09": 60, "obj10": 3030, "obj02": 34, "obj03": 25, "obj01": 44, "obj06": 1498, "obj07": 28, "obj04": 69, "obj05": 36}, "10.11.18.68": {"obj03": 6901, "obj01": 293, "obj06": 1901, "obj04": 10281, "obj10": 1}, "10.12.18.76": {"obj10": 1}}, "suffix_sync": 1785, "suffix_hash": 2778}, "object_replication_last": 1622156911.019487, "replication_time": 1094.7836411476135, "object_replication_time": 1094.7836411476135} Examining the logs (/var/log/swift/object.log and /var/log/syslog) these are not throwing up any red flags (i.e no failing rsyncs noted). Any suggesting about how to get more information about what went wrong e.g: "10.11.18.67": {"obj08": 2348}, how to find what those 2348 errors were? regards Mark P.s: basic sanity checking is ok - uploaded objects go where they should and can be retrieved for 2.7 or 2.23 servers ok (the old and new version servers agree about object placement) From ruslanas at lpic.lt Fri May 28 07:00:24 2021 From: ruslanas at lpic.lt (=?UTF-8?Q?Ruslanas_G=C5=BEibovskis?=) Date: Fri, 28 May 2021 09:00:24 +0200 Subject: [ussuri][tripleo]Installing GRUB2 boot loader to device /dev/sda failed with Unexpected error while running command. In-Reply-To: References: Message-ID: Hi all, updating subject On Thu, 27 May 2021 at 12:15, Ruslanas Gžibovskis wrote: > Hi all, > > I am running deployment with TripleO. > receiving error: > > Stack rsw/88b97419-3db1-4dac-b104-44544a044a59 UPDATE_FAILED > > rsw.Compute.0.Compute: > resource_type: OS::TripleO::ComputeServer > physical_resource_id: 613f907a-450a-48b5-995f-aa33f46d2092 > status: CREATE_FAILED > status_reason: | > ResourceInError: resources.Compute: Went to status ERROR due to > "Message: No valid host was found. , Code: 500" > > real 26m6.790s > user 4m9.877s > sys 0m32.829s > > > meanwhile, while it is running, I see that it is booting up, loading pxe > image successfully, later undercloud initiates communication on 9999 port > to ironic agent listening on compute. > > after that it just fails deployment. > > I have checked containers/nova/nova-conductor.log and found interesting > line, which I have preformated to see error: > http://paste.openstack.org/show/idc7RcrUuMuNMkvoopxF/ > > which refers me to image building... > > Now I am building images with: http://paste.openstack.org/show/805786/ > > previously, I was facing this error with misconfigured elements of image, > but now it should be set according to default files. > > does anyone have an idea, where I could try looking at to fix this issue? > > thank you > > > -- > Ruslanas Gžibovskis > +370 6030 7030 > -- Ruslanas Gžibovskis +370 6030 7030 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From radoslaw.piliszek at gmail.com Fri May 28 07:56:26 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 28 May 2021 09:56:26 +0200 Subject: [requirements][keystone] sqlalchemy-1.4 update seems to impact keystone gate In-Reply-To: <20210528021250.xcxjtcmcuavz77ol@mthode.org> References: <20210528021250.xcxjtcmcuavz77ol@mthode.org> Message-ID: On Fri, May 28, 2021 at 4:13 AM Matthew Thode wrote: > > If someone from the keystone team can take a look that'd be helpful, > thanks :D > > review is at https://review.opendev.org/788339 Just a quick tip - it seems to be due to a newer patch version of sqlalchemy, some previous one was passing. -yoctozepto From mkopec at redhat.com Fri May 28 08:05:17 2021 From: mkopec at redhat.com (Martin Kopec) Date: Fri, 28 May 2021 10:05:17 +0200 Subject: [interop] Interop Testing Guidelines In-Reply-To: <20210527212741.cbb6nyj5j2gumwqm@yuggoth.org> References: <20210527212741.cbb6nyj5j2gumwqm@yuggoth.org> Message-ID: After reading the 2016.08/procedure.rst file, some of the info there is true and some of it is outdated. I don't know why later guidelines don't contain the procedure.rst, I wasn't around when that decision was made. I would propose to update the linked procedure.rst file and move it outside of the specific guideline directory. The procedure doesn't usually change with newer guidelines, so it's ok to have just one. I'll make a note in the agenda so that this can be brought up during the next team meeting: https://etherpad.opendev.org/p/interop On Thu, 27 May 2021 at 23:32, Jeremy Stanley wrote: > On 2021-05-27 21:17:48 +0000 (+0000), Kanevsky, Arkady wrote: > > Maybe you are looking at old location. We had moved repo to > > opendev with all others. > [...] > > That's where he linked to in his message, if you read it carefully. > What he's pointing out, and I can easily confirm as well, is that > there's no procedure.rst file for later guideline versions. Perhaps > that's intentional? Is the old procedure.rst in the 2016.08 > directory meant to apply to later versions of the guidelines as > well? If so, that's not obvious. > -- > Jeremy Stanley > -- Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Fri May 28 08:58:18 2021 From: arnaud.morin at gmail.com (Arnaud) Date: Fri, 28 May 2021 10:58:18 +0200 Subject: [ops] ops meetups team resuming meetings **NEW LOCATION** In-Reply-To: References: Message-ID: Hello, Is it next Tuesday 2021-06-01? I'd like to join. Thanks Le 26 mai 2021 20:51:46 GMT+02:00, Chris Morgan a écrit : >We'll meet briefly next Tuesday 2021-6-6 on irc.oftc.net at 10am EST in >#openstack-operators > >Agenda https://etherpad.opendev.org/p/ops-meetups-team > >Chris >-- >Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Fri May 28 12:32:27 2021 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 28 May 2021 07:32:27 -0500 Subject: [interop] Interop Testing Guidelines In-Reply-To: References: Message-ID: <8217958E-D9D3-47CE-A265-0E3E157F121D@getmailspring.com> Thanks Martin! On May 28 2021, at 3:05 am, Martin Kopec wrote: > After reading the 2016.08/procedure.rst file, some of the info there is true and some of it is outdated. > I don't know why later guidelines don't contain the procedure.rst, I wasn't around when that decision > was made. 
> > I would propose to update the linked procedure.rst file and move it outside of the specific guideline > directory. The procedure doesn't usually change with newer guidelines, so it's ok to have just one. > I'll make a note in the agenda so that this can be brought up during the next team meeting: > https://etherpad.opendev.org/p/interop > > > On Thu, 27 May 2021 at 23:32, Jeremy Stanley wrote: > > On 2021-05-27 21:17:48 +0000 (+0000), Kanevsky, Arkady wrote: > > > Maybe you are looking at old location. We had moved repo to > > > opendev with all others. > > [...] > > > > That's where he linked to in his message, if you read it carefully. > > What he's pointing out, and I can easily confirm as well, is that > > there's no procedure.rst file for later guideline versions. Perhaps > > that's intentional? Is the old procedure.rst in the 2016.08 > > directory meant to apply to later versions of the guidelines as > > well? If so, that's not obvious. > > -- > > Jeremy Stanley > > > > > -- > Martin > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Fri May 28 15:32:29 2021 From: mihalis68 at gmail.com (Chris Morgan) Date: Fri, 28 May 2021 11:32:29 -0400 Subject: [ops] ops meetups team resuming meetings **NEW LOCATION** In-Reply-To: References: Message-ID: Sorry, yes, June 1st, not 6th, in other words this coming Tuesday immediately (after Memorial Day if you're in the USA). See you on irc.oftc.net in #openstack-operators then! Chris On Fri, May 28, 2021 at 4:58 AM Arnaud wrote: > Hello, > Is it next Tuesday 2021-06-01? > I'd like to join. > Thanks > > Le 26 mai 2021 20:51:46 GMT+02:00, Chris Morgan a > écrit : >> >> We'll meet briefly next Tuesday 2021-6-6 on irc.oftc.net at 10am EST in >> #openstack-operators >> >> Agenda https://etherpad.opendev.org/p/ops-meetups-team >> >> Chris >> -- >> Chris Morgan >> > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri May 28 17:00:26 2021 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 28 May 2021 17:00:26 +0000 Subject: [interop] Interop Testing Guidelines In-Reply-To: References: <20210527212741.cbb6nyj5j2gumwqm@yuggoth.org> Message-ID: As far .rst guidelines these are human-readable format of .json form. Json is the primary one. The tooling for conversion from json to rst is lacking behind. There is tooling that was developed many years ago and it was for schema 1.0. For last several years and guidelines were using schema 2.0 and we are looking for extending jsantorst tool to schema 2.0 and support for add-on guidelines. From: Martin Kopec Sent: Friday, May 28, 2021 3:05 AM To: openstack-discuss Subject: Re: [interop] Interop Testing Guidelines [EXTERNAL EMAIL] After reading the 2016.08/procedure.rst file, some of the info there is true and some of it is outdated. I don't know why later guidelines don't contain the procedure.rst, I wasn't around when that decision was made. I would propose to update the linked procedure.rst file and move it outside of the specific guideline directory. The procedure doesn't usually change with newer guidelines, so it's ok to have just one. I'll make a note in the agenda so that this can be brought up during the next team meeting: https://etherpad.opendev.org/p/interop [etherpad.opendev.org] On Thu, 27 May 2021 at 23:32, Jeremy Stanley > wrote: On 2021-05-27 21:17:48 +0000 (+0000), Kanevsky, Arkady wrote: > Maybe you are looking at old location. 
We had moved repo to > opendev with all others. [...] That's where he linked to in his message, if you read it carefully. What he's pointing out, and I can easily confirm as well, is that there's no procedure.rst file for later guideline versions. Perhaps that's intentional? Is the old procedure.rst in the 2016.08 directory meant to apply to later versions of the guidelines as well? If so, that's not obvious. -- Jeremy Stanley -- Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri May 28 18:55:25 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 28 May 2021 20:55:25 +0200 Subject: [requirements][neutron][os-vif][octavia] pyroute2 and eventlet update breaking gate In-Reply-To: <20210528021134.irn2pudyw2bb2t5x@mthode.org> References: <20210528021134.irn2pudyw2bb2t5x@mthode.org> Message-ID: <3404241.TDloQBfkoR@p1> Hi, Dnia piątek, 28 maja 2021 04:11:34 CEST Matthew Thode pisze: > looks like more updates to bother you about. > > pyroute2 review is at https://review.opendev.org/793020 > eventlet review is at https://review.opendev.org/793021 > > It looks like this is mainly impacting neutron related projects > > -- > Matthew Thode Thx. I will check that on Monday morning. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From gmann at ghanshyammann.com Sat May 29 03:40:29 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 28 May 2021 22:40:29 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 28th May, 21: Reading: 5 min Message-ID: <179b6357e86.e0e99f39157432.7794260866579435744@ghanshyammann.com> Hello Everyone, Here is last week's summary of the Technical Committee activities. 1. What we completed this week: ========================= * Community Infra ELK services help has been added in 2021 Upstream Investment Opportunity[1]. * It is decided to move the IRC network from Freenode to OFTC[2]. 2. TC Meetings: ============ * TC held this week meeting on Thursday; you can find the full meeting logs in the below link: - http://eavesdrop.openstack.org/meetings/tc/2021/tc.2021-05-27-15.00.log.html * We will have next week's meeting on June 3rd, Thursday 15:00 UTC[3]. 3. Activities In progress: ================== TC Tracker for Xena cycle ------------------------------ TC is using the etherpad[4] for Xena cycle working item. We will be checking and updating the status biweekly in the same etherpad. Open Reviews ----------------- * Three open reviews for ongoing activities[5]. Nomination is open for the 'Y' release naming ------------------------------------------------------ * Y release naming process is started[6]. Nomination is open until June 10th feel free to propose names in below wiki ** https://wiki.openstack.org/wiki/Release_Naming/Y_Proposals Replacing ATC terminology with AC (Active Contributors) ------------------------------------------------------------------- * As UC is merged into TC, this is to include the AUC into ATC so that they can be eligible for TC election voting. We are having a good amount of discussion on Gerrit[7], feel free to review the patch if you have any points regarding this. * In the last TC meeting, we will write a TC resolution to map the ATC with the new term AC from Bylaws' perspective. 
Retiring sushy-cli -------------------- * Ironic project is retiring the sushy-cli[8] MIgration from Freenode to OFTC ----------------------------------------- * TC held an emergency meeting on Wednesday, May 26th, 2021 and decided to move the IRC network from Freenode to OFTC[9]. You can find the detailed discussion in etherpad[10]. * We are also adding it as TC resolution[11]. * Next step will be communicated on ML thread, please join the channel on OFTC network. * We updated the minimum guide for how to join in OFTC in the contributor guide[12] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[13]. 2. Weekly meeting: The Technical Committee conduct a weekly meeting every Thursday 15 UTC [14] 3. Office hours: The Technical Committee offers a weekly office hour every Tuesday at 0100 UTC [15] 4. Ping us using 'tc-members' nickname on #openstack-tc IRC channel. [1] https://governance.openstack.org/tc/reference/upstream-investment-opportunities/2021/community-infrastructure-elk-maintainer.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html [3] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [4] https://etherpad.opendev.org/p/tc-xena-tracker [5] https://review.opendev.org/q/project:openstack/governance+status:open [6] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022383.html [7] https://review.opendev.org/c/openstack/governance/+/790092 [8] https://review.opendev.org/c/openstack/governance/+/792348 [9] http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022718.html [10] https://etherpad.opendev.org/p/openstack-irc [11] https://review.opendev.org/c/openstack/governance/+/793260 [12] https://docs.openstack.org/contributors/common/irc.html [13] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [14] http://eavesdrop.openstack.org/#Technical_Committee_Meeting [15] http://eavesdrop.openstack.org/#Technical_Committee_Office_hours -gmann From berndbausch at gmail.com Sat May 29 10:23:02 2021 From: berndbausch at gmail.com (Bernd Bausch) Date: Sat, 29 May 2021 19:23:02 +0900 Subject: [kolla] [kolla-ansible] fluentd doesn't forward OpenStack logs to Elasticsearch Message-ID: I might have found a bug in Kolla-Ansible (Victoria version) but don't know where to file it. This is about central logging. In my installation, none of the interesting logs (Nova, Cinder, Neutron...) are sent to Elasticsearch. I confirmed that using tcpdump. I found that fluentd's config file /etc/kolla/fluentd/td-agent.conf tags these logs with "kolla.*". But later in the file, one finds filters like this: # Included from conf/filter/01-rewrite-0.14.conf.j2:     @type rewrite_tag_filter     capitalize_regex_backreference yes ...       key     programname     pattern ^(nova-api|nova-compute|nova-compute-ironic|nova-conductor|nova-manage|nova-novncproxy|nova-scheduler|nova-placement-api|placement-api|privsep-helper)$     tag openstack_python   If I understand this right, this basically re-tags all nova logs with "openstack_python". The same config file has an output rule at the very end. I think the intention is to make this a catch-all rule (or "match anything else"): # Included from conf/output/01-es.conf.j2:     @type copy            @type elasticsearch        host 192.168.122.209        port 9200        scheme http etc. 
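(The angle-bracket directives in the two excerpts above were stripped when the HTML mail was converted to plain text, so the filter and output sections appear without their enclosing tags. For orientation only, a td-agent copy-to-Elasticsearch output normally has roughly this shape -- host, port and scheme are the values quoted above, everything else is a generic sketch rather than the exact kolla-ansible template:)

   <match *.**>
     @type copy
     <store>
       @type elasticsearch
       host 192.168.122.209
       port 9200
       scheme http
     </store>
   </match>

(The fix mentioned just below also lost its markup in conversion; from the context it is presumably a widened match pattern, e.g. ** instead of *.**, or openstack_python added to the pattern.)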
Unfortunately, the /openstack_python/ tag doesn't match /*.**/, since it contains no dot.  I fixed this with . Now I receive all logs, but I am not sure if this is the right way to fix it. The error, if it is one, is in https://opendev.org/openstack/kolla-ansible/src/branch/master/ansible/roles/common/templates/conf/output/01-es.conf.j2. If you want me to file a bug, please let me know how. Bernd. -------------- next part -------------- An HTML attachment was scrubbed... URL: From berndbausch at gmail.com Sun May 30 02:16:33 2021 From: berndbausch at gmail.com (Bernd Bausch) Date: Sun, 30 May 2021 11:16:33 +0900 Subject: Pack Stack Install Issue on CentOS 7. In-Reply-To: References: Message-ID: Even if you solve the MariaDB problem, you are likely to run into other incompatibilities. In the worst case, you will only notice it when things are not working in the deployed cloud. Solution: Install Packstack on a freshly installed operating system. On 2021/05/28 7:04 AM, Ken.Germann at L3Harris.com wrote: > > I removed mariadb-server, mariadb-client, mariadb-compat and > mariadb-common. > > I run packstack –answer-file=answer.conf > > I get this error message: > > ERROR : Error appeared during Puppet run: 192.245.72.101_controller.pp > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > mariadb-server-galera' returned 1: Package > 3:mariadb-server-10.3.20-3.el7.0.0.rdo1.x86_64 is obsoleted by > MariaDB-server-10.5.10-1.el7.centos.x86_64 which is already installed > > You will find full trace in log > /var/tmp/packstack/20210527-170042-RiVtzU/manifests/192.245.72.101_controller.pp.log > > Any ideas how I work around this issue? > > *Thank You, * > > ** > > *Ken Germann*** > > *UNAP IRAD System Administrator Lead* > > *Aviation Systems / l3HARRIS Technologies* > > Office: +1-954-732-0391 / Mobile: +1-954-732-0391 > > l3harris.com / kgermann at l3harris.com > > > MS F-11A / 1025 W. Nasa Dr.  / Melbourne, FL 32919 / USA > > rsz_1harriswebpage_rotator > > > > CONFIDENTIALITY NOTICE: This email and any attachments are for the > sole use of the intended recipient and may contain material that is > proprietary, confidential, privileged or otherwise legally protected > or restricted under applicable government laws. Any review, > disclosure, distributing or other use without expressed permission of > the sender is strictly prohibited. If you are not the intended > recipient, please contact the sender and delete all copies without > reading, printing, or saving. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 4772 bytes Desc: not available URL: From flux.adam at gmail.com Sun May 30 05:56:29 2021 From: flux.adam at gmail.com (Adam Harwell) Date: Sun, 30 May 2021 14:56:29 +0900 Subject: [requirements][neutron][os-vif][octavia] pyroute2 and eventlet update breaking gate In-Reply-To: <3404241.TDloQBfkoR@p1> References: <20210528021134.irn2pudyw2bb2t5x@mthode.org> <3404241.TDloQBfkoR@p1> Message-ID: Octavia is entirely free of eventlet now, but pyroute2 does tend to break us rather frequently. 😭 We'll take a look soon. Thanks for the heads up! On Sat, May 29, 2021, 04:04 Slawek Kaplonski wrote: > Hi, > > Dnia piątek, 28 maja 2021 04:11:34 CEST Matthew Thode pisze: > > looks like more updates to bother you about. 
> > > > pyroute2 review is at https://review.opendev.org/793020 > > eventlet review is at https://review.opendev.org/793021 > > > > It looks like this is mainly impacting neutron related projects > > > > -- > > Matthew Thode > > Thx. I will check that on Monday morning. > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Mon May 31 05:00:44 2021 From: ykarel at redhat.com (Yatin Karel) Date: Mon, 31 May 2021 10:30:44 +0530 Subject: Pack Stack Install Issue on CentOS 7. In-Reply-To: References: Message-ID: Hi Ken, On Fri, May 28, 2021 at 3:37 AM wrote: > I removed mariadb-server, mariadb-client, mariadb-compat and > mariadb-common. > > > > I run packstack –answer-file=answer.conf > > > > I get this error message: > > ERROR : Error appeared during Puppet run: 192.245.72.101_controller.pp > > Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install > mariadb-server-galera' returned 1: Package > 3:mariadb-server-10.3.20-3.el7.0.0.rdo1.x86_64 is obsoleted by > MariaDB-server-10.5.10-1.el7.centos.x86_64 which is already installed > > You will find full trace in log > /var/tmp/packstack/20210527-170042-RiVtzU/manifests/192.245.72.101_controller.pp.log > > > > Any ideas how I work around this issue? > > > Looks like you have some extra repos configured which are conflicting with RDO repos. Can you share the output of "yum repolist" and also "yum info MariaDB-server" to get to the conflicting repo and clear the stuck state. > *Thank You, * > > > > *Ken Germann* > > *UNAP IRAD System Administrator Lead* > > *Aviation Systems / l3HARRIS Technologies* > > Office: +1-954-732-0391 / Mobile: +1-954-732-0391 > > l3harris.com / kgermann at l3harris.com > > MS F-11A / 1025 W. Nasa Dr. / Melbourne, FL 32919 / USA > > [image: rsz_1harriswebpage_rotator] > > > > > CONFIDENTIALITY NOTICE: This email and any attachments are for the sole > use of the intended recipient and may contain material that is proprietary, > confidential, privileged or otherwise legally protected or restricted under > applicable government laws. Any review, disclosure, distributing or other > use without expressed permission of the sender is strictly prohibited. If > you are not the intended recipient, please contact the sender and delete > all copies without reading, printing, or saving. > Thanks and Regards Yatin Karel -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 4772 bytes Desc: not available URL: From hberaud at redhat.com Mon May 31 06:19:08 2021 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 31 May 2021 08:19:08 +0200 Subject: [release] Release countdown for week R-18, May 31 - Jun 04 Message-ID: Development Focus ----------------- We are now past the Xena-1 milestone. Teams should now be focused on feature development! General Information ------------------- Our next milestone in this development cycle will be Xena-2, on 15 July, 2021. This milestone is when we freeze the list of deliverables that will be included in the Xena final release, so if you plan to introduce new deliverables in this release, please propose a change to add an empty deliverable file in the deliverables/xena directory of the openstack/releases repository. 
Now is also generally a good time to look at bugfixes that were introduced in the master branch that might make sense to be backported and released in a stable release. If you have any questions around the OpenStack release process, feel free to ask on this mailing-list or on the #openstack-release channel on IRC (at OFTC). Upcoming Deadlines & Dates -------------------------- Xena-2 Milestone: 15 July, 2021 -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Mon May 31 07:50:15 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 31 May 2021 09:50:15 +0200 Subject: [requirements][neutron][os-vif][octavia] pyroute2 and eventlet update breaking gate In-Reply-To: References: <20210528021134.irn2pudyw2bb2t5x@mthode.org> <3404241.TDloQBfkoR@p1> Message-ID: Hello: pyroute2 problem [1] should be solved in 0.6.2: https://github.com/svinota/pyroute2/issues/798 For eventlet and Neutron [2], we'll open a bug for os-ken. We'll update the requirements patch once it is solved. Regards. [1]http://paste.openstack.org/show/805853/ [2]http://paste.openstack.org/show/805854/ On Sun, May 30, 2021 at 8:02 AM Adam Harwell wrote: > Octavia is entirely free of eventlet now, but pyroute2 does tend to break > us rather frequently. 😭 > We'll take a look soon. Thanks for the heads up! > > On Sat, May 29, 2021, 04:04 Slawek Kaplonski wrote: > >> Hi, >> >> Dnia piątek, 28 maja 2021 04:11:34 CEST Matthew Thode pisze: >> > looks like more updates to bother you about. >> > >> > pyroute2 review is at https://review.opendev.org/793020 >> > eventlet review is at https://review.opendev.org/793021 >> > >> > It looks like this is mainly impacting neutron related projects >> > >> > -- >> > Matthew Thode >> >> Thx. I will check that on Monday morning. >> >> -- >> Slawek Kaplonski >> Principal Software Engineer >> Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ralonsoh at redhat.com Mon May 31 07:58:23 2021 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 31 May 2021 09:58:23 +0200 Subject: [requirements][neutron][os-vif][octavia] pyroute2 and eventlet update breaking gate In-Reply-To: References: <20210528021134.irn2pudyw2bb2t5x@mthode.org> <3404241.TDloQBfkoR@p1> Message-ID: evenlet problem with os-ken should be fixed with https://review.opendev.org/c/openstack/releases/+/793732 On Mon, May 31, 2021 at 9:50 AM Rodolfo Alonso Hernandez < ralonsoh at redhat.com> wrote: > Hello: > > pyroute2 problem [1] should be solved in 0.6.2: > https://github.com/svinota/pyroute2/issues/798 > > For eventlet and Neutron [2], we'll open a bug for os-ken. We'll update > the requirements patch once it is solved. > > Regards. > > [1]http://paste.openstack.org/show/805853/ > [2]http://paste.openstack.org/show/805854/ > > On Sun, May 30, 2021 at 8:02 AM Adam Harwell wrote: > >> Octavia is entirely free of eventlet now, but pyroute2 does tend to break >> us rather frequently. 😭 >> We'll take a look soon. Thanks for the heads up! >> >> On Sat, May 29, 2021, 04:04 Slawek Kaplonski wrote: >> >>> Hi, >>> >>> Dnia piątek, 28 maja 2021 04:11:34 CEST Matthew Thode pisze: >>> > looks like more updates to bother you about. >>> > >>> > pyroute2 review is at https://review.opendev.org/793020 >>> > eventlet review is at https://review.opendev.org/793021 >>> > >>> > It looks like this is mainly impacting neutron related projects >>> > >>> > -- >>> > Matthew Thode >>> >>> Thx. I will check that on Monday morning. >>> >>> -- >>> Slawek Kaplonski >>> Principal Software Engineer >>> Red Hat >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon May 31 09:53:39 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 31 May 2021 11:53:39 +0200 Subject: [masakari] Tomorrow meeting will be on OFTC Message-ID: Dears, Please be informed that we, as OpenStack community, have migrated our IRC presence to OFTC. Hence, the Masakari meetings, starting with the tomorrow one, will be held on #openstack-masakari on OFTC. Please find OFTC connection info on http://www.oftc.net/ Kind regards, -yoctozepto From thierry at openstack.org Mon May 31 10:32:27 2021 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 31 May 2021 12:32:27 +0200 Subject: [largescale-sig] Next meeting: June 2, 15utc Message-ID: <33a1e2d5-88fe-826c-47b9-2b01f06163a7@openstack.org> Hi everyone, Our next Large Scale SIG meeting will be this Wednesday in #openstack-meeting-3 on OFTC IRC, at 15UTC. You can doublecheck how it translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210602T15 Note that we changed IRC networks, you should now connect to OFTC[1] rather than Freenode servers. [1] https://www.oftc.net/ A number of topics have already been added to the agenda, including discussing our next OpenInfra.Live show. 
Feel free to add other topics to our agenda at: https://etherpad.openstack.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From stephenfin at redhat.com Mon May 31 10:33:41 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Mon, 31 May 2021 11:33:41 +0100 Subject: Customization of nova-scheduler In-Reply-To: <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> Message-ID: <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > Hello Openstack team, > > is it possible to customize the nova-scheduler via Python? If yes, how? Yes, you can provide your own filters and weighers. This is documented at [1]. Hope this helps, Stephen [1] https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > > Best regards > Levon > From stephenfin at redhat.com Mon May 31 11:16:38 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Mon, 31 May 2021 12:16:38 +0100 Subject: [docs] Request to clean up reviewers on openstack-doc-core and openstack-contributor-guide-core In-Reply-To: <179aafe068a.cb912cf831091.4079191485834420119@ghanshyammann.com> References: <179aafe068a.cb912cf831091.4079191485834420119@ghanshyammann.com> Message-ID: On Wed, 2021-05-26 at 18:24 -0500, Ghanshyam Mann wrote: > ---- On Wed, 26 May 2021 12:22:46 -0500 Julia Kreger wrote ---- > > Greetings, > > > > I went to go hunt down a reviewer for the contributors guide this > > morning and found that at least one of the reviewers no longer works > > on OpenStack. In this case pkovar, but just glancing at the group > > lists, there appears to be others. If someone with appropriate > > privileges could update the group memberships to better reflect > > reality, it would be helpful to the rest of the community. > > Yes, even many of them are not active in OpenStack. We should call for more volunteers to maintain these repo. > > openstack/contributor-guide repo is under 'Technical Writing' SIG.I would like to request Stephen Finucane (stephenfin) current > Chair of this SIG to do the cleanup. I've reached out to the inactive reviewers to confirm that they are okay with me dropping them their respective groups. I will update next week with the outcome. > Also, we can call for more volunteers here to help out this repo/SIG. There are very few independent doc projects left now, but help would be appreciated with those. If any regular reviewers are happy to review the odd doc patch, please let me know and I'll happily add you. > I am happy to help in the openstack/contributor-guide repo (as doing as part of the upstream institute training activity). Added you, gmann. Stephen > -gmann > > > > > Thanks! > > > > -Julia From skaplons at redhat.com Mon May 31 12:06:29 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 31 May 2021 14:06:29 +0200 Subject: [neutron] Bug deputy report - week of May 24th Message-ID: <1655562.1YyRSWf2rf@p1> Hi, I was bug deputy last week. 
Here is my report regarding bugs from it: **critical** https://bugs.launchpad.net/neutron/+bug/1929518 - Functional db migration tests broken - assigned, patch proposed https://bugs.launchpad.net/neutron/+bug/1929633 - Mechanism driver 'ovn' failed in create_port_precommit: AttributeError: 'NoneType' object has no attribute 'chassis_exists' - Issue related to switch ovn to be default neutron's backend in devstack, in progress **high** https://bugs.launchpad.net/neutron/+bug/1929523 - Test tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_subnet_details is failing from time to time - Unassigned https://bugs.launchpad.net/neutron/+bug/1929832 - stable/ussuri py38 support for keepalived-state-change monitor - In progress https://bugs.launchpad.net/neutron/+bug/1930222 - "test_add_and_remove_metering_label_rule_src_and_dest_ip_only" failing in py39 , in progress **medium** https://bugs.launchpad.net/neutron/+bug/1929438 - Cannot provision flat network after reconfiguring physical bridges - assigned, patch already proposed https://bugs.launchpad.net/neutron/+bug/1929578 - Make OVN metadata agent OVNDB read-only - assigned, patch proposed https://bugs.launchpad.net/neutron/+bug/1929676 - API extensions not supported by e.g. OVN driver may still be on the list returned from neutron - assigned, patch proposed https://bugs.launchpad.net/neutron/+bug/1930195 - Bump os-ken to 2.0.0 - in progress **Low** https://bugs.launchpad.net/neutron/+bug/1929658 - [OVN] Enable OVN L3 router plugin support for filter-validation - assigned, fix proposed https://bugs.launchpad.net/neutron/+bug/1929821 - [dvr] misleading fip rule priority not found error message - unassigned https://bugs.launchpad.net/neutron/+bug/1929998 - VXLAN interface cannot source from lo with multiple IPs - assigned... **undecided** https://bugs.launchpad.net/neutron/+bug/1930096 - Missing static routes after neutron-l3-agent restart - waiting for some info... **RFE** https://bugs.launchpad.net/neutron/+bug/1930200 - [RFE] Add support for Node- Local virtual IP, -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From balazs.gibizer at est.tech Mon May 31 12:33:26 2021 From: balazs.gibizer at est.tech (Balazs Gibizer) Date: Mon, 31 May 2021 14:33:26 +0200 Subject: [nova] Next nova meeting will be already on OFTC Message-ID: Hi, The next weekly meeting on 1st of June will be held already on the OFTC IRC servers, in the #openstack-meeting-3 channel. Cheers, gibi From skaplons at redhat.com Mon May 31 12:53:49 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 31 May 2021 14:53:49 +0200 Subject: [requirements][neutron][os-vif][octavia] pyroute2 and eventlet update breaking gate In-Reply-To: References: <20210528021134.irn2pudyw2bb2t5x@mthode.org> Message-ID: <3317584.KI8gQvJgzH@p1> Hi, Dnia poniedziałek, 31 maja 2021 09:58:23 CEST Rodolfo Alonso Hernandez pisze: > evenlet problem with os-ken should be fixed with > https://review.opendev.org/c/openstack/releases/+/793732 > > On Mon, May 31, 2021 at 9:50 AM Rodolfo Alonso Hernandez < > > ralonsoh at redhat.com> wrote: > > Hello: > > > > pyroute2 problem [1] should be solved in 0.6.2: > > https://github.com/svinota/pyroute2/issues/798 > > > > For eventlet and Neutron [2], we'll open a bug for os-ken. We'll update > > the requirements patch once it is solved. 
> > > > Regards. > > > > [1]http://paste.openstack.org/show/805853/ > > [2]http://paste.openstack.org/show/805854/ > > > > On Sun, May 30, 2021 at 8:02 AM Adam Harwell wrote: > >> Octavia is entirely free of eventlet now, but pyroute2 does tend to break > >> us rather frequently. 😭 > >> We'll take a look soon. Thanks for the heads up! > >> > >> On Sat, May 29, 2021, 04:04 Slawek Kaplonski wrote: > >>> Hi, > >>> > >>> Dnia piątek, 28 maja 2021 04:11:34 CEST Matthew Thode pisze: > >>> > looks like more updates to bother you about. > >>> > > >>> > pyroute2 review is at https://review.opendev.org/793020 > >>> > eventlet review is at https://review.opendev.org/793021 > >>> > > >>> > It looks like this is mainly impacting neutron related projects > >>> > > >>> > -- > >>> > Matthew Thode > >>> > >>> Thx. I will check that on Monday morning. > >>> > >>> -- > >>> Slawek Kaplonski > >>> Principal Software Engineer > >>> Red Hat Thx a lot Rodolfo for checking those issues. You are, as usually much faster than me :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From skaplons at redhat.com Mon May 31 13:53:31 2021 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 31 May 2021 15:53:31 +0200 Subject: [neutron] Meetings are moving to OFTC this week Message-ID: <5311572.8vXgLLnKfE@p1> Hi, OpenStack moved to the OFTC IRC channels already so starting this week our team meeting, CI meeting and drivers meeting will be at OFTC server, on same channels as it was previously at Freenode. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From gmann at ghanshyammann.com Mon May 31 14:06:11 2021 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 31 May 2021 09:06:11 -0500 Subject: [all] CRITICAL: Upcoming changes to the OpenStack Community IRC this weekend In-Reply-To: <179a9b02f78.112177f7423117.4125651508104406943@ghanshyammann.com> References: <179a9b02f78.112177f7423117.4125651508104406943@ghanshyammann.com> Message-ID: <179c2bf0d45.e29da542226792.4648722316244189913@ghanshyammann.com> Hello Everyone, Updates: As you might have seen in the Fungi email reply on service-discuss ML, all the bot and logging migration is complete now. * Now onwards every discussion or meeting now needs to be done on OFTC, not on Freenode. As you can see many projects PTL started sending email on their next meeting on OFTC, please do if you have not done yet. * I have started a new etherpad for tracking all the migration tasks (all action items we collected from Wed TC meeting.). Please plan the work needed from the project team side and mark the progress. - https://etherpad.opendev.org/p/openstack-irc-migration-to-oftc -gmann ---- On Wed, 26 May 2021 12:19:26 -0500 Ghanshyam Mann wrote ---- > Greetings contributors & community members! > > With recent events, the Technical Committee held an emergency meeting today (Wednesday, May 26th, 2021) > regarding Freenode IRC and what our decision would be [1]. Earlier in the week, the consensus amongst the TC > was to gather more information from the individual projects, and make a decision from there[2]. 
With #rdo, > #ubuntu, and #wikipedia having been hijacked, the consensus amongst the TC and the community members > who were able to attend the meeting was to move away from Freenode as soon as possible. The TC agreed > that this move away from Freenode needs to be a community-wide move to the same, new IRC network for > all projects to avoid splintering of the community. As has been long-planned in the event of a contingency, we > will be moving to OFTC. > > We recognize this is a contentious topic, and ultimately we seek to ensure community continuity before evolution > to something beyond IRC, as many have expressed interest in doing via Mailing List discussions. At this point, we > had to make a decision to solve the immediate problem in the simplest and most expedient way possible, so this is > that announcement. We welcome continued discussion about future alternatives on the other threads. > > With this in mind, we suggest the following steps. > > Everyone: > ======= > 1. Do NOT change any channel topics to represent this change. This is likely to result in the channel being taken > over by Freenode and will disrupt communications within our community. > 2. Register your nicknames on OFTC [3][4] > 3. Be *prepared* to join your channels on OFTC[4]. The OpenStack community channels have already been > registered on OFTC and await you. > 4. Continue to use Freenode for OpenStack discussions until the bots have been moved and the official cut-over > takes place this coming weekend. We anticipate using OFTC starting Monday, May 31st. > > Projects/Project Leaders: > ==================== > 1. Projects should work to get a few volunteers to staff their project channels on Freenode, for the near future to help > redirect people to OFTC. This should occur via private messages to avoid a ban. > 2. Continue to hold project meetings on Freenode until the bots are enabled on OFTC. > 3. Update project wikis/documentation with the new IRC network information. We ask that you consider referring to > the central contributor guide[5]. > 4. The TC is asking that projects take advantage of this time of change to consider moving project meetings from > the #openstack-meeting* channels to their project channel. > 5. Please avoid discussing the move to OFTC in Freenode channels as this may also trigger a takeover of the channel. > > We are working on getting our bots over to OFTC, and they will be moved over the weekend. Starting Monday May 31, > the bots will be on OFTC. Communication regarding this migration will take place on OFTC[4] in #openstack-dev, and > we're working on updating the contributor guide[5] to reflect this migration. > > Sincerely, > > The OpenStack TC and community leaders who came together to agree on a path forward. > > [1]: https://etherpad.opendev.org/p/openstack-irc > [2]: https://etherpad.opendev.org/p/feedback-on-freenode > [3]: https://www.oftc.net/Services/#register-your-account > [4]: https://www.oftc.net/ > [5]: https://docs.openstack.org/contributors/common/irc.html > > From akekane at redhat.com Mon May 31 14:25:19 2021 From: akekane at redhat.com (Abhishek Kekane) Date: Mon, 31 May 2021 19:55:19 +0530 Subject: [glance] Weekly meeting moving to OFTC from this week Message-ID: Hello Everyone, Everyone is aware that OpenStack IRC has moved from freenode to OFTC during the past weekend. So henceforward our weekly team meeting will be at OFTC network, on the same channel (#openstack-meeting) as it was previously at Freenode. 
If anyone has yet to register on OFTC then kindly do so as soon as possible.

Thanks & Best Regards,

Abhishek Kekane
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hberaud at redhat.com Mon May 31 14:39:01 2021
From: hberaud at redhat.com (Herve Beraud)
Date: Mon, 31 May 2021 16:39:01 +0200
Subject: [oslo] Weekly meeting moving to OFTC next week
Message-ID: 

Hello,

Just a friendly reminder to inform all the Osloers that OpenStack IRC has
moved from Freenode to OFTC during the past weekend.

So henceforward our weekly team meeting will be held on the OFTC network, on
the same channel (#openstack-oslo) as it was previously on Freenode.

If anyone has yet to register on OFTC then kindly do so as soon as possible.

Don't forget that meetings are now scheduled for the first and the third
Monday of each month [1]. So, see you there during our next meeting on
Monday, 7 June.

[1] http://lists.openstack.org/pipermail/openstack-discuss/2021-April/022124.html

--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tobias.urdin at binero.com Mon May 31 15:18:49 2021
From: tobias.urdin at binero.com (Tobias Urdin)
Date: Mon, 31 May 2021 15:18:49 +0000
Subject: [puppet] Moved #puppet-openstack from Freenode to OFTC
Message-ID: <53924746-D317-456E-AC39-7F9EAED7A1AC@binero.com>

Hello Puppeters!

Just a friendly reminder that we have moved #puppet-openstack from Freenode
to OFTC.

Best regards

From forums at mossakowski.ch Mon May 31 08:57:13 2021
From: forums at mossakowski.ch (forums at mossakowski.ch)
Date: Mon, 31 May 2021 08:57:13 +0000
Subject: [Neutron] sriov network setup for victoria - clarification needed
Message-ID: 

Hello,
I have two Victoria environments:
1) a working one, a standard setup with a separate dedicated interface for
   SR-IOV (pt0 and pt1)
2) a broken one, where I'm trying to reuse one of the already used interfaces
   (ens2f0 or ens2f1) for SR-IOV. ens2f0 is used for several VLANs (mgmt and
   storage) and ens2f1 is a neutron external interface which I bridged for
   VLAN tenant networks.
On both I have enabled 63 VFs; it's a standard Intel 10Gb X540 adapter.

On the broken environment, when I try to boot a VM with an SR-IOV port that I
created before, I see the error shown in the gist below:
https://gist.github.com/moss2k13/8e6272cbe7748b2c5210fab291360e0b

I've been investigating this for a couple of days now but I'm out of ideas,
so I'd like to ask for your support. Is it possible to achieve what I'm
trying to do on the 2nd environment, i.e. to use the PF as a normal interface
and use its VFs for the sriov-agent at the same time?
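
As a point of reference, whether the kernel actually exposes the VFs on the PF
can be checked straight from sysfs on the compute node. A minimal sketch using
the standard Linux SR-IOV sysfs entries (the interface name is taken from this
message; exact output depends on kernel and driver):

    # Run on the compute node as root.
    from pathlib import Path

    pf = Path("/sys/class/net/ens2f0/device")
    print("VFs configured:", (pf / "sriov_numvfs").read_text().strip())
    print("VFs supported: ", (pf / "sriov_totalvfs").read_text().strip())
    # Each virtfn* entry is a symlink to the PCI address of one VF.
    print("VF PCI addresses:",
          sorted(p.resolve().name for p in pf.glob("virtfn*")))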
Regards, Piotr Mossakowski -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: publickey - forums at mossakowski.ch - 0xDC035524.asc Type: application/pgp-keys Size: 671 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 249 bytes Desc: OpenPGP digital signature URL: From levonmelikbekjan at yahoo.de Mon May 31 11:44:12 2021 From: levonmelikbekjan at yahoo.de (levonmelikbekjan at yahoo.de) Date: Mon, 31 May 2021 13:44:12 +0200 Subject: AW: Customization of nova-scheduler In-Reply-To: <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> Message-ID: <000001d75612$470021b0$d5006510$@yahoo.de> Hello Stephen, I am a student from Germany who is currently working on his bachelor thesis. My job is to build a cloud solution for my university with Openstack. The functionality should include the prioritization of users. So that you can imagine exactly how the whole thing should work, I would like to give you an example. Two cases should be solved! Case 1: A user A with a low priority uses a VM from Openstack with half performance of the available host. Then user B comes in with a high priority and needs the full performance of the host for his VM. When creating the VM of user B, the VM of user A should be deleted because there is not enough compute power for user B. The VM of user B is successfully created. Case 2: A user A with a low priority uses a VM with half the performance of the available host, then user B comes in with a high priority and needs half of the performance of the host for his VM. When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users. These cases should work for unlimited users. In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high priority user. I’m new to Openstack, but I’ve already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start? My university gave me three compute hosts and one control host to implement this solution for the bachelor thesis. I’m currently setting up Openstack and all the services on the control host all by myself to understand all the functionality (sorry for not using Packstack) 😉. All my hosts have CentOS 7 and the minimum deployment which I configure is Train. My idea is to work with nova schedulers, because they seem to be interesting for my case. I've found a whole infrastructure description of the provisioning of an instance in Openstack https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. The nova scheduler https://docs.openstack.org/operations-guide/ops-customize-compute.html is the first component, where it is possible to implement functions via Python and the Compute API https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail to check for active VMs and probably delete them if needed before a successful request for an instantiation can be made. 
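
To make that step concrete: the Compute API side of such a check could look
roughly like the sketch below, written here with openstacksdk. The cloud name,
host name and the "priority" metadata key are purely illustrative assumptions,
not anything nova itself defines.

    # Illustrative only: find low-priority servers on one hypervisor and
    # delete them to make room. Requires admin credentials; error handling
    # and capacity accounting are omitted.
    import openstack

    conn = openstack.connect(cloud="university-cloud")

    def preempt_low_priority(host):
        victims = [
            s for s in conn.compute.servers(all_projects=True, host=host)
            if s.metadata.get("priority", "low") == "low"
        ]
        for server in victims:
            # A real implementation should first estimate (e.g. via the
            # placement API) how much vCPU/RAM this frees before deleting.
            conn.compute.delete_server(server)
            conn.compute.wait_for_delete(server)

    preempt_low_priority("compute-1")

Whether logic like this belongs inside a scheduler filter or in an external
service is exactly the design question picked up in the replies below.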
What do you guys think about it? Does it seem like a good starting point for you or is it the wrong approach? I'm very happy to have found you!!! Thank you really much for your time! Best regards Levon -----Ursprüngliche Nachricht----- Von: Stephen Finucane Gesendet: Montag, 31. Mai 2021 12:34 An: Levon Melikbekjan ; openstack at lists.openstack.org Betreff: Re: Customization of nova-scheduler On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > Hello Openstack team, > > is it possible to customize the nova-scheduler via Python? If yes, how? Yes, you can provide your own filters and weighers. This is documented at [1]. Hope this helps, Stephen [1] https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > > Best regards > Levon > From bekir.fajkovic at citynetwork.eu Mon May 31 15:49:21 2021 From: bekir.fajkovic at citynetwork.eu (Bekir Fajkovic) Date: Mon, 31 May 2021 17:49:21 +0200 Subject: Scheduling backups in Trove In-Reply-To: <424f17d2a9ba4a18cb26796171e49010@citynetwork.eu> References: <2466322c572e931fd52e767684ee81e2@citynetwork.eu> <424f17d2a9ba4a18cb26796171e49010@citynetwork.eu> Message-ID: <0e535f62a86bad2f6872a8078554af37@citynetwork.eu> Hello again! Few weeks ago i asked a bunch of the question below, but i never received any answers. Either the questions are not worth answering or my e-mail wound up in some spam filter :) So hereby i resend the questions :) Best Regards! Bekir Fajkovic Senior DBA Mobile: +46 70 019 48 47 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED ----- Original Message ----- From: Bekir Fajkovic (bekir.fajkovic at citynetwork.eu) Date: 05/17/21 10:46 To: Lingxian Kong (anlin.kong at gmail.com) Cc: openstack-discuss (openstack-discuss at lists.openstack.org) Subject: Re[2]: Scheduling backups in Trove Hello! Thanks for the answer, however i am not sure that i exactly understand the approach of scheduling the backups by leveraging container running inside trove guest instance. I will try to figure out. Anyway, would this approach have a global impact, meaning only one type of schedule(s) will be applicable for all the tenants or is there still some kind of flexibility for each tenant to apply own schedules in this way? I would be very thankful for some more details,  if possible :) We have now deployed Trove in one of our regions and are going to let certain customers test the functionality and features. We currently deployed mysql, mariadb and postgresql datastores with latest production-ready datastore versions for mysql and mariadb (5.7.34 and 10.4.18) and for postgresql it is version 12.4.  As You might understand, we have many questions unanswered, and if there is anyone capable and willing to answer some of them we would be very thankful: - What is next in pipe in terms of production-ready datastore types and datastore versions? - Clustering - according to the official documentation pages it is still experimental feature. When can we expect this to be supported and for what datastore types? - According to some info we received earlier, PostgreSQL 12.4 is only partially supported - what part of functionality is not fully supported here - replication or something else? - Creation of users and databases through OpenStack/Trove API is only supported with mysql datastore type. When can we expect the same level of functionality for at least the other two datastore types? 
- MongoDB in particular, when can this datastore type be expected to be
  supported for deployment?
- In the case of a database instance failure (for example, failure due to the
  failure of the Compute node hosting the instance), are there any built-in
  mechanisms in Trove that try to automatically bring up and recover the
  instance, that I am not aware of?

I am so sorry for bombarding you with questions but I simply have to ask :)

Thanks in advance!

Best regards

Bekir Fajkovic
Senior DBA
Mobile: +46 70 019 48 47
www.citynetwork.eu | www.citycloud.com
INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED

----- Original Message -----
From: Lingxian Kong (anlin.kong at gmail.com)
Date: 04/27/21 00:30
To: Bekir Fajkovic (bekir.fajkovic at citynetwork.eu)
Cc: openstack-discuss (openstack-discuss at lists.openstack.org)
Subject: Re: Scheduling backups in Trove

Hi Bekir,

You can definitely create a Mistral workflow to periodically trigger Trove
backups if Mistral supports a Trove action and you have already deployed
Mistral in your cloud.

Otherwise, another option is to implement scheduled backups in Trove itself
(by leveraging the container running inside the Trove guest instance).

---
Lingxian Kong
Senior Cloud Engineer (Catalyst Cloud)
Trove PTL (OpenStack)
OpenStack Cloud Provider Co-Lead (Kubernetes)

On Sat, Apr 24, 2021 at 3:58 AM Bekir Fajkovic wrote:

 Hello!

 A question regarding the best practices when it comes to scheduling backups:
 Is there any built-in mechanism implemented today in the service, or does the
 customer or cloud service provider have to schedule the backups themselves?

 I see some proposals about implementing backup schedules through Mistral
 workflows:
 https://specs.openstack.org/openstack/trove-specs/specs/newton/scheduled-backup.html
 But I am not sure about the status of that.

 Best Regards

 Bekir Fajkovic
 Senior DBA
 Mobile: +46 70 019 48 47
 www.citynetwork.eu | www.citycloud.com
 INNOVATION THROUGH OPEN IT INFRASTRUCTURE
 ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED

 ----- Original Message -----
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ralonsoh at redhat.com Mon May 31 16:08:50 2021
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Mon, 31 May 2021 18:08:50 +0200
Subject: [Neutron] sriov network setup for victoria - clarification needed
In-Reply-To: 
References: 
Message-ID: 

Hello Piotr:

Maybe you should update the pyroute2 library, but this is a blind shot. What
I recommend you do is to find the error you get when retrieving the interface
VFs. On the same compute node, use this method [1] but remove the decorator
[2]. Then, in a root shell, run python again:

>>> from neutron.privileged.agent.linux import ip_lib
>>> ip_lib.get_link_vfs('ens2f0', '')

That will execute the pyroute2 code without the privsep decorator. You'll see
what error the method is returning.

Regards.

[1] https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L396-L410
[2] https://github.com/openstack/neutron/blob/5d4f5d42d0a8c7ee157912cb29cae0e4deff984b/neutron/privileged/agent/linux/ip_lib.py#L395

On Mon, May 31, 2021 at 5:50 PM wrote:

> Hello,
> I have two victoria environments:
> 1) a working one, standard setup with separate dedicated interface for
> sriov (pt0 and pt1)
> 2) a broken one, where I'm trying to reuse one of already used interfaces
> (ens2f0 or ens2f1) for sriov.
ens2f0 is used for several VLANs (mgmt and > storage) and ens2f1 is a neutron external interface which I bridged for > VLAN tenant networks. On both I have enabled 63 VFs, it's a standard intetl > 10Gb x540 adapter. > > On broken environment, when I'm trying to boot a VM with sriov port that I > created before, I see this error shown on below gist: > https://gist.github.com/moss2k13/8e6272cbe7748b2c5210fab291360e0b > > I'm investigating this for couple days now but I'm out of ideas so I'd > like to ask for your support. Is this possible to achieve what I'm trying > to do on 2nd environment? To use PF as normal interface and use its VFs for > sriov-agent at the same time? > > Regards, > Piotr Mossakowski > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arne.wiebalck at cern.ch Mon May 31 16:17:13 2021 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 31 May 2021 18:17:13 +0200 Subject: AW: Customization of nova-scheduler In-Reply-To: <000001d75612$470021b0$d5006510$@yahoo.de> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> Message-ID: Levon, In case you have not seen this yet, you might want to have a look at the work on preemptible instances: https://techblog.web.cern.ch/techblog/post/preemptible-instances/ This sounds like it is related and could be helpful. Cheers, Arne On 31.05.21 13:44, levonmelikbekjan at yahoo.de wrote: > Hello Stephen, > > I am a student from Germany who is currently working on his bachelor thesis. My job is to build a cloud solution for my university with Openstack. The functionality should include the prioritization of users. So that you can imagine exactly how the whole thing should work, I would like to give you an example. > > Two cases should be solved! > > Case 1: A user A with a low priority uses a VM from Openstack with half performance of the available host. Then user B comes in with a high priority and needs the full performance of the host for his VM. When creating the VM of user B, the VM of user A should be deleted because there is not enough compute power for user B. The VM of user B is successfully created. > > Case 2: A user A with a low priority uses a VM with half the performance of the available host, then user B comes in with a high priority and needs half of the performance of the host for his VM. When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users. > > These cases should work for unlimited users. In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high priority user. > > I’m new to Openstack, but I’ve already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start? > > My university gave me three compute hosts and one control host to implement this solution for the bachelor thesis. I’m currently setting up Openstack and all the services on the control host all by myself to understand all the functionality (sorry for not using Packstack) 😉. All my hosts have CentOS 7 and the minimum deployment which I configure is Train. > > My idea is to work with nova schedulers, because they seem to be interesting for my case. 
I've found a whole infrastructure description of the provisioning of an instance in Openstack https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. > > The nova scheduler https://docs.openstack.org/operations-guide/ops-customize-compute.html is the first component, where it is possible to implement functions via Python and the Compute API https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail to check for active VMs and probably delete them if needed before a successful request for an instantiation can be made. > > What do you guys think about it? Does it seem like a good starting point for you or is it the wrong approach? > > I'm very happy to have found you!!! > > Thank you really much for your time! > > Best regards > Levon > > -----Ursprüngliche Nachricht----- > Von: Stephen Finucane > Gesendet: Montag, 31. Mai 2021 12:34 > An: Levon Melikbekjan ; openstack at lists.openstack.org > Betreff: Re: Customization of nova-scheduler > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: >> Hello Openstack team, >> >> is it possible to customize the nova-scheduler via Python? If yes, how? > > Yes, you can provide your own filters and weighers. This is documented at [1]. > > Hope this helps, > Stephen > > [1] https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > >> >> Best regards >> Levon >> > > > From stephenfin at redhat.com Mon May 31 16:21:19 2021 From: stephenfin at redhat.com (Stephen Finucane) Date: Mon, 31 May 2021 17:21:19 +0100 Subject: AW: Customization of nova-scheduler In-Reply-To: <000001d75612$470021b0$d5006510$@yahoo.de> References: <69D669B5-9F68-4225-92CB-A03167773378.ref@yahoo.de> <69D669B5-9F68-4225-92CB-A03167773378@yahoo.de> <9134db24ba97c58aed15e3e0dd8d110e63400c64.camel@redhat.com> <000001d75612$470021b0$d5006510$@yahoo.de> Message-ID: <3d9aa411c5098094586c5611b1cb51ccd72eb8c7.camel@redhat.com> On Mon, 2021-05-31 at 13:44 +0200, levonmelikbekjan at yahoo.de wrote: > Hello Stephen, > > I am a student from Germany who is currently working on his bachelor thesis. My job is to build a cloud solution for my university with Openstack. The functionality should include the prioritization of users. So that you can imagine exactly how the whole thing should work, I would like to give you an example. > > Two cases should be solved! > > Case 1: A user A with a low priority uses a VM from Openstack with half performance of the available host. Then user B comes in with a high priority and needs the full performance of the host for his VM. When creating the VM of user B, the VM of user A should be deleted because there is not enough compute power for user B. The VM of user B is successfully created. > > Case 2: A user A with a low priority uses a VM with half the performance of the available host, then user B comes in with a high priority and needs half of the performance of the host for his VM. When creating the VM of user B, user A should not be deleted, since enough computing power is available for both users. > > These cases should work for unlimited users. In order to optimize the whole thing, I would like to write a function that precisely calculates all performance components to determine whether enough resources are available for the VM of the high priority user. What you're describing is commonly referred to as "preemptible" or "spot" instances. This topic has a long, complicated history in nova and has yet to be implemented. 
Searching for "preemptible instances openstack" should yield you lots of discussion on the topic along with a few proof-of-concept approaches using external services or out-of-tree modifications to nova. > I’m new to Openstack, but I’ve already implemented cloud projects with Microsoft Azure and have solid programming skills. Can you give me a hint where and how I can start? As hinted above, this is likely to be a very difficult project given the fraught history of the idea. I don't want to dissuade you from this work but you should be aware of what you're getting into from the start. If you're serious about pursuing this, I suggest you first do some research on prior art. As noted above, there is lots of information on the internet about this. With this research done, you'll need to decide whether this is something you want to approach within nova itself, via out-of-tree extensions or via a third party project. If you're opting for integration with nova, then you'll need to think long and hard about how you would design such a system and start working on a spec (a design document) outlining your proposed solution. Details on how to write a spec are discussed at [1]. The only extension points nova offers today are scheduler filters and weighers so your options for an out-of-tree extension approach will be limited. A third party project will arguably be the easiest approach but you will be restricted to talking to nova's REST APIs which may limit the design somewhat. This Blazar spec [2] could give you some ideas on this approach (assuming it was never actually implemented, though it may well have been). > My university gave me three compute hosts and one control host to implement this solution for the bachelor thesis. I’m currently setting up Openstack and all the services on the control host all by myself to understand all the functionality (sorry for not using Packstack) 😉. All my hosts have CentOS 7 and the minimum deployment which I configure is Train. > > My idea is to work with nova schedulers, because they seem to be interesting for my case. I've found a whole infrastructure description of the provisioning of an instance in Openstack https://docs.openstack.org/operations-guide/de/_images/provision-an-instance.png. > > The nova scheduler https://docs.openstack.org/operations-guide/ops-customize-compute.html is the first component, where it is possible to implement functions via Python and the Compute API https://docs.openstack.org/api-ref/compute/?expanded=show-details-of-specific-api-version-detail,list-servers-detail to check for active VMs and probably delete them if needed before a successful request for an instantiation can be made. > > What do you guys think about it? Does it seem like a good starting point for you or is it the wrong approach? This could potentially work, but I suspect there will be serious performance implications with this, particularly at scale. Scheduler filters are historically used for simple things like "find me a group of hosts that have this metadata attribute I set on my image". Making API calls sounds like something that would take significant time and therefore slow down the schedule process. You'd also have to decide what your heuristic for deciding which VM(s) to delete would be, since there's nothing obvious in nova that you could use. You could use something as simple as filter extra specs or something as complicated as an external service. This should be lots to get you started. 
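
For concreteness, the filter extension point mentioned above has roughly the
following shape (a sketch against the documented BaseHostFilter interface; the
"priority" extra spec and the heuristic are purely illustrative, and the real
preemption logic would still have to live elsewhere):

    # A custom filter is registered via the [filter_scheduler]
    # available_filters / enabled_filters options in nova.conf.
    from nova.scheduler import filters

    class PriorityAwareFilter(filters.BaseHostFilter):
        """Accept a host only if a high-priority request could fit on it."""

        def host_passes(self, host_state, spec_obj):
            priority = spec_obj.flavor.extra_specs.get("priority", "low")
            if priority != "high":
                return True
            # Stand-in heuristic: just look at currently free RAM. A real
            # implementation would estimate what preempting low-priority
            # guests on this host could free up.
            return host_state.free_ram_mb >= spec_obj.memory_mb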
Once again, do make sure you're aware of what you're getting yourself into before you start. This could get complicated very quickly :) Cheers, Stephen > I'm very happy to have found you!!! > > Thank you really much for your time! [1] https://specs.openstack.org/openstack/nova-specs/readme.html [2] https://specs.openstack.org/openstack/blazar-specs/specs/ussuri/blazar-preemptible-instances.html > Best regards > Levon > > -----Ursprüngliche Nachricht----- > Von: Stephen Finucane > Gesendet: Montag, 31. Mai 2021 12:34 > An: Levon Melikbekjan ; openstack at lists.openstack.org > Betreff: Re: Customization of nova-scheduler > > On Wed, 2021-05-26 at 22:46 +0200, Levon Melikbekjan wrote: > > Hello Openstack team, > > > > is it possible to customize the nova-scheduler via Python? If yes, how? > > Yes, you can provide your own filters and weighers. This is documented at [1]. > > Hope this helps, > Stephen > > [1] https://docs.openstack.org/nova/latest/user/filter-scheduler#writing-your-own-filter > > > > > Best regards > > Levon > > > > From radoslaw.piliszek at gmail.com Mon May 31 16:55:02 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 31 May 2021 18:55:02 +0200 Subject: [docs] Request to clean up reviewers on openstack-doc-core and openstack-contributor-guide-core In-Reply-To: References: <179aafe068a.cb912cf831091.4079191485834420119@ghanshyammann.com> Message-ID: On Mon, May 31, 2021 at 1:19 PM Stephen Finucane wrote: > > On Wed, 2021-05-26 at 18:24 -0500, Ghanshyam Mann wrote: > > ---- On Wed, 26 May 2021 12:22:46 -0500 Julia Kreger wrote ---- > > I am happy to help in the openstack/contributor-guide repo (as doing as part of the upstream institute training activity). > > Added you, gmann. > I can help too. -yoctozepto From dmeng at uvic.ca Mon May 31 17:31:15 2021 From: dmeng at uvic.ca (dmeng) Date: Mon, 31 May 2021 10:31:15 -0700 Subject: [sdk]: Block storage resource get volume created at time Message-ID: <58d9133e39ea0f3e2319863142825f6a@uvic.ca> Hello there, Hope this email finds you well. I have a question for getting the volume created at time using the openstacksdk block storage volume class. I found that the attribute volume.create_at is in a format like "2021-05-28T17:16:45.000000", and we don't know the timezone for it. Wondering if the openstack always uses the GMT? or anywhere we could set it to our pacific time? Thanks and have a great day! Catherine -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Mon May 31 17:57:32 2021 From: amy at demarco.com (Amy) Date: Mon, 31 May 2021 12:57:32 -0500 Subject: [docs] Request to clean up reviewers on openstack-doc-core and openstack-contributor-guide-core In-Reply-To: References: Message-ID: <58D9BB93-0264-4A7F-89D6-E91CFD4914FD@demarco.com> I can help Amy > On May 31, 2021, at 11:58 AM, Radosław Piliszek wrote: > > On Mon, May 31, 2021 at 1:19 PM Stephen Finucane wrote: >> >>> On Wed, 2021-05-26 at 18:24 -0500, Ghanshyam Mann wrote: >>> ---- On Wed, 26 May 2021 12:22:46 -0500 Julia Kreger wrote ---- >>> I am happy to help in the openstack/contributor-guide repo (as doing as part of the upstream institute training activity). >> >> Added you, gmann. >> > > I can help too. 
> > -yoctozepto
>

From anlin.kong at gmail.com Mon May 31 21:06:53 2021
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Tue, 1 Jun 2021 09:06:53 +1200
Subject: Scheduling backups in Trove
In-Reply-To: <424f17d2a9ba4a18cb26796171e49010@citynetwork.eu>
References: <2466322c572e931fd52e767684ee81e2@citynetwork.eu>
 <424f17d2a9ba4a18cb26796171e49010@citynetwork.eu>
Message-ID: 

Please see my reply in-line below.

---
Lingxian Kong
Senior Cloud Engineer (Catalyst Cloud)
Trove PTL (OpenStack)
OpenStack Cloud Provider Co-Lead (Kubernetes)

On Mon, May 17, 2021 at 8:46 PM Bekir Fajkovic <
bekir.fajkovic at citynetwork.eu> wrote:

> Hello!
>
> Thanks for the answer, however i am not sure that i exactly understand the
> approach of scheduling the backups by leveraging container running inside
> trove guest instance. I will try to figure out. Anyway, would this
> approach have a global impact, meaning only one type of schedule(s) will be
> applicable for all the tenants or is there still some kind of flexibility
> for each tenant to apply own schedules in this way? I would be very
> thankful for some more details, if possible :)
>

This is a feature that needs to be discussed and properly designed in the
community; no actual work has been done yet, as no one else has shown
interest and got involved. I'm very happy to provide help if needed.

> We have now deployed Trove in one of our regions and are going to let
> certain customers test the functionality and features. We currently
> deployed mysql, mariadb and postgresql datastores with latest
> production-ready datastore versions for mysql and mariadb (5.7.34 and
> 10.4.18) and for postgresql it is version 12.4.
>

That's nice!

> As You might understand, we have many questions unanswered, and if there
> is anyone capable and willing to answer some of them we would be very
> thankful:
>
> - What is next in pipe in terms of production-ready datastore types and
> datastore versions?
> - Clustering - according to the official documentation pages it is still
> experimental feature. When can we expect this to be supported and for what
> datastore types?
> - According to some info we received earlier, PostgreSQL 12.4 is only
> partially supported - what part of functionality is not fully supported
> here - replication or something else?
>

User and database management APIs are not supported, as different datastores
have totally different management models; it was decided not to implement
such APIs in the future but to let the Trove users (DB admins) manage users
and databases by themselves.

> - Creation of users and databases through OpenStack/Trove API is only
> supported with mysql datastore type. When can we expect the same level of
> functionality for at least the other two datastore types?
>

See above.

> - MongoDB in particular, when can this datastore type be expected to be
> supported for deployment?
> - In the case of database instance failure (for example failure due to the
> failure of the Compute node hosting the instance), is there any built-in
> mechanisms in Trove trying to automatically bring up and recover the
> instance, that i am not aware of?
>

As for all your other questions that were not answered: those features are
not implemented yet, given the resources the team has. If DBaaS is on your
roadmap and Trove is on your radar, I would appreciate it if you could get
involved and start making contributions.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
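
For anyone needing this today: until a native schedule lands in Trove, backups
can be driven by an external scheduler such as cron or Mistral, as suggested
earlier in the thread. A rough sketch with python-troveclient follows; the
cloud name and instance ID are placeholders, and the exact client constructor
arguments may vary by release.

    # Run periodically (e.g. from cron) to create a named Trove backup.
    from datetime import datetime, timezone

    import openstack
    from troveclient.v1 import client as trove_client

    conn = openstack.connect(cloud="mycloud")
    trove = trove_client.Client(session=conn.session)

    instance_id = "REPLACE-WITH-INSTANCE-UUID"
    name = "scheduled-" + datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")
    backup = trove.backups.create(name, instance_id,
                                  description="created by external scheduler")
    print(backup.id, backup.status)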